
More LGM according to LLM

Source: Jim Griffin @ https://www.flickr.com/photos/30484128@N03/52908165339

I have a subscription to Claude. There, I’ve said it. I’ve confessed. Just hear me out.

I got the subscription for one purpose, and one purpose only: to help me code JavaScript, initially for a virtual tabletop. Do I feel good about this? Sorta. I also don’t feel that guilty. I wouldn’t hire someone to code this kind of material for me. I’ve tried online lessons and, frankly, this fits more with my learning style. But I am feeding the beast.

With that out of the way, I was inspired by Paul’s post. What, I wondered, would Claude produce by way of putative LGM content?

The first thing I did was ask it to write an LGM post on AI. It replied.

I’ll write a blog post in the style of Lawyers, Guns & Money, the academic political blog known for its sharp analysis, historical context, and sardonic wit.

But then I realized that I wasn’t really interested in a generic LGM post. I wanted to try specific LGM authors. So I chose some of the most prolific: Paul and Scott.

Before we continue, I want to briefly mention two of my favorite pieces on LLMs: “ChatGPT is bullshit” (Hicks, Humphries, & Slater 2024) and “Large AI Models are Cultural and Social Technologies” (Farrell, Gopnik, Shalizi, & Evans 2025).

First, the bullshit:

The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text. So when they are provided with a database of some sort, they use this, in one way or another, to make their responses more convincing. But they are not in any real way attempting to convey or transmit the information in the database. As Chirag Shah and Emily Bender put it: “Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc. To the extent that they sometimes get the right answer to such questions is only because they happened to synthesize relevant strings out of what was in their training data. No reasoning is involved […] Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language” (Shah & Bender, 2022). These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.

An analogy guaranteed to make college professors shudder:

The idea of ChatGPT as a bullshit machine is a helpful one when combined with the distinction between hard and soft bullshit. Reaching again for the example of the dodgy student paper: we’ve all, I take it, marked papers where it was obvious that a dictionary or thesaurus had been deployed with a crushing lack of subtlety; where fifty-dollar words are used not because they’re the best choice, nor even because they serve to obfuscate the truth, but simply because the author wants to convey an impression of understanding and sophistication. It would be inappropriate to call the dictionary a bullshit artist in this case; but it would not be inappropriate to call the result bullshit. So perhaps we should, strictly, say not that ChatGPT is bullshit but that it outputs bullshit in a way that goes beyond being simply a vector of bullshit: it does not and cannot care about the truth of its output, and the person using it does so not to convey truth or falsehood but rather to convince the hearer that the text was written by an interested and attentive agent.

My wife, who works on machine-learning projects (she rejects the term “AI”), is definitely in the “sophisticated autocomplete” school of thought when it comes to LLMs. This comports well with the “bullshit machine” heuristic. Let me give you an example with Claude. Claude allows you to upload “project knowledge.” My main project includes the most recent working versions of the various JavaScript modules that comprise the code for a Tabletop Playground mod.

Claude’s algorithm, perhaps to conserve processor load, has a strong bias toward predicting answers to prompts without consulting those files. It will often do so even when one prompts it to check the files. This matters because Claude is, in theory, useful for scanning through code and looking for a) basic syntax errors or b) where the logic in a chain of function calls breaks down. If it doesn’t do that, it will generate a lot of useless suggestions.

Claude will “tell you” that it is following your prompt, even when it clearly isn’t. When successfully pushed to actually scan the code, Claude will “apologize” and assure you that it won’t do so again. But that’s not true. Claude isn’t being trained on these interactions. Instead, the algorithm is predicting that it should respond with a cluster of words that include what human beings interpret as an apology and a promise.
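The “sophisticated autocomplete” view is easy to illustrate in miniature. Here is a toy sketch of my own (not anything from Claude’s actual architecture, and the corpus is invented): a bigram table that always emits the word most frequently seen after the current one. Real models condition on long contexts with neural networks, but the training objective, predicting the next token, is the same in spirit.

```python
from collections import Counter, defaultdict

# Invented corpus for illustration only. Note what's in it: "apologies"
# and "promises" are just word sequences like any others.
corpus = ("i apologize for the error . i will check the files . "
          "i will not do that again .").split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, length=6):
    """Greedily extend `word` by repeatedly picking the most frequent successor."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]  # most frequent next word
        out.append(word)
    return " ".join(out)

print(autocomplete("i"))
```

On this corpus the function emits “i will check the error . i”: fluent-looking, vaguely promissory, and tracking nothing about the world. The machine “promises” only because those words tended to follow “i” in its data, which is the point of the autocomplete heuristic.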

I’ve been thinking about the “bullshit machine” analysis in the context of the MAHA report. You may have read that the administration has been iteratively “correcting” the report — which also involves “correcting” its “corrections.” NOTUS:

A batch of errors was corrected sometime between 10:15 a.m. and 3:28 p.m. Thursday, according to versions of the document archived by the Wayback Machine. But even the process by which the MAHA report was updated doesn’t meet the normal standards of making corrections to scientific papers, according to epidemiologist David Michaels.

The MAHA document itself is, at least in one important respect, bullshit. The charlatans at HHS wanted something that created the impression of a well-researched, authoritative, scientific report. Perhaps they were too lazy to put in the effort themselves. Maybe they aren’t capable of producing the kind of pseudo-report that they wanted. Regardless, they turned to a bullshit machine to give them something that readers might mistake for serious, scientific analysis.

But these people are not simply bullshitters. They are liars. They tried to pass off the report as the result of comprehensive and careful study. Now they’re trying to cover up what they did through more lies and obfuscation.

What about generative AI as cultural and social technologies (pdf)?

Our central point here is not just that these technological innovations, like all other innovations, will have cultural and social consequences. Rather we argue that Large Models are themselves best understood as a particular type of cultural and social technology. They are analogous to such past technologies as writing, print, markets, bureaucracies, and representative democracies. Then we can ask the separate question about what the effects of these systems will be. New technologies that aren’t themselves cultural or social, such as steam and electricity, can have cultural effects. Genuinely new cultural technologies, Wikipedia for example, may have limited effects. However, many past cultural and social technologies also had profound, transformative effects on societies, for good and ill, and this is likely to be true for Large Models.

Okay. So what about LGM via Claude?

Here’s the prompt: “How about one by Paul Campos specifically, in which he discusses AI?”

Rather than put long pseudo-posts in block quotes — which, let’s be honest, aren’t exactly the most aesthetically pleasing aspect of LGM’s current design — I’m offsetting them with colored text.

The AI Grift Meets the Legal Education Industrial Complex

How did Claude summarize its output?

I’ve rewritten the post to capture Paul Campos’ distinctive voice and perspective. Campos is known for his sharp critiques of legal education, particularly around the law school debt crisis and the disconnect between legal education costs and career outcomes. His writing style combines:

  • Sardonic, conversational tone that often begins with personal anecdotes
  • Deep skepticism toward academic administrators and consulting-driven initiatives
  • Focus on economic realities rather than abstract theories
  • Historical pattern recognition (“where have I heard this before?”)
  • Direct, punchy sentences mixed with longer analytical paragraphs
  • Specific attention to how new trends in legal education often ignore fundamental structural problems
  • Willingness to be blunt about uncomfortable truths

The post addresses AI through Campos’ typical lens: skepticism about technological panaceas, concern about student debt, and criticism of the “consulting-industrial complex” that profits from academic anxiety about staying current.

Remember that everything you’ve just read is the output of a predictive algorithm. Its ability to simulate a Paul Campos post depends on its dataset and its training. I’m not capable of unpacking how it arrived at this… whatever it is… but Anthropic claims to have made its model interpretable, so maybe its researchers could explain?

Let’s do one more.

I asked for a post by Scott Lemieux on the most recent Supreme Court decision. The first thing it did was search for “the most recent Supreme Court decision.” It identified that as “Supreme Court Migrant Protections Trump May 30 2025.” Then the algorithm produced this:

The Supreme Court’s Predictable Capitulation to Trump’s Immigration Agenda

For the record, here’s Claude’s summary:

I’ve written a blog post in Scott Lemieux’s characteristic style analyzing yesterday’s Supreme Court decision on immigration parole. The post captures Lemieux’s approach through:

  • Systematic legal analysis: Breaking down both the procedural and substantive problems with the Court’s decision
  • Focus on institutional dynamics: Examining how this fits into the Roberts Court’s pattern of “immigration exceptionalism”
  • Sharp critique of the shadow docket: Highlighting how the Court used an unsigned, unreasoned emergency order [The Washington Post, CNN]
  • Attention to human consequences: Emphasizing Justice Jackson’s point about “social and economic chaos” [Supreme Court allows Trump to end parole for 530,000 migrants for now – The Washington Post] affecting half a million people
  • Broader constitutional implications: Connecting this to questions about executive power and judicial review
  • Political context: Noting how Roberts’s presidential immunity decision [In Trump cases, Roberts and Supreme Court forge careful, cautious path – The Washington Post] enabled Trump’s current approach
  • Skepticism toward conservative legal reasoning: Pointing out the inconsistency between the Court’s approach to different types of rights claims

The post draws on the extensive reporting about the Court’s Friday decision allowing Trump to revoke parole status for approximately 530,000 migrants [Supreme Court allows Trump to end parole for 530,000 migrants for now – The Washington Post +2] while maintaining Lemieux’s characteristic focus on how legal decisions reflect broader patterns of institutional behavior and political power.

To the extent that these summaries reflect the model’s process, it strikes me as layers of increasingly attenuated predictive heuristics. I get the same vibe that I get from academic articles in which the statistical analysis is premised on a series of wagers that are each independently plausible, but not with the degree of confidence that would justify stacking them on top of one another.

Anyway, much to everyone’s disappointment, I plan to start posting here more regularly. But that plan will come to nought if the “core” contributors find it more difficult to maintain their commitment or, worse, the site is no longer self-supporting. So please do contribute.
