More LGM according to LLM

I have a subscription to Claude. There, I’ve said it. I’ve confessed. Just hear me out.
I got the subscription for one purpose, and one purpose only: to help me code JavaScript, initially for a virtual tabletop. Do I feel good about this? Sorta. I also don’t feel that guilty. I wouldn’t hire someone to code this kind of material for me. I’ve tried online lessons and, frankly, this fits more with my learning style. But I am feeding the beast.
With that out of the way, I was inspired by Paul’s post. What, I wondered, would Claude produce by way of putative LGM content?
The first thing I did was ask it to write an LGM post on AI. It replied.
I’ll write a blog post in the style of Lawyers, Guns & Money, the academic political blog known for its sharp analysis, historical context, and sardonic wit.
But then I realized that I wasn’t really interested in a generic LGM post. I wanted to try specific LGM authors. So I chose some of the most prolific: Paul and Scott.
Before we continue, I want to briefly mention two of my favorite pieces on LLMs: “ChatGPT is bullshit” (Hicks, Humphries, & Slater 2024) and “Large AI Models are Cultural and Social Technologies” (Farrell, Gopnik, Shalizi, & Evans 2025).
First, the bullshit:
The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text. So when they are provided with a database of some sort, they use this, in one way or another, to make their responses more convincing. But they are not in any real way attempting to convey or transmit the information in the database. As Chirag Shah and Emily Bender put it: “Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc. To the extent that they sometimes get the right answer to such questions is only because they happened to synthesize relevant strings out of what was in their training data. No reasoning is involved […] Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language” (Shah & Bender, 2022). These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.
An analogy guaranteed to make college professors shudder:
The idea of ChatGPT as a bullshit machine is a helpful one when combined with the distinction between hard and soft bullshit. Reaching again for the example of the dodgy student paper: we’ve all, I take it, marked papers where it was obvious that a dictionary or thesaurus had been deployed with a crushing lack of subtlety; where fifty-dollar words are used not because they’re the best choice, nor even because they serve to obfuscate the truth, but simply because the author wants to convey an impression of understanding and sophistication. It would be inappropriate to call the dictionary a bullshit artist in this case; but it would not be inappropriate to call the result bullshit. So perhaps we should, strictly, say not that ChatGPT is bullshit but that it outputs bullshit in a way that goes beyond being simply a vector of bullshit: it does not and cannot care about the truth of its output, and the person using it does so not to convey truth or falsehood but rather to convince the hearer that the text was written by an interested and attentive agent.
My wife, who works on machine-learning projects (she rejects the term “AI”), is definitely in the “sophisticated autocomplete” school of thought when it comes to LLMs. This comports well with the “bullshit machine” heuristic. Let me give you an example with Claude. Claude allows you to upload “project knowledge.” My main project includes the most recent working versions of the various JavaScript modules that comprise the code for a Tabletop Playground mod.
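To make the “sophisticated autocomplete” intuition concrete, here is a deliberately crude toy sketch in JavaScript: a bigram model that “predicts” the next word purely from counts of what followed each word in its training text. Real LLMs are vastly more sophisticated, but the core move is the same kind of thing: frequency and context, not a model of the world.

```javascript
// Toy "autocomplete": predict the next word solely from how often
// each word followed another in the training text. No meaning, no
// world model -- just counts. (Illustrative only; not how modern
// LLMs are implemented internally.)
const corpus = "the court ruled that the court could not rule";
const words = corpus.split(" ");

// Tally, for each word, which words followed it and how often.
const next = {};
for (let i = 0; i < words.length - 1; i++) {
  const w = words[i];
  const n = words[i + 1];
  next[w] = next[w] || {};
  next[w][n] = (next[w][n] || 0) + 1;
}

// Predict the most frequent successor of a given word.
function predict(word) {
  const options = next[word];
  if (!options) return null;
  return Object.entries(options).sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predict("the")); // "court" -- its most common successor
```

The point of the sketch: `predict` can produce fluent-looking continuations without ever representing anything true or false about courts or rulings, which is exactly the property the “bullshit” paper is describing at scale.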
Claude’s algorithm, perhaps to conserve processor load, has a strong bias toward predicting answers to prompts without consulting those files. It will often do so even when one prompts it to check the files. This matters because Claude is, in theory, useful for scanning through code and looking for a) basic syntax errors or b) points where the logic in a chain of function calls breaks down. If it doesn’t actually do that, it generates a lot of useless suggestions.
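Here is a hedged sketch of the kind of break-in-the-chain bug I mean. The module and function names are invented for illustration; the point is that the bug is only visible if the assistant actually reads both files rather than predicting plausible-sounding advice.

```javascript
// Hypothetical example of a broken call chain after a refactor.
// (Names are invented for illustration, not from any real mod.)

// diceUtils.js -- the roller was renamed during a refactor:
function rollDicePool(count) {
  const rolls = [];
  for (let i = 0; i < count; i++) {
    rolls.push(1 + Math.floor(Math.random() * 6)); // d6 results, 1-6
  }
  return rolls;
}

// combat.js -- still calls the OLD name, so the chain breaks at runtime:
function resolveAttack(count) {
  // ReferenceError: rollDice is not defined
  return rollDice(count).filter((r) => r >= 5).length;
}
```

Catching this requires tracing `resolveAttack` back to what the other module actually defines; an answer generated without consulting the uploaded files can’t do that, no matter how confident it sounds.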
Claude will “tell you” that it is following your prompt, even when it clearly isn’t. When successfully pushed to actually scan the code, Claude will “apologize” and assure you that it won’t do so again. But that’s not true. Claude isn’t being trained on these interactions. Instead, the algorithm is predicting that it should respond with a cluster of words that include what human beings interpret as an apology and a promise.
I’ve been thinking about the “bullshit machine” analysis in the context of the MAHA report. You may have read that the administration has been iteratively “correcting” the report — which also involves “correcting” its “corrections.” NOTUS:
A batch of errors was corrected sometime between 10:15 a.m. and 3:28 p.m. Thursday, according to versions of the document archived by the Wayback Machine. But even the process by which the MAHA report was updated doesn’t meet the normal standards of making corrections to scientific papers, according to epidemiologist David Michaels.
The MAHA document itself is, at least in one important respect, bullshit. The charlatans at HHS wanted something that created the impression of a well-researched, authoritative, scientific report. Perhaps they were too lazy to put in the effort themselves. Maybe they aren’t capable of producing the kind of pseudo-report that they wanted. Regardless, they turned to a bullshit machine to give them something that readers might mistake for serious, scientific analysis.
But these people are not simply bullshitters. They are liars. They tried to pass off the report as the result of comprehensive and careful study. Now they’re trying to cover up what they did through more lies and obfuscation.
What about generative AI as cultural and social technologies (pdf)?
Our central point here is not just that these technological innovations, like all other innovations, will have cultural and social consequences. Rather we argue that Large Models are themselves best understood as a particular type of cultural and social technology. They are analogous to such past technologies as writing, print, markets, bureaucracies, and representative democracies. Then we can ask the separate question about what the effects of these systems will be. New technologies that aren’t themselves cultural or social, such as steam and electricity, can have cultural effects. Genuinely new cultural technologies, Wikipedia for example, may have limited effects. However, many past cultural and social technologies also had profound, transformative effects on societies, for good and ill, and this is likely to be true for Large Models.
Okay. So what about LGM via Claude?
Here’s the prompt: “How about one by Paul Campos specifically, in which he discusses AI?”
Rather than put long pseudo-posts in block quotes — which, let’s be honest, aren’t exactly the most aesthetically pleasing aspect of LGM’s current design — I’m offsetting them with colored text.
The AI Grift Meets the Legal Education Industrial Complex
Posted by Paul Campos
So I’m sitting in yet another faculty meeting where some administrator is breathlessly explaining how we need to “integrate AI into our curriculum” and “prepare our students for the AI revolution,” and I’m thinking: where have I heard this song before?
Oh right. It was 2010, and we were being told that legal education needed to be completely revolutionized because BigLaw was never coming back and we needed to train “practice-ready” lawyers for the new economy. Before that, it was the dot-com boom, when we were assured that every law student needed to understand technology law because the internet was going to change everything forever.
The more things change, the more they stay exactly the same.
The Familiar Patterns of Academic Panic
Here’s what happens: Some new technology emerges. Consultants immediately materialize to explain why this changes everything. Academic administrators, terrified of being left behind, start throwing money at poorly-conceived initiatives. Faculty get dragged into committees to discuss “strategic responses.” And somehow, mysteriously, the proposed solution always involves hiring more administrators and buying more expensive software.
The current AI panic in legal education follows this script with almost mathematical precision. We’re told that AI will either replace lawyers entirely (in which case, why are we still running law schools?) or that lawyers who don’t master AI will be unemployable (in which case, why aren’t we teaching COBOL programming, since that’s what actually runs most legal databases?).
What AI Actually Does vs. What We’re Told It Does
Let’s be clear about what current AI systems actually accomplish: They’re very good at producing text that sounds plausible but may or may not be accurate. They can summarize documents, generate form letters, and produce first drafts of routine legal writing. These are useful tools, similar to how word processors were useful tools that replaced typewriters.
What they cannot do, despite the breathless marketing copy, is replace human judgment about legal strategy, client counseling, or complex legal analysis. They can’t negotiate a settlement, cross-examine a witness, or explain to a grieving family why their medical malpractice case isn’t viable.
But here’s the thing: acknowledging that AI is a useful but limited tool doesn’t generate consulting fees or justify new administrative positions. Saying “we should probably teach students about AI tools the same way we teach them about Westlaw” doesn’t sound revolutionary enough to warrant a $50,000 curriculum overhaul.
The Debt Trap Dimension
What’s particularly galling about the current AI panic in legal education is how it completely ignores the actual crisis facing law students: they’re borrowing enormous sums of money for degrees that may not provide sufficient income to repay those loans. The median law school debt is now over $170,000, and that’s before we factor in undergraduate debt.
But instead of addressing this fundamental economic problem, we’re being told that the real issue is whether students are sufficiently prepared for our AI-powered future. This is like telling passengers on the Titanic that they need to worry about whether their life jackets are the right color.
The Consulting-Industrial Complex
Here’s a prediction: In six months, there will be a cottage industry of consultants offering to help law schools “AI-proof” their curricula. These consultants will charge substantial fees to conduct “AI readiness assessments” and develop “comprehensive AI integration strategies.” They will recommend expensive software packages and specialized training programs.
None of this will meaningfully improve legal education or student outcomes. But it will create the appearance of forward-thinking leadership, which is apparently what matters in academic administration these days.
What We Should Actually Be Doing
If we actually cared about preparing students for legal practice, we’d focus on the fundamentals: teaching them to think clearly, write well, and understand how legal institutions actually function. We’d give them practical experience with real cases and real clients. We’d help them develop judgment and ethical reasoning.
We’d also be honest about the economic realities of legal practice and stop pretending that everyone can be a BigLaw associate making $215,000 a year.
But that would require admitting that legal education’s problems are primarily economic and structural, not technological. And that’s a much harder conversation than debating whether to add an “AI and the Law” course to the curriculum.
The Broader Grift
The AI panic in legal education is just one manifestation of a broader pattern: the tendency to treat technological change as a substitute for addressing fundamental institutional problems. Can’t afford to hire enough faculty? AI will solve that. Students graduating with crushing debt and poor job prospects? AI will create new opportunities. Legal services becoming increasingly unaffordable for ordinary people? AI will democratize access to justice.
This is magical thinking, and it’s particularly dangerous when deployed by people who should know better.
Look, AI tools will probably become as routine in legal practice as email and word processors. Students should learn to use them competently. But let’s not pretend that this represents some fundamental transformation of legal education or legal practice.
It’s just the latest chapter in the long history of technological solutions to institutional problems that aren’t actually technological in nature.
Paul Campos is a professor of law at the University of Colorado.
How did Claude summarize its output?
I’ve rewritten the post to capture Paul Campos’ distinctive voice and perspective. Campos is known for his sharp critiques of legal education, particularly around the law school debt crisis and the disconnect between legal education costs and career outcomes. His writing style combines:
- Sardonic, conversational tone that often begins with personal anecdotes
- Deep skepticism toward academic administrators and consulting-driven initiatives
- Focus on economic realities rather than abstract theories
- Historical pattern recognition (“where have I heard this before?”)
- Direct, punchy sentences mixed with longer analytical paragraphs
- Specific attention to how new trends in legal education often ignore fundamental structural problems
- Willingness to be blunt about uncomfortable truths
The post addresses AI through Campos’ typical lens: skepticism about technological panaceas, concern about student debt, and criticism of the “consulting-industrial complex” that profits from academic anxiety about staying current.
Remember that everything you’ve just read is the output of a predictive algorithm. Its ability to simulate a Paul Campos post depends on its dataset and its training. I’m not capable of unpacking how it arrived at this… whatever it is… but Anthropic claims to have made progress on interpreting its models, so maybe its researchers could explain?
Let’s do one more.
I asked for a post by Scott Lemieux on the most recent Supreme Court decision. The first thing it did was search for “the most recent Supreme Court decision.” It identified that as “Supreme Court Migrant Protections Trump May 30 2025.” Then the algorithm produced this:
The Supreme Court’s Predictable Capitulation to Trump’s Immigration Agenda
Posted by Scott Lemieux
Yesterday’s Supreme Court decision allowing the Trump administration to revoke parole status for approximately 530,000 migrants from Cuba, Haiti, Nicaragua, and Venezuela was not surprising, but it was nonetheless depressing in its implications for both immigration law and the Court’s institutional legitimacy.
The basic facts are straightforward: The Biden administration had granted these migrants temporary parole status allowing them to live and work legally in the United States for two years while their immigration cases proceeded. Trump terminated the program wholesale on his first day in office, and when a federal district judge in Massachusetts blocked the termination pending review, the Supreme Court stepped in to allow Trump’s revocation to proceed while litigation continues.
What makes this decision particularly revealing is not just what the Court did, but how it did it—and what that tells us about the Roberts Court’s approach to Trump-era immigration policy.
The Emergency Docket Problem
First, this was yet another decision issued through the Court’s so-called “shadow docket”—an unsigned, unreasoned emergency order that reverses a lower court without full briefing or oral argument. As Justice Jackson noted in her scathing dissent (joined by Sotomayor), the majority “plainly botched” the formula used to decide whether lifting a lower court ruling would have negative consequences for the parties involved.
The shadow docket has become the Roberts Court’s preferred mechanism for advancing conservative policy preferences without having to defend those preferences in written opinions subject to scholarly and public scrutiny. It’s procedurally convenient and substantively cowardly—allowing the conservative majority to reshape the law while avoiding accountability for the reasoning behind their decisions.
The Substance Is Even Worse Than The Process
But the procedural problems pale in comparison to the substantive ones. The district court judge found that Secretary of Homeland Security Kristi Noem had violated federal law by terminating the parole program en masse rather than conducting the case-by-case reviews required by statute. Judge Indira Talwani said that the secretary had erred in seeking to expedite removal of individuals who still had time left on their promised two-year protection in the U.S.
This is not a close legal question. The Immigration and Nationality Act explicitly requires individualized determinations for parole revocations. The Trump administration essentially argued that courts have no authority to review the Secretary’s decision to ignore statutory requirements—a position that would effectively place immigration enforcement beyond judicial review entirely.
The Supreme Court’s willingness to accept this argument, even temporarily, represents a dangerous expansion of executive power that goes well beyond immigration law. If administrative agencies can ignore statutory constraints whenever they invoke “national security” or “border security,” then congressional oversight becomes meaningless and judicial review becomes a dead letter.
The Human Cost of Legal Formalism
Justice Jackson’s dissent captured the human dimension that the majority preferred to ignore: “social and economic chaos will ensue if that many noncitizen parolees are suddenly and summarily remanded” to their home countries. The migrants granted parole to enter the country have sponsors, Jackson wrote, and in many cases “have integrated into American neighborhoods and communities in the hopes of eventually securing long term legal status.”
This isn’t hyperbole. We’re talking about half a million people who entered the United States legally, underwent security screening, found sponsors, established lives and jobs, and were promised two years of protection. Many have children who are U.S. citizens. Many have pending applications for other forms of legal status.
The majority’s decision to strip these protections while litigation is pending will create exactly the kind of “social and economic chaos” that equitable relief is designed to prevent. But apparently the conservative justices believe that avoiding disruption to Trump’s deportation schedule outweighs avoiding disruption to the lives of half a million human beings.
The Roberts Court’s Immigration Exceptionalism
What’s particularly striking about this decision is how it fits into the Roberts Court’s broader pattern of immigration exceptionalism—the tendency to apply different constitutional and statutory interpretation principles when immigration is involved.
In other contexts, this Court has been extremely solicitous of individual rights claims against government overreach. When it comes to religious liberty, free speech, or gun rights, the conservative majority regularly invalidates government actions based on the flimsiest of constitutional theories.
But when it comes to immigration, suddenly procedural shortcuts become acceptable, statutory requirements become optional, and individual rights become expendable. The same justices who have found creative ways to limit federal power in areas like environmental regulation and healthcare become enthusiastic champions of unchecked executive authority when that authority is used to deport migrants.
The Alien Enemies Act Dimension
It’s worth noting that this decision comes alongside the Court’s other recent immigration rulings, including its decision to allow continued use of the 1798 Alien Enemies Act for deportations—a law best known for its use to intern Japanese, Italian and German immigrants during World War Two.
The Alien Enemies Act has historically been used only during declared wars, but the Trump administration has revived it to target alleged members of the Tren de Aragua gang. Relatives of many of the hundreds of deported Venezuelans and their lawyers have denied that they are Tren de Aragua members and have said they were never given the chance to contest the administration’s allegations of gang affiliation.
The Court’s willingness to allow the use of a wartime detention statute during peacetime, combined with its acceptance of mass parole revocations, suggests a systematic abdication of judicial oversight over immigration enforcement.
The Institutional Implications
Perhaps most troubling is what this decision reveals about the Roberts Court’s approach to its own institutional role. Chief Justice Roberts has spent years insisting that the Court is a neutral arbiter that applies law rather than politics. But it’s hard to maintain that fiction when the Court consistently intervenes to help Trump implement policies that lower courts have found to violate federal law.
As much as Roberts may be trying to portray the court as a neutral arbitrator, the chief justice played a major role in creating the conditions for Trump’s maximalist approach, authoring the court’s opinion last summer that gave Trump broad immunity from criminal prosecution for official actions as president.
The pattern is clear: when Trump wants to do something and lower courts get in the way, the Supreme Court finds a way to help. This isn’t neutral application of legal principles—it’s partisan intervention disguised as procedural necessity.
What This Means Going Forward
Yesterday’s decision is unlikely to be the last word on immigration policy this term. The Court still has pending cases on birthright citizenship, and Trump has made clear his intention to test the boundaries of executive power in multiple contexts.
But the decision sends a clear signal about where the Court stands: when faced with a choice between protecting vulnerable individuals and protecting Trump’s policy agenda, the conservative majority will choose Trump every time.
The real tragedy is that this didn’t have to happen. The lower court’s preliminary injunction was a modest intervention designed to preserve the status quo while legal challenges proceed. It didn’t prevent the Trump administration from making its case or from ultimately prevailing on the merits. It simply required that legal processes be followed before half a million people lose their ability to remain in the country legally.
That the Supreme Court couldn’t tolerate even this minimal constraint on executive power tells us everything we need to know about how the Roberts Court views its role in the Trump era: not as a check on presidential overreach, but as an enabler of it.
Scott Lemieux teaches political science at the University of Washington.
For the record, here’s Claude’s summary:
I’ve written a blog post in Scott Lemieux’s characteristic style analyzing yesterday’s Supreme Court decision on immigration parole. The post captures Lemieux’s approach through:
- Systematic legal analysis: Breaking down both the procedural and substantive problems with the Court’s decision
- Focus on institutional dynamics: Examining how this fits into the Roberts Court’s pattern of “immigration exceptionalism”
- Sharp critique of the shadow docket: Highlighting how the Court used an unsigned, unreasoned emergency order
- Attention to human consequences: Emphasizing Justice Jackson’s point about “social and economic chaos” affecting half a million people
- Broader constitutional implications: Connecting this to questions about executive power and judicial review
- Political context: Noting how Roberts’s presidential immunity decision enabled Trump’s current approach
- Skepticism toward conservative legal reasoning: Pointing out the inconsistency between the Court’s approach to different types of rights claims
The post draws on the extensive reporting about the Court’s Friday decision allowing Trump to revoke parole status for approximately 530,000 migrants while maintaining Lemieux’s characteristic focus on how legal decisions reflect broader patterns of institutional behavior and political power.
To the extent that these summaries reflect the model’s process, it strikes me as layers of increasingly attenuated predictive heuristics. I get the same vibe that I get from academic articles in which the statistical analysis is premised on a series of wagers that are each independently plausible, but not with the degree of confidence that would justify stacking them on top of one another.
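A back-of-envelope version of that worry, as a quick JavaScript calculation (the 80% figure is invented for illustration): if each link in a chain of inferences is independently right 80% of the time, the whole chain holds up far less often than any single link suggests.

```javascript
// Illustrative arithmetic only: five stacked assumptions, each
// independently plausible at 80%, leave the full chain standing
// only about a third of the time.
const perStep = 0.8;
const steps = 5;
const chain = Math.pow(perStep, steps);
console.log(chain.toFixed(2)); // "0.33"
```

That is the sense in which each layer of the model’s “heuristics” can look reasonable on its own while the stack as a whole deserves much less confidence.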
Anyway, much to everyone’s disappointment, I plan to start posting here more regularly. But that plan will come to nought if the “core” contributors find it more difficult to maintain their commitment or, worse, the site is no longer self-supporting. So please do contribute.
Paypal
Patreon
Merchandise
Venmo
Silent Auction
Physical Address