Happy Anniversary Blegging: World Gone Bonkers Edition


Greetings everyone! What a time to be alive. A bit has happened since last time Rob asked me to pop in and happy-birthday-blog for LGM. Back then, in the Before Times, my then-17 y/o Liam was heading off to college and I took a beat to memorialize how parenting affected my politics and blogging and vice versa. Nowadays, after a surprise covid-induced gap semester and four years at Brandeis, Liam is consulting on hedge funds in DC, eyeing law school and doing his own blogging on the emergent properties of artificial intelligence.

His mother, meanwhile, is rethinking how to teach altogether in the world of OpenAI. My graduating seniors question whether jobs will be waiting for them, or whether Liam’s graduating class was the last of the lucky ones. They’re right to be worried, as is Liam, about the interrelationship between media ecosystems like GrokAI, America’s bizarre racial backlash shutting doors to Afghan allies and Venezuelan migrants while opening arms to Afrikaners, and the ability of anyone to discern truth in this inside-out new world.

In his invitation to LGM veterans to write birthday posts, Rob asked us to “look back” on our memorable posts from the Before Times, rethinking them from the viewpoint of these roaring 2020s we’ve bequeathed to our children. Scanning the LGM archives for the comment threads I learned from the most, I realized that, like Liam, I too have had big data, AI, race-based categories, and humanitarian ethics on my mind. For my biweekly column at World Politics Review this week, I’m writing about the way researchers and polling firms risk promulgating racism and atrocity mindsets when asking respondents how okay they are with forcible displacement, genocide, nuclear-bombing cities, torture and the like. The problem with trying to measure these attitudes, of course, is that the measure can actually fuel the sentiment.

Along those lines, to answer Rob’s question, among my favorite LGM dialogues were the threads about the US census categories on race building off Battlestar Galactica actor Edward Olmos’ brilliant UN speech. To jog memories, here’s the speech itself:

At first, my cryptic ‘without comment’-style post – just linking to my census form with “Human” written in for the racial categories – drew condemnation for seemingly minimizing the importance of race-based census data. But after my more measured follow-up, LGM commenters, co-bloggers and I thought together more deeply about how we might measure race and racism without reifying the former or provoking more of the latter. It was some of the best of what LGM offers: the chance to sift, refine and hew rough edges off controversial ideas.

Looking back from today’s viewpoint, that exchange still resonates. Some of the vitriol against affirmative action and DEI we are currently experiencing feels like the backlash I was sensing on the horizon in 2010, and that I think Olmos was speaking to. Make no mistake: progressives are still right to want to measure race-based inequities. I just want us to be smarter about it, though every solution has pitfalls and, as Liam would point out, AI in the mix presents fresh dangers.

There are some signs of hope. To its credit, the US Census Bureau updated its forms last year to be somewhat more inclusive, but the new categories don’t go far enough, and who knows whether they will survive Trump’s DEI purge at all going forward. For example, way back in 2010 I was proposing census measures that capture a more nuanced, less ascriptive view of race, based on open-ended answers instead of “single-choice select.”

Ironically, universities’ response to the right’s wins against affirmative action in college admissions has led to those very kinds of reforms: replacing ‘check the box’ racial data in admissions with open-ended opportunities for students to signal in their college essays what diversity they bring to campus. It remains to be seen how this is shaping access to higher education while essays remain filtered through human reviewers’ implicit biases, or whether it will have a less culturally polarizing influence on campus environments – if that impact can even be isolated from the myriad other upheavals in the academy today.

In other respects, data analytics hasn’t moved as far as it might in measuring the complexities of racial bias and race-based policy – or social reality in general – in ways that go deeper, highlighting disparities without reifying race in the way that surely helped sow the seeds of the MAGA backlash. The social sciences have methods for ingesting and analyzing such open-ended data, and in theory human-AI interaction could even make this process more rigorous, more organic, more user-friendly and more scalable. But that’s not what’s happening yet.

Even in my field, where we should know better, public opinion researchers continue to gather data mostly using check-the-box-style responses rather than open-ends, a problem reflected in the tools available for training and implementing public opinion research. In the years since I left LGM, I’ve founded a lab at UMass Amherst specializing in asking open-ended survey questions of understudied populations: parents of daughters in Taliban-controlled Afghanistan, civilian families in Ukraine, US veterans. The tools we need to do this work the way we want to, and at scale, are not really even on the market; the ones that are, especially those integrating AI, mostly dumb down rather than scale up human ability to analyze content.

Meanwhile, governments continue to rely on reified closed-ended ascriptive categories – or, in Trump’s case, to use them to drive political wedges, with arguments that they be tossed altogether or applied selectively to whites. Worse are cases in which governments use the same data for the opposite goal, just as Olmos worried they could: to target racial and gender minorities rather than to even out inequities. The way AI is currently being leveraged is not helping here any more than it does in the consumer-insights world: last year Israel outsourced kill-list aggregation to algorithms, allowing the algorithm to treat most ‘military-age males’ as targets, thereby dumbing down IDF targeteers’ ability and responsibility to implement the laws of war accurately.

Today, as I watch the collapse of the rule of law, the packing of migrants into the equivalent of cattle cars, and talk of martial law, my mind is turning to how to gather opinion data from members of the active-duty US armed services who may find themselves, as in South Korea, the last best guardrail against totalitarian overreach – in ways that will trigger rather than dumb down their human cognition and humanitarian training. I’ll likely ask them what kind of orders they anticipate feeling compelled to disobey (rather than which crimes they’d support); I’ll surely develop questions they can answer in their own words, not by checking a box, similar to my proposal on the census. I may even build my own, better version of tools to analyze them in ways that don’t offer AI more power than its due… much like Commander Adama resisting networked computers on the Galactica.

But I’m grappling with whether to include race as a demographic question on surveys like this at all. Not because I doubt its overall importance to some research. Certainly not because the Trump administration canceled my NSF grant on Afghanistan due to our ethnicity and gender questions. Just because of space limits and my interest in other questions more obviously relevant to my study: education, military rank, pay grade, branch of service; religiosity maybe; whether the respondent has children; or who they voted for in the last election. If race were a variable here, I’d care less about the race of the respondent – the standard way researchers capture this – and more about adding treatments into the questions about the racial makeup of protesters in scenarios where enlisted personnel might imagine themselves disobeying orders to shoot.

To me, it’s questions about the impact of allegedly color-blind international humanitarian standards that matter. My colleagues and I have run experiments showing that, regardless of respondents’ race or racial ideology or the race of the Other in the scenario, just asking people to think about existing international laws on how we treat All Others reduces citizens’ willingness to fire-bomb enemy civilians or countenance the use of nuclear weapons. My travels on the Southern border in 2019, documented at LGM, found a similar effect: dialoguing with individual concentration camp guards about US obligations under international refugee law made them think differently about their role in the system. Other political scientists have findings to the same effect: average Americans care about international law even if Trump doesn’t, and civilian engagement with armed actors saves lives.

But my team also found the reverse: survey researchers, even well-meaning social scientists, can inadvertently prime citizens of all races to go along with war crimes in the way we ask these kinds of questions and disseminate the answers. That’s because human respondents are more likely to think war crimes are okay when pollsters treat them as legitimate policy questions, and also more likely to think they’re okay when the media reports surveys showing fellow citizens’ comfort levels with atrocity. It’s a bit of a quantum social-science effect we can’t avoid but must control for.

So I still think about Edward Olmos’ critique: humanity needs ways to measure racial inequity without amplifying racism, ways to measure the effects of human rights norms without amplifying voices who would undermine them, and – as Battlestar Galactica taught us – ways to leverage AI for all this with guardrails to ensure it doesn’t do the reverse. If we get it wrong, we’re not only complicit in the horrors of the world but fuel political skepticism of science and data itself.

I wonder how LGM commenters are thinking about these issues today, and, for those of you who remember and/or commented on that original post, how you view our old dialogue in light of our present moment. The attacks on science, on ‘leftists,’ on those who stand up against atrocities make getting this right doubly tricky and doubly important.

Liam and I will travel this month for his sister’s wedding overseas; in an exchange I’d have Nugget-Blogged about if I were still an LGM regular, he asked me whether I thought I’d be able to get back into the country “considering what you write about politics.” I reassured him I fully expect my whiteness, gender and citizenship papers to offer me layers of protection others lack, and yet in a world where scientists and professors are ourselves targets of despots, where international law professors specifically are labeled traitors worthy of execution or blacklisted by states… well, it was a smart enough question.

But my fellow humans can’t shed skin color, nor will I delete my apps or hide my commitment to international human rights and humanitarian law even in these troubled times. If I someday disappear into a black site for it, I trust my friends here at LGM to publish my smuggled-out observations from prison. Far more likely – given what Robert Reich rightly characterizes as heartening triumphs so far in civic resistance to Trump 2.0 – I’ll be documenting burgeoning protest signs at the No Kings March, gumming up ICE officials’ efforts to deport my students, and asking the kind of research questions whose answers make it easier, not harder, to disrupt atrocity in its tracks – even if it’s just by getting members of the US armed forces to stop and think and write into a confidential web portal about what the concept of ‘manifest unlawfulness’ means to them.

Meanwhile, I’ll be continuing to listen and learn from the great community at LGM in the process – about data, science, law, ethics and above all, to quote Olmos, “humanity.” Cheers to twenty-one years of great conversations; please do support the blog, and I hope to see some of you at these sites of resistance in the weeks and months ahead.
