
The customer is always wrong


“I know districts in which the young people prostrate themselves before books, and like savages kiss their pages, although they cannot read a single letter.”

Jorge Luis Borges, “The Library of Babel.”

The historian David Bell explores (gift link) some of the parallels between the claims of 18th-century Enlightenment thinkers and contemporary proponents of AI. He concludes that, in the end, the parallels break down in an ominous way:

It is here, with this question of engagement, that the comparison between the Enlightenment and A.I.’s supposed “second Enlightenment” breaks down and reveals something important about the latter’s limits and dangers. When readers interact imaginatively with a book, they are still following the book’s lead, attempting to answer the book’s questions, responding to the book’s challenges and therefore putting their own convictions at risk.

When we interact with A.I., on the other hand, it is we who are driving the conversation. We formulate the questions, we drive the inquiry according to our own interests and we search, all too often, for answers that simply reinforce what we already think we know. In my own interactions with ChatGPT, it has often responded, with patently insincere flattery: “That’s a great question.” It has never responded: “That’s the wrong question.” It has never challenged my moral convictions or asked me to justify myself.

And why should it? It is, after all, a commercial internet product. And such products generate profit by giving users more of what they have already shown an appetite for, whether it is funny cat videos, instructions on how to fix small appliances or lectures on Enlightenment philosophy. If I wanted ChatGPT to challenge my convictions, I could of course ask it to do so — but I would have to ask. It follows my lead, not the reverse.

By its nature, A.I. responds to almost any query in a manner that is spookily lucid and easy to follow — one might say almost intellectually predigested. For most ordinary uses, this clarity is entirely welcome. But Enlightenment authors understood the importance of having readers grapple with a text. Many of their greatest works came in the form of enigmatic novels, dialogues presenting opposing points of view or philosophical parables abounding in puzzles and paradoxes. Unlike the velvety smooth syntheses provided by A.I., these works forced readers to develop their judgment and come to their own conclusions.

What Bell is talking about is related to the broader problem of turning education into a consumer-driven for-profit (or quasi-profit because of our tax code) activity. If the central axiom of consumer capitalism is that the customer is always right, the whole basis of pedagogy might be said to be the opposite: the customer is always wrong, or at least often wrong, in ways that are not flattering to their self-esteem.

In a society in which money is basically God, how do we decouple the need for enlightenment and edification from a phony discourse in which students are told constantly that they are asking great questions, because it’s profitable to lie to them? That’s a great question.

Thank you for your attention to this matter.
