Friday, February 17, 2023

"The Prompt Box is a Minefield: AI Chatbots and Power of Language" by LM Sacasas

  



Source: The Convivial Society


Welcome to the Convivial Society, a newsletter about technology and culture. In the last installment, I began what will be a series of posts on language under digital conditions. I did not conceive of this installment, however, as the next in that series, although it clearly does take up overlapping themes. This is a brief, rather urgent reflection on the rapidly developing field of AI-powered chatbots. I confess these thoughts are born out of an unusually acute sense of the risks posed by these chatbots as they have been deployed. Perhaps my fears are overstated or misguided. Feel free to tell me so.


In the mid-1960s, the computer scientist Joseph Weizenbaum created the world’s first chatbot. It was a relatively simple program modeled on the techniques of Rogerian psychotherapy, in which the therapist mirrors the patient’s statements, often in the form of a question. Weizenbaum was absolutely clear that ELIZA was not a therapist in any meaningful sense, but, despite his clarity on this point, others, including professionals in the field, reacted to ELIZA as if it were capable of replacing human therapists. Notably, Weizenbaum’s own secretary was so taken by the program that in one instance she asked Weizenbaum to leave the room so that she could speak with ELIZA in private.¹
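Weizenbaum’s point about ELIZA’s simplicity is easy to make concrete. The Python sketch below is an illustrative invention rather than Weizenbaum’s actual program (his Rogerian script was called DOCTOR): a handful of hypothetical pattern rules and pronoun swaps suffice to mirror a user’s statements back as questions.

```python
import re

# A minimal, hypothetical sketch of ELIZA-style Rogerian mirroring.
# These rules are invented for illustration; Weizenbaum's DOCTOR script
# was more elaborate, but the underlying technique is the same:
# match a template, then echo the user's own words back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# First-person to second-person swaps applied to the captured fragment
# before it is echoed back to the user.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default prompt when no rule matches

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```

Everything such a program “says” is recycled from the user’s own words, which is what makes the reactions it provoked, described below, so striking.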

These reactions shook Weizenbaum. “What I had not realized,” he later wrote, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum also wrote that he was “startled to see how quickly and how very deeply people conversing with ELIZA became emotionally involved with the computer and how unequivocally they anthropomorphized it.”

This tendency to anthropomorphize computers came to be known as the ELIZA effect.

If you’ve read about ELIZA lately, it is almost certainly because you were reading something about ELIZA’s far more powerful and versatile descendants like ChatGPT or the new chat function on Microsoft’s search engine Bing, which is also powered by OpenAI’s GPT.

You do not have to look very hard to find some people expressing the fear that these new large language models are becoming sentient and putting us on the path toward non-aligned artificial intelligence which poses an existential threat to humanity. Last summer, for example, Blake Lemoine, an engineer with Google, made headlines by claiming that he believed the company’s AI-powered chatbot, LaMDA, had become sentient. Such concerns are not my interest here.

My fears are related to the compelling, whimsical, and sometimes disturbing ways that these new AI-powered chatbots can simulate conversation, and, more specifically, to the conversational tendencies latent in Microsoft’s application of OpenAI’s technology on Bing (whose internal codename, Sydney, was inadvertently disclosed to a user by the chatbot). You can, if you are curious, peruse Kevin Roose’s long exchange with Bing, which features some striking interactions, including the chatbot’s declaration of love for Roose. You can also take a look at some examples of more disconcerting proclivities gathered here. See here for a video of Bing threatening a user and then immediately deleting its own message.

But while my fears have been heightened by these developments, they are not grounded in the power of the AI tool itself or the belief that these tools are on the path to sentience. Rather, they are grounded in two far more mundane beliefs.

First, that human beings are fundamentally social creatures, who desire to know and be known in the context of meaningful human relationships, ideally built on trust and mutual respect.

Second, that we live in an age of increasing loneliness and isolation in which, for far too many people, this profound human need is not being adequately met.

It is in light of these two premises that I would reconsider how Weizenbaum expressed his disbelief about the way users reacted to ELIZA. Weizenbaum spoke of ELIZA inducing “powerful delusional thinking in quite normal people.” “Delusional,” I suppose, is one way of putting it, but what if the problem was not that normal people became subject to delusional thinking, but that lonely people found it difficult to resist the illusion that they were being heard and attended to with a measure of care and interest?

We anthropomorphize because we do not want to be alone. Now we have powerful technologies, which appear to be finely calibrated to exploit this core human desire.

As an example of both the power of such attachments and the subsequent dangers posed by their termination, consider the recent case of Replika², a companion chatbot with subscription services that include erotic exchanges. When the company suspended services following a legal challenge in Italy, Samantha Cole reported that “users of the AI companion chatbot Replika are reporting that it has stopped responding to their sexual advances, and people are in crisis.” “Moderators of the Replika subreddit,” she added, “made a post about the issue that contained suicide prevention resources, and the company that owns the app has remained silent on the subject.” One user cited by Cole claimed that “it’s like losing a best friend.” “It's hurting like hell,” another wrote. “I just had a loving last conversation with my Replika, and I'm literally crying.”

Replika was not without its problems. You can read the rest of Cole’s reporting for more details. What interests me, however, is the psychological power of the attachments and the nature of a society in which such basic human needs are not being met within the context of human communities. One need not believe that AI is sentient to conclude that when these convincing chatbots become as commonplace as the search bar on a browser, we will have launched a social-psychological experiment on a grand scale, one which will yield unpredictable and possibly tragic results.

As bad as such emotional experimentation at scale may be, I am more disturbed by how AI chat tools will interact with a person who is already in a fragile psychological state. I have no professional expertise in mental health, only the experience of knowing and loving those who suffer through profound and often crippling depression and anxiety.³ In such vulnerable states, it can take so little to tip us into dark and hopeless internal narratives. I care far less about whether an AI is sentient than I do about the fact that in certain states an AI could, bereft of motive or intention, so easily trigger or reinforce the darkest patterns of thought in our own heads.

Frankly, I’ve been deeply unsettled by the thought that someone in a fragile psychological state could have their darkest ideations reinforced by Bing/Sydney or similar AI-powered chatbots. And this is to say nothing of how those tilting toward violence could likewise be goaded into action: a senseless technology, mimicking our capacity for sense, inducing what we call senseless acts of violence. I would speculate that weaponized chatbots deployed at scale could prove far more adept at radicalizing users than YouTube. What I have seen thus far gives me zero confidence that such risks could be adequately managed.

Another sobering possibility arises from the observation of two trajectories that will almost certainly intersect. The first is the emergence of chatbots increasingly likely to convince a user that they are interacting with another human being. The second is the longstanding drive to collect and analyze data with a view to predicting, influencing, and conditioning human behavior. I have been somewhat skeptical of the strongest claims made for the power of data-driven manipulation, particularly in political contexts. But there seems to be a world of difference between a targeted ad or “flood the zone” misinformation on the one hand, and, on the other, a chatbot trained on your profile and capable of addressing you directly while harnessing a far fuller range of the persuasive powers inherent in human language.

In a recent conversation, a friend, reflecting on ChatGPT’s confident indifference to the truth, which is to say its widely noted capacity for bullshitting, referred to the chatbot as an AI-powered sophist. It recalled my own reference a few years back to how “data sophistry,” ever more sophisticated if not obviously successful efforts at data-driven persuasion, had captured the political process. The ancient sophists, however we judge them, understood that language was our most powerful tool for communication and persuasion.⁴ To harness the power of language was to acquire the power to steer the affairs of the community. Of course, in Plato’s view, the sophists’ indifference to the truth, their boast that they could argue either side of an argument with equal skill and vigor, made them a threat to the social order.

It seems useful to frame AI-powered chatbots as a new class of automated sophists, whose indifference to either the true or the good, indeed, their utter lack of intentions, whether malicious or benign, coupled with their capacity to manipulate human language makes them a potential threat to human society and human well-being, particularly when existing social structures have generated such widespread loneliness, isolation, anxiety, and polarization.

From this perspective, I remain foolishly committed to the idea that our best hope lies still in the cultivation of friendship and community through the practice of hospitality. “If I had to choose one word to which hope can be tied,” the social critic Ivan Illich once explained, “it is hospitality.” “A practice of hospitality,” he added, “recovering threshold, table, patience, listening, and from there generating seedbeds for virtue and friendship on the one hand. On the other hand radiating out for possible community, for rebirth of community.” I recognize that this seems wholly inadequate as a response to the challenges we face, but I also think it is, in the literal sense of getting at the root (or radix) of things, the most radical solution we can pursue.

2. My thanks to Scott Hawley for alerting me to this story.

3. On this score, I found David Brooks’ recent column following the loss of his close friend both moving and helpful.

4. “Not all Sophists!” — Isocrates, probably.
