Koko's GPT-3 Experiment
What happens when human emotional support is secretly turned over to an AI?
In October, Koko, an emotional support chat service based in San Francisco, ran an experiment in which GPT-3 wrote responses to individuals needing support. Humans could edit the responses, but they weren't always the authors. The company, and the experiment, were conceived by founder Robert Morris.
Apparently, this was a human-machine collaboration that reached 4,000 people using the service. But there's no disclosure of how much of these exchanges reflected the sentiment of the human behind the curtain and how much was rote copy generated by the AI.
As you might guess, the experiment evolved into a Cat 5 shitstorm after Morris casually disclosed the experiment on Twitter. The thread is linked below.
A few thoughts:
The AI industry’s ruthless pragmatism
Personally, I found this really shocking. There are well-established standards for conducting trials with human subjects. The concept of informed consent is deeply rooted in medicine, with legal and moral precedent. It stems from a history in medicine where human subjects were kept in the dark about what was done to them.
What's interesting about this thread is the initial sense that experimenting on humans isn't even a problem. In fact, it's discussed with the casual banter of a fourth grader experimenting with tomato plants: What happens when one gets sun and the other doesn't?
But over the course of the thread we see the dawning realization that experimenting on emotionally fragile humans is something that may need more introspection. And in a miraculous twist, the thread culminates in a quote from Sherry Turkle (the MIT professor critical of our dependency on technology). I'm not sure whether this represents a true reversal in perspective or a desperate attempt to salvage Koko from a stunt gone terribly wrong.
The mindset behind this event was captured nicely by Kate Crawford in The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence:
The AI industry has fostered a kind of ruthless pragmatism, with minimal context, caution, or consent-driven data practices while promoting the idea that mass harvesting of data is necessary and justified for creating systems of profitable computational “intelligence.”
The most remarkable thing is that people don’t seem that concerned about this. It’s like we’ve stepped aside to let stuff like this happen. This is sometimes called technological determinism — the idea that emerging technology sets the agenda, not us. I’ll write more on this later, so stay tuned.
Koko, like warm chocolate and a Disney movie
In a world where healthcare is a 'space' and patients are 'users,' language is critical to every startup’s success. And the naming of AI technology is carefully crafted to personify it.
With Koko, we get the mashup of a close friend and a fuzzy stuffed animal. Tufts' Daniel Dennett has suggested AI creators have outfitted their machines "with cutesy humanoid touches, Disneyfication effects that will enchant and disarm the uninitiated."
True to form, this is from the Koko site: We chose the name “Koko”, because it seemed to bring forth positive associations (like hot cocoa or “Koko” the gorilla), without being overly bright-sided (like “Happify” or something).
Consider your 14-year-old daughter disarmed.
Is this what you want for your child?
But I have to ask this question: Is this what you want for your child?
This question cuts to the core for me. As a pediatrician who keeps children top of mind, this is almost a touchstone question for the adoption of any new technology. Be it this app or any other tech-mediated behavioral health intervention, I answer with a clear 'no.'
And I challenge anyone in the comments to tell me that they would want an algorithmically generated piece of copy tending to their daughter during her moment of crisis.
I'm not alone in my defense of our most vulnerable. It turns out the most powerful in Silicon Valley are militant about protecting their own kids from the products they create and sell. But your kids? It seems that's another story.
Caring just enough to create the appearance of connection
What AI-mediated dialog like this demonstrates is that we've sold human care for the fast and cheap. We've given up on human connection for the illusion of a relationship with our tools. Sherry Turkle in Alone Together calls this our robotic moment. "At the robotic moment, the performance of connection seems connection enough." (italics are my emphasis). And in this case, the automated words on a screen are enough to create the appearance of care.
The need to even consider technology like this ultimately represents a human failure. A failure of family, community, and health services to provide for us. So much of this begins with tech-mediated disconnection and isolation. Personal desperation for real human connection is both cause and result.
More technology will not make us more human. And it won't fix our problems. Improved algorithms by Koko's developers will never change the fact that this program is a smoke and mirrors surrogate for what humans most need from each other.