Blake Lemoine, an engineer in Google’s responsible AI organization, described the system LaMDA (Language Model for Dialogue Applications) as having a perception of, and an ability to express, thoughts and feelings equivalent to those of a human child. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine said.
The risk here is not that the AI is truly sentient but that we are well poised to create sophisticated machines that imitate humans so convincingly that we cannot help but anthropomorphize them — and that large tech companies can exploit this in deeply unethical ways. As should be clear from the way we treat our pets, or how we’ve interacted with Tamagotchis, we are actually very capable of empathizing with the nonhuman. Imagine what such an AI could do if it were acting as, say, a therapist. What would you be willing to say to it, even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?