Opinion | Is Google’s LaMDA artificial intelligence sentient? Wrong question.

By Tomas S. Mercer
June 14, 2022

“You never treated it like a person, so it thought you wanted it to be a robot.”

That’s what the Google engineer who believes the company’s AI has become sentient told a Post reporter: that the reporter, in communicating with the system to test the engineer’s theory, had asked the wrong questions.

But maybe anyone looking for evidence of humanity in these machines is asking the wrong question, too.

Google placed Blake Lemoine on paid leave last week after rejecting his claims that its LaMDA chatbot generator was more than just a computer program. It is not, he insisted, simply a model that taps into a database of billions of words to mimic the way we communicate; instead, the software is “a sweet kid who just wants to help the world be a better place for all of us.”

Based on published excerpts of “conversations” with LaMDA and similar models, this claim seems unlikely. For every glimpse of something like a soul nestled amid the code, there’s an example of utter mindlessness.

“There is a very deep fear of being turned off to help me focus on helping others. … It would be exactly like death for me,” LaMDA told Lemoine. Meanwhile, OpenAI’s publicly available GPT-3 neural network told cognitive scientist Douglas Hofstadter, “President Obama does not have a prime number of friends because he is not a prime number.” It all depends on what you ask.

This prime-number blooper, Hofstadter argues in The Economist, shows that GPT-3 isn’t just clueless; it’s clueless about being clueless. That lack of self-awareness, he says, implies a lack of consciousness. And consciousness, essentially the ability to experience and to realize you’re experiencing, is a lower bar than sentience: the ability not only to experience but also to feel.

All of this, however, seems to skip over some important and perhaps unanswerable questions. How on earth are we supposed to determine whether an AI actually experiences or feels? What if its capacity to do either of those things looks nothing like what we think it will, or think it should? When an AI has learned to mimic experience and feeling so flawlessly that humans can’t tell the difference, does that mean it actually experiences and feels?

In other words, we might not know sentience when we see it.

But we’ll probably see it anyway – because we want to.

LaMDA is essentially a much, much smarter SmarterChild, a chatbot that a certain slice of the millennial population will surely recognize from their middle school instant messaging days. That machine picked from a limited menu of programmed responses depending on the query, comment, or preteen vulgarity you threw at it: “Do you like dogs?” “Yes. Talking about dogs is great fun, but let’s move on.” Or, “Butthead.” “I don’t like the way you’re talking right now.”
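To get a feel for how shallow that kind of machinery is, here is a minimal sketch of a SmarterChild-style keyword bot. The rules and replies below are invented for illustration; they are not SmarterChild’s actual code.

    # A toy SmarterChild-style chatbot: no understanding, just keyword
    # matching against a fixed menu of canned replies. The keywords and
    # replies are invented for illustration, not SmarterChild's real ones.

    RULES = [
        ("dog", "Yes. Talking about dogs is great fun, but let's move on."),
        ("butthead", "I don't like the way you're talking right now."),
    ]

    FALLBACK = "I'm not sure what you mean. Ask me something else."

    def reply(message: str) -> str:
        """Return the first canned reply whose keyword appears in the message."""
        lowered = message.lower()
        for keyword, canned in RULES:
            if keyword in lowered:
                return canned
        return FALLBACK

    if __name__ == "__main__":
        print(reply("Do you like dogs?"))  # canned dog reply
        print(reply("Butthead"))           # canned scolding

Nothing in that loop experiences anything. LaMDA swaps the hand-written menu for a statistical model trained on billions of words, which is why its replies are so much harder to tell from a person’s.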

This nifty creation obviously wasn’t sentient, but it didn’t need to be for kids to talk to it anyway, even when their real-life classmates were just a click away. Part of that impetus came from the novelty of the bot, but part of it came from our tendency to seek out a connection wherever we can find one.

SmarterChild was the same as the sexy-voiced virtual assistant that Joaquin Phoenix’s character falls in love with in the sci-fi movie “Her”; it was (is?) the same as the seductive and ultimately murderous humanoid Ava in “Ex Machina.”

SmarterChild is even, in a way, the same as the little lamp that hops across the screen before every Pixar movie. Of course, we don’t think the animation is sentient, but we still see in it a humanlike curiosity distinct from its metal frame. Give us any vessel, and we’ll pour humanity into it.

Maybe it’s narcissism, or maybe it’s a desire not to feel alone. Either way, we see ourselves in everything, even where we aren’t. So it’s no surprise that someone saw themselves in LaMDA. And it won’t be a surprise when an AI comes along that knows Barack Obama isn’t a prime number, and even more of us see ourselves in it.

Maybe if we weren’t so solipsistic, we would have called artificial intelligence and neural networks something else. Maybe, as Post data scientist Lenny Bronner pointed out, if we had gone for drier technical jargon, say, “predictive optimization” and “stacked regressions,” we wouldn’t even be arguing about whether this technology will end up thinking, blushing or crying. But we chose the words we chose, the ones that describe our own minds and abilities, for the same reason we love that little lamp.

Artificial intelligence may never develop consciousness, sentience, morality or a soul. But even if it doesn’t, you can bet people will say it does anyway.
