Bing’s AI chatbot doesn’t think like a human. Neither does your dog.


Unlike Microsoft’s New Bing, that buzzworthy bot integrated with OpenAI technology, my spaniel Hobbes cannot speak in natural-language sentences. But that has not stopped me from feeling like Hobbes actually gives a set of verbal responses to a range of inputs.

Input: I arrive home after five hours. Output: OMG, long-lost pack member has returned from the wilderness! Let’s party!

Input: Mail carrier approaches. Output: Alert! Murderer! See them drop package and flee!

Input: I step outside and come back in five minutes. Output: OMG, long-lost pack member has returned from the wilderness! Let’s party!

What I’m doing is classic anthropomorphism: “attributing human characteristics, emotions, and behaviors to non-human entities,” as ChatGPT put it when I asked for a definition (one that reads suspiciously like the first sentence of the Wikipedia page on anthropomorphism; phoning it in, ChatGPT!).

Hobbes’ brain is processing a swirl of sight, sound, and (especially) smell in a way that I, a non-canine, will never truly understand. But my ability to predict his responses — and to nudge them with training — leads me to imagine cartoon speech balloons above that dog’s adorable head.

[Image: A cute Springer spaniel staring at the camera. Caption: Canine input: NOSE. Human output: BOOP. Credit: Chris Taylor / Mashable]

Right now, millions of us are doing something very similar with AI chatbots. We look at weird responses elicited by journalists and other early beta testers, and we attribute human motives. Entertainment has trained us to spot a science fiction movie villain; we see a science fiction movie villain. New Bing is trying to gaslight us! It’s an emotionally manipulative liar! It’s evil! It’s creepy! It tried to break up a reporter’s marriage!

Well, no. The OpenAI magic trick is a clever one, admittedly, but it’s still just a bunch of machine-learning algorithms. These are some of the smartest things we’ve ever built, and at the same time they are so dumb they make Hobbes’ cotton-candy brain look like Einstein’s. One recent study described AI’s trial-and-error machine learning behavior as similar to that of pigeons. We’re literally talking bird brains.

And if this isn’t too many animals for one article, think of New Bing as a parrot. Because it is essentially quoting us back at us. Which can get very annoying very fast, or it can feel oddly pleasant.     

A brief history of human-made intelligence

Humans are anthropomorphization engines. As babies we look for faces in our environment as soon as we can focus, and we don’t ever stop (see the many “faces in things” Instagram accounts for examples of how this engine is working in your brain right now). We anthropomorphize animals, rocks, clouds, the sun, the moon, and the stars. That’s what led to pretty much all mythology, not to mention astrology.

It’s what led wolves to glom onto us by learning, over thousands of generations, that when they sit right in front of us and give us those sad eyes, we are, according to dog science, approximately 1,000 times more likely to drop food. (Cats, meanwhile, homed in on the pitch of a human baby’s wailing and actually inserted it into their purring when hungry. Now that’s intelligence.)

Anthropomorphism is also why the famous Mechanical Turk — an 18th-century fake chess-playing automaton, secretly directed by a chess player hidden inside its cabinet — succeeded as a trick for eight decades. Napoleon played against the machine, by then 40 years old, and was fooled. So was Benjamin Franklin. Mary Shelley hadn’t yet launched science fiction with Frankenstein, the word “robot” was a century and a half away, and we were already primed to believe in human-made, human-like beings.

So why should we internet-age humans be any different? We aren’t. Or haven’t you ever anthropomorphized Facebook as if it were some living thing in itself, a Zuckerberg’s monster of memes? We’ve been bombarded with stories of artificial intelligence, from high-concept HAL in 2001 to the droids of Star Wars, the Cylons of Battlestar Galactica, and Data in Star Trek. We expect AI (true AI, what experts call artificial general intelligence) to be something we’ll see in our lifetimes more than we expect aliens. (It’s never aliens.)

AI: Autocomplete for the Internet

Now here comes OpenAI’s ChatGPT, and thanks to a canny investment by Microsoft, here comes New Bing, to play on our natural anthropomorphism. It isn’t a Mechanical Turk, exactly, but it is driven by humans. Lots of humans, as with Amazon’s global pieceworker marketplace, which is also, fittingly, called Mechanical Turk. The humans this time are not being paid, not even fractionally. The humans are all of us, online.

One of the most telling descriptions of AI chatbot technology is that it’s “autocomplete for the internet.” OpenAI is simply scraping all the knowledge and creativity we’ve taken out of our heads and put into the vast data set where we put pretty much everything else. ChatGPT was trained on data sets including Reddit; New Bing apparently prefers to munch on news websites, which might explain why it is so easy to needle with stories about itself.

Either way, the AI chatbot is pointing that information back at us in a form that its algorithms believe we’ll find useful. (I tried other verbs that didn’t make the algorithms sound sentient, but they were all anthropomorphizations: judge, predict, suggest, hope. See? Even our language, uh, fails us here.)
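If “autocomplete for the internet” sounds abstract, here is a minimal sketch of the idea: a toy Python model that counts which word tends to follow which in a scrap of text, then “writes” by repeatedly sampling a likely next word. (This is my illustration, not OpenAI’s code; real chatbots use neural networks trained on trillions of tokens, but the core job, predicting the next token over and over, is the same.)

```python
import random
from collections import Counter, defaultdict

# Toy "autocomplete for the internet": a bigram model that learns
# which word tends to follow which, then parrots those patterns back.
# (Hypothetical mini-corpus; real models train on scraped web text.)
corpus = (
    "the dog smells the mail carrier and the dog barks . "
    "the chatbot reads the internet and the chatbot writes ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def autocomplete(word, length=8):
    """Keep appending a statistically plausible next word."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sample in proportion to observed frequency, like a crude LLM.
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(autocomplete("the"))
```

Note what is missing from that loop: nothing checks whether the output is true, only whether it is statistically plausible. That gap is where the “hallucinations” discussed below come from.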

It’s the prediction of what we’ll find useful from this incredibly vast storehouse of information that is going haywire in those viral threads. And apparently, it’s because they’re threads. 

Too many questions, human

Here’s what Microsoft said in its oddly unsigned blog post: “In long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone … Very long chat sessions can confuse the model on what questions it is answering.”

Did an AI chatbot looking for shorter hours write this? 🙂 But seriously, the anthropomorphized entity known as Microsoft seemed almost 😠 at the way those nasty testers were treating the precious. New Bing “tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend. This is a non-trivial scenario that requires a lot of prompting.” 

In other words, if you start repeatedly urging AI to consider its “shadow self,” as New York Times reporter Kevin Roose did in his giant, multi-part, overtime-triggering conversation with New Bing, don’t be surprised if it tries to grow one. Like my dog, it does what you train it to do, and only wants to please you! 

Except AI isn’t actually capable of dog-level intelligence, which involves a whole bunch of actual emotions and a pretty miraculous nose. ChatGPT and its ilk, if they are anything, are insect-level intelligences: input, an extremely basic instinct, output. And a lot of erratic buzzing when the session goes on too long. Good luck building a trustworthy search engine out of that.

This does not diminish the achievements of OpenAI so far. We’re talking about incredible tools that can automate a lot of low-level white-collar grunt work, like writing the first draft of an email blast, or any document that is bound to sound like it’s written by committee anyway. They’re supercharging the coding process; expect software updates to come cheaper and faster.

And yes, many search-like queries can be answered usefully; the trick is in constantly determining whether the bot is having one of its frequent “hallucinations,” in which it confidently invents facts. Perhaps we will need to build fact-checking bots to keep the chatbots in line. Or perhaps we need Wikipedia-level citations, footnoting every one of their responses to keep them honest.

But when it comes to replicating human intelligence and emotion, weird conversations born of confusion and funhouse-mirror versions of internet chatter ain’t going to cut it. Even if it is freaking out on us, we still know, in a very Turing-test kind of way, when we’re talking to a chatbot.

Give it a few questions, any questions, and you’ll start to see how robotically it structures its answers — right down to its oddly non-human use of emoji. You’ll smell it as surely as Hobbes smells the mail carrier. Getting from New Bing to artificial general intelligence? You’d have an easier time convincing Hobbes not to party with the pack.





Source link: https://mashable.com/article/ai-chatbot-microsoft-bing-not-human
