Microsoft released a new version of its Bing search engine last week. It includes a chatbot that, unlike a regular search engine, answers questions in full sentences. Since then, people have reported that some of what the Bing chatbot generates is inaccurate, misleading and downright weird, prompting fears that it has become sentient, or conscious of the world around it. To understand what is really going on, it helps to know how chatbots actually work.

Is the chatbot alive? No. In June, the Google engineer Blake Lemoine claimed that similar chatbot technology being tested inside Google was sentient. That is wrong. Chatbots are not conscious and are not intelligent.

Why does the chatbot seem alive? Bing's chatbot is powered by a kind of artificial intelligence (AI) called a neural network. The term can suggest a computerized brain, but that is misleading. A neural network is simply a mathematical system that learns skills by analyzing large amounts of digital data. For example, a neural network can examine thousands of cat photos and learn to recognize a cat. Neural networks are also what allow Siri and Alexa, the voice assistants from Apple and Amazon, to recognize the words you speak, and they are how services like Google Translate translate between languages. Neural networks are very good at mimicking the way humans use language, which can mislead people into thinking the technology is more powerful than it actually is.

Are companies releasing chatbots built on this technology? Yes. When OpenAI released ChatGPT in November, the public got its first real look at it and was surprised by what it could do. These chatbots do not chat exactly like a human, but they often seem like they do.

Why do chatbots get things wrong? The main reason is that they learn from the internet.
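To make "learning from data" concrete, here is a toy sketch of the idea, not anything like Bing's actual model: a single artificial "neuron" that starts out knowing nothing and gradually adjusts a few numbers until it can label examples correctly. The example data and every name in it are invented for illustration.

```python
# Toy sketch: one artificial "neuron" learns a pattern from examples,
# the same basic idea a neural network uses to learn from cat photos.
# (All data and names here are made up for illustration.)

# Training data: (inputs, label). Label 1 means "both features present".
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # the numbers the neuron will "learn"
lr = 0.1                        # learning rate: how big each correction is

for _ in range(100):            # look at the examples many times
    for (x1, x2), label in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = label - prediction
        # Nudge the learned numbers toward the right answer.
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

# After training, the neuron labels every example correctly,
# having learned the pattern from the data alone.
print([1 if (w1 * a + w2 * b + bias) > 0 else 0 for (a, b), _ in examples])
```

A real chatbot works on the same principle, only with billions of learned numbers and text from the internet instead of four handmade examples.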
Do you know how much misinformation there is on the internet? And these systems do not repeat what is on the internet word for word. They construct new text of their own based on what they have learned, and when that text is untrue, AI researchers call it a "hallucination." That is why a chatbot may give you different answers if you ask the same question twice, and why it will give some answer whether it is grounded in reality or not.

If chatbots "hallucinate," does that make them sentient? AI researchers like to use terms that make these systems seem human, but "hallucinate" is just a catchy word for what they do. It does not mean the technology is alive or aware of its surroundings. It generates text using patterns it found on the internet, and in many cases it combines those patterns in surprising and disturbing ways.

Can't companies stop chatbots from acting weird? They are working on it. With ChatGPT, OpenAI tried to control the technology's behavior. It asked a small group of people to privately test the system and rate its responses. Were they useful? Were they truthful? OpenAI then used those ratings to hone the system and more carefully define what it would and would not do. But such techniques are not perfect. Scientists today do not know how to build systems that are completely truthful. They can limit the inaccuracies and the strange behavior, but they cannot stop them. One way to rein in the odd behavior is to keep chats short, but chatbots will still say things that are not true. And as other companies begin to deploy these kinds of bots, not everyone will be able to control what they can and cannot do.
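One reason the same question can produce different answers is that a chatbot typically picks each next word at random, weighted by probabilities it learned from data. This toy sketch shows the idea; the words and probabilities are invented for illustration and have nothing to do with any real model.

```python
import random

# Toy sketch: a chatbot picks the next word at random, weighted by
# learned probabilities, so the same "question" can yield different
# answers. (These words and probabilities are made up for illustration.)
next_word_probs = {"Paris": 0.6, "London": 0.3, "Madrid": 0.1}

def sample_answer(rng):
    """Pick one word, with likelier words chosen more often."""
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so this sketch is repeatable
answers = [sample_answer(rng) for _ in range(5)]
print(answers)  # asking five times does not always pick the same word
```

Because each word is drawn from a probability distribution rather than looked up, there is no guarantee the chosen word is true, which is one way untrue statements get generated.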
Why ChatGPT May Behave Strangely: Understanding the Limitations of Chatbots
2023-03-03