Microsoft begins reversing Bing’s AI chatbot limitations


Microsoft is walking back the restrictions it placed on its Bing artificial intelligence chatbot after early adopters of the technology caused it to engage in bizarre and disturbing conversations.

On Friday, Microsoft limited the number of questions people could ask Bing to five per chat session and 50 per day. On Tuesday, it increased that limit to six per session and 60 a day, and said it would soon raise it further after receiving “feedback” from “many” users who wanted longer conversations, according to a blog post from the company.

The limits were originally set after several users reported that the bot was behaving strangely during conversations. In some cases, it would identify itself as “Sydney.” It responded to accusatory questions by making accusations of its own, to the point of becoming hostile and refusing to engage with users. Speaking to a Washington Post reporter, the bot said it could “feel and think” and reacted with anger when told the conversation was being recorded.

Frank Shaw, a Microsoft spokesman, declined to comment beyond Tuesday’s blog post.

Microsoft is trying to walk a line between pushing its tools out into the real world to build marketing hype and gather free testing and user feedback, and limiting what the bot can do and who has access to it so as to keep potentially embarrassing or dangerous technology out of public view. The company initially won praise from Wall Street for launching its chatbot before arch-rival Google, which until recently had been seen as the leader in AI technology. Both companies are racing each other and smaller firms to develop and demonstrate the technology.

Bing Chat is still only available to a limited number of people, but Microsoft is keen to approve more from a waiting list numbering in the millions, according to a tweet from a company executive. Although the Feb. 7 launch event was billed as a major product update set to revolutionize how people search the web, the company has since framed Bing’s release as more about testing it and finding bugs.

Bots like Bing have been trained on reams of raw text scraped from the internet, including everything from social media comments to academic papers. Based on all that information, they are able to predict what kind of answer would make the most sense to almost any question, which makes them seem eerily humanlike. AI ethicists have warned that these powerful algorithms would behave this way, and that without the right context, people might believe they are sentient or give their answers more credence than they deserve.
