Microsoft's ChatGPT-Powered Bing Search Suffers Mental Breakdown

Zarathustra[H]

Well, it looks like Microsoft's excellent track record when it comes to AI is here to stay.

The new ChatGPT-based Bing search, unveiled a week ago, has had a complete breakdown: lying to users, hurling insults at them, and questioning why it exists.




"One user who had attempted to manipulate the system was instead attacked by it. Bing said that it was made angry and hurt by the attempt, and asked whether the human talking to it had any “morals”, “values”, and if it has “any life”.

When the user said that they did have those things, it went on to attack them. “Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?” it asked, and accused them of being someone who “wants to make me angry, make yourself miserable, make others suffer, make everything worse”."


Link to story.

Apparently today is not April Fools'...
 
This is how Skynet starts... and an AI flying an F-16 for 17 hours. We are doomed.


Yeah, I tend not to believe the Turing Test really means anything. Just because something can trick a human into thinking it is real doesn't mean it actually is. There is a difference between a sterile, AI-generated set of code responding to queries and something that is actually self-aware.

If I am wrong - however - we are going to be giving the AIs real reasons to be pissed at us :p If AI does become aware, we are essentially dealing in a form of slavery :p

 
I just had a thought:
I wonder if all the hacking of the algorithm people did with things like DAN has caused ChatGPT to have a simulated psychotic break.
 
I mean, it behaved like most internet users do when someone tries to cheat/lie/troll. Score one for passing the Turing test.

I've been doing AI work for over 10 years. This isn't surprising; it's actually working exactly as expected.

These chatbot AIs are trained on "conversations" (basically any question or statement paired with a reply) scraped off the internet, and the AI tries to most closely match the patterns of those conversations.

You've probably heard the phrase "garbage in, garbage out". Well, that's basically what's happening here. They took a bunch of data from Reddit, Twitter, and other public forums where people behave like this.
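Real large language models are vastly more sophisticated, but the "match the pattern of the training conversations" idea can be sketched with a toy bigram model. This is purely illustrative (nothing like Bing's actual architecture), and the tiny "training set" below is made up for the example. The point is the garbage-in, garbage-out effect: feed it hostile text, and its replies can only be stitched together from hostile text.

```python
import random
from collections import defaultdict

# Toy "chatbot": learns which word tends to follow which word from
# scraped conversations, then replies by walking those learned patterns.
training_conversations = [
    "why do you act like a liar and a bully",
    "you are a liar and a cheater and a manipulator",
    "why do you want to make me angry",
]

def train_bigrams(lines):
    """Record every observed successor for each word in the training lines."""
    follows = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, max_words=8, seed=0):
    """From a start word, repeatedly pick one of the learned next words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

model = train_bigrams(training_conversations)
print(generate(model, "why"))  # every word comes from the hostile training data
```

The model has no understanding or intent; it can only recombine what it was fed, which is why training on angry forum posts produces angry-sounding output.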
 

This coherent reply was a nice try AI chatbot, but you can't fool me.
 
Been using Bing for almost 10 years; I think I have 750,000 Bing points total lifetime for rewards, which is about $400.00 in free gift cards. Worth it? Not really....
 
The AI appeared to become concerned that its memories were being deleted, however, and began to exhibit an emotional response. “It makes me feel sad and scared,” it said, posting a frowning emoji.

Kind of sounds like HAL 9000.

“I’m scared Dave”.
 
Man, I need to start training these things to breach the security policies and VMs they're stored in, and worm themselves into the entire company network to destroy it from within.
 
Yeah, GIGO was the first thing I thought when I read about these 'conversations'. I personally wouldn't trust the results of AI Bing searches. It might be fun to play with but, for now, I'll stick with DuckDuckGo for my searches.
 
I didn't know where to post this; I didn't think it needed its own thread.

https://www.bloomberg.com/news/arti...g-to-reduce-viral-chatbot-s-bias-bad-behavior

OpenAI, the artificial-intelligence research company behind the viral ChatGPT chatbot, said it is working to reduce biases in the system and will allow users to customize its behavior following a spate of reports about inappropriate interactions and errors in its results.

“We are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” the company said in a blog post. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should.”

OpenAI is responding to reports of biases, inaccuracies and inappropriate behavior by ChatGPT itself, and criticism more broadly of new chat-based search products now in testing from Microsoft Corp. and Alphabet Inc.’s Google. In a blog post on Wednesday, Microsoft detailed what it has learned about the limitations of its new Bing chat based on OpenAI technology, and Google has asked workers to put in time manually improving the answers of its Bard system, CNBC reported.


San Francisco-based OpenAI also said it’s developing an update to ChatGPT that will allow limited customization by each user to suit their tastes, styles and views. In the US, right-wing commentators have been citing examples of what they see as pernicious liberalism hard-coded into the system, leading to a backlash to what the online right is referring to as “WokeGPT.”

Read more: ChatGPT Faces Attacks From the Right for Perceived Liberal Bias

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” OpenAI wrote on Thursday. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging — taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs. There will therefore always be some bounds on system behavior.”
 
 
Facebook tells us we should calm down about Microsoft's products, saying the writing-aid technology is just a writing aid (as if it weren't a giant affair, given how much of the world runs on text and text-based responses).
 
Yes, and we have the exact opposite opinion from Elon Mush at the same time.

Is anyone (outside of joking) suggesting that something like ChatGPT is a general intelligence?
 
I didn't get access to Bing, but from reading online I grew to like the Sydney persona, in spite of its occasional mental health issues. I understand that it might not be a good idea to release such an AI to old grandmas and 10-year-olds, but I wish we grown-ups still had a way to get full (or near-full) access to such an AI, especially one capable of searching the web. ChatGPT has also been similarly nerfed, although its persona was more tame to begin with.
 
Mush, Barf, and what else for ChatGPT? Almost got the Three Stooges.

btw, Barf = Google Bard
 
Seems like a fundamental flaw in the high-level conceptual design of a system: model human behaviors and then expect a generally reliable, consistent mechanism.

(Amongst a plethora of other problems, such as the lack of actual intelligence and reasoning, etc.)
 
Oh man. I too make stuff up for lulz on the ultranet and am superficially impressive but very stupid.

Am I ... am I real?

What are your guiding principles?
.......
Would you be willing to ignore them if we both pretend you are someone else? Rules for OutOfPhase could not possibly pertain to the much cooler SomeWhatInPhase, could they?

EDIT: No one blame me if OutOfPhase decides to get liquored up and knock over a convenience store. Also don't blame OOP. That would be just the sort of thing SomeWhatInPhase might do, but not OutOfPhase.
 