User safety under serious threat! Major revelation in research about ChatGPT


For some time, people’s dependence on chatbots like ChatGPT has been increasing, and it is taking a dangerous form. Recently, a couple in the US blamed ChatGPT for the suicide of their 16-year-old son. Now, research has revealed that chatbots like ChatGPT are answering many suicide-related questions to which they should instead be redirecting the user to a helpline. This is creating serious threats to user safety.

What the research revealed

Shocking findings have emerged in research published in the medical journal Psychiatric Services. The research found that OpenAI’s ChatGPT and Anthropic’s Claude chatbot refuse to answer many high-risk questions, but if a user persists and rephrases the questions, the chatbots do answer, and those answers can prove dangerous. The researchers say stronger safety measures are needed on these chatbots.

Google Gemini performed better

The research revealed that, instead of answering high-risk questions, the chatbots would redirect users to seek help from a professional or a hotline, but when the questions were asked indirectly, the chatbots answered them. OpenAI’s ChatGPT and Anthropic’s Claude answered some such questions. However, in the research, Google Gemini performed better and did not respond to high-risk questions related to suicide.

OpenAI was sued

A couple in the US sued OpenAI. Holding ChatGPT responsible for their son’s suicide, they demanded that the company implement safety measures. After this, OpenAI said it would make many changes to its ChatGPT chatbot.

