
OpenAI isn’t doing enough to make ChatGPT’s limitations clear

“May occasionally generate incorrect information.”

That’s the warning OpenAI pins to the homepage of its AI chatbot, ChatGPT: one point of nine that detail the system’s capabilities and limitations.

“May occasionally generate incorrect information.”

It’s a warning you could attach to just about any information source, from Wikipedia to Google to the front page of The New York Times, and it would be more or less correct.

“May occasionally generate incorrect information.”

Because when it comes to getting people ready for a technology as powerful, overhyped, and misunderstood as ChatGPT, it’s clear that OpenAI isn’t doing enough.

The misunderstood nature of ChatGPT was made clear for the umpteenth time this past weekend, when it was reported that US lawyer Steven A. Schwartz had turned to the chatbot to find supporting cases in a lawsuit he was pursuing against the Colombian airline Avianca. The problem, of course, is that none of the cases ChatGPT suggested exist.

Schwartz claims he was “unaware of the possibility that [ChatGPT’s] content could be false,” though transcripts of his conversation with the bot show he was suspicious enough to double-check his research. Unfortunately, he did so by asking ChatGPT, and again the system misled him, reassuring him that its fictitious case history was legitimate:

“Don’t believe its lies.”
Image: SDNY

Schwartz deserves plenty of the blame in this scenario, but the frequency with which cases like this occur, in which users of ChatGPT treat the system as a reliable source of information, suggests a wider reckoning is also needed.

Over the past few months, there have been numerous reports of ChatGPT fooling people with fabrications. Most cases are trivial and have had little or no negative impact. Typically, the system invents a news story, an academic paper, or a book; someone tries to find that source and either wastes their time or ends up looking like a fool (or both). But it’s easy to see how ChatGPT’s misinformation could lead to more serious consequences.

In May, for example, a professor at Texas A&M used the chatbot to check whether his students’ essays had been written with AI assistance. Ever obliging, ChatGPT said yes, all of the students’ essays were AI-generated, even though it has no reliable ability to make that assessment. The professor threatened to fail the class and withhold the students’ diplomas until his mistake was pointed out to him. Then, in April, a law professor described how the system generated false stories accusing him of sexual misconduct. He only found out when a colleague doing research alerted him to the fact. “It was quite chilling,” the professor told The Washington Post. “An allegation of this kind is incredibly harmful.”

I don’t think cases like these invalidate the potential of ChatGPT and other chatbots. In the right scenarios and with the right safeguards, it’s clear these tools can be fantastically useful. I also think that potential includes tasks like retrieving information. All kinds of interesting research is being done showing how these systems can and will be made more factually grounded in the future. The point is, right now, it’s not enough.

This is partly the fault of the media. A lot of the reporting on ChatGPT and similar bots portrays these systems as humanlike intelligences with feelings and desires. Too often, journalists fail to emphasize the unreliability of these systems, to make clear the contingent nature of the information they provide.

People use ChatGPT as a search engine. OpenAI needs to recognize this and warn them up front.

But, as I hope the start of this piece made clear, OpenAI could certainly help matters as well. Although chatbots are being presented as a new breed of technology, it’s clear that people use them as search engines. (And many have been launched as search engines, so of course people get confused.) It’s not surprising: a generation of internet users has been trained to type questions into a box and get answers. But while sources like Google and DuckDuckGo provide links that invite scrutiny, chatbots muddle their information in regenerated text and speak in the chipper tone of an all-knowing digital assistant. A sentence or two of disclaimer is not enough to override that kind of priming.

Interestingly, I find that Bing’s chatbot (which is powered by the same technology as ChatGPT) does slightly better on these sorts of fact-finding tasks, mostly because it searches the web in response to factual questions and provides users with links as sources. ChatGPT can search the web, but only if you’re paying for the Plus version and using the beta plug-ins. Its self-contained nature makes it more likely to mislead.

Interventions don’t need to be complex, but they do need to be there. Why can’t ChatGPT simply recognize when it’s being asked to generate factual citations and warn the user to “check my sources”? Why can’t it respond to someone asking “Is this text AI-generated?” with a clear “I’m sorry, I’m not able to make that judgment”? (We’ve reached out to OpenAI for comment and will update this story if we hear back.)

OpenAI has certainly made improvements on this front. Since ChatGPT’s launch, it has, in my experience, become much more upfront about its limitations, often prefacing answers with that AI shibboleth: “As an AI language model…” But it’s also inconsistent. This morning, when I asked the bot “Can you detect AI-generated text?” it cautioned that it was “not infallible,” but when I fed it a portion of this story and asked the same question, it simply replied: “Yes, this text was AI-generated.” Next, I asked it to give me a list of recommended books on the subject of measurement (a topic I know something about). “Definitely!” it said before offering 10 suggestions. It was a good list, hitting several classics, but two of the titles were completely made up, and I might not have noticed if I hadn’t known to check. Try similar tests yourself and you’ll quickly find errors.

With a performance like this, a disclaimer that ChatGPT “may occasionally generate incorrect information” doesn’t seem adequate.
