
And the Chatbot Says: 10+5 = 25

You may have heard about ChatGPT in the past month. Launched in November, it became a household name last month when the media blared Microsoft's confirmation of its reported $10 billion investment in OpenAI, the company behind the Chatbot.
 
ChatGPT rode that news to become the fastest-growing app of all time, surpassing 100 million users, with techy types singing its praises.
 
Chatbots in general can be as informative, entertaining and mind-blowing as Alexa and Siri, or as frustrating as the support Chatbots that force you to formulate a half-dozen variations of your query before turning you over to a human.
 
Of course, Chatbots rely on humans "training" them. The old tech maxim of "garbage in, garbage out" applies here.
 
For example, a member of a Managed IT Services forum I participate in posted a chat he had with ChatGPT about the correct sum of 10 + 5. ChatGPT dutifully responded, "The sum of 10 + 5 is 15." Awesome.
 
Then the human responded, "No, 10 + 5 = 25." That set off a humorous and scary exchange in which ChatGPT agreed that the answer was 25. Even after the human insisted that 15 was correct, ChatGPT apologized and maintained that 10 + 5 = 25.
 
Fortunately, such abuse can be corrected by human trainers. That 10 + 5 exchange occurred on Jan. 24. Today, ChatGPT provided me with the right answer of 15, and I could not persuade it otherwise.
 
ChatGPT answered typical Alexa/Siri-style questions as expected, but for searches about the Cloud computing market it regurgitated the same unsatisfying results I've seen in search engines.
 
When I rephrased my queries as statements rather than questions, the results got better. I received more realistic numbers on the Cloud Consulting market in the United States than I have seen in search engines, as well as a spot-on, six-paragraph summary of the importance of considering corporate culture in Cloud vendor selection.
 
However, the lack of attribution for its Cloud Consulting market numbers made it impossible to know whether they came from a credible source. Even when a source was identified, I could not find any reference to that particular study elsewhere on the Internet.
 
That doesn't exactly engender confidence for executives and information professionals who need actionable information to make critical decisions. Without attribution to authoritative sources, how do we know the information is true?
 
Bottom line: ChatGPT is only three months old and should improve. But as it stands, while it can be used to find clues for research, I wouldn't trust it as an authoritative source for any information it provides.
 
If you haven't already, you can try it yourself. It's pretty simple. Go to https://chat.openai.com, sign up for an account following the prompts, and enter your first question in the chat box at the bottom of the screen.
 
I am Eric Magill, an Information Professional. Contact me for help with time-sucking research that leaves you stressed out and frustrated. You can also request an online meeting with me.