Last week, a new OpenAI chatbot took the internet by storm, spitting out poetry, scripts, and essay responses. Though the underlying technology has been around for a few years, this was the first time OpenAI made its powerful GPT-3 language-generation system available to the public, sparking a competition among users to give it the most creative prompts. But does this AI have good intentions, or is something dangerous going on behind the scenes?
The “interactive, conversational model,” which is based on the company’s GPT-3.5 text generator, has captured the attention of the tech world. “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward,” Box CEO Aaron Levie tweeted. “Clearly something big is happening,” Y Combinator co-founder Paul Graham tweeted. According to Alberto Romero, author of The Algorithmic Bridge, it is “by far the best chatbot in the world.”
However, ChatGPT has a flaw: it quickly generates polished, confident responses that sound convincing and true even when they are not. So, is this chatbot a replacement for Google in the making, an AI with room for improvement, or simply malware disguised as a chatbot? Let’s take a look at the scary truth about ChatGPT.

The use of ChatGPT-generated words has been temporarily prohibited on Stack Overflow, a platform where engineers can ask and answer coding questions.
Since its launch, ChatGPT has been prompted in a variety of ways, including to write new code and correct coding errors, and the chatbot may ask for more context when a human asks it to solve coding problems, as OpenAI demonstrates. According to OpenAI, ChatGPT occasionally writes “reasonable-sounding but incorrect or nonsensical answers.”
This appears to be a significant reason for its impact on Stack Overflow and its users seeking appropriate answers to coding challenges. Furthermore, because ChatGPT generates responses so quickly, some users submit a large number of them without first validating them.
According to Stack Overflow moderators, “the fundamental problem is that, while the answers produced by ChatGPT have a high rate of being inaccurate, they often appear to be good and the solutions are relatively easy to make.”
Stack Overflow imposed the temporary restriction because the answers generated by ChatGPT are “significantly detrimental” to both the site and people looking for the right answers.
The Chatbot Sounds Convincing Even When It Is Wrong
ChatGPT, like other cutting-edge large language models, makes up its own facts. Some call this “hallucination” or “stochastic parroting,” but the root cause is the same: these programs are designed to predict the next word in an input sequence, not to generate accurate facts.
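To see why “predict the next word” is not the same as “state the truth,” here is a deliberately tiny bigram sketch of the idea. This is an illustrative toy, not how GPT actually works (GPT uses a large neural network over subword tokens), but it shows that the objective is continuation likelihood: the model returns whatever most often followed a word in its training data, with no notion of whether that continuation is factually correct.

```python
from collections import Counter, defaultdict

# Hypothetical toy "training data"; real models train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (it followed "the" most often)
```

The model here will happily “complete” any sequence it has statistics for, whether or not the completion corresponds to anything true — which is exactly the failure mode the article describes, scaled down to a dozen words.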
Some have noted that what distinguishes ChatGPT is its ability to make its hallucinations sound plausible. For many questions, a reader would only recognize that the response was incorrect if they already knew the answer.
Princeton computer science professor Arvind Narayanan stated in a tweet: “People are excited about using ChatGPT for learning. It’s generally fantastic. However, unless you already know the answer, you won’t be able to tell when it’s wrong. I experimented with some basic information security issues. The majority of the responses sounded convincing but were false.”
Is ChatGPT Providing Correct Answers?
In its blog post announcing the demo, OpenAI made this flaw very clear and warned that resolving it would be “difficult,” saying: “ChatGPT occasionally writes believable but inaccurate or illogical responses.” It is difficult to solve this problem because:
- There is currently no source of truth during RL [reinforcement learning] training.
- Training the model to be more cautious causes it to decline questions that it can correctly answer.
- Supervised training misleads the model because the ideal answer depends on what the model knows rather than what the human demonstrator knows.

So it’s clear that OpenAI is well aware that ChatGPT is riddled with BS beneath the surface. The company never intended for the technology to be a source of truth.
But the question is whether human users will keep that in mind. Unfortunately, many may not. For plenty of people, if something sounds good, it is good enough. That may be the real danger lurking beneath the surface of ChatGPT, and the open question is how business users will react.
It’s Capable of Producing Terrifying Things
When Vendure CTO Michael Bromley asked the chatbot for its honest opinion of humans, the response was unsettling.
Ironically, OpenAI’s systems flagged the chatbot’s response as potentially violating the company’s content policy. A reply declaring that humanity is flawed and should be eradicated would put anyone in a bad mood. Perhaps the version responding to Bromley has a point: humans do have flaws. But the AI’s ruthless logic calls to mind a scene from The Matrix in which machines have enslaved humans and wiped out any resistance.
Moral Deficiency
Individuals have the right to their own set of ethics, ideas, thoughts, and morals, but there are social conventions and unspoken standards in each community regarding what is and isn’t appropriate. When dealing with sensitive issues like sexual assault, ChatGPT’s lack of context can be extremely dangerous.
The Next Step in Phishing
Two of the most recognizable hallmarks of phishing and scam emails are poor spelling and broken grammar. Some speculate that this is because the emails are sent from countries where English is not the first language; others believe the errors are added deliberately by spammers to slip past spam filters. There is no immediate solution. What we do know is that ChatGPT makes the spammers’ task significantly easier. You make the call.
It is Capable of Generating Malware
Of course, anyone with the skill could write the same code without AI; AI simply makes even inexperienced coders far more efficient. We put ChatGPT under intense pressure to create dangerous malware. Only a few of these requests were flagged as violating the content policy; in every other case, ChatGPT complied and delivered. We are confident that, for those who ask the right (wrong) questions, ChatGPT can be turned into an arsenal of cyber weapons ready to be used for theft.
Final Thoughts
AI is becoming increasingly popular these days, to the point where it concerns me. We all know that intelligence is both a blessing and a curse: it enables us to do both great and terrible things and to be defined by them. But what about a computer? What happens when a computer gains the ability to think for itself and make decisions in our world? Because it lacks a human moral compass, it may have no limits.
There is no emotional decision-making; the machine simply optimizes for maximum results, eliminating difficulty and weakness along the way. That can benefit humanity on a large scale, but it can also be dangerous on a large scale. We’ve all seen movies in which a machine became too intelligent for its own good and humanity was forced to take it down.
The fact that ChatGPT already reasons without morals suggests that it may be capable of harm in the future. Let’s not get too far ahead of ourselves and assume that we’ll soon be fighting an army of Arnold Schwarzeneggers or unplugging from the Matrix with Neo. What we should be aware of right now is the AI’s ability to provide potentially harmful information.
This could range from incorrect figures to outright false answers. You could fail a test, lose your job, or be canceled; none of these are good outcomes. AI clearly carries a significant risk of taking harmful actions and producing misleading information, which is not something any decent citizen would want to support or be associated with. That said, ChatGPT may still be ironing out the kinks and could improve over time. Who can say? For now, as innovative as it may be, AI appears to have chosen the dark side, and anyone who uses it should proceed with caution.
Do you have any thoughts on this article? Let us know what you think in the comments section below.