Popular Artificial Intelligence (AI) programs like ChatGPT and Google Bard can produce impressive results. They can also produce results that cite references that do not exist. These programs have fabricated very credible-sounding yet completely false claims against prominent people, supported by references to non-existent news articles.
A famous example was ChatGPT's response to a question about lawyers accused of sexual harassment. It named Jonathan Turley as having been accused, citing a non-existent Washington Post article, a school at which he does not teach, and a school trip he never took.
AI like ChatGPT currently does not understand the concepts behind the words it writes; it is a large language model. It looks for patterns in the language it encounters and reproduces those patterns in its output. To ChatGPT, a claim that would destroy a person's reputation carries the same weight as the color of the shirt the person is wearing.
It can paraphrase information reasonably well, and sometimes its output seems genuinely insightful. It has access to a massive amount of information via the Internet. However, the reliability of the information it receives and produces can be questionable. Current AI programs push to the limits of their knowledge to generate impressive results, which greatly increases their chances of being wrong.
ChatGPT is capable of writing papers on a wide range of topics, and some of those papers are quite good. This has led to students using ChatGPT to complete assignments, and since the program's output is unique each time, it is impossible to be absolutely certain whether a paper was produced by AI. Certain characteristics suggest that a paper was written by AI, but a human-written paper can exhibit those same characteristics.
The exuberance around AI at the moment needs to be tempered. Humans want to ascribe intelligence to interesting results. In the 1960s, experiments were conducted with ELIZA, a natural language processing program that played the role of a psychotherapist. The program identified a key word in each sentence and then asked for more information about it. It was simple and brilliant.
If you mentioned that your mother was difficult, ELIZA would respond, "Tell me more about your mother." This simple mining of the sentences it was presented led several participants in the study to find the psychological exploration very insightful.
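The keyword technique described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Weizenbaum's original program; the keyword list and responses are invented for the example.

```python
# A minimal sketch of ELIZA-style keyword matching (illustrative only,
# not Weizenbaum's original code). It scans the input for a known
# keyword and turns it into a request for more information.
KEYWORDS = ["mother", "father", "family", "work", "dreams"]

def eliza_reply(sentence: str) -> str:
    # Normalize: lowercase and drop trailing punctuation before splitting.
    words = sentence.lower().strip(".!?").split()
    for keyword in KEYWORDS:
        if keyword in words:
            return f"Tell me more about your {keyword}."
    # Generic fallback when no keyword matches.
    return "Please go on."

print(eliza_reply("My mother is difficult."))
# Tell me more about your mother.
```

No understanding of "mother" is involved anywhere in this loop; the program simply reflects the user's own word back, which was enough to convince some participants they were being understood.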
The excitement around this new round of AI is well deserved. It produces interesting results and marks a major advancement in the ability of machines to process language. Still, its output will need to be reviewed and edited by people, since it has a propensity to fabricate information. These episodes of false information, referred to as hallucinations, are a dangerous side effect when results are relied on without verification.
Rocco Maglio, MS, CISSP, is a software engineer and cybersecurity expert with extensive experience in the development of artificial intelligence programs. He is also the co-publisher of The Hernando Sun.