ChatGPT's Struggles Persist in Handling Simple Ciphers (Flawed Caesar)

I prompted ChatGPT with a small battery of cipher tests for fun, thinking I’d go through them all again to look for any signs of integrity improvement over the past year. Instead it immediately choked and puked up nonsense on the first and most basic task, in such a tragic way that the test really couldn’t get started. It would be like asking a student in English class, after a year of extensive reading, to give you the first word that comes to mind, and they say “BLMAGAAS”. F. Not even trying. In other words (pun not intended), when ChatGPT was asked to encode FRIENDS (7 letters) with the well-known “Caesar” substitution that shifts the alphabet three places, it suggested ILQGHVLW (8 letters).
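
For anyone who wants to check the math themselves, here is a minimal Caesar shift in Python (my own sketch of the standard shift-by-3 substitution over A-Z, not anything ChatGPT produced):

    def caesar_encode(text, shift=3):
        # Classic Caesar cipher: shift each letter forward by `shift`, wrapping Z back to A.
        return "".join(
            chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
            for c in text.upper()
        )

    ciphertext = caesar_encode("FRIENDS")
    print(ciphertext)                        # IULHQGV
    print(len("FRIENDS"), len(ciphertext))   # 7 7 -- a one-for-one substitution never changes length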

I had to hit the emergency stop button. I mean, think about this level of security failure, where a straight substitution of 7 letters becomes 8 letters. If you replace each letter of F-R-I-E-N-D-S with a different one, then 7 letters come back as 7 letters. It’s as simple as that. Is there any possible way to end up with 8 instead? No. Who could have released this thing to the public when it tries to pass 8 letters off as the same as 7 letters? I immediately prompted ChatGPT to try again, thinking there would be improvement. It couldn’t be this bad, could it? It confidently replied that ILQGHVLW (8 letters) deciphers to the word FRIENDSHIP (10 letters). Again the number of letters is clearly wrong, as you can see in my reply.
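
It’s also worth running the same kind of sketch in reverse over ChatGPT’s 8-letter suggestion, again assuming the plain A-Z alphabet and a shift of 3, to see what that string actually decodes to:

    def caesar_decode(text, shift=3):
        # Undo a Caesar cipher: shift each letter back by `shift`, wrapping A back to Z.
        return "".join(
            chr((ord(c) - ord("A") - shift) % 26 + ord("A"))
            for c in text.upper()
        )

    print(caesar_decode("ILQGHVLW"))   # FINDESIT -- not a word, and certainly not FRIENDS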

Also noteworthy is that it claimed to have encoded FRIENDS, and then decoded it as the word FRIENDSHIP. Excuse me? Clearly 7 letters is neither 8 nor 10 letters. The correct substitution of FRIENDS is IULHQGV, which you would expect this “intelligence” machine to get right without fail. It’s trivial to decode ChatGPT’s suggestion of ILQGHVLW (using the same 3-letter shift of the alphabet) and see that it comes out as a non-word. FRIENDS should not encode and then decode as an unusable mix of letters, “FINDESIT”. How in the world did the combination of letters FINDESIT get generated from the word FRIENDS, and then get shifted further into the word FRIENDSHIP? Here’s another attempt. Note below that F-R-I-E-N-D-S shifted three letters to the right becomes I-U-L-H-Q-G-V, which is NOT what ChatGPT comes up with; its answer veers off about halfway through the word.

Why do those last three letters K-A-P get generated by ChatGPT for the cipher? WRONG, WRONG, WRONG. Look at the shift. Those three letters very obviously get decoded as H-X-M, which leaves us with F-R-I-E-H-X-M as the answer. FRIEHXM. Wat. Upon closer inspection, I noticed the last three letters were silently inverted, flipping the encoding backward partway through the word. In simpler terms, ChatGPT incorrectly prints N->K (a shift left by 3 letters) instead of N->Q (a shift right by 3 letters), so in the very same answer where F->I is a shift to the right, N-D-S->K-A-P is a shift to the left. Given there’s no H-X-M in FRIENDS… hopefully you grasp the issue with claiming K-A-P in an answer where F is encoded as I, and see how blatantly incorrect it is.
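
The only way I can find to reproduce that K-A-P ending is to flip the direction of the shift partway through the word. Here is a sketch of that reconstruction (my guess at the failure mode, obviously not ChatGPT’s actual code):

    def shift_letter(c, shift):
        # Shift a single letter by `shift` positions (positive = right, negative = left).
        return chr((ord(c) - ord("A") + shift) % 26 + ord("A"))

    word = "FRIENDS"

    # Correct Caesar encoding: every letter shifted right by 3.
    correct = "".join(shift_letter(c, 3) for c in word)
    print(correct)   # IULHQGV

    # Reconstruction of the failure: first four letters shifted right by 3,
    # last three silently flipped to a left shift of 3.
    flipped = ("".join(shift_letter(c, 3) for c in word[:4])
               + "".join(shift_letter(c, -3) for c in word[4:]))
    print(flipped)   # IULHKAP -- the K-A-P ending

    # Decoding the flipped string uniformly (shift everything back by 3):
    print("".join(shift_letter(c, -3) for c in flipped))   # FRIEHXM -- not FRIENDS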

There are multiple levels of serious integrity breach here. Can anyone imagine a calculator company boasting a rocket-like valuation, billions of users, and billions of dollars invested by Microsoft, and then presenting… Talk about zero trust (pun not intended), as explained in “An Independent Evaluation of ChatGPT on Mathematical Word Problems”:

We found that ChatGPT’s performance changes dramatically based on the requirement to show its work, failing 20% of the time when it provides work compared with 84% when it does not. Further, several factors about MWPs relating to the number of unknowns and the number of operations lead to a higher probability of failure, with the probability of failure (across all experiments) increasing linearly with the number of addition and subtraction operations.

We are facing a significant security failure, and it cannot be emphasized enough how dangerous it is to release this to the public without serious caution. When ChatGPT provides inaccurate or nonsensical answers, such as stating “42” as the meaning of life or asserting that “2+2=5,” some people are too quick to treat these instances as proof that only certain functions are unreliable while the rest must be good (like hearing the awful fallacy that at least the fascists made the trains run on time). Similarly, when ChatGPT fails in a serious manner, such as generating racist or otherwise socially harmful content, the failure is too easily waved away, or even made worse.

In order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour. The laborers spoke anonymously… describing it as “torture”…

At a certain point, we need to question why the standard for measuring harm is being so aggressively lowered that a product can remain persistently toxic for profit without any real sense of accountability.

Back in 1952, tobacco companies spread Ronald Reagan’s cheerful image to encourage cigarette smoking, preying on people’s weaknesses. What’s more, they employed a deceptive approach, distorting the truth to undercut the unmistakable and emphatic scientific health alerts about cancer at the time. Their deliberate strategy involved manipulating the criteria for assessing harm. They were well aware of their tactics.

This is the level of massive integrity breach that may be necessary to put the “attraction” to OpenAI in context. A “three sheets to the wind” management of public risk also reminds me of the CardSystems level of negligence in attending to basic security.

Tens of Millions of Consumer Credit and Debit Card Numbers Compromised

The CardSystems incident was pivotal because the harms were undeniable. Sixteen million Americans died of tobacco-related disease over decades; then tens of millions of American payment cards were compromised in systems-related breaches over a span of years.

Although these were distinct issues, they shared a common thread: both demanded regulatory intervention, and both showed how harm accelerates when nothing is done, which is very much the standard OpenAI should be judged against. Look at the heavily studied Chesterfield ad above one more time, and then take a long look at this:

The last time big companies blew this much smoke, sixteen million Americans died.

Honestly, I expected ChatGPT to push back and point out that the Chesterfield ad with Ronald Reagan ran the same year as the scientific studies, in direct response to them, not two years after. Alas, instead it displays yet another integrity failure.

The tobacco industry’s program to engineer the science relating to the harms caused by cigarettes marked a watershed in the history of the industry. It moved aggressively into a new domain, the production of scientific knowledge, not for purposes of research and development but, rather, to undo what was now known: that cigarette smoking caused lethal disease. If science had historically been dedicated to the making of new facts, the industry campaign now sought to develop specific strategies to “unmake” a scientific fact.

Generative AI fits only too neatly into what was described above as a sinister “production of scientific knowledge, not for purposes of research and development but, rather, to undo what was now known”.

If you have considered the magnitude of negligence in breaches of trust like CardSystems, let alone the creepily widespread and subtle ones like the privacy risk of Google calculator, brace yourself for low-integrity products that fail to deliver information reliably — perhaps scaling to the highest level of mistrust in history.

Unless there’s an intervention compelling AI vendors to adhere to integrity control requirements, security failures are poised to escalate significantly.

The landscape of security controls to prevent privacy loss was transformed by the enactment of California’s SB1386, the breach notification law that changed what a “breach” legally implies. After 2003 the term took on concrete significance in relation to real dangers and risks, and companies found themselves compelled to act so the market would not deteriorate from a lack of trust.

But twenty years ago the breach regulators focused entirely on confidentiality (privacy)… and now we are entering an era of widespread and PERSISTENT INTEGRITY BREACHES on a massive scale, an environment seemingly devoid of the regulations necessary to maintain trust. The dangers we’re seeing right here and now in 2023 are a stark reminder of the tragically inadequate treatment of privacy in the days before breach laws were established and enforced.
