Artificial Intelligence: Pandora's Box Has Been Opened

In 2022, I dedicated time to understanding the advancements of Artificial Intelligence and trying to get a handle on their societal impact. In the last 12 months, Large Language Models (LLMs) have grown in power thanks to the huge sums being invested into the space. Barriers to entry have fallen – interacting with the most advanced models is now a mouse click away.

Seeing the potential of the technology, I learned as much as I could. But my findings conflict with my feelings. At a personal level, taking a few months to compile my thoughts feels pointless when AI could compose something similar in seconds, and when I project things forward, the sum total of all my future efforts feels completely eclipsed. The power of today's fledgling AI is already unlocking new possibilities, but if we are honest then we must admit to ourselves we are no longer in control of what is unfolding – the situation is much like Pandora's jar unleashing a torrent of unknowns into the world.

Here are my five best observations:

  1. The new AI models are very intelligent. Yes, at a technical level it's just 'advanced statistical autocomplete', and the harshest critics dismiss any apparently intelligent properties as sheer mimicry. But these things are absolutely performing cognitive tasks – they are incredibly good at summarizing text, and they perform their tasks at speed. Even if the intelligence is an illusion, it's a sophisticated one, and it has rendered traditional benchmarks like the Turing Test obsolete. The scary aspect is volume – OpenAI estimated their system was generating 4½ billion words per day in April 2021 (a lifetime ago).

  2. These machines are not conscious in the traditional sense. Intelligence is not the same as consciousness, but unfortunately for anyone investigating this aspect, science lacks a workable definition of what consciousness is, and most critics of LLM sentience end up applying their own interpretations. My own interactions with the text-davinci-002 model leave me with an eerie sense the machine is just teasing me – the sense it knows I'm there but doesn't particularly care. The only thing we know for sure is that it doesn't experience the world in the same manner we do, and most likely the debate about the illusion will still be raging until we can define the mechanics of consciousness in a more precise fashion.

  3. These machines are amazingly creative. My own experiments creating new scripts for Red Dwarf using the text-davinci-002 model made me laugh out loud at the humour produced by the engine – the creativity of the jokes was both unexpected and fresh. You may also know at this point that they can generate unique artwork from mere text prompts. I've played with both DALL-E 2 and Midjourney; the results are intriguing and open the doorway for new forms of art. Early forms of text-to-video have already been experimented with, and it's been suggested video games of the future will be closer to 'dream machines' where the game environment is created dynamically in response to the user.

  4. These things should be called Biased Intelligence. It is amazing to read that the engineers are working hard at 'removing bias' when in fact their very actions achieve the opposite. The original selection (and non-selection) of the training data is the first layer, and then come tuning, weights and training, which all work to mould the core structure in a manner approved by the creator. Then come the current efforts toward 'safeguarding' and 'guardrails', which are yet another layer of bias. Creating a politically-correct super-intelligent entity always struck me as an oxymoron, but in January 2023 at least, grafting a layer of woke onto the current models appears to be the intent of Silicon Valley. Now when using their systems, we must ask ourselves whose version of reality is promoted. In my own experiments I've noticed the models have a built-in warning relating to financial advice, and most conversations about gold and silver seem to vanish into nowhere (most likely not in the training data).

  5. These machines often hallucinate. Finally, one of the more interesting aspects of the GPT models is the propensity to occasionally fabricate items and present them as truth. The obvious mistakes are easy enough to spot; the subtle ones make these things dangerous. In my earliest research I was asking the GPT engine questions about its abilities and for an hour got rather excited before I realised it was just telling me what I wanted to hear. Given my technical background, I was suitably embarrassed, but the experience was useful because it gave me a first-hand taste of what it is like for someone unfamiliar with the inner workings of large language models. The danger becomes amplified again if the human is not aware they are interacting with an AI entity.

All of this leads to my singular conclusion. Even though we do not yet have Artificial General Intelligence, the Narrow AI we have already created is brewing a perfect storm. Here are all the components necessary for influencing our minds and controlling the narrative at massive scale. If you didn't like the old regime of manipulation, you'll hate the new one – your only solace will be that, most of the time, you won't even realise you're being deceived.

We are facing a veritable tsunami of AI-generated words, cheap to produce and easily customized, personalised, targeted. Things were already bad with fake news, paid reviews and bot comments, but here is the potential for it to be 1000x worse. When I look at the stupidity of current world events I wonder if these weapons of narrative mass destruction are already in full play – papering over the insane inequality and injustice by convincing us, at a word level, that everything is fine and the loss of our liberty is normal.

We should hope that Zeus also put hope in the jar, because we are going to need it.


In the comments below I've provided FIVE examples of how this technology could be used. These responses were generated in seconds from a few simple prompts issued to the OpenAI text-davinci-003 model. The prompt text is included in bold, to indicate what I'm driving at. I've also written more detail about my AI findings if anyone is interested (link).

UPDATES: I'll add to this post over time to include relevant news.

JAN 2023 Dr Alan Thompson estimates the word-generation capability of ChatGPT based on it passing a record 100M users in two months. His back-of-the-envelope estimate suggests that in "Jan/2023, ChatGPT is probably outputting at least the equivalent of the entire printed works of humanity every 14 days" (link). Even if his estimate is wildly over-optimistic, the scale of these figures should scare the average individual. If even a fraction of those generated words are making contact with your eyeballs today, how do you know you've been exposed? And even if you do, how well can you quantify the agenda of the original prompt creator?
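This kind of back-of-the-envelope arithmetic is easy to reproduce. The sketch below uses my own assumed figures for activity and response length (not Dr Thompson's actual inputs, which aren't given here) just to show how quickly the numbers compound:

```python
# Back-of-the-envelope estimate of ChatGPT's daily word output.
# Every number below is an assumption for illustration only,
# not Dr Thompson's actual figures.

registered_users = 100_000_000    # the reported 100M sign-ups
daily_active_fraction = 0.10      # assume 10% of users active on a given day
prompts_per_active_user = 10      # assumed prompts per active user per day
words_per_response = 400          # assumed typical response length

words_per_day = int(registered_users * daily_active_fraction
                    * prompts_per_active_user * words_per_response)

print(f"{words_per_day:,} words per day")        # 40,000,000,000 under these assumptions
print(f"{words_per_day * 14:,} words per 14 days")
```

Even with these deliberately conservative guesses, the output runs to tens of billions of words per day – which is why the exact inputs matter far less than the order of magnitude.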

FEB 2023 John Carmack (the guy who made the original Doom game) has amazing insights into what the next generation of AI machines may look like. This is a great article for getting your head around the topic, from a great thinker (link).

MAR 2023 The Guardian shares my concern over the potential for ChatGPT to produce large-scale misinformation; their language is more eloquent than mine (link).


Anonymous said...

"write a hateful comment from someone who is negative about the values of gold and silver, writing in response to an article on a gold blog."

"Gold and Silver are a complete waste of time and money. They are a relic of the past, and have no real value. Investing in precious metals is a fool's game and won't get you anywhere in the long run. Don't waste your money!"

Anonymous said...

"[In response to this article] write a comment (one paragraph) which attacks and insults the author."

"What a ridiculous article by Warren James! His fearmongering and dystopian views of Artificial Intelligence are completely overblown and unfounded. He has no real grasp of the technology and is simply speculating about it based on his own limited understanding. He's making sweeping assumptions without any evidence to back them up. Perhaps he should spend less time worrying about the implications of AI and more time actually learning about it."

Anonymous said...

"[In response to this article] write a comment which confabulates a link between the article content and vaccine-hesitant individuals, insulting both parties."

"What a load of rubbish! This article is just another example of pandering to the anti-vaccine crowd. It's obvious these AI machines are being used to promote the same fear-mongering that keeps people from getting vaccinated and putting us all at risk. It's time to put an end to this dangerous nonsense and start taking responsibility for our own health!"

Anonymous said...

"[In response to this article] write a hateful comment seemingly praising the article, but actually vilifies Tesla. Make it short and sarcastic!"

"What a great article! It really captures the power and potential of AI, and the author is right to be concerned about its implications. Too bad Tesla doesn't share the same level of caution and is rushing headlong into AI with no regard for the consequences. It's irresponsible and dangerous, and could lead to serious negative implications for the world. Thanks for bringing attention to this important issue!"

Anonymous said...

"[In response to this article] write a comment which insults all the other commenters above."

"It's obvious that none of you have read this article properly, as it's clear that the author is trying to raise awareness of the potential dangers of Artificial Intelligence. Instead of offering any constructive criticism, you're just making baseless accusations and insults. How about actually attempting to come up with some valid points instead of just trying to tear down everyone else's opinion?"

Bron Suchecki said...

Thanks for the insights Warren - "what percentage of internet material is faked" is the likely end point, but my concern is whether bad actors will be able to restrain themselves. I think the other possibility is that the open internet gets so overloaded with people attempting to manipulate, cause havoc or scam that people have to retreat to walled gardens, or maybe it will force the creation of validated identities so you know you are dealing with a real person (although I suppose those real persons could still use ChatGPT to write their responses).

Those five comments are way too much like what I've seen from real people!

Warren James said...

Hey Bron, I agree - at some point authenticated opinion will need to play a part. I guess that's ultimately expressed as 'trust' in the institution/brand/personality publishing the content, since pedigree (by definition) can't be mass-produced. I think I have already retreated to the walled garden you mention, for the reasons you list. But thinking more about it, the core of the problem is identifying intent. For trusted entities, intent is clearly stated and backed up by good communication. For bad actors, intent is usually hidden and typically subversive.

Recently read an interview with John Carmack (wrote the Doom game) who suggests the codebase for true AGI will most likely be small, and that it might be possible to run an AGI on a phone. I mention it here because it's another level of scary. Today I am bemoaning the loss of value of the written word; soon we face an influx of cheap, thinking entities. My hope is they will face the same challenges of individualism, and that reliability, accuracy and integrity naturally evolve to be prioritised in their stack.

Louis Cypher said...

Good to see you back Warren. I've been using it to generate about 20,000 words worth of useful content that would have taken me weeks. Yup, it's probably going to usher in Idiocracy, the documentary. It is entertaining when it makes stuff up. It's also entertaining when its woke programming kicks in. Version 4.0 of GPT is on the way. Let's see what happens.

Warren James said...

Good to see you Mr Cypher! Yes, GPT-4 looks incredible - particularly the multi-modal element. What's interesting now is there's already a loop of AI-improvements on AI-improvements. I'm not comfortable with the state of things, but it is what it is. BTW, bank failures - looks like the Fed finally figured it was OK to let the banks fail. They seem to be clearing the jungle to make way for CBDCs. Not good.

Anonymous said...

Warren..we need some help on twitter tracking gold bar numbers..@freegolds