If Fredric Jameson is right and pastiche has killed satire, at least absurdism is alive and well. That’s why we have so much of the meme-driven humor of late-stage Postmodernity (Homestar Runner, Vine, “I Can Has Cheezburger,” “Drinking out of cups,” “Skibidi Toilet”; the list goes on and on). Its archetype could be this guy:

Turns out, this guy is perfect for gotcha-ing bizarre and/or stupid GenAI responses.

Here’s an example of that absurdism, with our Vine buddy driving the point home.

Well, let me take us back to where my heart never left, the earnest 80s and 90s (Postmodernity’s Golden Age), and speak in defense of AI’s response. 

First of all… who are you gotcha-ing anyway? I don’t think the person who created this image is cruelly saying “IN YO FACE!” to people who seemingly treat GPT like an infallible God. It seems more like an insult directed at AI models, or at the people who’ve worked hard to devise them. I can say with some certainty that AI models cannot be hoisted by their own petards on such matters. But their creators might say something like this…

If anything, GPT’s response demonstrates how closely neural networks parallel human brains. The AI “thinks out loud” and is transparent about its probabilistic response, which lets it proclaim math that doesn’t quite math. But then it continues to reason, much like a human brain with all its synaptic sorcery, and eventually maths correctly.
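To make “probabilistic response” concrete, here’s a minimal, hypothetical sketch (not any real model’s code; the candidate tokens, the scores, and the sample_next_token helper are all invented for illustration) of why a decoder that samples from a probability distribution can emit a wrong token first and let the continued reasoning walk it back:

```python
import numpy as np

# Toy sketch: a language model scores every candidate next token,
# softmax turns those scores into probabilities, and the model *samples*
# from them, so a plausible-but-wrong token can win any given draw.
rng = np.random.default_rng(seed=42)

tokens = ["70", "80", "90", "100"]          # hypothetical candidate answers
logits = np.array([2.1, 1.9, 0.3, -1.0])    # made-up scores from the network

def sample_next_token(logits, temperature=1.0):
    """Draw one token index from the softmax distribution over candidates."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Nothing forces the single most likely token every time; "wrong math"
# can be emitted early, then corrected as generation continues.
picks = [tokens[sample_next_token(logits)] for _ in range(10)]
print(picks)
```

Run it a few times and the “wrong” answers show up at roughly their assigned probabilities, which is the whole point: the output is a draw, not a verdict.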

Outdated training data means the model starts pulling from older patterns before it adjusts to the present. It might even be scraping data produced by nostalgic Millennials such as myself, who joke about 1995 being only 10 years ago and wallow in the idea that Destiny’s Child’s first album dropped closer to the Watergate scandal than to right now (true story). Then it checks against things it probably should have consulted to begin with (calculators, anyone?), checks again a few more times, and moves from being wrong to hedging to confidently relaying a response with a cheeky wink.
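And yes, the arithmetic in that factoid checks out. A quick sanity check, assuming Nixon’s 1974 resignation as the Watergate endpoint and Destiny’s Child’s self-titled 1998 debut (“right now” is simply whatever year the script runs):

```python
from datetime import date

WATERGATE = 1974          # Nixon resigns
DEBUT_ALBUM = 1998        # Destiny's Child (self-titled) released
now = date.today().year   # e.g., 2025

to_watergate = DEBUT_ALBUM - WATERGATE   # 24 years
to_now = now - DEBUT_ALBUM               # 27 years as of 2025

print(f"Debut -> Watergate: {to_watergate} yrs; debut -> now: {to_now} yrs")
print("Closer to Watergate!" if to_watergate < to_now else "Not anymore.")
```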

This isn’t to say that GPT can’t be blisteringly and confidently wrong, even with prompt clarification and multiple attempts at massaging out something closer to the truth. That’s most likely because it relies on fallible humans to provide verifiable data. Without any way to optimize answers for “truth,” models do what humans do: tell stories, hedge, prevaricate, lie, do bad math, and sometimes, eventually, suss out the truth. And ultimately, we have more up-to-date data than they do.

Some of the best work I see ChatGPT do is picking up on my allusions or metaphors and connecting them to texts that are fairly complex and esoteric. Because so few people can truly be authoritative on these texts (and fewer still have actually tried to render them interpretable), we’re not really after the truth here.

In my “research” for this post, I wondered if ChatGPT could recreate the above prompt with GPT2o for me. It replied that it couldn’t, but that I could ask it to simulate what it would have done when its neural net was, say, more deficient.

I concluded that this proved nothing if I couldn’t ask the “real” GPT2o, and it got me thinking about Baudrillard’s work on simulacra, and about my own association of all that with Alice Through the Looking Glass.

See, some of this is just poetic. I love performing epistemological burlesque!

Even the quote isn’t a half-bad translation of “Il ne s’agit plus d’imitation, ni de redoublement, ni même de parodie. Il s’agit d’une substitution au réel des signes du réel…” (roughly: “It is no longer a question of imitation, nor of duplication, nor even of parody. It is a question of substituting the signs of the real for the real…”). But what got me is that it traced my instinctive leap from Jean Baudrillard to Lewis Carroll. It must really be scraping the bottom of the barrel (or Reddit), and it’s probably making things up. But I’m not trying to trick it into saying something right, and I can fully understand that it can’t say anything “right” because that’s simply not how postmodernism works.

Baudrillard, with all his denials of reality, might ultimately be GPT’s real wheelhouse, but it’s also of its time: the present. I’m guessing it could come up with a really good out-of-left-field meme. But I won’t waste any image tokens to find out; I’ve got some old Vines to watch.


One response to “Hallucinating or Elucidating? AI’s Strange, Wrong, Hyperreal Honesty”

Dan Garmat:

    Does “synaptic sorcery” imply the human brain is also hallucinating?
