The Fine Line Between Reality and AI

Navigating the World of Paul T. Goldman, George Santos, and ChatGPT 


Last night I watched the finale of Paul T. Goldman on Peacock, saw the latest George Santos “story” on the news, and rewrote my “about” page here on LinkedIn using ChatGPT. I woke up at 4 AM with a realization that many of you may have already had: we could be seeing the end of “reality” as we’ve known it.

Let me back up. In 2009 I released a documentary film called Beer Wars. In the film, I followed two entrepreneurs and their families for over three years as they struggled to succeed in a competitive world run by giant multinationals. It turned out that making a documentary is harder than making a scripted film (which I had done before). Several of my editors had worked in reality TV, and they told me stories of how fake their shows were. They weren’t a good fit for my vision because I was after the truth, not a bent version of it.

So fast forward to 2023. We are on the precipice of an AI revolution — the one I’ve been waiting for since watching The Jetsons on Saturday mornings.

I’ve been using AI tools for years. We all have. Alexa, Siri, navigation, facial recognition, etc. have become part of our daily lives. But we are at an inflection point. The release of ChatGPT and DALL-E 2 from OpenAI has brought AI’s potential to the mainstream. Much has been written about the ethical and moral dilemmas, loss of jobs, deepfakes and other issues, but I want to address a broader question. Going forward, how do we discern what is real?

Paul T. Goldman is a series that’s hard to define. Suffice it to say that all is not as it seems. And Paul T. Goldman (who plays himself) isn’t even the protagonist’s real name. You must watch this mind-bending show to understand how difficult it is to tell fact from fiction. George Santos (not his real name) is a US Congressman whose life story is just that. Made up. And it seems to get stranger by the day.

And then there’s ChatGPT, which I’ve been using daily (instead of Google). Since I’m not famous, I decided to teach it about myself by asking it to edit my bio on LinkedIn. And then the bio on my website. And then last night, I asked it about myself (I mean, who hasn’t done that on Google?). And boom, it gave me an answer — one it did not have in early December when I first asked. I’m basically “training” the AI, and it accepts whatever I tell it. And so, here is my concern. As we move forward and integrate AI into everything, how do we tell what is real? What do we believe?