By Ryan Gravette, IDLA Technology Director
In 1995 Ron Howard released a film depicting the story of the Apollo 13 mission. (Was it really 29 years ago?!?) Tom Hanks plays Jim Lovell navigating the adversity of catastrophic equipment failure. Prior to the movie’s release, public perception of Apollo 13 centered on its failure to land on the moon, remembered as the “one that didn’t make it.” Following the film’s release, that perception shifted. Instead of a failure, the mission became an event that highlighted human ingenuity, perseverance, and overcoming adversity. The medium and the acting allowed individuals to “live the moment” within the mission. Was it a recording of the mission? No, everything was staged. It was a re-enactment of a historical event through the lens of Ron Howard. Yet not being real didn’t prevent it from being classified as a docudrama grounded in actual events. The end result was that the film changed public perception of an event; it changed how people thought.
I recently took an old family photo of my grandparents and uncle and uploaded it as the prompt for video creation in a tool called Kling. It took the photo, cleaned it up, and then animated interactions in a five-second clip. The photo happened, I was there, but the actual scene depicted in the video never did. It wasn’t them, but it was. The movie made from my family photo moved me to tears; my family members seemed more real even if they were not. This blending of fiction and non-fiction has existed in writing for some time, but these are new mediums, ones that create a fictional world from non-fictional sources, turning photos into actors directed by the AI.
Who, then, controls the perceptions of the individuals depicted by the AI? In my animated photo, the wind was blowing in the background, moving the trees, while my grandfather turned and smiled at my grandmother. The AI chose an interaction that never happened, and my perception of the event surrounding the picture was forever changed. It wasn’t at the scale of Apollo 13, but it did change my memory of the event. The director of that interaction wasn’t me; it was the AI.
As OpenAI, Google, and Anthropic continue developing tools, additional video capabilities are likely to follow. This coincides with a push toward Artificial General Intelligence (AGI): artificial intelligence that matches or exceeds human intelligence across many domains. Combined, I fear that this level of directorial control might not stop at animating family photos; it may play a role in reshaping our perspectives on historical events and crafting public perception at an unprecedented scale.
The transition from AI as a tool that retrieves and presents information to one that actively shapes narratives represents a profound shift. Just as Ron Howard’s interpretation of Apollo 13 rewrote public perception through the power of cinema, AI systems are now becoming directors of our collective and personal histories. But unlike human directors, whose biases and creative choices we can study and understand, AI systems operate through complex algorithms whose decision-making processes often remain opaque.
When my AI-animated family photo showed my grandfather turning to smile at my grandmother, it wasn’t drawing from historical truth but creating an emotionally resonant moment that felt true. This ability to generate convincing alternate realities raises important questions: Who curates these AI-generated narratives? What values and perspectives are embedded in their training? How do we maintain the distinction between historical fact and AI-enhanced storytelling when the line between them becomes increasingly blurred?
While we certainly need micro-oversight of AI (ensuring appropriate content generation for students, for instance), we must also grapple with the macro implications of AI systems potentially becoming the primary architects of our historical understanding. Just as the Apollo 13 film transformed a “failed mission” into a triumph of human ingenuity, AI could reshape countless other historical narratives – for better or worse.
We’re entering an era where truth and creative interpretation are becoming more intertwined than ever before. The challenge isn’t simply to regulate AI’s output but to develop a new literacy for this age of collaborative storytelling between humans and machines. We must learn to appreciate the enhanced emotional connections AI can create while remaining mindful of the authentic historical record it draws from.
The future isn’t just about AI looking up human knowledge – it’s about a complex interplay between human memory, artificial enhancement, and shared understanding. As we move forward, we must ensure that this collaboration enriches rather than replaces our connection to genuine historical moments and personal memories. The power to reshape perception comes with the responsibility to preserve truth, even as we embrace new ways of experiencing and sharing our stories.
Apple, Google, OpenAI, and Microsoft are all working on ways to determine whether something was generated with AI. Idaho’s own swear.com is working on ways to identify authentic human interactions, but will this be enough? Does the truthfulness of an image impact how it is imprinted on our memories? Malcolm Gladwell offers some insights into this in his new book “Revenge of the Tipping Point,” and we will explore more calls to action in a future blog post.