Star Trek and a Brighter AI Future
What We Can Learn from Gene Roddenberry's Vision
I needed a new show to play in the background while working, so the other day I put on Star Trek: The Next Generation, which I liked as a kid.
It was fun watching it after a long time away. But within a handful of episodes I realized I hadn't fully understood the show the first time around in the '90s. Watching it now, I saw a more profound beauty in Star Trek than I had initially appreciated, and I understood much more of what Gene Roddenberry (the creator) was trying to say.
Roddenberry’s vision wasn’t just about cool spaceships or weird aliens. It was about a more mature humanity. A civilization that had learned to align its technological power with good judgment. A society where engineering capability was matched by actual ethical restraint. That assumption — that humanity CAN grow up and become wise — is part of what I think gave the show its special, hopeful quality.
But in the age of AI we live in today, that assumption feels like it's being seriously put to the test.
As AI has permeated every aspect of life over the last couple of years, the stark risks of the underlying technology, and our society's unpreparedness to manage it, have become obvious.
We are watching powerful AI get embedded into education, finance, healthcare, hiring, and media faster than our institutions or experts can meaningfully respond. Tools that can generate persuasive text, realistic video, and production-grade code are being deployed at scale, often with an incomplete understanding of their impact.
The capability of AI is truly breathtaking, but so is its potential for harm.
I asked myself what it would take for us to get to Roddenberry's Star Trek utopia, and the answer was easy: a rollercoaster of societal changes over hundreds of years, and maybe even humanity getting perilously close to self-annihilation. There is no more powerful way for humans to learn than by trying and making mistakes. I wish we could learn from the prior experiences of others or from philosophical teachings like Roddenberry's, but unfortunately we rarely seem capable of it. It's like our brains are wired so that we HAVE to put ourselves in danger to learn.
So many ethical dilemmas were raised in Star Trek, but somehow it seemed like Picard & Company always chose the right way out with their 24th-century wisdom and sense of ethics & values. But what about the trials and tribulations of the 400 years leading up to when Star Trek was set?
Because that’s where we are now — in the “messy middle.”
Everyone now senses the reckoning that AI is likely to bring to the world. We inherently know that nothing will be the same 5 or 10 years from now. The exponential curve of technological advancement, where everything moves extremely fast toward Blade Runner levels, feels like it's already here.
At the beginning of any major transformation, humans tend to worship exclusively at the altar of technological innovation. It's only when something terrible happens that we wake up. We always get caught up in what's new. That's our schtick. Then, when things start blowing up around us, we realize what we should've been doing all along: finding a balance. I don't think we're going to change our innate human psychology anytime soon, but optimizing only for innovation when it comes to AI may put us on a course we can't change.
Picard once said, "It is possible to commit no mistakes and still lose." But right now we aren't even close to committing no mistakes. For example, where are all the AI safety companies? You rarely hear about them, and when you do, it's often because they're falling apart after a short run. And forget about wise policies that allow capitalism to flourish without destroying things: very few world leaders understand even the rudiments of how AI works, let alone AI's true implications.
Roddenberry believed we could be innovators of artificial intelligence & thinking machines (Commander Data, for example) and still be peaceful creators who don't destroy ourselves or the natural world around us. I generally believe that too. In fact, we already came close to an existential calamity in the early days of the nuclear age in the 1940s and '50s, and somehow still came out of it mostly intact.
On the other hand, nuclear weapons can't think on their own.
In the episode "The Measure of a Man," Data's right to exist as a self-determining, independent form of life was put on trial, and a bunch of smart people came together to argue and debate the issue. That's the part that feels foreign at the moment. We are building thinking machines, or at least systems that convincingly simulate thinking, and the dominant cultural posture is acceleration. Faster models. Bigger models. More capable models. With the incentives almost entirely skewed toward technological expansion.
In Star Trek, there is always an underlying belief that technology must be integrated into a broader ethical framework. Roddenberry's future civilization does not worship innovation for its own sake. It assumes progress is meaningful only if it aligns with values. Exploration without principles is not celebrated (remember the Prime Directive).
Technologists who grew up in the '80s and '90s watching sci-fi like Star Trek knew we were at the beginning of something remarkable in the history of humankind. We watched computing move from mainframes to personal computers to the early Internet in real time. The subsequent acceleration toward social media, cloud computing, Bitcoin, and now AI didn't feel entirely foreign to us. At some level, we had already imagined it through shows like Star Trek and books like I, Robot, The Hitchhiker's Guide to the Galaxy, and Dune.
The future Roddenberry imagined wasn't anti-innovation, though. It wasn't anti-growth. It wasn't anti-capitalist. It was just pro-maturity. The Federation didn't stagnate; it explored aggressively. It built astonishing technology. It pushed boundaries constantly. But it did so inside a framework that assumed power required discipline. AI does not require us to abandon markets or competition. It requires us to upgrade them with smarter incentives. If we are going to unleash systems that rival human cognition, then the operating system of capitalism itself has to become more sophisticated.
The truth is that unbounded acceleration can destroy markets. Markets depend on trust. Trust depends on stability. Stability depends on governance. If AI erodes all that, the very engine of growth gets undermined. A brighter AI future isn't about slowing down innovation; it's about aligning innovation with lasting markets.
Roddenberry believed humanity could build thinking machines and still remain peaceful creators. I think that’s right. But it won’t happen automatically. It will happen if the people building and funding AI decide that winning the future also means preserving the conditions that make prosperity possible.