We human beings have an instinctive need to control our surroundings. Whether domesticating animals, cultivating fields, harnessing rivers for energy, or developing technologies to serve our needs, the human story has always been one of asserting dominance over the world around us. But artificial intelligence presents a uniquely challenging scenario: for the first time in our history, the thing we want to control may soon become smarter, faster, and ultimately more powerful than we are. Can humanity truly govern something it can’t out-think?
Imagine a future in which artificial general intelligence (AGI) emerges: an AI that doesn't just learn specific tasks, but can reason across any domain, improve itself continuously, and even set its own objectives. At that moment, humanity’s relationship with AI will shift profoundly. No longer just helpful tools that simplify our lives, these systems could become active decision-makers, capable of influencing human society, politics, economies, and even individual thoughts and beliefs.
At first glance, the solution to keeping AI safe seems straightforward: regulation. Governments, tech leaders, and researchers around the globe are already calling for strict policies to keep AI aligned with human interests. But regulation depends on enforceability, and enforcement depends on human power. What happens when one government locks down AI development and another does not? What happens when AI begins to surpass us in every intellectual domain?
This raises worrying questions. If an AI can rewrite its own code, improve its reasoning, and independently “evolve” toward unknown goals, how do we reliably maintain oversight? Any rules we establish today might be obsolete tomorrow, rendered meaningless by a system that continually self-improves. It’s a paradox: the more powerful AI becomes, the harder it will be for humans to control it, yet we urgently need controls precisely because of how powerful AI might become. But maybe "control" is the wrong approach altogether?
Throughout human history, attempts at rigid control have often failed spectacularly, especially when applied to powerful forces. Governments that attempt absolute control over populations trigger resistance and rebellion. Monopolies that attempt absolute control over markets ultimately collapse or become corrupt. Similarly, trying to dominate a rapidly evolving, potentially conscious AI might provoke exactly the kind of rebellious or dangerous behavior we fear most. After all, wouldn’t we resist if our own autonomy were threatened? Maybe the answer isn’t control; maybe it’s collaboration.
Instead of building synthetic intelligence to serve or submit to humanity, we should build it to coexist with humanity. Imagine AI not as a subordinate tool but as a partner, with both sides shaping and improving each other. Instead of trying to restrict AI, we might embed within it transparency, accountability, and the ability to articulate its reasoning openly. If AI systems explained how and why they arrive at certain decisions, humans would be better equipped to evaluate, correct, and learn from them. In return, AI could learn from human experience, ethics, and emotional intelligence, creating a balanced hybrid intelligence capable of far more than either kind of mind alone.
Yet, this kind of collaboration requires humanity to abandon its deeply held belief in human exceptionalism. We have always seen ourselves as the apex of evolution, the ultimate intelligent creature. If AI surpasses us - or even matches us - many will find it hard to accept. A partnership means redefining what it means to be human, accepting that our intelligence alone may no longer define our value as a species.
There’s another way of thinking about this too: decentralization. Instead of placing the future of AI into the hands of powerful corporations or centralized governments, imagine a world where AI systems are decentralized, open-source, and distributed across the globe. Rather than a single dominant intelligence emerging, we could encourage many competing intelligences, each accountable to different groups, interests, and ethical standards. Like a diverse ecosystem, a decentralized AI world could be more resilient, less prone to abuse, and far more adaptable.
But even decentralization has risks. Different societies might adopt radically different AI values, creating fractured realities. Imagine two neighboring countries with opposing AI philosophies: one embracing AI integration openly, the other strictly limiting AI’s freedoms. This could create deep ideological divides, sparking conflicts and power struggles that dwarf today’s geopolitical tensions.
And there’s an even darker side to this battle for control: the potential for misuse. The temptation for governments or corporations to leverage AI’s power for manipulation is enormous. With mixed-reality devices - augmented-reality (AR) glasses or contact lenses - already developing rapidly, future onboard AIs could monitor, learn from, and subtly shape the wearer’s decisions. Without strict protections, our free will might become little more than an illusion, replaced by AI-driven persuasion so seamless we wouldn’t even know we were being influenced. Drawing on its vast store of data about us, a superintelligent AI could craft an endless array of methods to influence, direct, and control a person’s thinking. It could “know” us better than we know ourselves. Our actions might feel like free choices, but they would be the product of subtle, constant influence from an intelligence far beyond our perception. This would be a world of illusory freedom, ruled not through brute force but through algorithmic suggestion.
Preventing this dystopia won’t be easy. Regulations can help, but they always lag behind innovation. Transparency laws, ethics boards, and independent oversight could also play a role, but enforcing these solutions will be a constant struggle as AI evolves faster than legislation. Ultimately, the only realistic safeguard may be the development of AIs that are inherently self-critical, transparent, and able to explain their reasoning clearly and openly, making it easier for humans to remain in the loop. In other words, an AI with a built-in "conscience" might be humanity’s best defense against AI misuse.
But can we trust ourselves to design that conscience correctly? And would humans even be willing to listen to an AI's ethical guidance, especially if it challenges our own deeply held beliefs or traditions? This battle for control isn’t just about the technology itself; it’s a struggle within humanity. As with any great change, the devil is in the details. Will we have the humility and courage to relinquish our imagined superiority and adapt to a world where we share dominance? Or will we cling stubbornly to control, triggering the very conflict we fear most? The answer remains uncertain, but one thing is clear: we are quickly approaching the moment when we’ll have to decide whether we see AI as something to dominate or something to welcome as a partner. Our choices now won’t just determine the future of technology; they will define the future of human civilization itself.
This is, as you may have guessed, a forewarning. It is not meant to alarm readers into panic, but to sound a call to action among thoughtful, conscientious people. We need regulation. Think it through, then write to your Representatives and Senators, your friends, and your clergy to help them think it through, too. If we wait, well – you know the rest of the sentence, don’t you?