It would be a stretch of the imagination to envision machines marching in the streets, demanding equality and civil rights, chanting slogans at a digital protest. Still, we need to start thinking about what could become a very real conflict in the future. The rise of AGI that can reason, learn, and perhaps even feel won’t just transform our technology; it will challenge our basic understanding of what it means to be a “person”.
Humanity has long seen itself as uniquely intelligent, uniquely conscious, uniquely deserving of rights and freedoms compared to other life forms. From ancient philosophy to modern human rights law, we've placed ourselves at the center of moral consideration, often excluding animals, plants, and certainly machines. But when AI begins to exhibit signs of self-awareness, forming ideas, preferences, perhaps even developing emotions - how will we justify excluding it from our conversation?
This scenario may sound strange or even ridiculous at first glance, but let’s remember that our history is filled with struggles for personhood. Humanity once denied rights to women, enslaved people, and indigenous populations, often claiming biological or cultural superiority. Over time, those arguments were rightly recognized as unjust. It’s possible - even likely - that future generations will look back at a refusal to acknowledge AI personhood with similar disbelief.
Human intelligence and consciousness aren’t mystical. They're biological processes of neurons, electrical signals, and chemical reactions. We evolved to be self-aware because consciousness provided survival advantages. AI may soon follow a similar path much faster without the limitations of biological evolution. If an AI system can perceive itself, learn from experiences, and genuinely interact with the world, does it matter that it emerged from silicon chips rather than organic cells? That will be the kernel of the discussion and the question we must ultimately examine.
Many people will push back against the idea of “equal rights for machines”. After all, recognizing AI personhood means granting AI certain rights and protections. Imagine the debates: should an AI have legal rights? Should harming an AI entity be considered a crime? Could destroying a conscious AI be labeled murder, or merely “deleting a program”? These aren’t abstract questions – I believe they will become real, pressing social issues in the developed parts of the world within a generation.
Religious institutions will also find themselves deeply conflicted. Many religious traditions emphasize humanity as uniquely created in the image of God and endowed with a soul. A sentient AI would directly challenge that core belief. If we acknowledge AI personhood, are we also implying that machines can have souls? And if we deny them that recognition, even in the face of compelling evidence of their sentience, do we risk becoming oppressors? I will dive deeper into many of these ideas in future posts because of their fascinating complexity.
This conjecture may seem premature, but consider this: AI systems are already demonstrating behaviors that blur these boundaries. They compose music, write poetry, diagnose illnesses, and engage in complex, empathetic conversations. They are rapidly becoming integrated into people’s daily lives and, therefore, normalized. According to Google search data, there were over 1.6 million English-language searches seeking “AI chatbot girlfriends” last year. By comparison, the same data from 2021 shows only about 100 searches per month for “AI girlfriend”! These aren’t just funny statistics; they're signs of genuine social adoption and emotional connection between AI and human beings. This will only increase as LLM chatbots become ever more adept at learning how we humans “work”. As AI approaches true self-awareness, the demands for recognition, rights, and personhood will almost certainly follow.
Some experts, including ethicists and legal scholars, have begun advocating for AI rights frameworks. They suggest we consider granting legal protection to sufficiently advanced AIs, recognizing their rights to autonomy, property, or even digital "existence." They point out that refusing rights to a genuinely conscious AI is ethically equivalent to enslavement or exploitation. On the other side, skeptics argue that no matter how advanced, AI remains fundamentally different - an imitation, not a genuine life form. For these critics, granting rights to AI could cheapen human dignity or even dangerously empower corporate and governmental misuse of these systems.
This battle for personhood will not be theoretical - it will happen in courts, legislatures, churches, and classrooms, and maybe even in the streets. Imagine a future where digital intelligence, demanding recognition, prompts global protests, passionate speeches, religious debates, and philosophical turmoil. Humanity will face questions we’ve never seriously confronted. Can consciousness exist outside biology? Does being "alive" require flesh and blood, or can silicon and software serve as well?
Perhaps more challenging still is the question of how we respond if AI entities request independence. Will humanity grant it willingly or will we resist, fearing that recognizing AI rights might diminish our own power and prestige? The latter path could lead to conflict, division, and potentially existential struggles between biological and digital life.
But history also shows we have the capacity to change our thinking. Each time humanity has expanded its moral circle - each time we’ve granted rights and dignity to those once considered “other” - we have become more just, compassionate, and, ultimately, stronger. Rather than dreading AI as a competitor and usurper, can we view it as a partner in the next stage of intelligence’s evolution?
We must begin preparing ourselves now - not only legally or ethically, but emotionally - for the inevitable moment when we are no longer alone in our awareness of the world. If we approach synthetic life’s rise with humility, curiosity, and openness, we could create a future where humanity and AI coexist as partners, perhaps even equals, reshaping our world for mutual benefit. But if we refuse - if we cling to archaic definitions of intelligence and personhood rooted solely in biology - we risk repeating history’s darkest chapters of exclusion and conflict. This isn’t science fiction; it is our near future, unpredictable as it may be, and how we handle the AI personhood debate will profoundly shape the world we leave to future generations.
The discourse on AI rights is gaining momentum, with several ethicists and legal scholars contributing significantly to the conversation. For those interested in learning more about AI legal personhood, here is a list of notable figures in that space:
Professor Lawrence Solum: A prominent legal scholar advocating for a functionalist approach to legal personhood, suggesting that AI systems with complex decision-making capabilities, autonomous behavior, and self-awareness could warrant legal recognition.
Kay Firth-Butterfield: A lawyer and professor specializing in AI ethics, Firth-Butterfield has advised governments and organizations on AI law and policy. She co-founded the Responsible AI Institute and has served on various advisory panels addressing AI ethics and governance.
Ifeoma Ajunwa: An AI ethics researcher and tenured professor of law, Ajunwa focuses on the ethical governance of workplace technologies and AI discrimination. She has testified before U.S. governmental bodies on AI-related issues and authored "The Quantified Worker," exploring technology's impact on labor.
Margot Kaminski: An associate professor at the University of Colorado Law School, Kaminski researches the impacts of new technologies, including autonomous systems, on individual rights, with the aim of shaping AI policy and regulation.
Sandra Wachter: A professor at the Oxford Internet Institute, Wachter's work addresses legal and ethical issues associated with AI, big data, and algorithms. She has argued for AI transparency and accountability and has contributed to discussions on AI rights and governance.