The message arrived at 11 PM: "We're pushing the launch tomorrow. They approved everything except the guardrails you suggested."
I stared at my phone in the darkness, feeling the familiar tension between technological progress and responsible deployment. After all the work I had put into this AI project—one that would affect millions of users—critical safety measures were being sidelined for speed to market. This moment crystallized what I'd been struggling to articulate since my work began at the intersection of technology and human consciousness: technical solutions alone cannot address the challenges we face.
My journey into this space wasn't planned. It began with my own burnout when I realized how profoundly my relationship with technology had shaped my attention, relationships, and sense of agency. What started as personal inquiry evolved into research and collaborations with technologists, ethicists, and Indigenous knowledge keepers—all grappling with how we might align our technological capacities with our highest human potential.
Two weeks ago, I released an essay called "The Net of Intention," exploring how our unconscious relationship with technology undermines our attention and agency. The piece examined how we might design and use technology intentionally rather than allowing it to exploit our attention for profit—themes that will be magnified with AI.
After attending an AI conference in Menlo Park and reflecting on conversations that followed, I realized something essential was missing. While I explored how AI mirrors our consciousness in the section "AI as Our Mirror," this perspective alone doesn't capture the holistic reality we're facing. What emerged was a deeper understanding of the interconnected nature of our AI future.
As our technological capacities advance exponentially, we find ourselves at an unprecedented inflection point. To truly understand the path forward, we must recognize the interdependence of systems design, personal consciousness, and planetary impact. These three dimensions form a kind of three-legged stool: if any one leg falters, the entire structure becomes unstable.
Beyond Binary Thinking: The False Polarity of Regulation
Our discourse around AI governance often falls into reductive binaries: open versus closed, regulated versus unregulated, centralized versus decentralized. These polarities obscure the nuanced reality that effective governance requires balance rather than extremes.
In spring 2023, I attended a dinner in DC with tech executives and policymakers. I witnessed firsthand how regulatory frameworks are consistently outpaced by technological advancement. By the time policies addressing large language models were being drafted, the industry had already moved on to even more capable systems.
The polarity between centralized control and unrestricted openness represents a false choice. An example that illustrates this complexity is the Colorado AI Act of 2024. This legislation mandates AI developers to exercise reasonable care to prevent algorithmic discrimination and requires clear disclosures to consumers when interacting with AI systems. While it aims to safeguard against AI misuse, it also acknowledges the need for innovation, demonstrating that neither complete openness nor total control offers a comprehensive solution.
Instead of a one-size-fits-all regulation, it’s worth exploring a governance ecosystem with multiple layers: foundational safety standards established through thoughtful regulation, transparent systems that enable external audit and contribution, and ethical frameworks that evolve alongside technological capabilities.
But even perfect governance frameworks cannot succeed without addressing the consciousness of those who create and use these systems—bringing us to the second leg of our stool.
Consciousness as Infrastructure: The Inner Dimension of AI Development
In boardrooms and engineering labs across Silicon Valley, decisions made in moments of stress, ambition, or fear ripple through algorithms that influence billions of lives. The state of mind from which we design technology is not separate from the technology itself.
Central to this dynamic is the psychological concept of locus of control—our beliefs about whether we can influence outcomes in our lives or whether external forces determine them. This fundamental orientation profoundly influences how we design, regulate, and interact with technology.

I remember leading a workshop at Google where I asked participants—ranging from executives to engineers and product managers—to sit in silence for just two minutes before discussing a challenging design tradeoff. The difference in conversation quality was striking. After the brief pause, participants listened more deeply, considered wider implications, and found creative solutions that balanced competing interests. This simple intervention fundamentally altered the decision-making process and shifted their locus of control inward—from reacting to external pressures toward responding from a centered place of agency.
Without conscious awareness of our motivations, blind spots, and biases, even the most sophisticated AI safety measures will prove insufficient. Joseph Weizenbaum, creator of the ELIZA program and author of "Computer Power and Human Reason," warned decades ago about what's now known as the "Eliza Effect"—our tendency to attribute human-like intelligence and understanding to machines that merely simulate comprehension. This phenomenon becomes increasingly dangerous as AI systems grow more sophisticated, creating ever more convincing illusions of understanding. Our relationship with technology—whether exploitative or mutually enhancing—reflects our relationship with ourselves and each other.
As AI systems increasingly mediate our experience, many people report feeling that technology is happening to them rather than for or with them. This external locus of control breeds resignation, anxiety, and disengagement from crucial technological governance questions.
For more on the necessity of consciousness as infrastructure, feel free to revisit the "AI as Our Mirror" section of The Net of Intention. Yet self-awareness and governance frameworks, however robust, cannot address the third critical dimension: our technology's physical impact on the planet that sustains us.
The Material Reality & Regenerative Potential of AI
The ethereal nature of digital technology often obscures its very real material consequences. Each time we prompt a large language model or generate an image, we initiate processes that consume energy, require water for cooling, and depend on minerals extracted from the earth. AI's carbon footprint is not metaphorical but measurable—and growing at an alarming rate.
Training recent AI models like GPT-4 has a significant environmental footprint. According to Harvard Business Review, by 2026, the computing power dedicated to training AI is expected to increase tenfold, leading to a surge in energy and water usage. The global AI energy demand is projected to exceed the annual electricity consumption of countries like Belgium, while data centers' substantial water usage for cooling can exacerbate scarcity in regions already under stress, from Arizona to Chile.
Understanding these material realities isn't about inducing guilt or halting progress—it's about empowering humans to make conscious choices with full awareness of their consequences. We can each take meaningful action: developers can optimize model efficiency, businesses can prioritize green computing infrastructure, and people can become more intentional about their AI usage. Companies like Hugging Face are enabling this agency through tools like their carbon impact calculator, making previously invisible environmental costs visible. Meanwhile, companies like Salesforce are integrating energy efficiency alongside performance metrics through initiatives like the AI Energy Score, redefining what constitutes 'state-of-the-art' to include environmental impact.
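For developers who want to make that footprint concrete in their own work, one widely used open-source option is the codecarbon library (a different tool from the specific calculators named above). Here is a minimal sketch of wrapping an inference call in an emissions tracker; the model and prompt are placeholders, and the numbers it reports are estimates, not audited measurements:

```python
# A rough illustration of making AI's energy cost visible in code.
# Uses the open-source codecarbon library; the model and prompt are placeholders.
from codecarbon import EmissionsTracker
from transformers import pipeline

tracker = EmissionsTracker(project_name="inference-footprint-demo")
tracker.start()

# Hypothetical workload: a single text-generation call.
generator = pipeline("text-generation", model="gpt2")
output = generator("The future of regenerative technology is", max_new_tokens=50)

emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for this run
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
print(output[0]["generated_text"])
```

Even a rough estimate like this turns an invisible cost into something a team can see, compare, and choose to reduce.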
Yet despite these challenges, AI also holds unprecedented potential to support planetary healing—if we develop it with explicit intention that integrates both technological innovation and ancestral wisdom. This integration is not merely beneficial but essential, representing a profound reconciliation of seemingly opposing worldviews that have been artificially separated in modern thinking. While technological innovation drives forward with its empirical, efficiency-oriented perspective, ancestral wisdom offers what technology alone cannot: a deep understanding of interconnectedness, cyclical thinking that spans generations, and ways of knowing that encompass emotional, spiritual, and relational dimensions beyond mere data processing.
In Sanikiluaq, an Inuit community in Nunavut, Canada, the PolArctic project exemplifies this integration, combining traditional Indigenous knowledge with AI to identify previously undiscovered fishing locations. By weaving local land and ocean wisdom with scientific data and remote sensing, the project developed an AI model that supports sustainable inshore fishing while helping the community adapt to climate change. This approach honors the community's intimate, multi-generational relationship with their bioregion—a relationship based not on extraction but on reciprocity—while amplifying this knowledge through technology to address contemporary challenges.
These applications offer glimpses of what researchers call "planetary intelligence"—an emergent property where Earth's living systems and technological components collectively exhibit intelligence at a planetary scale. This perspective challenges our traditional views of intelligence as solely individual or human-centric. When Indigenous intergenerational knowledge is integrated with technologies like Climate TRACE's emissions tracking systems, we enhance this planetary intelligence by fostering more sustainable and holistic approaches to managing Earth's complex systems.
Achieving this potential requires expanding our metrics beyond just technical performance and profit—complementing them with equally rigorous assessments of AI's impact on biodiversity, climate stability, and human flourishing. What if our benchmarks for "state-of-the-art" AI included not only processing speed and accuracy but also carbon efficiency, ecosystem benefits, and well-being outcomes? This requires long-term thinking that considers impacts across centuries rather than quarters—reflecting an expanded consciousness that encompasses future generations and non-human life in our ethical considerations.
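To make that idea tangible, here is a toy sketch of how a benchmark might blend task accuracy with energy efficiency into a single "state-of-the-art" score. The weights, energy budget, and model figures below are illustrative assumptions, not a real evaluation standard:

```python
# A toy sketch of an expanded benchmark: blending accuracy with carbon
# efficiency. All weights and figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelReport:
    name: str
    accuracy: float             # fraction correct on some benchmark, 0..1
    kwh_per_1k_queries: float   # measured or estimated energy cost

def composite_score(report: ModelReport, energy_budget_kwh: float = 5.0,
                    accuracy_weight: float = 0.7) -> float:
    """Weighted blend of accuracy and energy efficiency (higher is better)."""
    efficiency = max(0.0, 1.0 - report.kwh_per_1k_queries / energy_budget_kwh)
    return accuracy_weight * report.accuracy + (1 - accuracy_weight) * efficiency

reports = [
    ModelReport("large-model", accuracy=0.92, kwh_per_1k_queries=4.0),
    ModelReport("small-model", accuracy=0.88, kwh_per_1k_queries=0.5),
]
for r in sorted(reports, key=composite_score, reverse=True):
    print(f"{r.name}: composite {composite_score(r):.3f}")
```

In this toy example the smaller, more efficient model can outrank the larger one, which is precisely the kind of shift in what we reward that an expanded definition of "state-of-the-art" would invite.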
Weaving the Threads: The Integrated Approach
The three dimensions—governance systems, consciousness development, and planetary impact—cannot be addressed in isolation or by individuals acting alone. True transformation requires us to move beyond self-focused perspectives toward collaborative action that recognizes our fundamental interconnectedness.
This shift from "I" to "we" represents our most promising path forward. No single technologist, company, or government can adequately address AI's challenges. Only by weaving strong networks of collaboration—across disciplines, cultures, and worldviews—can we create systems that reflect our collective wisdom rather than amplify our individual shadows.
Small, local actions will not transform global AI systems overnight, but they point to a crucial truth: meaningful change begins in our immediate circles. A teacher helping seniors navigate new technology shifts their relationship with AI from fear to agency. A neighborhood watershed-monitoring project creates data that informs local environmental decisions. A family's shared technology guidelines strengthen a community's consciousness around how it uses these tools.
The most powerful actions often happen not in corporate boardrooms or legislative chambers but in living rooms, classrooms, and community centers. Being kind to your neighbor, hosting a conversation, or volunteering locally creates ripples that extend far beyond what we can measure.
What we face now is not merely a technological challenge but an evolutionary invitation to strengthen each leg of our three-legged stool. We must craft governance systems that balance innovation with responsibility, mature our consciousness to match our technological power, and ensure that our digital creations regenerate rather than deplete the planet that sustains us. If any one of these dimensions falters—if we create perfect regulations without the consciousness to implement them, or develop sustainable technology without ethical governance, or expand awareness without addressing material impacts—the stool cannot stand.
The stability of our future rests on our ability to hold these three dimensions in dynamic balance. When governance, consciousness, and planetary impact align, we create the foundation for technologies that, like the natural systems they mimic, leave the places they touch more alive, more diverse, and more whole than they found them. This integration—bringing together our most advanced innovations with our most ancient wisdom—represents our greatest hope for creating a future where technology serves as a conduit for our highest human capacities rather than amplifying our deepest shadows. The three-legged stool, once balanced, becomes not just a metaphor but a blueprint for planetary flourishing.
With gratitude,
Rachel
Emergence with Rachel Weissman is a weekly essay series on human potential for regenerative progress, interlacing art & design, ecology, futurism, mystical wisdom, and technology.
If you find this writing valuable, share it with a friend, and consider subscribing if you haven’t already.