The Net of Our Intention
A Call for a New Paradigm in How We Interact with and Design Technology
Moments before my alarm sounds, my eyes crack open. The room is silent except for the hum of the air purifier. Without thought, my hand reaches out—not for water, but for the sleek rectangle on my nightstand. Blue light floods my retinas as notifications cascade down the screen—my heart begins to race. My thumb moves in practiced ritual, scrolling through messages that won't matter by afternoon. Ten minutes vanish before I realize: the first conscious act of my day wasn't chosen—it was surrendered. In this small, ordinary moment, the first thread in my net of intention has already frayed.

This reflexive gesture, repeated in millions of bedrooms each morning, reveals the quiet battle for human attention that shapes our collective future. With each action—whether we're using technology or creating it—we either reinforce our net of intention through mindful purpose or allow it to unravel, letting our attention slip through holes tugged wider by algorithmic currents designed to capture rather than cultivate.
Our increasingly unconscious relationship with technology hasn't evolved by accident. It's been shaped more by the incentives of market forces than by the deeper rhythms of human values. Yet this isn't a simple tale of villainous tech companies versus innocent users. The reality, as I've discovered through my own struggles with digital boundaries, exists in the complex interplay between individual choice and systemic design.
Reclaiming our agency demands a journey from unconscious reaction to conscious creation—a path that begins with personal awareness but must extend to systemic transformation. The future of our creations, especially AI, depends on whether we design them as extensions of our highest aspirations or unwitting amplifiers of our unexamined shadows.
The Force of Intention
A few years ago, I found myself mindlessly scrolling through social media while my friend tried to show me her artwork. The look in her eyes as I half-attended to her masterpiece haunts me still. That evening, I wrote in my journal:
"Where was my intention? Where did it go?"
This personal reckoning launched my exploration into intention's power—not just as a concept but as a tangible force that shapes our experience.
The concept of intention predates modernity; the Greeks had their telos, and psychologists today discuss "theories of intention." These, however, only hint at intention's transformative power. Neuroscientist Andrew Huberman's research on "intentional priming" reveals how setting clear intentions reshapes our neural architecture. When we deliberately focus our attention, our brain's reticular activating system filters incoming stimuli, prioritizing information relevant to our intention while dampening distractions.

This neural reshaping isn't merely metaphorical. Through neuroplasticity, intention-focused practice creates new synaptic connections while pruning others, physically altering our brain's structure. What we repeatedly attend to becomes what we're primed to notice. Intention doesn't just direct our attention—it constructs the lens through which we perceive reality.
Yet intention doesn't operate in isolation; it's both an internal compass and a social force. When you or I set an intention, we orient our psychological and physiological energy toward a goal. When we share an intention—like the collaborative vision behind Wikipedia—we begin to co-author cultural and technological ecosystems. Wikipedia arose from a shared telos: the democratization of knowledge. This simple, potent intention challenged longstanding notions about expertise, authority, and wisdom, weaving a new strand into our collective net of human understanding.
A paradox emerges, however: although we live surrounded by devices engineered to capture attention, we rarely question the underlying why of these engagements. This unconsciousness is troubling because smartphones, AI-driven recommendations, and algorithmic feeds capitalize on primal reward pathways that evolved to help us seek nourishment and connection but are now exploited to keep our eyes on a screen.
"Men have become the tools of their tools," Henry David Thoreau warned in 1854; philosophers Marshall McLuhan and John M. Culkin later echoed: "We shape our tools, and thereafter, our tools shape us." Whenever our tools no longer serve our deeper purposes—drawing us instead into a vortex of infinite scrolls and notifications—intention has been subverted. Each intrusive alert or exploitative design choice punctures the net of our intention, allowing our attention to leak away drop by precious drop.
Consider this: reviewing my screen time from a couple of weeks ago, I found one day, a single day, on which I had picked up my phone 134 times. The number shocked me into recognizing how thoroughly my attention had been captured. Yet noticing these holes in our net of intention offers the first opportunity to mend them, beginning our journey toward greater consciousness of how we direct our most precious resource: our attention.
The Unconscious Attention Trap
We live in an attention economy, where apps and platforms battle for every spare second of our day. Our attention has become the ultimate currency—collected in fragments and converted into profit. Over time, these fragments accumulate into days, weeks, and years of our lives, creating enormous wealth for those who best capture (and monetize) our engagement.
To understand what we lose when our attention fragments, contrast two experiences I had earlier this month. One Tuesday, I allowed notifications to interrupt me throughout a writing session. My work felt labored and disconnected; three hours yielded two mediocre paragraphs. One Saturday, I turned off all devices and wrote in a notebook by a window. Time melted away as ideas flowed effortlessly, and in just ninety minutes I drafted an entire section requiring minimal revision. The difference wasn't talent or topic—it was the quality of my attention, paired with the intention behind that boundary. Uninterrupted focus created a flow where creativity flourished; fragmented attention produced only frustration.
Reducing humans to "users" in this race to monetize attention is subtly but profoundly dehumanizing. It suppresses our nuanced emotional, ethical, and spiritual dimensions, assigning worth primarily based on usage metrics. We become data points—engagement tracked, behavior predicted—while our creativity, compassion, and quality of connection are scarcely measured.
Systems thinking helps us see that these seemingly tiny moments of distracted scrolling form a self-reinforcing feedback loop: notifications beget engagement, engagement drives profit, and profit incentivizes more addictive designs.
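To make the loop concrete, here is a minimal sketch in Python. Every coefficient is an assumption invented for illustration, not any real platform's numbers, but the structure shows why the cycle compounds rather than stabilizes:

```python
# A toy stock-and-flow model of the attention feedback loop described
# above. All coefficients are invented assumptions, not measured data.

def simulate_attention_loop(days: int = 365, notifications: float = 10.0):
    """Each day: notifications capture engagement, engagement generates
    profit, and profit is reinvested in designs that send still more
    notifications. Each variable feeds the next."""
    engagement = 0.0
    for _ in range(days):
        engagement = 0.5 * notifications   # minutes captured per notification
        profit = 0.02 * engagement         # revenue per engaged minute
        notifications += 0.8 * profit      # profit reinvested in "stickier" design
    return notifications, engagement

n, e = simulate_attention_loop()
print(f"After one year: ~{n:.0f} notifications/day, ~{e:.0f} captured minutes/day")
```

Because each output is fed back as the next input, the loop grows exponentially until something outside it imposes a limit. In the attention economy, that limit is our finite waking hours.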
A 2017 study aptly titled "Brain Drain" found that the mere presence of a smartphone depletes our cognitive capacity, even when the device is unused. This drain highlights how reflexive attention to our devices eats into our ability to be fully present—whether we're problem-solving at work, playing with our children, or listening intently to a friend.
These aren't small losses. They're tears in the fabric of our lives.
"What is it you plan to do with your one wild and precious life?" poet Mary Oliver asked. That question has new gravity in an age where our attention—the essence of how we live—is systematically captured by institutions prioritizing profit over well-being. The average American now spends over seven hours a day on digital devices, largely in modes of passive consumption rather than active creation.
Each minute spent in unconscious digital consumption is a minute not spent in the full, embodied experience of being human. And these minutes add up. They become our lives.
The Systemic Drive—Profit Over People
While thoughtful design can make technology more humane, no single product or feature exists in isolation. They operate within a powerful economic framework—one where capitalism often values growth and profit above all else. As Tristan Harris, former Google design ethicist, puts it:
“We call it the race to the bottom of the brainstem, to light up more and more parts of your nervous system because if I light up more parts of your nervous system than the other guy, I'm going to get more of your attention.”
Designers, engineers, product managers, and AI specialists are routinely tasked with maximizing revenue, data capture, or market dominance. Within these incentives, success is measured by metrics—time on platform, click-through rates, ad impressions—not the depth of human connection or enrichment.
I've seen this firsthand. Over nearly a decade working in Silicon Valley technology companies, I sat through countless strategy sessions where executives openly discussed "product stickiness" as a marker of success. The goal was simple: make the user spend more time in the product. The metrics were celebrated because they drove profit; the human cost went unmentioned. As my personal meditation and consciousness practice deepened, I began to recognize how thoroughly profit incentives had eclipsed human well-being.
Yet we must acknowledge counterexamples—companies that balance profit with purpose. Patagonia's environmental commitment, Basecamp's rejection of venture capital to maintain its values, and Ecosia's tree-planting search engine demonstrate viable alternatives to extraction-based models. These outliers prove that business success doesn't require exploitative design. The choice between profit and ethics is often presented as inevitable, but these companies reveal it as a false dichotomy—one that limits our imagination of what capitalism could become.
Still, these examples remain exceptions. More commonly, the tension manifests in an "intimacy race"—where companies vie to be the first choice for users seeking convenience and connection, enticing them to share ever more personal data. Trust and intimacy yield richer datasets to be monetized through targeted advertising or sold to other entities. This dynamic is intensifying with artificial intelligence: AI systems require vast amounts of personal data to function effectively while simultaneously becoming more sophisticated at predicting and influencing user behavior. As the stakes grow, so do the methods for capturing user attention and data.
In this race, dark design patterns flourish: confirmshaming, hidden privacy settings, confusing unsubscribe processes. Each dark pattern is a mechanism that undermines user autonomy in the name of corporate metrics, creating holes in our net of intention. Over time, we risk designing our collective future into a state of mental and physical atrophy, as humorously yet unsettlingly portrayed in Pixar's WALL-E.
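To see what such a pattern looks like in practice, here is a small illustrative sketch. The dialog copy below is invented for the example, not drawn from any real product, but it captures the mechanics of confirmshaming: the option to decline is worded to shame the user's choice.

```python
# Two versions of the same unsubscribe dialog. The copy is invented to
# illustrate "confirmshaming"; it is not taken from any real product.

dark_pattern = {
    "message": "Are you sure? You'll miss out on exclusive deals!",
    "decline": "No thanks, I don't like saving money",  # shames the choice
    "accept":  "Keep my subscription",                  # the path of least resistance
}

intentional = {
    "message": "Unsubscribe from the weekly newsletter?",
    "decline": "Unsubscribe",                           # neutral, honest label
    "accept":  "Stay subscribed",                       # both choices equally easy
}

def render(dialog: dict) -> None:
    """Print the dialog as a user would encounter it."""
    print(dialog["message"])
    print(f"  [{dialog['decline']}]  [{dialog['accept']}]\n")

render(dark_pattern)
render(intentional)
```

The code difference is trivial; the intention behind the copy is everything. One dialog respects the user's autonomy, the other punctures it.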
The problem isn't "technology" or "capitalism" in themselves but the unchecked incentives that lead to exploitative design. Recognizing and naming these incentives is the first step in reform. The next step is evolving our economic models—through policy, new business structures, or grassroots movements—so that technology can serve life rather than drain it. Leaders in technology, education, policymaking, and beyond must question whether our present paradigms support more than just profit metrics.
As our journey toward consciousness deepens, we begin to see that repairing the net of our intention requires not just personal discipline but systemic reimagination. The question becomes: How might we design technology that strengthens rather than undermines human flourishing?
Designing Intentional Technology
At every level of technology creation—from the earliest product concept to the final interface—our choices convey assumptions about human nature and value. Intentional technology design does not treat ethics as an afterthought; it embeds them into the foundation. To do this effectively, we must see our tradeoffs plainly:
If a feature increases engagement by exploiting fear or anger, what does it cost people emotionally and cognitively?
If we prioritize frictionless sharing, do we sacrifice the introspective pause that fosters discernment and depth?
By making these tradeoffs explicit, designers and engineers can strengthen our collective net of intention, ensuring fewer opportunities for our attention to slip away unnoticed. What would technology look like if designed to nurture rather than deplete human potential? Instead of prioritizing clicks or "time on platform," intentional technology design asks:
How does this experience nurture the full human being—cognitively, emotionally, and spiritually?
A deeper layer of these tradeoffs lies in how technology aligns with human motivation.
Are we appealing primarily to intrinsic motivations—curiosity, creativity, connection, personal growth—or relying on extrinsic incentives like instant notifications, streaks, and digital "points"?
Research in self-determination theory highlights that when we honor innate drives for autonomy, competence, and relatedness, we support genuine well-being. However, if we design products around extrinsic rewards, people may chase superficial metrics instead of developing deeper engagement or personal insight.
Consider Spotify Wrapped, which balances commercial interests with genuine user value. It transforms streaming data into an opportunity for meaningful reflection—listeners discover patterns in their habits and share them socially, fostering connection. While it certainly boosts engagement, it also prompts self-awareness, encouraging users to celebrate their own musical discoveries rather than simply accumulate more listens.
Similarly, Signal's privacy-first approach is rooted in the question, "How can we provide functionality without compromising user trust?" and yields features such as sealed sender technology. By respecting personal autonomy and privacy, Signal appeals to intrinsic values like trust and security. In contrast, companies that retrofit privacy onto extractive business models often create contradictory user experiences that erode trust, leaving the net of intention riddled with holes.
When Apple introduced Screen Time in 2018, they not only offered usage data but also visualized it in stark, personal terms. This direct confrontation between our intended and actual tech usage can jar us into realigning with our intrinsic motivations: to spend our time intentionally, nurture relationships, and preserve mental clarity. It's the essence of making tradeoffs visible:
“Yes, you can gain ten extra minutes of engagement—but at what cost to a user’s relationships, mental well-being, or authentic sense of fulfillment?”
Not every company succeeds in this balancing act. I once explored working with a leadership development company whose business model directly contradicted its stated mission. While the company publicly promised agency and trust, its internal metrics incentivized co-dependency and profit—regardless of the impact on the end customer. The irony was painful: a company designed to foster authentic leadership was structurally motivated to create dependency in its clients. The net of intention here wasn't just frayed; it was fundamentally compromised by competing aims.
Ultimately, designers, product owners, and executives become custodians of attention—and, by extension, of human motivation. Seeing themselves in this light—as stewards rather than merely builders—shifts the focus from extracting value to amplifying human potential. By designing for intrinsic motivation at every juncture, we craft technologies that sustain genuine well-being and align with the human desire to learn, connect, and grow.
As our consciousness expands, we recognize that intention must be woven into technology itself. But this recognition leads to an even more personal question: How do we cultivate intention in our own relationship with the technologies that now pervade our lives?
Cultivating an Intentional Relationship with Technology
The most ethical technology in the world cannot compensate for lapses in our own awareness. We still bear responsibility for using technology in ways that enrich us rather than deplete us. This reclamation happens in ordinary moments—small acts of intention that shape how our digital ecosystem intersects with our lives.
Writer and artist Austin Kleon uses two desks: an “analog” desk for generating ideas by hand, and a “digital” desk for refining work on a computer. This spatial boundary is more than a productivity hack; it’s a physical manifestation of conscious choice regarding when and how technology serves his creative process. Each boundary we draw—whether by disabling notifications, scheduling offline blocks, or deciding to keep devices out of the bedroom—tightens the stitches in our net of intention.

My journey toward digital intentionality has been marked by stumbles and revelations. For years, I prided myself on "multitasking" efficiently—answering emails during calls, checking social media while writing, constantly toggling between tasks. I was busy, certainly. But effective? Hardly. The turning point came when I realized constant context-switching was costing me hours of productivity and leaving me mentally exhausted.
Now, my practice includes "unitasking"—giving one activity my full attention before moving to the next. I batch similar tasks, schedule specific times for emails and messages, and create physical distance from my phone during focused work. These aren't just productivity techniques; they're intentional practices of attention reclamation—ways of mending my personal net of intention one conscious choice at a time.
These small decisions particularly matter in life's transitions—waking, going to bed, and shifting from work to personal time. Our brains are more susceptible to external influence during these liminal moments. A device-free morning routine or a device-free dinner can transform entire days and deepen the quality of our relationships.
One powerful micro-habit is simply pausing before picking up the phone and asking, "Why now?" Are we escaping an uncomfortable emotion that a more mindful practice could help us integrate? Could we spend this moment in rest, reflection, or face-to-face conversation? Each time we insert that tiny pause, we strengthen our net of intention.
When dining with friends, one person's suggestion—"Let's put our phones away"—can pivot an evening from superficial distraction to authentic presence. Technology is not the villain, but our unconscious use can be. By co-creating analog spaces, we create what social theorists call "counter-environments" that help us maintain healthier norms amid the systemic pull toward perpetual engagement.
As our awareness deepens, we begin to recognize that our relationship with technology exists within a larger technological ecosystem—one now being fundamentally reshaped by artificial intelligence. This recognition brings us to perhaps the most crucial frontier of intention: AI's emergence as both a mirror and a magnifier of our collective consciousness.
AI as Our Mirror—The Urgency of Human Evolution
We are now at a crossroads where artificial intelligence is reshaping our creative, socio-economic, and existential landscape. As corporations race to develop Artificial General Intelligence (AGI), shelving other priorities in their urgency, we face one of the most pivotal moments in human history. The same attention-hijacking mechanisms we see in social media could be supercharged by AI if it is guided solely by profit motives or unexamined biases.
Technology magnifies the consciousness of its creators; AI especially mirrors our collective intentions, fears, biases, and aspirations. If our motivations are primarily extraction and control, we will see a rapid and breathtaking amplification of those shadows. Although policy, safety measures, and technical guardrails are crucial, they address only external conditions. If we neglect the internal condition—our own consciousness—AI may well become a hyper-accelerated feedback loop for the worst of our tendencies.
This mirroring function is already visible. Consider how early image generation models reproduced and amplified societal biases, depicting doctors as predominantly white men and nurses as predominantly white women when prompted for healthcare professionals. These weren't conscious choices by developers but unconscious patterns in the training data. The AI magnified what was already present in our collective consciousness.
Similarly, recommendation algorithms don't create polarization; they amplify existing tendencies toward confirmation bias and tribal thinking. When YouTube's algorithm recommends increasingly extreme content, it's following the breadcrumbs of human engagement patterns—our tendency to be drawn toward emotional intensity and simplistic narratives over nuance and complexity.
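A toy model makes this dynamic visible. Everything below is an invented assumption for illustration, not any platform's actual system: each piece of content gets an "intensity" score, and the only assumption about human behavior is a slight preference for content a bit more intense than the last thing watched.

```python
# A toy recommender that greedily maximizes predicted engagement.
# The intensity scale and the engagement model are invented for
# illustration; they do not describe any real platform's system.

catalog = [i / 100 for i in range(101)]  # 0.0 = nuanced ... 1.0 = extreme

def predicted_engagement(intensity: float, last_watched: float) -> float:
    # Assumption: viewers engage most with content slightly more
    # intense than what they just watched (emotional escalation).
    return 1.0 - abs(intensity - (last_watched + 0.05))

history = [0.20]  # the viewer starts with moderate content
for _ in range(15):
    # Serve whatever the model predicts will be most engaging.
    best = max(catalog, key=lambda c: predicted_engagement(c, history[-1]))
    history.append(best)

print([round(x, 2) for x in history])
# [0.2, 0.25, 0.3, ..., 0.95]: a steady drift toward the extreme,
# though no one ever decided to radicalize anyone.
```

The algorithm contains no malice, only a mirror: it amplifies a small human tendency into a systematic drift.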
Many initiatives, from AI safety research to regulation proposals, aim to curb potential harms. Yet, they cannot substitute for the deeper transformation needed at the level of awareness, integrity, and compassion. Our net of intention must stretch to include AI, ensuring that as we push the boundaries of what machines can do, we also expand the boundaries of our own ethical and spiritual maturity.
In 2016, I led UX design for one of Salesforce's first AI products, pioneering AI design principles while maintaining a daily meditation practice. Working closely with an AI ethicist, I could see firsthand that we were only accounting for half of the equation. We meticulously examined technical safeguards, bias detection, and transparency guidelines—all crucial components of ethical AI. Yet discussions of the consciousness and intentions of the humans creating and using these systems remained largely absent. In design meetings, we rarely asked how the state of mind of developers influenced code or how the awareness level of users would shape their interactions with AI. My meditation practice made this blind spot glaringly obvious: we were building sophisticated mirrors without considering who would be gazing into them.
The potential for AI to serve the common good is immense: personalized healthcare, accelerated climate solutions, expanded educational access. Some companies recognize this, designing AI with explicit guardrails around harmful content while optimizing for human flourishing. Google's PAIR (People + AI Research) initiative and recent AI safety and alignment coverage from MIT demonstrate growing awareness that AI development requires not just technical sophistication but ethical foresight. These efforts represent early attempts to weave intention directly into AI systems—creating tools that expand rather than exploit human potential.
But that future depends not just on how we code our machines but on how we cultivate our hearts. AI's outputs, fueled by vast human-generated data, reflect our hidden biases and cultural assumptions back to us. Recognizing this mirror can spur us toward personal and collective growth—if we choose to see it as a call to evolve rather than just a technical challenge to "fix."
As our understanding deepens, we recognize that the relationship between human consciousness and artificial intelligence represents perhaps the most consequential application of intention in human history. How we direct our collective attention—toward exploitation or flourishing, toward consciousness or unconsciousness—may well determine the trajectory of technology and society for generations to come.
*Since sharing this essay, I have released The Three-Legged Stool of AI's Future, an expansion of AI as Our Mirror—The Urgency of Human Evolution.
Charting the Course for a Conscious Future
Each morning, upon removing my eye mask, my hand still reaches toward the nightstand—but now, it reaches not for my phone but for a journal. The first moments of consciousness become an opportunity to set intention rather than surrender it. This small act doesn't change global systems, but it begins to reweave my personal net of intention, one thread at a time.
Each day, we all make countless micro-choices—whether to tap a notification or take a breath, whether to write an email or call a friend. Technology itself is neither villain nor savior: it magnifies our intentions and the tradeoffs we accept. Understanding these tradeoffs at both personal and systems levels grants us the agency to guide technology toward collective flourishing.
We must also acknowledge the larger economic frameworks driving our innovations. As long as unbridled profit outshines ethics, we'll see manipulative interfaces, addictive loops, and data extraction practices that corrode our individual and communal dignity. We can, however, shift these paradigms—through public policy, stakeholder capitalism, platform cooperatives, or grassroots movements—to align profit with planetary and human flourishing.
AI is a grand mirror reflecting our collective psyche. If we remain unconscious, it will magnify our biases, inflame divisions, and accelerate destructive tendencies. But if we rise to this moment—if we cultivate deeper empathy, curiosity, and responsibility—AI can become an unparalleled ally in confronting humanity's most urgent crises.
Ultimately, we must decide whether to remain passive "users" or become conscious co-creators. This is our moment of reckoning and possibility, where every human endeavor—engineering, policymaking, art, education, mysticism—can converge on designing a more life-affirming trajectory.
Will we fortify the net of our intention or allow it to unravel?
In summoning the courage to meet this frontier with humility and wisdom, we can shape a future in which technology amplifies our brightest qualities—creativity, compassion, and sense of meaning. By weaving mindful design, ethical business models, and daily practices that honor our attention, we repair the net that sustains our shared humanity. In doing so, we reclaim our role as co-authors of reality, casting our collective intention into a world that urgently needs our conscious care.
Perhaps tomorrow morning, you, too, will reach for your phone in that liminal moment between sleep and wakefulness. But perhaps—just perhaps—you'll pause first. You'll notice the automatic impulse. You'll take a breath. And in that tiny space between stimulus and response, you'll reclaim the first thread of your day's intention. This is how the net begins to mend—not all at once, but one conscious moment at a time.
With gratitude,
Rachel
Emergence with Rachel Weissman is a series of weekly essays on human potential for regenerative progress — interlacing art & design, ecology, futurism, mystical wisdom, and technology.
If you find this writing valuable, share it with a friend, and consider subscribing if you haven’t already.