Shryas Bhurat | Date: Apr 11 2026
There is a moment, somewhere between observation and understanding, where meaning arrives before explanation. Not as logic, but as a felt sense: the way you know someone is behind you before you turn around, or the way a room shifts the second someone uncomfortable enters it. That invisible layer between raw sensory input and conscious meaning is what I've come to think of as spatial context, and the more I look, the more I believe it is the foundational intelligence our species has been quietly unlearning.
The Room That Knows You're There
My entry point into this question wasn't a philosophy seminar or a meditation retreat. It happened at UC Berkeley, when I was studying for my Master of Design in Human-Computer Interaction and first met Lucy Greco.
Lucy Greco has been blind since birth and has championed digital accessibility for decades. She began using computers in 1985, discovering early on how technology could level the playing field for blind students. While working with Lucy during my master's thesis, I unlearned a great deal about what it means to be human, and about what we have lost as a species.
She could tell I was at the door before I knocked. She could sense the shift in airflow, the slight acoustic change, the rhythm of my footsteps that told her: someone's here. Her world wasn't smaller than mine. It was, in many ways, fuller. She inhabited space differently: not through vision, but through a web of calibrated awareness that most of us simply haven't developed.
Science has a name for what Lucy was doing. Research from Harvard Medical School and Massachusetts Eye and Ear found that the brains of people born blind make new structural and functional connections in the absence of visual information, resulting in measurably enhanced abilities in hearing, smell, touch, and even language and memory. The visual cortex repurposes itself, processing spatial and auditory signals with a specificity that sighted brains rarely cultivate. This is neuroplasticity at its most radical: the brain rewiring entire architectures to recover spatial understanding through different channels.
The Ancient Intelligence We Inherited
Across the ancient world, this kind of perception was expected. In the texts of classical Indian philosophy, pratyakṣa (perception) was understood as the primary means of knowledge: a cognition arising from sense-object contact, but one that extended beyond the five gross senses into what philosophers called alaukika, or extraordinary, perception. The Nyāya school distinguished between ordinary (laukika) perception and a finer, intuitive reading of reality, the kind gained through deep attentive practice.
In Vedic and Hindu traditions, extraordinary perception (atīndriya jñāna) was treated not as mystical fantasy but as a trainable faculty: the capacity to access information "beyond ordinary senses," developed through disciplined attention and stillness.
If the ancient sages were right, and these capacities are latent rather than lost, then the question becomes practical: how do we reactivate them?
An article published in Calm's neuroscience review notes that long-term meditators develop increased volume in brain areas linked to attention, sensory awareness, and global body awareness, and that the superior longitudinal fasciculus, a white matter tract connecting attentional and sensory systems, grows measurably larger. The consequence: better communication between front and back brain regions, and a tendency to perceive reality as more interconnected.
This was spatial context as a civic and spiritual practice: not a mystical detour, but the operational core of how ancient people read the world.
What We Built
The progression of human-computer interaction has been, in almost every way, a narrowing of the sensory channel.
The most urgent design principle emerging from this field isn't technical. It's philosophical. In an Adobe article, Silka Miesnieks argues that if we don't have to think and behave like computers for them to understand us, computers can become our creative allies, but only if designers stop building for screens and start building for senses. UX frameworks for spatial computing now explicitly ask designers to layer not just visual and auditory experience, but olfactory, tactile, vestibular (balance), proprioceptive (body awareness), and even interoceptive (internal state) channels.
This is the human body, finally, being taken seriously as a design surface.
Designing for What We Are, Not What We've Settled For
The dominant paradigm of UX was built on a reductive assumption: that the most important thing is information delivery to the eyes. Menus, buttons, dashboards, notifications: a world of two-dimensional rectangles optimized for a single finger and one focused gaze. This was never the full human. It was a human lobotomized down to a useful surface for clicking.
The argument isn't anti-technology. Technology built from sensory intelligence will be the most powerful kind we've ever made because it will finally map to the full bandwidth of human cognition rather than a narrow slice of it.
But the technology can only go where the human has been. An engineer who has never inhabited their own spatial awareness can't design spatial awareness into a system. A product team that treats human perception as a single visual channel will build products that require users to adapt to machines, rather than machines that adapt to users.
This is why the question loops back, always, to the personal. To spending time in silence. To noticing what you notice. To asking whether the information arriving through touch, smell, gut-sense, and peripheral vision is being processed or discarded.
Somewhere in that triangle of machine, designer, and perceiving human lies the real design brief: not to build smarter machines, but to build humans who can meet them with the full range of what they were always capable of.
The world has always been speaking in spatial context. The question is whether we've been listening. Which brings me to another question: if AI systems could be stress-tested to evolve along a particular dimension, would they evolve to become better?