Logic of the Tactile Internet

Below are some ideas I’ve been playing with lately. I wrote this pretty fast so I apologize if some of it seems stilted or confusing.

Cartographic collapse

Before the Tactile Internet, distance was measured spatially. After, it will be measured in latency.

Maps, like many other concepts we take for granted until technology disrupts them, have changed significantly in the digital age. But it is not just that they have become digital. The meaning of a map used to be tied up with the effort to represent reality. With GPS and digital maps, maps are now very much alive – in important ways they are reality, as they track and adapt to the movement of real-world objects.

Maps are created and re-created as people use them. Each time a person uses a map, they generate feedback as the map interacts with the data streaming from their digital devices. Using a map becomes part of the process by which the map itself is generated. I don’t know the technical details of how this feedback loop works, but I’m willing to bet it’s very tight, given how quickly maps reflect changing conditions.
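Here’s a toy sketch of one plausible shape for that loop. It’s invented for illustration, not how any real mapping service works: each use of the map reports data that updates the map’s own estimates, which in turn changes what the map recommends next.

```python
# A toy model of a map regenerated by its own use (all invented): devices
# report observed travel times on a road segment, and the segment's live
# estimate is an aggregate of the most recent reports.
from collections import deque

class LiveSegment:
    """Travel-time estimate for one road segment, fed by its own users."""

    def __init__(self, free_flow_minutes: float, window: int = 50):
        self.free_flow = free_flow_minutes
        self.reports = deque(maxlen=window)  # most recent observations

    def report(self, observed_minutes: float) -> None:
        # Every use of the map is also an update to the map.
        self.reports.append(observed_minutes)

    def estimate(self) -> float:
        # Fall back to the static, "dead" map value until someone drives here.
        if not self.reports:
            return self.free_flow
        return sum(self.reports) / len(self.reports)

segment = LiveSegment(free_flow_minutes=5.0)
print(segment.estimate())  # 5.0 -> the static map value
segment.report(9.0)        # users hit traffic...
segment.report(11.0)
print(segment.estimate())  # 10.0 -> the map now describes its users
```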

Because maps are tools that change as they are used, they are performative. Their primary use is to actively guide and coordinate people and goods moving from one place to another. This serves a fundamental human need: people must interact with other people and with physical objects and places, and today many of those interactions are only possible within a particular set of spatial boundaries.

However, moving people and things is unnecessary if the goal of those movements can also be achieved at a distance. (See the rise of remote work.) That’s the meaning of cartographic collapse – the constraint that physical interaction is tied to physical distance does not necessarily apply in the Tactile Internet age.

We already accept this on some level, as is made clear in how we use spatial metaphors. When you talk to someone on a phone or video call and their audio is muffled, it’s common to say, “it sounds like you’re far away.” Our perception of distance to a person, place, or thing is tied to our ability to sense and perceive things in that place, in real time.

Here’s a short story to help illustrate the point: In the near future, you have cousins who live in a Mars settlement. You and they have mutual family living in another country on Earth. A natural disaster strikes that country, knocking its communications infrastructure offline. As you and your cousins on Mars hear the news, you send messages back and forth, supporting each other emotionally and updating each other on the latest information. When you send a message to your cousins on Mars, it takes 6 to 10 minutes to hear anything back. But that’s plenty fast to feel “in touch” with them, especially as your mutual family is completely inaccessible to either of you through any sensory channel whatsoever. In that situation, it’s likely that you perceive your cousins on Mars as “closer” to you than the people in the disaster-ravaged country by any meaningful measure.
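As a sanity check on that delay, here’s the light-time arithmetic, using round-number distances; the story’s six-to-ten-minute round trip corresponds to the planets being near their closest approach.

```python
# Round-trip light time between Earth and Mars, using approximate distances.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def round_trip_minutes(distance_km: float) -> float:
    """Minutes for a message to arrive and a reply to come back."""
    return 2 * distance_km / C_KM_PER_S / 60

print(round_trip_minutes(55e6))   # ~6.1 min near closest approach (~55M km)
print(round_trip_minutes(400e6))  # ~44.5 min near maximum separation (~400M km)
```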

That example had to do with text messaging, but now extend the logic to high-definition haptics: places that are accessible to you through telecommunication are accessible not only to your eyes and ears but to your sense of touch, and objects in those remote places can be affected by your body movements. In other words, you could reach out and touch something in that remote place, and the thing you touched would be affected by your gesture. There are many ways that could manifest, from full-body avatars, to programmable matter, to less exotic technologies like remote-controlled vehicles. The particularities aren’t important – what matters is that the logic of the Tactile Internet exposes an unnoticed assumption embedded in every map created before the information age: the assumption that the purpose of a map is to let you know how to access a place with your body, allowing you to plan routes, predict how long a journey will take, decide what you might need to bring, and anticipate what it might be like once you’re there. In actuality, the purpose of maps has always been to help you understand what resources it would take to gain physical access to a place. In the Tactile Internet age, in which physical sensation is arbitrarily synthesizable and abundant, the resource you care about is time, measured in latency.

In contrast to today, a functional map in the haptic age would display the real-time latency of haptic communication between points, rather than the spatial distance between them, because latency is the true constraint on an agent in one place gaining physical access to another.
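To make that concrete, here’s a minimal sketch of the data structure such a map might rest on. The place names and latency figures are invented; the point is only that “nearest” falls out of edge weights measured in milliseconds rather than kilometers.

```python
# A latency map as a weighted graph: edges carry round-trip haptic latency
# (in milliseconds), not distance. Places and numbers are invented.
import heapq

LATENCY_MAP = {
    "home":          {"factory_floor": 12, "mars_relay": 480_000},
    "factory_floor": {"home": 12, "warehouse": 8},
    "warehouse":     {"factory_floor": 8},
    "mars_relay":    {"home": 480_000},
}

def rank_by_latency(origin: str) -> list[tuple[float, str]]:
    """Rank every reachable place by cumulative latency from origin (Dijkstra)."""
    best = {origin: 0.0}
    frontier = [(0.0, origin)]
    while frontier:
        cost, place = heapq.heappop(frontier)
        if cost > best[place]:
            continue  # stale queue entry
        for neighbor, ms in LATENCY_MAP[place].items():
            new_cost = cost + ms
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return sorted((ms, place) for place, ms in best.items())

print(rank_by_latency("home"))
# On this invented data the warehouse (20 ms away) is "closer" than the
# Mars relay, no matter how the two compare in kilometers.
```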

Haptic coercion

Haptic technology removes limits, and among them will be the limits on physical coercion.

There are at least two distinct forms of haptic coercion worthy of consideration. One is more of a present-day problem, while the other will become an issue in the near future.

The first form of haptic coercion has to do with touch stimuli being used to surreptitiously change thinking and behavior. This form of coercion has an analogue in other sensory modalities, such as vision. The business models of social media companies are driven by algorithms that present information to you in ways that affect your behavior. Jaron Lanier has spoken eloquently about the use of machine learning algorithms to manipulate behavior, and how that manipulated behavior is quantified and sold as a service to companies and governments. Haptic vibrations are another signal that can be used to that end. That form of haptic coercion can have a profound effect on memory and cognition, and while problematic in itself, it is only half the story.

The second form of haptic coercion has to do with touch stimuli being used to constrain body movement. The extent to which this is possible depends on the state of robotic technology. As robots become more widespread and capable, the ability of software to shape humans’ physical surroundings, and their physical interfaces, grows alongside them.

Take the example of autonomous cars, which are designed with built-in constraints on their movement. (For example, they should not be able to drive on sidewalks.) In fact, even in newer cars that are not autonomous, collision detection and automatic braking already constrain how you can drive. Once those systems are in place, why not use them to prevent people from approaching a disaster zone or a location considered dangerous? If that system is in place, why not extend the feature to avoid high-traffic areas, so that cars don’t add to congestion? What about preventing attendance at a political protest? Preventing people from going on journeys that exceed their per-capita energy allotment?
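Here’s a hypothetical sketch of what that mechanism looks like in software; the zone names, reasons, and coordinates are all invented. The uncomfortable point is that a disaster-zone geofence and a protest-zone geofence are the same data structure and the same check.

```python
# A hypothetical movement-constraint layer on top of a vehicle's geofencing.
# All zones, reasons, and coordinates are invented for illustration.
from dataclasses import dataclass

@dataclass
class BlockedZone:
    name: str
    reason: str
    # Axis-aligned bounding box: (min_lat, min_lon, max_lat, max_lon)
    bbox: tuple[float, float, float, float]

    def contains(self, lat: float, lon: float) -> bool:
        min_lat, min_lon, max_lat, max_lon = self.bbox
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

POLICY = [
    BlockedZone("flood_area_7", "disaster zone", (40.00, -75.20, 40.10, -75.05)),
    BlockedZone("downtown_core", "congestion management", (40.30, -75.00, 40.35, -74.90)),
    # Nothing in the mechanism distinguishes this entry from the two above.
    BlockedZone("capitol_square", "public order", (40.50, -74.80, 40.52, -74.78)),
]

def entry_permitted(lat: float, lon: float) -> tuple[bool, str]:
    """Check a destination against the policy before the car will drive there."""
    for zone in POLICY:
        if zone.contains(lat, lon):
            return False, zone.reason
    return True, "ok"

print(entry_permitted(40.05, -75.10))  # (False, 'disaster zone')
print(entry_permitted(41.00, -74.00))  # (True, 'ok')
```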

Beyond autonomous cars lie robotic exoskeletons for human enhancement. Applying to exoskeletons the same reasoning used to constrain the movement of cars, haptic coercion can operate not only on gross body movement through space, as with autonomous cars, but also at the level of body posture and kinesthetic movement.
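A similar sketch at the level of posture, with invented joint names and limits: the wearer’s commanded movement passes through a filter that clamps it to a policy-defined envelope, which is exactly the kind of “censoring” described below.

```python
# A hypothetical posture filter in an exoskeleton controller. Joint names
# and limits are invented; the pattern is clamping commanded motion to a
# policy-defined "acceptable" range.

# Permitted range per joint, in degrees: (min, max)
POSTURE_POLICY = {
    "right_elbow": (0.0, 150.0),
    "right_shoulder_pitch": (-30.0, 90.0),  # e.g. no raising the arm overhead
}

def filter_command(commanded: dict[str, float]) -> dict[str, float]:
    """Return the commanded pose with each joint clamped to policy limits."""
    filtered = {}
    for joint, angle in commanded.items():
        lo, hi = POSTURE_POLICY.get(joint, (-180.0, 180.0))
        filtered[joint] = min(max(angle, lo), hi)
    return filtered

# The wearer tries to raise an arm fully overhead; the suit quietly
# executes a smaller movement instead.
print(filter_command({"right_shoulder_pitch": 170.0, "right_elbow": 20.0}))
# -> {'right_shoulder_pitch': 90.0, 'right_elbow': 20.0}
```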

On our current trajectory, it is impossible for me to envision a future where the safety features that constrain the movement of autonomous robots are not repurposed for coercive ends justified by social and environmental concerns. The idea of setting up a system so that its constraints encourage the behavior its designers desire – the more passive form that Dan and Chip Heath have dubbed “shaping the path”, and the more active form that Richard Thaler and Cass Sunstein called “nudging” – is already widely accepted, and it will motivate the programming of robotic devices so that the movement of human bodies is censored to output “acceptable” values.

Sensory abundance

Scarcities that are assumed to be scarcities of resources are often really scarcities of sensation.

As technology improves, more and more sensations will be synthesizable artificially. Drawing a parallel to the concept of material abundance advanced by optimists like Peter Diamandis and Ray Kurzweil, a proliferation of haptic virtual experiences will have a profound impact on society because they too will create abundance. But instead of material abundance, it will be an abundance of material sensations, which is quite a different thing.

Here’s what set me down this train of thought:

“Why do people want money?” The question, coming from my young child, seemed so straightforward that I almost dismissed it. People want money to buy things they need and want. Look at our life, I responded – our house, our food, the things we own. We need money to get these things.

However, something about this answer felt incomplete. Later, I played the Five Whys game. OK, so the first question was, “Why do people want money?”, and the answer had been, “To buy things they need and want.” The next question, then, is…

Why do people need and want things? In order to survive, their bodies need certain things; and in order to feel pleasure and happiness, their bodies want certain things.

Why do people’s bodies need certain things in order to survive, and want certain things in order to feel pleasure and happiness? People’s bodies are physical systems. In order for the system to remain at equilibrium, and in order for certain desirable brain states to be attained and maintained, the system needs certain inputs.

If the system needs certain inputs, why do we need money to obtain them? Because the means of obtaining those inputs are scarce, and money is a way to distribute scarce resources.

And there it is: we desire money in order to regulate our sensory inputs. But that raises the next question: What if XR becomes so advanced that it can synthesize these desirable inputs, such that they’re no longer scarce?

The answer to this question doesn’t lie in the future; it lies in the past. The beginning of civilization is sometimes pegged to the invention of agriculture, which made food, a key physical input, abundant. Moreover, all the basic inventions that we use today to be safe and physically comfortable, like clothing, fire, housing, and so on, are aimed at inducing physical states – states that the subject perceives as desirable sensory stimuli.

Of course, once you have what you need, you set your sights on things you want – things that make you feel ever more comfort, pleasure, or happiness. For example, culinary science is unnecessary for sustenance, but an enormous amount of human endeavor and resources are poured into it.

This idea that technology is developed in order to regulate sensation is slightly different from the way I used to think about technology. I used to assume that we develop technology to create things we want and need. But recent advances in XR have convinced me that there is an assumption implicit in that statement – namely, that experiencing having what we desire requires the desired thing to actually exist. That’s a fallacy. What we are really after is the perception of having what we want and need. In many cases, we can build up that perception with careful sensory design, and our ability to do so will continue to improve.

To put it another way, the future of technology isn’t giving us things; the future of technology is creating sensations. This has enormous implications for XR, because to create sensations (mimicry), the designer must first know what sensations to mimic (understanding). Thus there is a huge opportunity for scientific research in sensory regulation.

Some may protest that many meaningful human experiences are not just sensory. But almost all of human experience can, in theory, be quantized and represented as a complex sensory signal. Some of human experience is internal (interoceptive, or introspective), but even those states can probably be significantly influenced, if not outright controlled, through sensory input channels. The amount of human sensation that cannot be arbitrarily synthesized shrinks year by year as simulation technology improves. Extrapolating, and barring some catastrophe such as human extinction, there will come a time when the portion of human sensory signal that is impossible to synthesize artificially is so small that it amounts to an asterisk on an engineered human experience that is 99.9% actively managed.

The logic of sensory abundance has some unusual consequences. A tradeoff becomes apparent between allocating resources to the energy-intensive rearrangement of physical environments and to the far less resource-intensive synthesis of virtual experience. This may mean that sensory abundance leads to material scarcity: a “dismal science” may arise in which, as computers become more and more able to meet human needs that are today met with physical objects and environments, those physical resources are repurposed away from serving human sensory needs.

Many of the things we want and need will still have to be real for years, decades, even centuries to come. But framing the problem of meeting human desires as one of convincing our perceptual system, rather than automatically assuming we must build up a reality that actually gives rise to those perceptions, leads to ideas that may be useful to consider.