We envision a future of distributed wearable computing. Like the human body, it would have multiple sensors on the body, served by one distributed computing hub (which we call the Wearable Brain). We have coined the term Electronic Nervous System (ENS) for this distributed computing platform. Dr. Shreyas Sen describes the vision in detail in the IEEE Spectrum paper.
Why distributed computing?
Our biological brain is amazing at logical reasoning and dexterity. Machines, on the other hand, trump the brain at numerical computation and memory. It's a pity that even though we have a smartphone in our pocket, the brain can "talk" to the smartphone only when it is held up. What if the brain could leverage the capability of the smartphone processor seamlessly, in real time, essentially expanding our real-time knowledge to the entire internet? We'd become augmented superhumans. To achieve this high-speed human-computer interaction, the real-time AI processor on the body (the Wearable Brain) needs to know what we see, hear, and sense. This real-time information capture can only be achieved by distributed computing with multiple sensors on different parts of the body. Motion sensors on different parts of the body will enable the Wearable Brain to create a realistic digital twin of our body. No one likes to charge multiple devices every day, so most of these sensors need to be charging-free patches on the body.
Enabling technology: Wi-R
The information-processing architecture of the human body is enabled by incredibly efficient, wire-like nerves. Today's wearables, on the other hand, communicate via Bluetooth and other wireless signals that are 10,000 times less energy-efficient than nerves. Wi-R is a new non-radiative wireless technology with the performance and security of wires but the convenience of wireless. Compared to Bluetooth, Wi-R offers 100X higher energy efficiency, 10X higher data rates, lower latency, and higher reliability thanks to its interference robustness and low bit error rates. Moreover, because Wi-R signals are confined near the body's surface, the technology has unique advantages in physical security, multi-node co-existence, and touch-based detection and communication.
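The stated ratios can be sanity-checked with simple arithmetic. The sketch below (Python) takes only the two figures quoted above, Bluetooth at 10,000 times the energy cost of nerves and Wi-R at 100X better than Bluetooth, and derives where Wi-R lands relative to nerves. The nerve baseline of 1 unit is arbitrary; these are relative ratios, not measured energies.

```python
# Back-of-envelope check of the stated efficiency ratios.
# Units are relative: nerve energy-per-bit is normalized to 1.
NERVE = 1.0
BLUETOOTH = 10_000 * NERVE   # "10,000 times less efficient than nerves"
WI_R = BLUETOOTH / 100       # "100X higher energy efficiency" than Bluetooth

print(f"Bluetooth vs nerves: {BLUETOOTH / NERVE:.0f}x")
print(f"Wi-R vs nerves:      {WI_R / NERVE:.0f}x")
```

So even with a 100X improvement over Bluetooth, Wi-R would still sit roughly 100 times above the nerve baseline, which is why the comparison is framed as "wire-like" efficiency rather than nerve-like.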
Discourse on Ixana's distributed computing vision of the future
The smartphone in our pocket has nearly all of the components required to be an auxiliary brain. The challenge, however, is that the smartphone does not communicate with the brain in real time. That is the entire reason we use a smartwatch: the smartphone in our pocket can't help us get a quick peek at our notifications, and the pocket is not a good location for capturing the body's physiological signals either. In a way, the smartwatch is already abstracting out smartphone functionality in a different form factor. And people want that, as evidenced by the growing smartwatch market. In fact, the number of wearable devices is on the rise, with smartwatches, earbuds, and various fitness and physiological trackers.
A smartphone basically consists of a screen, a processor, memory, a bunch of sensors, input ports/protocols, and a battery. An "out-of-the-box" question to ask is: do all of these need to be in a single package? What if the sensors, processor, and screen were all in different locations on the body?
Why would you do that? The pocket is an undesirable location for many sensors. The GPS, for example, could be on your feet; that's why smartphone-GPS-based distance trackers can't really track your treadmill run. Heart-rate sensors, pulse monitors, etc. should be close to the skin. The screen is best placed in an accessible location, e.g., your head or wrists. The battery should ideally be close to the power-hungry electronics, e.g., the processor and screen. This is what we call "deconstructing the smartphone" with distributed computing: placing all of the smartphone's components in their most desirable locations on the body. If you believe in the "metaverse" future, sensors on different parts of the body are mandatory to create a digital twin that closely mimics our physical self all day.
In Ixana's distributed computing vision, our body would carry a few charging-free electronic patches containing sensors, placed in the relevant locations, as well as a camera/microphone on the head to register whatever we see and hear. Unlike today's wearables, where every device has its own CPU, a single centrally placed processor communicates with all of these sensors.
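The topology described above is essentially a star: dumb sensor patches stream raw readings to one Wearable Brain that does all the processing. A minimal software sketch of that idea might look like the following; the class names, fields, and methods are purely illustrative assumptions, not anything from Ixana.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the hub-and-patch star topology:
# patches have no CPU of their own and only produce raw data,
# while the hub pulls from all of them and processes centrally.

@dataclass
class SensorPatch:
    location: str   # e.g., "wrist", "chest", "head"
    kind: str       # e.g., "motion", "heart-rate", "camera"

    def read(self) -> bytes:
        # A real patch would sample hardware; stubbed with an empty payload.
        return b""

@dataclass
class WearableBrain:
    patches: list = field(default_factory=list)

    def attach(self, patch: SensorPatch) -> None:
        self.patches.append(patch)

    def poll(self) -> dict:
        # The hub, not the patches, holds the CPU: it gathers raw data
        # from every patch and would run all inference in one place.
        return {(p.location, p.kind): p.read() for p in self.patches}

hub = WearableBrain()
hub.attach(SensorPatch("wrist", "motion"))
hub.attach(SensorPatch("head", "camera"))
print(len(hub.poll()))
```

The design choice this sketch highlights is the inversion of today's wearables: intelligence lives only in the hub, so adding a new patch adds a sensor, not another battery-hungry processor.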
The major technical bottleneck of distributed wearable computing is communication power. Communicating a bit typically consumes four orders of magnitude more energy than switching a bit. That's why every wearable today carries its own CPU: to communicate as little as possible. To enable this "deconstructed smartphone" with a single wearable CPU, intra-body communication power needs to be at least 100X lower. Fortunately, Dr. Sen has made an important discovery, Wi-R, that makes this possible.
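To make the four-orders argument concrete, consider a hypothetical patch that must either process a 1 kB sensor sample locally or ship it to the hub. Every constant below is an illustrative assumption chosen only to match the ratios quoted in this section (a 10,000x compute-to-communication gap, and Wi-R at 100X lower communication energy); none are measured values.

```python
# Illustrative energy budget for one 1 kB sensor sample.
# All constants are assumptions for illustration, not measurements.
BITS_PER_SAMPLE = 8 * 1024

E_SWITCH_PJ = 1.0                  # assumed cost to switch one bit locally (pJ)
E_RADIO_PJ = E_SWITCH_PJ * 10_000  # "four orders of magnitude" more per bit
E_WIR_PJ = E_RADIO_PJ / 100        # Wi-R: 100X lower communication energy

local_nj = BITS_PER_SAMPLE * E_SWITCH_PJ / 1000  # process on the patch
radio_nj = BITS_PER_SAMPLE * E_RADIO_PJ / 1000   # ship via conventional radio
wir_nj = BITS_PER_SAMPLE * E_WIR_PJ / 1000       # ship via Wi-R

print(f"local compute: {local_nj:.0f} nJ")
print(f"radio:         {radio_nj:.0f} nJ")
print(f"Wi-R:          {wir_nj:.0f} nJ")
```

Under these assumed numbers, shipping the sample over a conventional radio costs 10,000 times more than crunching it on the patch, which is exactly why today's wearables each keep a local CPU; cutting communication energy by 100X shrinks that penalty enough for a single shared CPU to become plausible.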
Other technologies that would help implement this vision of distributed computing include lower-power sensors, e.g., low-power cameras, and low-power machine-learning processors.