I just read an intriguing article (“I used Google Glass: the future, but with monthly updates”) that discussed the benefits and drawbacks of the Google Glass project in its current state. I, for one, am ready to be a cyborg. The promise of Google Glass is that it will give us a heads-up display at every moment of our lives. As it scans your surroundings, Glass provides just-in-time information about the things it “sees” or senses based on your location. With a spoken command, audio-visual content drawn from that environment can be logged for posterity. As a big Evernote user, I can see great potential for integration. The information you need can be available as you need it. The information you want to save can be captured with little friction.
In fact, much of the innovation I see occurring is related to connecting us to the vast sea of information more quickly and meaningfully. We are creating a meta-mind that not only knows what it knows, but can quickly learn what it needs to know with assistance from technology. It is changing the way we interact with the world. With the advent of the Internet of Things, it will change the way we interact with the objects in our life, as well.
Imagine walking around your house: as you glance at a light fixture, your heads-up display alerts you that the bulb is beyond its expected lifespan. With a few spoken commands, you can have a new bulb sent to the house, so there is no time spent in darkness and no need to warehouse supplies to keep your home functioning. Imagine it alerting you that your car will need fueling or charging because it overheard you making plans for a trip with your friends. Self-aware objects that also notice shifts in your context can alert you to new needs as they arise, streamlining and optimizing your relationship with them. Think of the Nest thermostat, and imagine if all your objects learned the way you used them and adjusted their behavior accordingly.
However, there is a source of information and a complex system to be navigated that hasn’t received its due in the development of intelligent interfaces: You.
As a pet project, I’ve been working on an app that combines contextual data with qualitative input from the user to make inferences about well-being. The thesis is that we don’t need complex data from the user to know a lot about what that user is going through. Because the interaction is largely mobile, we know where the user is and what time it is when they answer a simple question about their state of mind. Once you know that, you can poll that vast sea of data and learn what the temperature was when they responded. You can also know whether they’re at work, at home, or at their favorite tavern. By asking a few qualifying questions along the way, you can know who their favorite sports teams are and whether those teams recently won or lost. I’ve only been at it a short while, but I’ve already had a surprising revelation. Despite the bad rap Mondays get, it is clear that Thursdays are my worst days. A little self-reflection leads me to the conclusion that Thursday is the day when tasks butt up against the end of the week. Friday brings release from that pressure, but Thursday is pure stress…
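To make that concrete, here is a minimal sketch of the kind of enrichment and aggregation I have in mind: each check-in pairs a simple 1-to-5 mood score with whatever context can be captured automatically at that moment, and even a naive roll-up by weekday is enough to surface a pattern like my Thursday problem. The field names, the scale, and the sample data below are illustrative only, not the app’s actual design.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# One check-in: a self-reported mood score plus whatever context the
# phone can capture automatically at that moment.
@dataclass
class CheckIn:
    timestamp: datetime
    mood: int             # 1 = awful ... 5 = great
    place: str            # e.g. "home", "work", "tavern" (from geofencing)
    temperature_f: float  # pulled from a weather service at check-in time

def mood_by_weekday(checkins):
    """Average self-reported mood for each day of the week."""
    buckets = defaultdict(list)
    for c in checkins:
        buckets[c.timestamp.strftime("%A")].append(c.mood)
    return {day: round(mean(scores), 2) for day, scores in buckets.items()}

# A handful of logged check-ins is enough to surface a "Thursday problem".
log = [
    CheckIn(datetime(2013, 5, 6, 9, 30), 4, "work", 61.0),    # Monday
    CheckIn(datetime(2013, 5, 9, 14, 0), 2, "work", 66.0),    # Thursday
    CheckIn(datetime(2013, 5, 10, 18, 0), 5, "tavern", 68.0), # Friday
    CheckIn(datetime(2013, 5, 16, 11, 15), 2, "work", 70.0),  # Thursday
]
print(mood_by_weekday(log))  # {'Monday': 4.0, 'Thursday': 2.0, 'Friday': 5.0}
```

The same log could just as easily be sliced by place or by temperature instead of weekday, which is where the contextual enrichment starts to pay off.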
One of the challenges I face is that, no matter how randomly I poll the user for their mental state, I rarely hit on a moment of crisis or elation. And in those edge-case moments, I’m unlikely to take the extra steps of launching an app and logging my feelings; even when the application polls me at exactly the right time, I rarely follow through when emotions run high. Imagine if I could just speak my mind to a wearable device: “Okay, Glass. I feel tired…” The machine could log how I feel and even ask follow-up questions to clarify. A much richer record could be created.
Now, let’s take it another step:
Imagine if the machine could monitor and capture information about the user that isn’t immediately apparent. Let’s say it’s monitoring your levels of different nutrients. Now it’s time to get lunch, and the machine realizes that you are low on potassium but high on sodium. It can recommend that you get a spinach salad but skip the bacon. Given its access to Big Data, it can also recommend nearby restaurants with highly-rated dishes that match your criteria. It could even go so far as to pair you with nearby friends whose needs could also be met by that establishment and contact them to make lunch plans.
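As a thought experiment, the matching step could be as simple as a rule-based filter. The sketch below is a toy Python illustration with invented data: nutrient levels flagged high or low by some hypothetical monitoring, dishes tagged with what they are rich or heavy in, and a filter that keeps the best-rated dishes that fill the gap. A real version would lean on restaurant and nutrition data rather than hand-written tables.

```python
# Toy version of the lunch logic: favor dishes rich in what you're low on,
# and avoid dishes heavy in what you already have too much of.
NEEDS = {"potassium": "low", "sodium": "high"}  # from hypothetical nutrient monitoring

DISHES = [
    {"name": "spinach salad", "rich_in": {"potassium"}, "heavy_in": set(),      "rating": 4.6},
    {"name": "bacon club",    "rich_in": set(),         "heavy_in": {"sodium"}, "rating": 4.2},
]

def recommend(needs, dishes):
    """Keep dishes that supply what is low and skip what is already high, best-rated first."""
    wanted = {n for n, level in needs.items() if level == "low"}
    avoid = {n for n, level in needs.items() if level == "high"}
    picks = [d for d in dishes if d["rich_in"] & wanted and not d["heavy_in"] & avoid]
    return sorted(picks, key=lambda d: d["rating"], reverse=True)

print([d["name"] for d in recommend(NEEDS, DISHES)])  # ['spinach salad']
```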
Another interesting case would have the machine monitoring your activity and recognizing, through biometrics, that you are entering a period of high stress. It could alert you to the danger, and you could take time for a nearby activity that has proven to reduce your stress indicators – maybe taking a walk past a particular landmark or talking to a specific acquaintance.
There is the possibility of an experiential equivalent to precision pharmaceuticals. As the machine learns more about you and your habits and pairs that knowledge with a pool of big data, it can help you optimize your choices and activities to make the machine that is you run more effectively and efficiently, and to create a compelling experience as often as possible.