Nike Wants You to Take a Seat (plus more from Uber & Adobe)

Nike’s game chairs, Uber’s walking profiles & Adobe’s real-time content personalization

1) Nike’s Sports Chair with live game integration

Nike have been working on a smart chair for athletes in live sporting environments.

The patent explores two potential use-cases for the chair: temperature control and team communication.

Taking the first: controlling muscle temperature in a sporting environment is important for performance and for decreasing the risk of injury – which is why we ‘warm up’ before sporting activities.

But during an actual game, the way athletes currently control their temperature can be sub-optimal. For instance, some players wait on the bench and join a game mid-way through. The time that passes between warming up and actually playing can mean significant cooling of muscle temperature, increasing the chance of injury and decreasing muscle performance.

To mitigate the risk of temperature loss, some athletes take to wearing thermally insulated clothes. But this, too, can be sub-optimal: keeping the whole body insulated can affect the body’s ability to regulate core temperature and create a heightened sense of fatigue.

So Nike’s Smart Chair is looking to use biometric sensors and thermal transducers to keep an athlete’s temperature within an optimal range. The chair will also look to track hydration, weight, heart rate, respiration rate and galvanic skin response.
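At its core, the temperature-control side of this is a feedback loop: read a sensor, compare against a target band, drive the transducer. A minimal sketch of that loop is below – the temperature band, function names and "heat/cool/hold" commands are my own illustrative assumptions, not details from the filing.

```python
# Illustrative bang-bang control loop of the kind the patent describes.
# The optimal band and the command strings are hypothetical assumptions.

OPTIMAL_RANGE = (36.5, 38.0)  # assumed muscle-temperature band, in Celsius


def adjust_transducer(muscle_temp_c, optimal_range=OPTIMAL_RANGE):
    """Return a command for the chair's thermal transducer based on
    the latest biometric sensor reading."""
    low, high = optimal_range
    if muscle_temp_c < low:
        return "heat"   # muscle has cooled below the band: warm it back up
    if muscle_temp_c > high:
        return "cool"   # overheating: actively cool
    return "hold"       # within range: do nothing
```

A real system would presumably smooth noisy sensor readings and ramp the transducer gradually rather than switching it on and off, but the read–compare–act structure would be the same.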

By integrating identity sensors (e.g. RFID chips in an athlete’s clothing), the chairs can also track an athlete’s real-time weight, know when an athlete is dehydrated and alert the athlete to drink water when necessary.

All of this data can then be fed to a team’s data team for real-time analysis, displayed on an athlete’s headset, and even shown on sports broadcasts.

Besides helping temperature control, the chairs could also enable different modes of communication. For instance, this could include simple audio communication between staff on the coaching team and individuals or groups of players. Or, more interestingly, the chair could include a visual display system that shows highlights, potential strategic plays or even something to relax a player.

The concept of this chair actually seems to have come out of a collaboration between Nike and architect Greg Lynn in 2016. The fact that this patent filing is an update on one made two years ago may suggest that Nike are still exploring building out this technology for commercial use.

One final (speculative) thought on this – could chairs be an interesting new consumer product for Nike? As the world moves towards work-from-home and chairs transcend furniture & utility to becoming a productivity tool, I wonder if Nike’s obsession with comfort and athletic performance could be applied to the knowledge worker space with a smart, comfortable chair that lifts workers’ productivity at home.

2) Uber’s walking profile for time estimation

The key learning from this patent filing is how far afield big tech companies look for their next wave of optimisations once they’ve established product-market fit and come to dominate an industry.

When a user requests a car from Uber, Uber provides an estimated arrival time for the car. However, this estimate doesn’t take into account how long the user will take to walk to the pick-up location. In turn, this can result in sub-optimal outcomes, such as drivers arriving early and waiting for the user to arrive.

In this filing, Uber describes a system of creating walking profiles that feed into time estimations for scheduling services.

These profiles will take into account environmental conditions such as: temperature, ice condition, snow condition (e.g., light, moderate, heavy), rain condition (e.g., light, moderate, heavy), incline condition (e.g., moderate, steep), decline condition (e.g., moderate, steep), or other particular geographies.

So when a user requests a car, Uber will take into account the user’s environmental conditions, their walking speed (historic and current) and their distance from the pick-up location, to create a walking forecast for the user. This data will then be used to adjust the expected arrival time at the pick-up spot, so that it matches with the time the driver arrives.
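The arithmetic behind such a forecast is straightforward: take the user’s baseline walking speed, discount it for each environmental condition in play, and divide distance by the adjusted speed. The sketch below shows one way this could work – the multiplier values and function names are illustrative assumptions, not figures from the filing.

```python
# Illustrative walking forecast: baseline speed discounted per condition.
# All multiplier values below are hypothetical assumptions.

CONDITION_MULTIPLIERS = {
    "ice": 0.6,             # icy pavement slows walking sharply
    "snow_heavy": 0.5,
    "rain_moderate": 0.85,
    "incline_steep": 0.7,
}


def walking_eta_seconds(distance_m, baseline_speed_mps, conditions):
    """Estimate how long the user will take to reach the pick-up point,
    given their historic walking speed and current conditions."""
    speed = baseline_speed_mps
    for condition in conditions:
        speed *= CONDITION_MULTIPLIERS.get(condition, 1.0)
    return distance_m / speed


# e.g. 200 m to the pick-up spot, 1.4 m/s baseline, icy and steep:
eta = walking_eta_seconds(200, 1.4, ["ice", "incline_steep"])
```

The output would then be compared against the driver’s ETA, and the rider nudged to leave early enough that both arrive at roughly the same time.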

In essence, the goal is to minimise waiting time for drivers and riders, particularly when a user needs to walk to a pick-up location.

One thought that came to mind when reading through this patent is how different cultures’ relationships with time could be taken into account for the arrival estimates. For instance, people in India have a much looser relationship with time than people in Britain. I would hazard a guess that the time drivers spend waiting for passengers is higher in India than it is in the UK.

If cultural relationships to time were taken into account for arrival time estimates, maybe an Uber passenger in India would be alerted about their driver’s upcoming arrival earlier than they would be in the UK, even if their walking pace were exactly the same. Or, even more sneakily, maybe Uber’s app would start to show a driver as having “arrived” a few minutes before they actually have, just to factor in the behaviour of Indian passengers only starting to walk a few minutes after their driver has arrived.

3) Adobe’s real-time content personalisation

This patent application is weird, but fascinating.

Adobe are looking to enable hyper-personalised content based on what a user is doing at a given point in time.

The content personalisation system will process live data using object-detection classifiers, facial-detection models and audio detection to determine context.

So in the example illustrated, Adobe could process a live-video feed to identify features of the user (e.g. age, gender, emotion), objects that are included in the video (e.g. a coffee cup being held), and live audio to determine the words being said and the tone of the voice. Using this, Adobe could provide real-time personalised recommendations for digital content. In this instance, the coffee cup being held by a student leads to content recommendations that focus around productivity, coffee and homework help.
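The pipeline amounts to fusing several detector outputs into a set of content tags. A minimal sketch of that fusion step is below – the detector outputs, tag names and mapping rules are all hypothetical assumptions chosen to mirror the coffee-cup example, not Adobe’s actual models.

```python
# Illustrative signal fusion: detected objects, facial emotion and audio
# keywords mapped to content-recommendation tags. All names and rules
# here are hypothetical assumptions.


def recommend_tags(objects, emotion, keywords):
    """Combine live signals from hypothetical vision/audio detectors
    into a sorted list of content tags."""
    tags = set()
    if "coffee_cup" in objects:           # object detector output
        tags.update({"coffee", "productivity"})
    if "homework" in keywords:            # audio keyword spotting
        tags.add("homework help")
    if emotion == "stressed":             # facial emotion classifier
        tags.add("relaxation")
    return sorted(tags)
```

In the patent’s student example, a coffee cup in frame plus homework-related speech would yield tags like coffee, productivity and homework help, which a downstream system would turn into actual content recommendations.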

Hyper-personalisation in real-time, based on live context is obviously a big step up compared to simply looking at a user’s historic data (e.g. browser cookies).

The biggest question mark is what context could exist where users would be broadcasting themselves on a live video feed while (near) simultaneously browsing the internet, where they would explore hyper-personalised content. The privacy questions around this are also worth mentioning.

One context I could imagine is if the machine learning model embedded itself into a user’s phone camera. As you open a camera app on your phone, there’s potentially a lot of rich context that could inform a content recommendation model for when you next browse the web.

Or maybe in a VR world, where the line is removed between ‘browsing online’ and ‘broadcasting online’, there could be interesting applications for real-time personalised content based on what you’re doing virtually.