Nvidia Makes the Road Safer (plus more from Disney & EA)
Play as or watch yourself in games & movies, control vehicles with your gaze.
1. Nvidia – controlling a vehicle using gaze detection
Nvidia is looking to make it safer and faster for drivers to perform specific vehicle operations.
Rather than relying on voice instructions alone, Nvidia wants to take the driver’s gaze direction into account.
Using sensor data and a spatial map of the vehicle’s interior, Nvidia’s system can determine where a driver is looking and match that with the driver’s voice instructions. For example, if the driver looks towards the entertainment system and says “turn it up”, the system can determine that the driver is referring specifically to the entertainment system’s volume.
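The idea can be sketched in a few lines: resolve the gaze direction against a spatial map of interior zones, then use the gazed-at zone to disambiguate a vague voice command. This is a minimal illustration, not Nvidia’s actual method; the zone names, vectors, and command phrases are all hypothetical.

```python
import math

# Hypothetical interior "spatial map": each zone's direction from the
# driver's head, as a rough (x, y, z) vector. Values are illustrative.
ZONES = {
    "entertainment_system": (0.6, -0.3, -0.2),
    "climate_control":      (0.7,  0.1, -0.4),
    "side_window":          (0.0, -1.0,  0.0),
}

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def gaze_target(gaze_vector, zones=ZONES):
    """Return the zone whose direction best matches the gaze vector
    (highest cosine similarity)."""
    g = _unit(gaze_vector)
    def similarity(item):
        z = _unit(item[1])
        return sum(a * b for a, b in zip(g, z))
    return max(zones.items(), key=similarity)[0]

def resolve_command(gaze_vector, utterance):
    """Attach an ambiguous voice command ("turn it up") to the
    zone the driver is looking at."""
    target = gaze_target(gaze_vector)
    text = utterance.lower()
    if "turn it up" in text:
        return (target, "increase")
    if "turn it down" in text:
        return (target, "decrease")
    return (target, "unknown")

# Driver glances toward the entertainment system and says "turn it up"
print(resolve_command((0.55, -0.35, -0.25), "Turn it up"))
# → ('entertainment_system', 'increase')
```

A production system would trace an actual gaze ray against the cabin geometry and handle far more commands, but the core step is the same: fuse the gaze estimate with the utterance so “it” has a referent.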
This patent filing is in a similar space to an application filed by Ford in #037 PATENT DROP, where Ford wanted to use brain-machine interfaces so drivers could control car operations with their thoughts.
There are potential safety benefits to implementing these methods of controlling a vehicle. For instance, these initiatives remove the need to fumble around for controls. However, it feels like the vehicle is becoming a playground to explore new form factors for users to interact with technology.
Over the long run, we’re moving further down the path of becoming cyborgs, controlling more machines with our consciousness. Exciting or terrifying futures – take your pick.
2. Disney – tunable models for changing faces in images
Disney is working on making it easier to ‘tune’ people’s faces in the frames of a video in order to change identity, age, lighting conditions, and more.
For example, if a movie scene needed a younger Will Smith on screen, Disney’s system would be able to tweak the facial identity of the ‘older Will Smith’ and make him look younger.
While systems for doing this exist today, the current methods have a few disadvantages. Firstly, current techniques tend to use neural networks that work best on low-resolution images. Secondly, it’s costly and time-consuming to keep retraining neural network models whenever a facial identity needs to change.
Without diving into the ‘how’, there are some interesting implications of this technology.
In the short run, this kind of technology makes the movie-making process more efficient. For example, if a scene needs to be reshot with an actor and that actor isn’t available, Disney will be able to transplant the actor’s face into the re-shot scene.
Where things get potentially exciting is in the long run, where Disney could insert ‘viewers’ into a movie or cartoon and create personalized pieces of content. This is a recurring theme across patent applications from big tech companies – especially Snap, which is working on personalized series of content that feature a user and their friends.
3. EA – generating models of a character from images
Following the theme of the Disney filing, EA is looking into enabling users to generate character models based on reference images.
Using neural networks, the system will infer the relevant character attribute parameters from the reference image and then output a closely fitted character model.
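The pipeline described above – predict attribute parameters from a reference image, then map them onto the character creator’s controls – can be sketched as follows. The network itself is replaced by a stand-in stub here, and every parameter name and range is hypothetical rather than taken from EA’s filing.

```python
# Slider ranges a game's character creator might expose (hypothetical)
PARAM_RANGES = {
    "jaw_width":   (0.0, 1.0),
    "eye_spacing": (0.0, 1.0),
    "nose_length": (0.0, 1.0),
}

def predict_attributes(image_features):
    """Stand-in for the neural network: maps facial measurements
    extracted from the reference image to creator parameters."""
    return {
        "jaw_width":   image_features["jaw_ratio"],
        "eye_spacing": image_features["interocular_ratio"],
        "nose_length": image_features["nose_ratio"],
    }

def build_character(image_features):
    """Clamp each predicted attribute into the creator's slider range,
    yielding a valid character model the game engine can render."""
    raw = predict_attributes(image_features)
    model = {}
    for name, value in raw.items():
        lo, hi = PARAM_RANGES[name]
        model[name] = min(max(value, lo), hi)
    return model

features = {"jaw_ratio": 0.62, "interocular_ratio": 1.3, "nose_ratio": 0.41}
print(build_character(features))
# interocular_ratio falls outside the slider range and is clamped to 1.0
```

The clamping step matters: a real face may have proportions the creator’s sliders can’t reach, so the system has to settle for the closest expressible model – hence “closely fitted” rather than exact.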
While a lot of games do enable character customisation, it can take a lot of time and skill for a user to edit the character properties to resemble a real-world person. EA wants to remove that work using AI.
Why is this interesting? It feels like the next phase of entertainment – whether gaming or movies – is deeply personalised by being able to feature the viewer. It makes the process of consuming media more participatory, more open to remixes, and more open to being memeified (which is its own distribution channel).