Facebook Listens Closely (plus more from Microsoft & Ford)

Ford’s crypto & smart contracts, Facebook’s audio acoustics for VR, & more


1. Facebook – Audio Acoustics in VR / AR

Facebook wants virtual audio to sound as though it actually originates in the user’s local environment.

For example, imagine you’re in a mixed-reality environment where there is an AR dinosaur in your living room. When the dinosaur roars, Facebook wants to adjust the sound of the roar so that it reflects the acoustics of your living room.

To do this, the audio assembly on a user’s headset will capture the following pieces of information:

  • Location information of the user’s headset within their local environment

  • Visual information captured by the imaging sensors on a headset

  • Audio data (e.g. reverberation time) captured by microphones

  • Information describing the audio content (e.g. location information of the virtual sound source)

All of this information will then be sent to an audio server to generate a model of the user’s local environment, including a 3D visual representation of the room and acoustic parameters describing the properties of its various surfaces (e.g. a wooden table, a brick wall). Based on this model, the virtual audio content will be rendered and presented to the user.
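The capture-and-render loop described above can be sketched in a few lines. Everything here is hypothetical — the field names, the `CapturePacket` bundle, and the toy `room_reverb_gain` function are illustrative stand-ins for the patent's headset data and audio server, not Facebook's actual design:

```python
from dataclasses import dataclass

@dataclass
class CapturePacket:
    """Hypothetical bundle of the headset data listed above."""
    headset_position: tuple    # (x, y, z) location of the headset in the room
    reverberation_time: float  # reverberation time in seconds, from the microphones
    surface_absorption: dict   # per-surface absorption, e.g. {"brick_wall": 0.03}
    source_position: tuple     # (x, y, z) location of the virtual sound source

def room_reverb_gain(packet: CapturePacket) -> float:
    """Toy stand-in for the audio server: the more absorbent the room's
    surfaces, the less reverberant energy is mixed into the rendered sound."""
    avg_absorption = sum(packet.surface_absorption.values()) / len(packet.surface_absorption)
    # Scale the measured reverberation time by how 'live' the room is.
    return packet.reverberation_time * (1.0 - avg_absorption)

packet = CapturePacket(
    headset_position=(0.0, 1.6, 0.0),
    reverberation_time=0.6,
    surface_absorption={"brick_wall": 0.03, "wooden_table": 0.10},
    source_position=(2.0, 1.0, -1.0),
)
reverb = room_reverb_gain(packet)
```

The point is simply that each bullet above maps to a concrete measurement, and the server turns those measurements into rendering parameters.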

The intention of this patent is to make virtual sounds feel as if they are actually originating from your physical space. So far, the mainstream conversation around ‘spatialised audio’ has mostly revolved around sounds feeling like they are coming from some point in a 3D space. Facebook is taking this one step further by wanting to incorporate the acoustics of a user’s room, making spatialised audio feel more contextual to their environment.

The distinction between virtual and physical environments is increasingly blurring – welcome to the future.

2. Microsoft – Novel Pose Synthesis

This patent filing from Microsoft describes a method for taking an image of an object and rendering it from a new viewpoint.

This works by taking a source image that shows the object from its original viewpoint. For example, as shown in the top row of the image above, suppose we have an image of a chair taken from the side, and we want an image of that chair from the front. Microsoft’s patent describes taking a ‘depth map’ of the original image and synthesising it to correspond to the new perspective, then transferring the properties of the original image onto the depth map of the desired perspective. The result is a prediction of what the original chair would look like from a different viewpoint.
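The steps above can be sketched as a toy pipeline. This is a deliberately minimal illustration, not Microsoft's method: the "depth map" is reduced to a handful of 3D surface points, each carrying a colour sampled from the source image, and the viewpoint change is a plain rotation about the vertical axis:

```python
import math

# Hypothetical 'depth map' of the chair: 3D surface points, each carrying
# the colour transferred from the original side-view image.
points = [
    {"xyz": (0.0, 0.0, 1.0), "colour": "oak"},
    {"xyz": (0.5, 1.0, 1.2), "colour": "oak"},
]

def rotate_y(xyz, degrees):
    """Rotate a point about the vertical axis, i.e. move the camera
    from a side view toward a front view."""
    x, y, z = xyz
    t = math.radians(degrees)
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))

def synthesise_view(points, degrees):
    """Re-project every depth point to the new viewpoint, keeping the
    colour carried over from the source image (the property-transfer step)."""
    return [{"xyz": rotate_y(p["xyz"], degrees), "colour": p["colour"]}
            for p in points]

# Swing the camera 90 degrees: side view -> front view.
front_view = synthesise_view(points, 90.0)
```

The two ingredients the filing combines — geometry re-projected to a new perspective, and appearance transferred from the source image — are both visible even in this tiny sketch.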

Why is this interesting?

Microsoft describes this filing as useful in computer vision, specifically for understanding 3D scenes and recognising objects independently of viewpoint.

For example, suppose you want to redecorate your living room. Your smartphone app scans the room and identifies all of the objects, irrespective of the angle from which you’re pointing the phone. Now say you want to see what your living room would look like with another chair from Ikea. As you choose a chair from Ikea’s website, the app could take the 2D image from the website and turn it into a 3D object that could be manipulated and viewed from any angle.

In essence, Microsoft is looking to turn images of 2D objects into 3D models that retain the physical characteristics of the original 2D object from all viewpoints.

AR today requires 3D models to be created in advance, which are then mapped onto the 2D perspective of our cameras. One constraint on the possibilities of the AR world is the number of 3D models that can be created. With Microsoft’s filing, we could theoretically begin ‘converting’ our global library of 2D images into 3D models.

As is a growing theme in Patent Drop, we are marching into a world where the virtual and the physical worlds continually merge.

3. Ford – smart contract fulfilment + cryptocurrency

Ford is thinking about how smart traffic control systems can financially reward drivers in exchange for minor inconveniences.

For example, imagine a vehicle wants to merge into a lane of traffic. To do this, other vehicles need to slow down, creating an inconvenience for them. Ford is proposing a system in which the vehicles that slow down receive a payment, while the merging vehicle that caused the slowdown incurs a charge.

In a world of smart autonomous vehicles, this could be managed via smart contracts and possibly even cryptocurrency. The merging vehicle could form a smart contract with the vehicles that will be negatively affected, either agreed to manually by the inconvenienced vehicles or formed automatically. Once the manoeuvre is executed, real money or cryptocurrency could be transferred automatically between the parties.
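The settlement step can be sketched as a toy ledger. The vehicle names, the flat per-vehicle fee, and the `settle_merge` function are all hypothetical — this only illustrates the payment flow the filing describes, not Ford's contract mechanism:

```python
def settle_merge(merging_vehicle, inconvenienced, fee_per_vehicle):
    """Toy settlement: once the manoeuvre is confirmed, the merging vehicle
    pays each slowed-down vehicle a fixed fee."""
    balances = {v: 0.0 for v in [merging_vehicle] + inconvenienced}
    for v in inconvenienced:
        balances[merging_vehicle] -= fee_per_vehicle  # the merger is charged
        balances[v] += fee_per_vehicle                # the slowed vehicle is paid
    return balances

# car_A merges; car_B and car_C slow down to make room.
balances = settle_merge("car_A", ["car_B", "car_C"], 0.05)
```

A real smart contract would add escrow, agreement, and confirmation of the manoeuvre before funds move, but the core idea — inconvenience priced and transferred automatically — is this simple.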

This smart contracting system could be used widely across road infrastructure. For example, if a traffic light changes prematurely to accommodate a city bus, this could entitle all vehicles within a certain distance of the light to a payment.

What’s most interesting about this filing is how smart infrastructure and the Internet of Things could enable new forms of automated transactions with smart contracts and cryptocurrency – and Ford’s filing is providing a peek into that world.