Lockheed Martin Lands the Aircraft (plus more from Ford & Nvidia)

Self-landing military planes, cars that don’t stink, and live inside photos

1. Lockheed Martin – evaluating landing zones for autonomous vehicles

This is the first time a defense company has been featured on Patent Drop.

This patent application looks at how Lockheed Martin’s autonomous aircraft can evaluate different potential landing zones.

To decide where to land, Lockheed Martin’s aircraft will integrate real-time cellular data to detect the presence of people in a potential landing area. More specifically, the aircraft will calculate cell phone density – the number of cell phones in a particular area.

The real-time cell phone data will be used in conjunction with sensor data, such as LADAR (laser detection and ranging) sensors and LIDAR (light detection and ranging) sensors, as well as existing world maps that contain data around terrain and any obstacles (man-made or natural).

The aircraft will be configured with acceptability criteria for determining which landing zones are acceptable and which aren’t. For instance, a landing zone would only be deemed safe if its cell phone density fell below a threshold value.
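The logic described above – threshold-based acceptability criteria combining cell phone density with terrain and obstacle data – could be sketched roughly like this. All field names, thresholds, and the scoring scheme are my own illustrative assumptions, not anything from the filing:

```python
# Hypothetical sketch of threshold-based landing-zone acceptability.
# Thresholds and zone fields are illustrative assumptions.

MAX_PHONE_DENSITY = 5      # phones in the zone before it's deemed unsafe
MAX_SLOPE_DEGREES = 10     # terrain slope limit from the world map

def is_acceptable(zone):
    """Apply the acceptability criteria to one candidate landing zone."""
    if zone["phone_density"] > MAX_PHONE_DENSITY:
        return False            # too many people nearby
    if zone["slope"] > MAX_SLOPE_DEGREES:
        return False            # terrain too steep
    if zone["obstacles"]:
        return False            # LIDAR/LADAR detected obstructions
    return True

candidates = [
    {"id": "A", "phone_density": 0, "slope": 3, "obstacles": []},
    {"id": "B", "phone_density": 12, "slope": 2, "obstacles": []},
    {"id": "C", "phone_density": 1, "slope": 4, "obstacles": ["power line"]},
]

acceptable = [z["id"] for z in candidates if is_acceptable(z)]
print(acceptable)  # -> ['A']
```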

Why is this interesting? I always find it fascinating how the world is awash with different data sources, and how processing that data can enable surprisingly complex activities – in this example, cellular data helping military aircraft land autonomously.

Beautiful and scary.

2. Ford – making shared cars less smelly

Funny and curious.

According to Ford, a key differentiator in the future mobility market is odour control – especially in a world of shared vehicles (e.g. robo-taxis).

Odour mitigation is pretty complex when you think about it. Firstly, odour is mostly a psychological response and is difficult to map to the chemistry that produces it. Secondly, responses to odour are complex: they can vary based on cultural, demographic, geographic or other backgrounds. Lastly, the response to odours is time dependent – an odour that increases gradually may elicit no response at all.

In this filing, Ford mentions a number of potential odour management systems.

One relies on an e-nose (electrochemical nose) that contains a number of sensors to identify the presence of potential smells in the air. To mitigate any odours, the vehicle will rely on devices that either add, move or remove air from a vehicle interior. For example, window controllers, HVAC systems, filtration systems or air freshener systems.

Another system relies on human noses. In this example, the car might ask a human to describe a new smell detected in the air and whether they like it or not. The chemical fingerprint of the smell would then be stored, alongside the description of it and whether it’s considered good or bad. The system would then figure out the best odour mitigation strategy for that smell, and store that learning for the next time it encounters that smell.
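The human-in-the-loop system above could be sketched roughly as follows: store a smell’s chemical fingerprint alongside the passenger’s verdict, and reuse the learned mitigation the next time that fingerprint is detected. The fingerprints, strategies, and matching rules are all illustrative assumptions on my part:

```python
# Hypothetical sketch of the learned odour-mitigation loop described
# in the filing. Fingerprints and strategies are illustrative assumptions.

odour_memory = {}  # fingerprint -> {"description", "pleasant", "mitigation"}

def choose_mitigation(description):
    """Pick a strategy from the kinds of devices the filing lists."""
    if "smoke" in description:
        return "open windows"          # window controllers
    return "activate filtration"       # HVAC / filtration systems

def record_feedback(fingerprint, description, pleasant):
    """Store what the passenger said about a newly detected smell."""
    odour_memory[fingerprint] = {
        "description": description,
        "pleasant": pleasant,
        "mitigation": None if pleasant else choose_mitigation(description),
    }

def on_smell_detected(fingerprint):
    """React to an e-nose reading: reuse learning, or ask the passenger."""
    known = odour_memory.get(fingerprint)
    if known is None:
        return "ask passenger to describe the smell"
    if known["pleasant"]:
        return "no action"
    return known["mitigation"]

record_feedback("C8H8O3", "sweet vanilla", pleasant=True)
record_feedback("C4H8S", "smoke-like", pleasant=False)
print(on_smell_detected("C4H8S"))   # -> open windows
print(on_smell_detected("C2H4O2"))  # -> ask passenger to describe the smell
```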

In a world of shared car ownership and self-driving taxi fleets, Ford wants to make sure the future doesn’t stink.

3. Nvidia – panorama generation for VR

Nvidia is a fascinating company that’s powering the next wave of a number of major tech trends: automation, the metaverse, crypto, and more.

This patent application is short but interesting, and points towards Nvidia wanting to bridge the existing digital world into a VR-friendly world.

Virtual Reality gives users a full 360-degree view of a scene. However, most of our cameras and images aren’t panoramic, and therefore can’t be fully enjoyed in VR yet.

In this filing, Nvidia wants to use generative adversarial networks (GANs) to take standard images and turn them into panoramic ones.
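To make the idea concrete, here’s a minimal sketch of just the data-preparation side of this kind of outpainting: embedding a standard image in a 2:1 equirectangular panoramic canvas and building a mask marking the regions a generator would have to hallucinate. The generator itself is out of scope, and the canvas size and placement are my assumptions, not Nvidia’s method:

```python
# Hypothetical sketch: prepare a standard image for panoramic outpainting.
# A GAN generator would fill the masked (True) regions; not shown here.
import numpy as np

def prepare_for_outpainting(image, pano_width=2048):
    """Center `image` (H x W x 3) on a 2:1 panoramic canvas, with a mask."""
    pano_height = pano_width // 2
    canvas = np.zeros((pano_height, pano_width, 3), dtype=image.dtype)
    mask = np.ones((pano_height, pano_width), dtype=bool)  # True = generate

    h, w = image.shape[:2]
    top = (pano_height - h) // 2
    left = (pano_width - w) // 2
    canvas[top:top + h, left:left + w] = image
    mask[top:top + h, left:left + w] = False  # keep the original pixels
    return canvas, mask

photo = np.random.randint(0, 256, (512, 768, 3), dtype=np.uint8)
canvas, mask = prepare_for_outpainting(photo)
print(canvas.shape)  # (1024, 2048, 3) – most of the panorama is generated
```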

At first glance, this might seem pretty mundane.

But in reality, this filing begins to turn every existing form of 2D media into a scene that people can enter in VR. At present, these generated scenes may just be static, panoramic images. But GANs could eventually be used to animate the characters and generate appropriate sound for the scene. AI could be used to animate the people in the image and turn them into characters you can interact with. VR may not just be about generating future worlds, but jumping back into historic ones.