The most important attribute in today’s cutting-edge drones is the ability to see and understand the world so that the drone can safely navigate to its destination and around obstacles. Last week TechOptimals learned that Amazon had acquired a team of a dozen computer vision experts based in the Austrian city of Graz. Amazon subsequently confirmed the news and made some of the key players available for an interview. The team, which includes a number of veterans from Microsoft, represents some of Europe’s top experts in computer vision. They will serve as the core of a newly launched development center in Graz that will focus on developing technology for the Prime Air drone delivery fleet.

Landing in a customer’s backyard to deliver a package sounds simple, but it is a serious challenge for a flying robot to perform safely. “We may determine there is patio furniture or bushes. You are trying to distinguish between those obstacles and things like light shining through the leaves of a tree, reflections on water or a plate glass window,” says Paul Viola, vice president of science at Prime Air. “While we are descending we are making dynamic updates to our decision making process, and we need to keep our eyes open because things could change as well. We could see things moving, a soccer ball in the backyard, and we have to immediately sense and avoid them.”

A SWIMMING POOL MAY BE PERFECTLY FLAT, BUT IT’S NOT A GOOD PLACE TO LAND

Konrad Karner, the team lead at Amazon’s Prime Air lab in Austria, described how a delivery drone might approach a landing. “To enable safe navigation through a given environment, one ingredient is precise knowledge of surrounding objects. By applying modern computer vision techniques we can determine both the geometric properties and the semantic meaning of individual objects.” In other words, the drone will not just see the shape of the world around it, but understand its properties. “It’s important not to hit any objects while descending, but we also need to have an understanding of the objects the drone is seeing. A swimming pool may be a perfectly flat landing spot from a geometric point of view, but not exactly where we want our drones delivering packages!”
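The two-part test Karner describes — is the surface flat, and is it something a drone should actually touch? — can be sketched in a few lines. This is a hypothetical illustration only; the surface labels, slope threshold, and function name are invented for the example and are not Amazon’s implementation.

```python
# Hypothetical sketch: a landing spot must pass BOTH a geometric check
# (flat enough) and a semantic check (a surface safe to land on).
# Labels and thresholds here are illustrative, not Prime Air's.

SAFE_SURFACES = {"lawn", "patio", "driveway"}   # semantically OK to land on

def is_safe_landing_spot(surface_label: str, slope_degrees: float,
                         max_slope: float = 5.0) -> bool:
    """Geometric flatness alone is not enough: a swimming pool is
    perfectly flat, but it's not a good place to land."""
    geometrically_flat = slope_degrees <= max_slope
    semantically_safe = surface_label in SAFE_SURFACES
    return geometrically_flat and semantically_safe

print(is_safe_landing_spot("lawn", 2.0))   # True
print(is_safe_landing_spot("pool", 0.0))   # False: flat, but unsafe
```

The point of the sketch is that the checks are conjunctive: either failure alone vetoes the spot.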

In the past, Karner has worked on a variety of projects related to mapping, imaging, and computer vision. Crucially, he has a background in computer vision approaches that can process large amounts of data very quickly. “When looking at Amazon’s use case, instead of focusing on the term ‘real-time,’ I’d rather use the term ‘throughput,’” he explained. “Real-time can be anything from microseconds to milliseconds, or even seconds. Throughput however deals with the amount of — in our case — pixels per second. While in previous positions my team was focused on huge amounts of data processing in a non-real-time environment, now we focus on real-time performance on smaller data. The throughput is the same in both cases.”
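Karner’s distinction is easy to make concrete with arithmetic. The numbers below are invented for illustration (the article gives none), but they show how a slow offline batch job and a fast onboard pipeline can have identical pixels-per-second throughput:

```python
def throughput_px_per_sec(width, height, frames, seconds):
    """Pixels processed per second: Karner's notion of throughput."""
    return width * height * frames / seconds

# Offline mapping (hypothetical numbers): 10,000 aerial photos of
# 100 megapixels each, processed as a batch over ~28 hours.
offline = throughput_px_per_sec(10_000, 10_000, frames=10_000, seconds=100_000)

# Onboard a drone (hypothetical numbers): 1-megapixel frames at
# 10 fps, each finished within its 0.1-second real-time budget.
onboard = throughput_px_per_sec(1_000, 1_000, frames=10, seconds=1)

print(offline == onboard)   # True: same throughput, very different latency
```

The latency requirements differ by five orders of magnitude, but the pixel rate is the same — which is Karner’s point about why his batch-processing background transfers to the drone problem.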

RECONSTRUCTING 3D GEOMETRY FROM FLAT IMAGES

One of Karner’s previous projects, Metropogis, tried to create a robust 3D model of a city based on simple 2D images captured by consumer-grade digital cameras. “Our focus has largely been on reconstructing geometry from images, in combination with recognizing objects in the surrounding environment. This will translate well into Prime Air as we focus on building safe sense-and-avoid systems for our customers,” Karner explained. “We not only invented new technologies but also made them scalable to run on several thousand computers in parallel.”
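The core operation behind reconstructing geometry from flat images is triangulation: given the same point seen from two known camera positions, recover its 3D location. The following is a textbook linear (DLT) triangulation sketch with made-up camera poses, not code from Metropogis or Prime Air:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two cameras with 3x4 matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                    # projection in camera 1
x2 = (X_true[:2] - [1.0, 0.0]) / X_true[2]     # projection in camera 2

print(np.allclose(triangulate(P1, P2, x1, x2), X_true))   # True
```

A city-scale system repeats this for millions of matched points across thousands of photos — which is where the “several thousand computers in parallel” come in.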

Amazon’s drones are aiming to deliver packages under five pounds in 30 minutes or less, with a range of 15 miles. After leaving the controlled airspace of the warehouse with their cargo, the drones will rise to a cruising altitude of several hundred feet. They will travel to the destination at around 60 miles an hour, then attempt to land and drop off the goods. To avoid other drones and planes moving at high speeds, the drones will come equipped with sensors like radar and lidar. They will almost certainly have some form of collaborative sense-and-avoid, sharing their location with, and receiving locations from, manned aircraft through a technology along the lines of ADS-B. And yet, with all that tech onboard, computer vision will still be key. “Vehicle-to-vehicle communication is important,” says Viola. “But birds don’t have that, so we’ll need to keep our heads up.”
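The stated numbers are self-consistent: a back-of-envelope check shows that a maximum-range trip leaves half the delivery window for everything other than cruising. (The split below is our arithmetic on the article’s figures, not an Amazon flight profile.)

```python
# Sanity check on the article's numbers: 15 miles at 60 mph.
range_miles = 15
cruise_mph = 60
promise_minutes = 30

cruise_minutes = range_miles / cruise_mph * 60
slack_minutes = promise_minutes - cruise_minutes

print(cruise_minutes)   # 15.0 minutes spent cruising
print(slack_minutes)    # 15.0 minutes left for climb, descent, and delivery
```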

“I’M CONVINCED ROBOTS WILL BECOME UBIQUITOUS”

While Amazon hasn’t tipped its hand about any plans for autonomous vehicles outside of drones, Viola believes the Prime Air program has a lot of natural overlap with driverless car projects. “To some extent I think there is a really good analogy with autonomous driving. We can see the rapid pace of improvement there, and we leverage as much as we can out of that community,” he explained. “Various sensors, algorithms, and other technology are getting developed rapidly there, and at scale. That is being broadly communicated through the academic community.”

Karner also sees the parallels, and is bullish on the broader future of robotics. “In the long term I’m convinced that robots will become ubiquitous. To me, it all started with robots mowing the lawn, which is quite standard these days, and moved on to cleaning the house. Next will be the delivery of goods via drones and transportation by self-driving cars.”
