Amazon’s chief roboticist discusses the latest advances in the field and how his team is using machine learning to make its robots smarter
Starting with its acquisition of Kiva Systems for $775 million back in 2012, Amazon has been steadily investing in a robotic future. From delivery drones to a rumored home robot to a robotics picking challenge, Amazon definitely wants useful, practical robots to happen. We’re not always sure that they’re going about it the right way, but we are always in favor of companies with as much clout as Amazon has recognizing that robotics is worth focusing on, especially with an understanding that some problems are going to take years of work to solve.
Brad Porter is the vice president of robotics at Amazon. He joined the company over a decade ago, initially working on Amazon’s web operations and e-commerce architecture. He later joined a team led by Jeff Wilke, chief executive of worldwide consumer, as a distinguished engineer, and during that time he oversaw technical preparations for Amazon’s first Prime Day and helped establish the Prime Air drone delivery organization. Porter earned bachelor’s and master’s degrees in computer science at MIT before joining Netscape and later helping start an early cloud technology company. Now leading the company’s robotics efforts, he oversees teams in Seattle, Boston, and Europe. He spoke with IEEE Spectrum via email.
IEEE Spectrum: How did you get started in robotics originally, and how did you end up at Amazon Robotics?
Brad Porter: I’ve always enjoyed learning about technology but my interest in robotics started with a Radio Shack Armatron robotic arm my parents gave me. I spent hours working with it and learning to control it to do very basic tasks. Even at a young age, I was so interested in what this robotic arm was able to do with a few simple controls I programmed. Looking back, I think that’s one of the moments that really shaped where I wanted to go with my education and career.
I joined Amazon in 2007 but wasn’t initially part of the robotics department. In fact, I was brought in to work on the website development platform. Over the past 11 years, I’ve had a lot of different roles and have been able to work on some really cool projects, like the e-commerce platform architecture and Prime Air. After working on those projects, I eventually moved to the robotics team to find new ways we could improve our global operations network.
What makes Amazon Robotics unique?
This isn’t specific to the robotics team, but the culture at Amazon as a whole is unique in how it operates and encourages people to be curious, look around corners, and fail fast. We want our teams to be inspired by the work they’re doing and to invent things no one has done before. In the robotics group, that same willingness to be curious and fail quickly is allowing us to embrace emerging technologies and techniques, such as AI-based controls.
We’re fairly familiar with the way your warehouse robots operate, I think, by bringing items to human pickers. What else can these robots do? What opportunities are there to improve them?
We do have robots we call drive units that use sophisticated route planning to bring items to our associates, who pick customer orders. We started working with these robots in 2012, when we acquired Kiva Systems, and since then we’ve continued to innovate on them and make them more efficient. These robots have also helped us maximize storage space so we can hold more inventory in the same footprint. The team has also built larger versions that can move entire pallets of product, in addition to the smaller drive units that move inventory pods. Today we have more than 100,000 drive units deployed throughout our global fulfillment network.
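The kind of route planning a drive unit performs can be reduced, for illustration, to shortest-path search on a grid of floor cells. The sketch below uses plain breadth-first search on a toy floor map (the grid, start, and goal are invented for this example; Amazon's actual planner is not described in the interview):

```python
from collections import deque

# Toy sketch of grid route planning for a drive unit: breadth-first search
# over a small floor grid, where 1 marks a blocked cell (e.g., a stored pod).
# This is an illustration, not Amazon's planner.

def shortest_route(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent chain back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

floor = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
route = shortest_route(floor, (0, 0), (2, 3))
```

Because BFS explores cells in order of distance, the first time it reaches the goal it has found a shortest route; a production planner would additionally handle many units moving at once.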
But many forget those aren’t the only robots in our fulfillment centers. We have also deployed around 30 palletizer systems throughout our FCs. Another robot you might see is called RoboStow, a 6-ton machine that can lift pallets of products up to 24 feet high and place them directly onto the larger drive units.
How do you think the state-of-the-art in robotics right now compares to the perception that robotics has in your industry specifically, and from society in general?
In some ways people are surprised that robots in our buildings, for the most part, don’t look like what you see in movies. Our buildings look like high-tech manufacturing plants with storage systems and conveyors. The robots blend in seamlessly with other more traditional industrial automation.
The revolution is how much high-tech we can pack into a robot under the covers. The combination of high-speed wireless networks, batteries, higher-quality sensors at lower cost, and more compute horsepower in smaller packages lets us replace traditional solutions, like fixed conveyor belts, with more flexible ones, like robotic drives, and rethink how our processes work. The result is that we’re able to make these systems more autonomous and more collaborative with humans.
But there’s still so much to do. I look forward to the day I walk into one of our buildings and walk alongside all types of robots skittering around at my feet like in Star Wars. We’re not there yet.
Did the results of the Amazon Robotics Challenge make you more optimistic about the near future of robotics, or less?
The Amazon Robotics Challenge was created to inspire the next generation of inventors and we were blown away by what we saw. It was so inspiring to see what these teams brought to the table that it’s hard not to be optimistic about the future of robotics.
What do you think are the biggest challenges in robotics right now?
There have been so many advances in robotics, and yet we’re still in the very early stages of innovation and still learning how best to work with robots. At Amazon, the sheer range of inventory items we deal with is, in and of itself, a challenge for robotic solutions. We also have to consider the scale at which we deploy robots: it’s great if a robot works perfectly in a small test environment, but we need it to also work in a much larger one without disrupting the entire fulfillment process. I’m so impressed with what my team has achieved in the past few years, and I’m excited to see how they overcome these obstacles.
What is the most interesting research you’ve seen recently?
I think we’re inspired by the progress in reinforcement learning for games like Atari or Dota 2. In some ways these are a class of control problems: given an input image or stream of images, what is the right combination of outputs to invoke? We’ve been playing with these techniques and showed a fun little demonstration at our MARS conference last year, where our robotic arm learned to play beer pong using only feedback from humans as to whether it was getting better or worse. The robot learned this constrained task about as fast as humans do. Of course, the advantage humans have is that they can judge themselves.
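Learning from only a better/worse signal can be sketched as simple hill climbing on a control parameter. In the toy below, everything is hypothetical: a single "throw strength" parameter, a simulated judge standing in for the human feedback Porter describes, and a made-up target. It shows the shape of the idea, not Amazon's actual training setup:

```python
import random

# Illustrative sketch (not Amazon's system): hill-climbing on one throw
# parameter, using only a binary better/worse signal per attempt.

TARGET = 0.7  # hypothetical ideal throw strength

def score(strength):
    # Stand-in for "did the ball land closer to the cup?" -- closer to
    # TARGET is better. In the real demo only humans could judge this.
    return -abs(strength - TARGET)

def human_feedback(prev_score, new_score):
    """Simulated judge: was the new attempt better than the last best?"""
    return new_score > prev_score

def learn(iterations=300, step=0.05, seed=0):
    rng = random.Random(seed)
    strength = 0.0
    best = score(strength)
    for _ in range(iterations):
        candidate = strength + rng.uniform(-step, step)  # try a small tweak
        s = score(candidate)
        if human_feedback(best, s):   # keep the tweak only if judged better
            strength, best = candidate, s
    return strength

best_strength = learn()
```

The learner never sees the score itself, only whether each attempt beat the previous best, which is exactly the constraint a human judge imposes.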
Can you describe any specific engineering challenges that you’ve had to solve?
One of the interesting challenges we have is how to deal with misplaced inventory in our fulfillment centers. Our systems rely on knowing where every item you might want to buy is stored, but sometimes items end up in the wrong place, or they fall out. When a customer buys an item and we go to pick it from its bin, if it isn’t there we have a problem: we have to scramble to find another unit of that product, in the same building or maybe at a different building in the network, and make sure we can still get it to the customer on time. We could simply count inventory to make sure everything is where we expect it, but we have a lot of inventory, so we’ve looked for more efficient approaches.
One of the simple ideas that has worked really well is to take a photo of the storage pod. By combining the photo with knowledge of what is supposed to be in every slot, we use machine learning to assign a probability that a slot might not have the right inventory, because it looks too full, or not full enough, or the wrong colors. This technique isn’t perfect, since we only see the front of the pod and not everything on the shelf, but by using those predictions we’ve been able to identify and correct misplaced inventory with far less manual counting.
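The scoring step Porter describes can be sketched as combining per-slot image features with the expected contents. In this toy version the features (fill level, color-histogram distance) and all the weights are invented for illustration; a real system would learn them from labeled examples:

```python
import math

# Illustrative sketch (hypothetical features and weights, not Amazon's
# model): score a storage-pod slot for misplaced inventory by combining
# simple image-derived features with what the slot is supposed to hold.

def mismatch_probability(observed_fill, expected_fill, color_distance,
                         w_fill=4.0, w_color=3.0, bias=-2.5):
    """Logistic combination of two hand-picked features:
    - how far the observed fill level is from the expected one
      (slot looks too full or too empty), and
    - distance between observed and expected color histograms (0..1).
    All weights here are made up for illustration."""
    z = (w_fill * abs(observed_fill - expected_fill)
         + w_color * color_distance
         + bias)
    return 1.0 / (1.0 + math.exp(-z))

# A slot matching expectations scores low; a suspicious one scores high.
ok = mismatch_probability(observed_fill=0.62, expected_fill=0.60,
                          color_distance=0.05)
sus = mismatch_probability(observed_fill=0.95, expected_fill=0.40,
                           color_distance=0.70)

# Only slots above a threshold get flagged for a manual count.
flagged = [p for p in (ok, sus) if p > 0.5]
```

The payoff of producing a probability rather than a yes/no answer is that the threshold can be tuned: flag fewer slots when manual counting is expensive, more when misplaced inventory is hurting on-time delivery.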
What kinds of amazing things is Amazon Robotics working on that you’d love to tell us about but can’t?
You’ll just have to wait and see!
[ Amazon Robotics ]