Georgia Tech’s augmented-reality interface gives control over complex robots to the people who need them
Robots could enable people to live safely and comfortably in their own homes as they grow older. In the near future (we’re all hoping), robots will be able to help us by cooking, cleaning, doing chores, and generally taking care of us, but they’re not yet capable of doing those sorts of things autonomously. Putting a human in the loop can make robots useful much sooner, which matters most for the people who stand to benefit the most from this technology—specifically, people with disabilities that make them more reliant on care.
Ideally, the people who need things done would be the ones in the loop telling the robot what to do, but that can be particularly challenging for those with disabilities that limit their mobility. If you can’t move your arms or hands, for example, how are you going to control a robot? At Georgia Tech, a group of roboticists led by Charlie Kemp is trying to figure out how to make this work by developing new interfaces that enable the control of complex robots through a single-button mouse and nothing else.