Researchers tried out several new devices to get closer to the ideal needed for deep learning and neuromorphic computing
What’s the best type of device from which to build a neural network? Ideally, it should be fast, small, consume little power, and be able to reliably store many bits’ worth of information. And if it’s going to be involved in learning new tricks as well as performing those tricks, it has to behave predictably during the learning process.
Neural networks can be thought of as a group of cells connected to other cells. These connections—synapses in biological neurons—each have a particular strength, or weight, associated with them. Rather than use the logic and memory of ordinary CPUs to represent these weights, companies and academic researchers have been working on ways of representing them in arrays of different kinds of nonvolatile memory. That way, key computations can be made without having to move any data. AI systems based on resistive RAM, flash memory, MRAM, and phase-change memory are all in the works, but they all have their limitations. Last week, at the IEEE International Electron Devices Meeting in San Francisco, researchers put forward some candidates that might do better.