DARPA is working on a new chip design. It promises high efficiency, powerful compute and…low accuracy?

Photo credit DARPA

The Defense Advanced Research Projects Agency (DARPA) is the US Department of Defense (DoD) agency responsible for the development of new technologies for use by the military. 

The scientists at DARPA routinely launch projects so audacious and ambitious that their project list looks like something conjured up by science fiction writers and Hollywood directors rather than working engineers.  Thought-controlled bionic arms?  Check.  (See project Proto 2.)  Iron Man-style powered exoskeleton?  You bet.  (See project XOS.)  Battlefield telepathy?  Yep.  (See project Silent Talk.)  Throw in a half dozen autonomous robots and an assortment of super-soldier tactical gear, and you start to get a picture of a group that lives on the bleeding edge of science and engineering.

Given the radical technology innovations these guys are dreaming up, it’s no surprise that they’re getting frustrated with the slow pace of computer chip efficiency improvements.  When your projects include surveillance systems that can “track everything that moves” in an entire city (CTS), you obviously need computers with serious processing power.  But, just as importantly, these DARPA projects also require computers that make efficient use of electricity.

Per-chip processing power has continued to double roughly every 18 months, in accordance with Moore’s Law.  Chip energy efficiency, however, has reached a near dead end; power scaling has all but ceased.  As a result, battery-powered devices can’t keep up with the energy demands of their computer chips.

To address the chip efficiency issue, DARPA is throwing away the digital rule book and designing a new generation of ANALOG computer chips.  Instead of using the energy-intensive, Boolean-logic strategy of driving voltage into transistors to flip their state between zero and one, DARPA is exploring low-power “probabilistic” computing made possible by analog circuits.

Daniel Hammerstrom, DARPA program manager for project UPSIDE, expects intelligence, surveillance and reconnaissance (ISR) systems utilizing the new technology to be faster and “orders of magnitude more power-efficient.”

How does it work?  According to the DARPA press release: “UPSIDE envisions arrays of physics-based devices (nanoscale oscillators may be one example) performing the processing. These arrays would self-organize and adapt to inputs, meaning that they will not need to be programmed as digital processors are. Unlike traditional digital processors that operate by executing specific instructions to compute, it is envisioned that the UPSIDE arrays will rely on a higher level computational element based on probabilistic inference embedded within a digital system.”  (A super-fast, super-efficient, self-organizing, self-programming nano-computer?  What could possibly go wrong?)
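The press release is vague on details, but the core idea, inference rather than instruction execution, can be sketched in software.  Here’s a toy illustration of my own (this is not DARPA’s design, and the “vehicle vs. background” scenario and all the numbers are made up): instead of computing an exact answer step by step, the system weighs noisy evidence and settles on the most probable interpretation, in this case a one-line Bayesian update on a noisy sensor reading.

```python
import math

def gaussian(x, mean, sigma):
    """Likelihood of observing reading x from a source with this mean and noise."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Two hypotheses about what one sensor pixel is looking at:
# "vehicle" pixels tend to be bright (mean 0.8), "background" dim (mean 0.3).
priors = {"vehicle": 0.5, "background": 0.5}
means = {"vehicle": 0.8, "background": 0.3}

reading = 0.72  # one noisy pixel intensity

# Bayes' rule: posterior is proportional to likelihood times prior.
unnorm = {h: gaussian(reading, means[h], 0.15) * priors[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # the most probable interpretation, not an exact answer
```

A conventional processor would run this as a sequence of instructions; the UPSIDE pitch is that arrays of physical devices could settle into the equivalent answer directly, without the instruction stream.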

Probabilistic computing abandons the it’s-either-one-or-zero straightforwardness of digital computing and, as a result, sacrifices some accuracy.  For imaging and surveillance applications, the inexact nature of analog probability processing may be good enough.

Ben Vigoda, the general manager of the Analog Devices Lyric Labs group, seems to think that the technology may be applicable to the problem of energy consumption by data centers and server farms.  In an article for Wired, Vigoda said: “We’re using a few percent of the U.S.’s electricity bill on server farms and we can only do very basic machine-learning.  We’re just doing really, really simple stuff because we don’t have the compute power to do it.  One of the ways to fix this is to design chips that do machine-learning.”

Maybe.  But a large portion of the server farms using all that electricity are financial-sector facilities.  I don’t see them rushing out to cut their power bills by introducing errors into their data.

For the foreseeable future, information systems will still require three core characteristics: Confidentiality, Integrity and Availability (CIA).  I don’t see many takers for computing strategies that sacrifice one of these core principles for better energy efficiency.