
It’s notoriously difficult to consistently measure the energy usage of AI models, but DARPA wants to put an end to that uncertainty with new “energy-aware” machine learning systems.
The Mapping Machine Learning to Physics (ML2P) program, which opened solicitations on Tuesday, aims to do something that sounds simple, at least on paper: map the efficiency of various forms of machine learning directly to, as the name suggests, physics. In this case, that means “precise granular measurements in joules,” the Defense Advanced Research Projects Agency said.
“Today when we build machine learning models, we only optimize for performance, and we miss other characteristics. A very important characteristic is how much energy it’s using,” ML2P founding program manager Bernard McShea said in a video accompanying a press release DARPA published about the program on Wednesday.
“What we want to do is consider mapping machine learning model performance to physical characteristics,” McShea added. “We want to do this so we can balance model performance with the amount of resources it’s taking up.”
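In practice, measuring energy at that granularity usually means reading power counters while a model runs. The sketch below is illustrative only, not ML2P code: it estimates the joules a single NVIDIA GPU draws while a chunk of ML work executes, by polling NVML’s power reading through the pynvml package and integrating it over time. It assumes GPU 0 and captures GPU draw only, not whole-system energy.

```python
# Illustrative sketch (not part of ML2P): estimate GPU energy in joules for a
# workload by sampling NVML power readings and integrating over time.
import threading
import time

import pynvml


def measure_gpu_joules(workload, poll_interval_s=0.05):
    """Run workload() and return (result, estimated joules drawn by GPU 0)."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    samples = []  # (timestamp_seconds, watts)
    stop = threading.Event()

    def poll():
        while not stop.is_set():
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
            samples.append((time.monotonic(), watts))
            time.sleep(poll_interval_s)

    poller = threading.Thread(target=poll, daemon=True)
    poller.start()
    try:
        result = workload()
    finally:
        stop.set()
        poller.join()
        pynvml.nvmlShutdown()

    # Trapezoidal integration of sampled power (W) over time (s) -> energy (J)
    joules = sum(
        0.5 * (w0 + w1) * (t1 - t0)
        for (t0, w0), (t1, w1) in zip(samples, samples[1:])
    )
    return result, joules
```

Even a simple harness like this hints at why the problem is hard: the number it returns depends on the polling rate, the hardware, and everything else running on the box.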
For DARPA, concerns over AI energy usage are particularly pertinent given the battlefield and edge uses of the technology that the Pentagon wants to get in the hands of soldiers. In the field, systems are generally battery powered, and if they’re not optimized for a balance of performance and energy usage, warfighters might find themselves coming up short.
“We want to move beyond optimizing just for accuracy and instead understand, for every joule of electricity, what level of performance we’re getting back,” McShea said. “That will enable us to build AI that is smarter, leaner, and more useful to the warfighter.”
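That framing suggests a straightforward figure of merit: divide whatever quality metric a model is judged on by the joules spent earning it. A toy sketch, assuming you already have an accuracy count and a measured energy total (from a wall-socket meter or the GPU-polling sketch above), might look like this; it is not an official ML2P metric.

```python
# Toy "accuracy per joule" figure of merit, not an official ML2P metric.
def accuracy_per_joule(correct: int, total: int, joules: float) -> float:
    """Eval accuracy obtained per joule spent on the evaluation run."""
    if total <= 0 or joules <= 0:
        raise ValueError("total predictions and measured energy must be positive")
    return (correct / total) / joules


# e.g. 9,200 of 10,000 test items correct for a measured 1,840 J
# -> 0.92 / 1840 J ≈ 5.0e-4 accuracy points per joule
print(accuracy_per_joule(9200, 10000, 1840.0))
```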
The ML2P team doesn’t intend to stop there, however. Any performer selected for the program will have to release their documentation, algorithms, code, and tutorials under a permissive open-source license. So the tools they build to help AI track its own energy use will be made available to the wider research community.
“The key transition objective of ML2P is to make ML2P software the gold standard for ML construction and simulation of power usage and trade-offs,” McShea told The Register in an email.
Others have tried to estimate AI energy usage before, but those figures are often incomplete because researchers lack visibility into the closed-source models that dominate the field. Factors like hardware, workload type, location, and even operating conditions can all influence the energy used per query. McShea said that he wants ML2P to account for all of those upstream and downstream machine learning design choices and return accurate, real-world data.
“ML2P will redefine power to be a ‘first-class citizen’ throughout the ML life cycle,” DARPA added in its program solicitation document.
Selected performers will have their work cut out for them, and on a tight schedule. DARPA plans to divide ML2P into two 12-month phases: the first six months will be devoted to experiment setup, and the remaining 18 months of the two-year program will go to gathering experimental data on whatever methods teams develop. Starting in month seven, a government test-and-evaluation team will run in parallel for the remainder of the program to validate performers’ findings and identify which approaches work best.
McShea also hopes ML2P can lead to the development of more energy-efficient AI hardware.
“By enabling principled simulation of machine learning model performance on general-purpose compute systems, it could provide insights into how hardware should be optimized for AI workloads,” McShea said.
Interested parties, whom DARPA wants to draw from disciplines including electrical engineering, mathematics, logic, and machine learning, will have until December 8 to submit their proposals for a slice of the expected $5.9 million ML2P budget. ®