Automated techniques could make AI development easier

“BERT takes months of computation and is very expensive, like a million dollars to generate that model and repeat those processes,” says Bahrami. “So if everyone wants to do the same thing, then it’s expensive, it’s not energy efficient, it’s not good for the world.”

Although the field looks promising, researchers are still seeking ways to make autoML techniques more computationally efficient. Methods like neural architecture search, for example, currently build and test many different models to find the best fit, and the energy it takes to complete all those iterations can be significant.
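To see why those iterations add up, here is a minimal sketch of search-based model selection in the spirit of neural architecture search, using small scikit-learn networks as stand-ins for full deep learning models. The search space, training budget, and dataset are hypothetical choices for illustration, not a specific system from the article.

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# A toy "architecture" search space: each candidate is a tuple of layer widths.
search_space = [(32,), (64,), (128,), (64, 32), (128, 64), (128, 64, 32)]

best_score, best_arch = 0.0, None
for arch in random.sample(search_space, k=4):  # budget: 4 full trainings
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)        # every candidate is trained from scratch...
    score = model.score(X_val, y_val)  # ...then evaluated, which is where cost piles up
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch} with validation accuracy {best_score:.3f}")
```

In real neural architecture search, each candidate is a deep network that can take hours or days to train, which is why the number of iterations dominates the energy bill.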

AutoML techniques can also be applied to machine learning algorithms that do not involve neural networks, such as creating random decision forests or support vector machines to classify data. Research in those areas is more advanced, with many coding libraries already available for people who want to incorporate autoML techniques into their projects.
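For example, a library like TPOT uses genetic programming to evolve scikit-learn pipelines, including random forests and support vector machines, in a few lines of code. The snippet below follows TPOT's classic API; the dataset and search budget are illustrative choices.

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Search over preprocessing steps, model families (random forests, SVMs, etc.),
# and their hyperparameters automatically.
automl = TPOTClassifier(generations=5, population_size=20,
                        random_state=42, verbosity=2)
automl.fit(X_train, y_train)

print(automl.score(X_test, y_test))
automl.export("best_pipeline.py")  # writes the winning pipeline as plain Python
```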

The next step is to use autoML to quantify uncertainty and address reliability and fairness issues in algorithms, says conference organizer Hutter. In that view, standards for reliability and fairness would be treated like any other machine learning constraint, such as accuracy. And autoML could automatically catch and correct biases in those algorithms before they are published.
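One way to picture fairness as a selection constraint: score each candidate model on a fairness metric alongside accuracy, and discard candidates that fail the fairness bar. The sketch below is a hypothetical illustration of that idea, not Hutter's method; the demographic parity gap, the 0.05 threshold, and the synthetic group attribute are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data with a made-up binary "group" attribute for illustration.
X, y = make_classification(n_samples=2000, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))
X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(X, y, group, random_state=0)

def demographic_parity_gap(y_pred, g):
    # Difference in positive-prediction rates between the two groups.
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
best = None
for model in candidates:
    y_pred = model.fit(X_tr, y_tr).predict(X_val)
    acc = (y_pred == y_val).mean()
    gap = demographic_parity_gap(y_pred, g_val)
    # Fairness is handled like any other constraint, alongside accuracy.
    if gap <= 0.05 and (best is None or acc > best[0]):
        best = (acc, gap, model)

print(best)
```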

The search continues

But for something like deep learning, autoML still has a long way to go. The data used to train deep learning models, such as images, documents, and recorded speech, is typically dense and complicated, and processing it takes immense computational power. The cost and time needed to train these models can be prohibitive for anyone but researchers at private companies with deep pockets.

One of the conference competitions asked participants to develop energy-efficient alternatives to neural architecture search, a technique notorious for its computational demands. It automatically cycles through countless deep learning models to help researchers choose the right one for their application, but the process can take months and cost more than a million dollars.

The goal of these alternative algorithms, called zero-cost neural architecture search proxies, is to make neural architecture search more accessible and environmentally friendly by dramatically reducing its appetite for computation. They take only seconds to run, instead of months. The techniques are still in the early stages of development and often unreliable, but machine learning researchers predict they could make the model selection process much more efficient.
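To make "seconds instead of months" concrete, here is a minimal sketch of one published family of proxies, a gradient-norm score, written in PyTorch. The tiny architectures and the random batch are assumed for illustration: each untrained candidate gets a single forward and backward pass on one batch, and the candidates are ranked without training any of them.

```python
import torch
import torch.nn as nn

def grad_norm_score(model, inputs, targets):
    """Score an untrained network with one forward/backward pass:
    the summed gradient norm serves as a cheap proxy for trainability."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

# Hypothetical candidate architectures and a single random batch.
candidates = {
    "small": nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10)),
    "wide":  nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10)),
    "deep":  nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                           nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10)),
}
x, y = torch.randn(128, 64), torch.randint(0, 10, (128,))

# Rank all candidates in seconds, without training any of them.
for name, net in candidates.items():
    print(name, grad_norm_score(net, x, y))
```

The unreliability mentioned above shows up here too: a high score does not guarantee the network will train to high accuracy, so in practice these proxies tend to be used to shortlist candidates rather than to pick a final winner.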
