AWS has launched a new open source machine learning project, Neo-AI, making the code behind its SageMaker Neo machine learning service available to developers for the first time. It is the second time in a few months that the company has released a project’s source code into the open.
SageMaker Neo, announced at Amazon’s re:Invent 2018 cloud tech conference in Las Vegas back in November, allows developers to train machine learning models once, then run them anywhere in the cloud or at the edge.
The new Neo-AI project, a machine learning compiler released under the Apache Software License, opens the SageMaker Neo code to processor vendors, device makers, and AI developers in the IoT space. A Neo-AI repository is available on GitHub.
Amazon, which has a reputation for hoarding software built on open-source tools, looks to be giving back; the Neo-AI release follows Firecracker, the company’s open source virtualisation project, also announced at re:Invent 2018.
The market for AI software is complicated by the fragmented IoT space, leaving developers who want to bring new machine learning innovations to market facing a wide variety of hardware platforms.
Optimising machine learning models for multiple hardware platforms is difficult, because developers need to tune them manually for each platform’s hardware and software configuration.
For edge devices, which are constrained in compute power and storage, the task is harder still, because the tuning required to achieve sufficient performance becomes more involved.

“The tuning process requires rare expertise in optimisation techniques and deep knowledge of the hardware. Even then, it typically requires considerable trial and error to get good performance because good tools aren’t readily available,” said AWS.

Software differences between the model and the device complicate matters further, and can render the two incompatible; developers therefore tend to stick with devices that exactly match their model’s software requirements, said AWS.
“All of this makes it very difficult to quickly build, scale, and maintain machine learning applications,” it said.
Neo-AI reduces the effort needed to tune machine learning models for deployment on multiple platforms by automatically optimising TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models. AWS said the optimised models will perform at up to twice the speed, with no loss in accuracy.
It also converts models into an efficient common format to eliminate software compatibility problems, and allows sophisticated models to run on constrained devices.
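Under the hood, Neo-AI is built around a fork of the Apache TVM deep learning compiler (with Treelite handling XGBoost models), so the developer workflow resembles TVM’s: import a trained model, compile it for a chosen target, and export a compact artifact for the device runtime. The sketch below is illustrative only, using TVM-style Python APIs; the model.onnx file, the input name, and its shape are hypothetical, and exact signatures vary between releases.

```python
# Illustrative sketch of a Neo-AI/TVM-style compile flow; the file name,
# input name, and shape below are hypothetical examples.
import onnx
import tvm
from tvm import relay

# Load a trained model exported to the ONNX interchange format
onnx_model = onnx.load("model.onnx")

# Convert it into Relay, the compiler's common intermediate representation;
# this is the "common format" step that sidesteps framework differences
shape_dict = {"data": (1, 3, 224, 224)}  # batch, channels, height, width
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile with aggressive optimisations for a specific hardware target;
# "llvm" targets the host CPU, while an edge board might use something
# like "llvm -mtriple=aarch64-linux-gnu" for a 64-bit Arm processor
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Export a self-contained shared library for the lightweight on-device runtime
lib.export_library("compiled_model.so")
```

One toolchain producing per-target artifacts from a single intermediate representation is what lets the project cover a range of hardware platforms without per-framework porting work.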
Neo-AI supports platforms from Intel, NVIDIA, and Arm, with support for Xilinx, Cadence, and Qualcomm coming soon. The project will be steered by contributions from these companies, among others.
Naveen Rao, general manager of Intel’s AI products group, said: “To derive value from AI, we must ensure deep learning models can be deployed just as easily in the data centre and in the cloud as on devices at the edge.”
Arm said that combining Neo-AI with its Arm NN SDK, which is designed to work with neural network frameworks such as TensorFlow and Caffe, will help developers run machine learning on a wider variety of edge devices.
“Arm’s vision of a trillion connected devices by 2035 is driven by the additional consumer value derived from innovations like machine learning,” said Jem Davies, general manager and vice president at the company’s machine learning group.