New APIs add real-time visual search, object detection, tracking, and on-device translation for Android and iOS developers.
Google says four new APIs will help developers integrate machine learning into the applications they build: object detection, object tracking, on-device translation, and AutoML Vision Edge. The additions to the company’s ML Kit for Android and iOS improve customizability, add UX assistance, and expand the set of base APIs.
The software development tools will provide real-time visual search that pairs with a cloud solution of the user’s choice to classify and track objects. The Translation API will offer on-device functionality to:
- Enable offline dynamic translations
- Overcome language barriers with other users
- Translate content
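For Android developers, an offline translation along these lines could be sketched with the Firebase ML Kit translation API of that period. This is an illustrative sketch, not code from the announcement; the language pair (English to Spanish) and the sample string are assumptions for the example.

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Hypothetical example: configure an English -> Spanish on-device translator.
val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.ES)
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// Download the language model once; afterwards translation runs offline,
// which is what enables dynamic translation without a network connection.
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        translator.translate("Hello, world")
            .addOnSuccessListener { translated -> println(translated) }
            .addOnFailureListener { e -> println("Translation failed: $e") }
    }
```

The model download happens only on first use, so the offline-translation bullet above corresponds to every call after that initial download.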
The AutoML Vision Edge API will let developers create custom image-classification models for apps that distinguish visually similar objects, such as plants, animals, food, or textiles.
The company’s Material Design team will provide new, open-source design patterns for ML-powered apps, available on the Material.io website.
“We are excited by this first year and really hope that our progress will inspire you to get started with Machine Learning,” says Brahim Elbouchikhi, Google Director of Product Management.