Google Makes It Easier to Build Machine Learning Into Mobile Apps


New APIs add real-time visual search, object detection, tracking, and on-device translation for Android and iOS developers.

Google says four new APIs – object detection, tracking, on-device translation, and AutoML Vision Edge – will help developers integrate machine learning into the applications they build. The company says its updated ML Kit for Android and iOS addresses developer requests for improved customizability, UX assistance, and an expanded set of base APIs.

The software development tools provide real-time visual search, which can be paired with a cloud solution of the developer's choice to classify and track objects. The Translation API now offers on-device functionality, enabling dynamic offline translation that can be used to overcome language barriers between users or to translate content. Adidas is reportedly already using the feature.
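As a rough illustration, offline translation in the Firebase-hosted version of ML Kit at the time looked something like the Kotlin sketch below. The language pair (English to Spanish) and the sample sentence are illustrative, not from the announcement, and the snippet assumes an Android app with the Firebase ML Kit natural-language dependencies configured.

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Configure an English-to-Spanish translator (illustrative language pair).
val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.ES)
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// Download the language model once (e.g. over Wi-Fi);
// after that, translation runs entirely on-device, with no network needed.
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        translator.translate("Where is the stadium?")
            .addOnSuccessListener { translated ->
                // Show the translated text in the UI.
            }
    }
```

Because the model is cached locally after the first download, subsequent translations work offline – the property the announcement highlights for dynamic, in-app translation.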

The AutoML Vision Edge API lets developers create custom image-classification models, useful for apps that need to distinguish between similar kinds of objects such as plants, animals, foods, or textiles. The resulting models can run locally on users' devices.
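A minimal sketch of how a custom AutoML Vision Edge model was loaded and run on-device through the Firebase ML Kit SDK of that era follows. The asset path, confidence threshold, and the assumption that `image` is a `FirebaseVisionImage` built from a camera frame are all illustrative; package and class names reflect the then-current SDK.

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.automl.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

// Point at a custom model trained with AutoML Vision Edge and
// bundled in the app's assets (path is hypothetical).
val localModel = FirebaseAutoMLLocalModel.Builder()
    .setAssetFilePath("automl/manifest.json")
    .build()

val labelerOptions = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.5f)  // illustrative threshold
    .build()
val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(labelerOptions)

// `image` would be a FirebaseVisionImage built from a bitmap or camera frame.
labeler.processImage(image)
    .addOnSuccessListener { labels ->
        for (label in labels) {
            // e.g. "tulip" at 0.87 confidence for a plant-identification app.
            println("${label.text}: ${label.confidence}")
        }
    }
```

Because inference happens locally, classification keeps working without connectivity – the convenience the article attributes to on-device execution.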

Google also said it will provide new open-source design patterns for ML apps, produced with the Material Design team and available on its website.

“We are excited by this first year and really hope that our progress will inspire you to get started with Machine Learning,” said Brahim Elbouchikhi, Google Director of Product Management, in a blog post.


Sue Walsh

About Sue Walsh

Sue Walsh is a News Writer for RTInsights and a freelance writer and social media manager living in New York City. Her specialties include tech, security, and e-commerce. You can follow her on Twitter at @girlfridaygeek.
