Google Makes It Easier to Build Machine Learning Into Mobile Apps

New APIs add real-time visual search, object detection, tracking, and on-device translation for Android and iOS developers.

Google says four new APIs will help developers integrate machine learning into the applications they build: object detection, tracking, on-device translation, and AutoML Vision Edge. The company’s updated ML Kit for Android and iOS adds these base APIs and brings improved customizability and UX assistance.

The software development tools will provide real-time visual search that pairs with a cloud solution of the user’s choice to classify and track objects. The Translation API will offer on-device functionality to:

  • Enable dynamic translations, even offline
  • Help users overcome language barriers when communicating with others
  • Translate content

The AutoML Vision Edge API will let developers create custom image classification models for apps that classify similar objects such as plants, animals, food, or textiles.


The company’s Material Design team will also provide new, open-source design patterns for ML apps, available on the Material.io website.

“We are excited by this first year and really hope that our progress will inspire you to get started with Machine Learning,” says Brahim Elbouchikhi, Google Director of Product Management.

More information: g.co/mlkit.

About Sue Walsh

Sue Walsh is a News Writer for RTInsights, and a freelance writer and social media manager living in New York City. Her specialties include tech, security, and e-commerce. You can follow her on Twitter at @girlfridaygeek.
