How Image Annotation is Leading the Way in ML and AI


Without advanced image annotation techniques, it is impossible to prepare an AI-based automated system to tag images or recognize objects of interest.

Artificial Intelligence is like a child superhero. It’s got impeccable memory, unimaginable capabilities, and limitless endurance. However, it doesn’t know what to do or when to do it. It needs to be told. Data annotations, or image annotations in the case of computer-vision AI, hold the key to this communication. Without high-quality datasets and ML training, image-based decision-making by AI runs the risk of courting catastrophes.

Through annotation, images are labeled, classified, segmented, and made usable so that the AI can be taught the correlations between the labels, the images, and the shapes and objects within them. We show the AI which patterns are significant and where it needs to act.

Teaching an AI what to do (defining executable action sequences) is far easier than training it to know when to act (recognizing triggers). It reads data, pixel values, and coordinates but cannot make sense of them unless annotations provide the clues and machine learning keeps showing it which logic to apply where. The training continues until the AI can connect the dots by itself and is ready to be deployed.

The AI or image annotation – which comes first?

Once the AI’s ‘awareness’ is sufficiently developed, it can be brought onto the image sorting and tagging process itself, reducing human intervention and speeding up delivery. When the AI begins tagging images, the output becomes more consistent and far less prone to unpredictable human error.

But the depth of the AI’s understanding rests primarily on the quality and volume of the image datasets used to train it and the logic of the machine learning model. The quality and volume of these image datasets, in their turn, depend upon the accuracy and speed of the techniques employed for annotation.

Therefore, breakthroughs in image annotation techniques blaze the path for AI development and machine learning models.

It’s become a virtuous cycle: better annotation = better AI = better annotation, and the cycle keeps turning. It can also turn in reverse: if annotation fails, the AI fails with it.

How image annotation is leading the way in ML/AI

Availability of machine learning training data is the key driver of AI performance, and image annotation is the technique used to create that training data for AI and ML.

Here are the reasons that make image annotation so important for ML and AI projects.

1. The annotation workload

With billions of images ready at hand, it is impossible to finish annotating big datasets on time without the backing of AI and robotic process automation. The process begins with high-quality sample image sets for machine learning models. The level of diligence must be sky-high, and precision must be near perfect. The workload in certain areas begins to lessen only when AI and ML are both sufficiently developed to shoulder portions of it.

Except for building the initial high-quality datasets, full annotation of images by humans is impractical, even with the advanced tools available. On average, an expert human annotator takes 15-30 minutes to tag an image according to all the parameters of a given project, and that time keeps increasing as projects add more labeling criteria.

At the lower bound of that rate, tagging a dataset of 100,000 images would take a single human annotator approximately 8.5 years, working 8 hours a day, 365 days a year, without leave and at a robotic level of consistency.
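To make that arithmetic concrete, here is a quick back-of-the-envelope check in Python, using the 15-minute lower bound quoted above:

```python
# Back-of-the-envelope estimate: how long would one annotator need
# to tag 100,000 images at the lower-bound rate of 15 minutes each?
MINUTES_PER_IMAGE = 15      # expert annotator, all project parameters applied
IMAGES = 100_000
HOURS_PER_DAY = 8
DAYS_PER_YEAR = 365         # no leave, robotic consistency

total_hours = IMAGES * MINUTES_PER_IMAGE / 60
years = total_hours / (HOURS_PER_DAY * DAYS_PER_YEAR)
print(f"{total_hours:,.0f} hours ≈ {years:.1f} years")  # 25,000 hours ≈ 8.6 years
```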

ImageNet, an image database that provides data free to researchers, famously hired tens of thousands of workers on Amazon’s Mechanical Turk. While simple images were tagged quickly, sometimes in 5 minutes, complex scenes requiring semantic segmentation, where the number of objects to be tagged was high, took 30-90 minutes each.

Image annotation is the most arduous portion of the entire computer-vision AI model training workflow. This is why there’s a race to develop new and better image annotation techniques, and why annotation leads the way in AI and ML research and development.

2. Different types of image annotation need different kinds of expertise

Image annotation is done from scratch by experienced human annotators, primarily for the initial datasets of machine learning models. In regular workflows, ready image datasets from both non-commercial and commercial sources, or proprietary in-house sources, can be used to set the ball rolling. The humans in the loop can then move on to largely supervisory roles.

In general, image annotation work comes in four types:

  1. Image classification: Similar objects and their repeated presence are marked in images with object tags, and then the images are classified according to their tags.
  2. Object detection: This involves labeling objects in an image according to location, shape, and other parameters. Detection can be highly complex in fields like medical imaging, where multi-frame data from equipment such as MRI scanners must be annotated continuously and with high precision. The stakes are life and limb.
  3. Boundary recognition: Image annotation work usually involves training a machine to recognize the boundary lines of objects in an image. These can relate to individual objects, background objects, or general topography.
  4. Segmentation: Segmentation is essential in deep learning and more sensitive applications. It is generally of three types, contrasted in the sketch after this list:
    1. Semantic segmentation: This is used largely to group similar objects according to presence, location, size, shape, and other parameters, mostly where it is unnecessary to track components of these groups across images.
    2. Instance segmentation: This is also called object class labeling. It is used to count and track the location, number, size, shape, and presence of objects.
    3. Panoptic segmentation: This is a hybrid of semantic and instance segmentation showing labeling of background objects (semantic) and individual objects (instances). An example would be the marking of urban growth through satellite imagery.
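To make the distinction concrete, here is a minimal sketch of how the three segmentation flavors differ in label structure, using NumPy arrays on a toy 4×4 scene (the class names are assumptions for illustration):

```python
import numpy as np

# Toy 4x4 scene: two "cars" in front of "road" background.
# Semantic segmentation: one class id per pixel; the two cars are indistinguishable.
semantic = np.array([
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
])  # 0 = road, 1 = car

# Instance segmentation: each object gets its own id, so cars can be counted and tracked.
instance = np.array([
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 2],
    [0, 0, 0, 2],
])  # 0 = background, 1 = car #1, 2 = car #2

# Panoptic segmentation: (class_id, instance_id) per pixel; background "stuff"
# keeps instance_id 0, while countable "things" keep distinct instance ids.
panoptic = np.stack([semantic, instance], axis=-1)

print("cars in scene:", len(np.unique(instance)) - 1)  # -> 2
```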

3. Low-volume, low-quality image datasets don’t work

An AI is not hard to mislead. Much like ethical hackers, scientists and AI companies themselves keep testing the fidelity of computer-vision AI with methods designed to fool it.

The inputs crafted for such publicly known tests are termed adversarial examples. While most such tests are done under ‘white box’ conditions, in many instances the testers possess no inside knowledge of the system.

In one of the most famous ‘black box’ examples, a team of MIT researchers in 2017 successfully fooled Google’s vision AI into classifying a photo of a row of machine guns as a helicopter.

That was five years ago. And, of course, there are innumerable such instances under ‘white box’ testing conditions.
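For a sense of how such attacks work, here is a minimal ‘white box’ sketch of the fast gradient sign method (FGSM), one of the best-known techniques for crafting adversarial examples. It assumes PyTorch and a differentiable classifier, and is an illustration of the general idea rather than the method used in the MIT experiment:

```python
import torch

def fgsm_attack(model, image, label, epsilon=0.03):
    """White-box FGSM: nudge each pixel along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small epsilon the perturbation is imperceptible to a human, yet it can flip the model’s predicted class.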

Justifiably, stakeholders in the computer-vision AI industry have been trying to make their systems as foolproof as possible. Feeding more and more high-quality data into the machine learning system to raise its accuracy has therefore become a top priority.

4. Ever-evolving image annotation techniques

Whatever the tool used, image annotation techniques have multiplied. Both commercial and open-source tools are used for image annotation. Here are seven major techniques, one or more of which image annotation companies use to complete projects successfully.

The principal techniques include:

  • Landmarking: Pose-point annotations, used mainly on bodies and faces for emotion detection and facial recognition applications.
  • Bounding box: Bounding boxes are used with symmetrical objects and when the exact shape of the object doesn’t matter much. They can be 2D rectangles or 3D cuboids.
  • Polygons: This method is used to mark the edge vertices of the object’s outline and define its shape. It is used more with irregular objects of interest.
  • Polyline: Usually used with open shapes such as power lines or road curb lines, these are one or more connected segments that trace a continuous line.
  • Masking: This is used to highlight shapes and areas of interest by hiding irrelevant objects.
  • Transcription: Text labeling of multimodal data that carries both images and text.
  • Tracking: Tracking is used to plot object movement across multiple frames of a video. Sometimes interpolation is used, letting annotators skip nearby, similar frames while the skipped annotations are auto-filled, as in the sketch after this list.
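As an illustration of the interpolation mentioned under tracking, here is a minimal sketch that auto-fills bounding boxes between two hand-annotated key frames (the box format and frame numbers are assumptions for illustration):

```python
def interpolate_boxes(box_a, box_b, frame_a, frame_b, frame):
    """Linearly interpolate an (x, y, w, h) box between two hand-annotated key frames."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# Annotator labels frames 0 and 10 by hand; frames 1-9 are auto-filled.
start, end = (10, 20, 50, 30), (40, 20, 50, 30)   # object moves right
print(interpolate_boxes(start, end, 0, 10, 5))     # -> (25.0, 20.0, 50.0, 30.0)
```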

5. Almost every use of computer-vision AI is mission-critical

Almost every application of computer-vision AI, from 3D heart imagery to virology, defense, environmental science, and emotion detection, is mission-critical, with no room for slip-ups.

Predictably, the global data annotation tools market, worth $695.5 million in 2019, is expected to reach $6,450.0 million by 2027, growing at a CAGR of 32.54% from 2020 onwards.
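Those figures are easy to sanity-check with the compound-growth formula value_2027 = value_2019 × (1 + CAGR)^years:

```python
# Sanity check of the reported market figures (USD millions, 2019 -> 2027).
start, end, years = 695.5, 6450.0, 8
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR ≈ {cagr:.2%}")  # ≈ 32.11%, close to the quoted 32.54%
```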

6. Developments rolling out every other day

New machine learning solutions for image annotation are under continuous development. Google’s Fluid Annotation, for example, can outline objects and backgrounds in images and assign class labels at three times the speed of previous annotation practices.

Breakthroughs are rolling out back-to-back, with almost every tech giant and software company joining the race to devise new applications and provide fresh solutions.

In February this year, a California-based AI company raised more than $9 million in a funding round. The company, which serves teams at Qualcomm, LG, and Samsung as a vendor, claims to have developed a new auto-labeling system in which the AI itself reports, for each label it outputs, the percentage to which it is certain of that label’s accuracy. That cuts down human verification significantly, as high-confidence labels can be excluded from further checks.
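A hypothetical sketch of how such confidence-aware auto-labeling reduces verification work (the file names, labels, and threshold below are invented for illustration):

```python
# Triage of auto-labeled images: only low-confidence labels are routed
# to human reviewers; the rest are accepted as-is.
auto_labels = [
    {"image": "img_001.jpg", "label": "car",        "confidence": 0.98},
    {"image": "img_002.jpg", "label": "pedestrian", "confidence": 0.61},
    {"image": "img_003.jpg", "label": "bicycle",    "confidence": 0.95},
]

THRESHOLD = 0.90  # project-specific cut-off, assumed for illustration
needs_review = [x for x in auto_labels if x["confidence"] < THRESHOLD]
print(f"{len(needs_review)} of {len(auto_labels)} images need human review")
```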

7. Use of synthetic data to train AI

The use of synthetic, computer-generated image data to train AI is an active field involving deep learning, virtual environments, crowdsourced CAD models, and more.

Most development efforts in synthetic data rely on GANs, or generative adversarial networks. A GAN pairs two deep learning models: a generator that learns to produce realistic image data, and a discriminator that learns to tell the generator’s output apart from real field data. One popular system in this field is StyleGAN, which NVIDIA has open-sourced and which depends on CUDA and Google’s TensorFlow.
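Here is a minimal sketch of that generator-discriminator pairing, written in PyTorch with toy dimensions and random stand-in data; real systems like StyleGAN are vastly more elaborate:

```python
import torch
import torch.nn as nn

# Minimal GAN skeleton: a generator learns to produce realistic samples,
# while a discriminator learns to tell them apart from real data.
latent_dim, data_dim = 16, 64  # toy sizes, assumed for illustration

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, data_dim)  # stand-in for a batch of real image features

# Discriminator step: push real samples toward 1, generated samples toward 0.
fake = G(torch.randn(32, latent_dim)).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator into outputting 1 for fakes.
fake = G(torch.randn(32, latent_dim))
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```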

Conclusion

As we said, an AI is like a super kid. Machine learning helps it grow, giving it the values and references it needs to function. A lack of images or poor-quality initial datasets is as bad for it as poor textbooks and learning materials are for a child. Because an AI must recognize and detect objects of interest, learned through its machine learning model, before making any image-based decision, its ability fundamentally depends on the quality of object labeling in its training datasets.

Without advanced image annotation techniques, it is impossible to prepare an AI to tag images or recognize objects of interest. So it is the continual breakthroughs in image annotation techniques that will decide whether the computer-vision revolution fails or succeeds in practice.


About Snehal Joshi

Snehal Joshi heads the business process management vertical at HabileData, the company offering quality data processing services to companies worldwide. He has successfully built, deployed and managed more than 40 data processing management, research and analysis and image intelligence solutions in the last 20 years. Snehal leverages innovation, smart tooling and digitalization across functions and domains to empower organizations to unlock the potential of their business data.
