Manufacturers today are increasingly using intelligent
sensors and cameras spread throughout their plants to monitor processes
and equipment. However, artificial intelligence (AI) is needed to derive
actionable insights from the vast quantity of data these systems produce.
Implementing AI can be challenging. Manufacturers who do so
successfully hope to improve operations, reduce downtime, and increase product
quality. To explore AI implementation issues, best practices, and potential
benefits, RTInsights interviewed Brian McCarson, Vice President and Senior
Principal Engineer of the Internet of Things Group, who leads the Industrial Systems Engineering and Architecture Group at Intel. Below is a summary of the conversation.
Why the interest in AI in manufacturing?
RTInsights: Why is there such great interest in using
AI in manufacturing?
McCarson: There are a few things that I think are
driving the desire for AI. The first is that some tasks are simply dangerous
for humans to perform because they involve machinery that runs at really high
temperatures, high voltages, or lots of moving parts that could cause injury. Finding
ways to perform tasks that would normally require putting a human in that dangerous
environment to monitor or maintain the process is one of the key drivers. That
is especially the case in regions with strict governance and laws around
protecting employees’ safety.
The second driver for the use of AI is the optimization of
machine performance. There’s a litany of data that can come off some of the
most complex machinery that’s being used in manufacturing today. Sometimes the
volume of data and the speed at which it comes off are such that you cannot have a human conduct all the required analysis.
Monitoring a sensitive machine requires a data scientist to
find a statistically significant signal in all that time series machine data. Using AI to help identify machine problems as quickly as possible can help improve
machine uptime and reduce the number of defects or defective parts that those
machines might be manufacturing. Improving machine uptime can significantly
improve the operational expenses of a factory and increase the revenue
potential per machine.
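As a rough illustration of the kind of automated signal-finding McCarson describes, the sketch below flags unusual readings in a stream of machine sensor data with a simple rolling z-score. The window size, threshold, and synthetic signal are illustrative assumptions, not values from the interview.

```python
import numpy as np

def flag_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling mean.

    readings:  1-D array of time series sensor values (e.g., spindle vibration).
    window:    number of recent samples used to estimate normal behavior.
    threshold: how many standard deviations count as anomalous.
    """
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean, std = recent.mean(), recent.std()
        if std > 0 and abs(readings[i] - mean) > threshold * std:
            flags[i] = True
    return flags

# Example: a steady signal with one injected spike around index 200.
signal = np.concatenate([np.random.normal(1.0, 0.05, 200), [2.5],
                         np.random.normal(1.0, 0.05, 50)])
print(np.where(flag_anomalies(signal))[0])
```

A production system would use more sophisticated models, but the point stands: the machine's data volume and rate make this the kind of analysis that has to be automated rather than done by hand.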
The third driver for using AI is detecting defects in the production of goods. For example, suppose you are a touch screen manufacturer building screens for an iPhone, laptop, TV, computer monitor, or digital display. All it takes is one faulty pixel to deem that part unfit for sale and distribution to partners or customers. The human eye is not designed to reliably and efficiently identify single-pixel failures. A human would have to visually scan the entire display area carefully, and that can be very time-consuming. Evolution also designed our eyes to efficiently detect and filter motion, not to discern sub-millimeter optical anomalies.
An AI system can be designed to identify even the most subtle defects on a product, which can significantly reduce the customer returns, scrap, and rework a factory would suffer without AI. These efforts can deliver significant improvements in operating margins, and given that some companies operate on single-digit margins, this can be the difference between profit and loss during tough economic times. Reducing customer returns from 3% to 1% can be the difference between having enough funds to invest in growing the company and developing new products, and not having them.
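To make the single-pixel example concrete, here is a minimal sketch of how a vision system might flag dead or stuck pixels in a captured image of a display driven with a uniform test pattern. The expected gray level and tolerance are assumptions for illustration only.

```python
import numpy as np

def find_faulty_pixels(frame, expected=128, tolerance=30):
    """Return (row, col) coordinates of pixels deviating from a flat test pattern.

    frame:     2-D grayscale capture of a display showing a uniform mid-gray image.
    expected:  gray level the display is being driven at (assumed here).
    tolerance: allowed deviation before a pixel is flagged as dead or stuck.
    """
    deviation = np.abs(frame.astype(int) - expected)
    rows, cols = np.where(deviation > tolerance)
    return list(zip(rows.tolist(), cols.tolist()))

# Example: a synthetic 1080p frame with one stuck-bright pixel injected.
frame = np.full((1080, 1920), 128, dtype=np.uint8)
frame[500, 900] = 255
print(find_faulty_pixels(frame))  # [(500, 900)]
```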
Some of these techniques can contribute to a significant
improvement in the success of a company. Ten years ago, deploying AI in a factory
felt mostly like science fiction for the vast majority of the world’s
factories; only the most profitable companies on the planet had large enough checkbooks
to be able to hire data scientists and invest in these kinds of capabilities. Now,
the technology is affordable and readily available. Advances in tools and techniques, along with community-driven efforts, have made AI much more accessible to developers worldwide. From a cost-benefit standpoint, deploying AI is now realistic for many of the world's factories.
What are the use cases for AI in manufacturing?
RTInsights: What are some of the use cases for AI in
manufacturing?
McCarson: Machine and equipment monitoring, predictive maintenance, product quality control, visual anomaly detection, electrical test anomaly detection, and chemical analysis anomaly detection are all examples. AI can be used for supply line optimization to find the best way to manage inventory through a factory, given the ever-changing status of equipment and inventory levels over time.
AI can also be used to improve workplace safety by identifying when a human is entering the operating zone of an autonomous mobile robot or an area that could contain a hazard. In such cases, AI can proactively
shut down, pause, or idle equipment to prevent any potential injury. These are
some of the main areas of interest that I’ve been seeing with different
factories and deployments.
What are the AI implementation challenges?
RTInsights: What are the technical and organizational
challenges manufacturers must overcome to successfully implement AI?
McCarson: The first and most important is finding a
partner that can help with a scalable AI solution. There are different
approaches to AI that can be used. One is developing a custom solution for your
very specific use case with your specific lighting conditions, with your
specific camera, all of these unique details, and then optimizing those AI
models or algorithms for that particular use case. Such an approach can be highly
effective for that use case when deployed. On the other hand, it can be expensive because the model is sensitive to change: a change in your product line, a move from day shift to night shift when you've got skylights in your building, or different lighting conditions around different machines. All those variables can affect how your model or algorithm works.
Secondly, you need to find a partner that can understand how
to deploy AI across a wide variety of different use cases, in a variety of
different lighting conditions, and different operating environments. The
partner must be someone who has built in some of the necessary features to make your AI solution at least somewhat agnostic to lighting, zoom, rotation, and scale.
They must make it easy to configure for and expand to new machines and add new
AI capabilities. Finding such a partner is what’s going to deliver the best
long-term return on investment. Short-term, it may seem like it’s more
expensive to go with the supplier that may be charging 10% or 20% more for
their solution versus another competitor. But if the operational cost of ownership runs three or four times the upfront cost, then that's a very big difference. So, factoring in the total cost of ownership over the life of the
solution and the scalability of the AI capabilities to me is critically
important.
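As an illustration of what being agnostic to lighting, zoom, rotation, and scale can mean in practice, the sketch below applies training-time augmentation so a vision model sees varied lighting, rotation, and scale during training. A PyTorch/torchvision pipeline is an assumed tooling choice, not something specified in the interview.

```python
from torchvision import transforms

# Randomized augmentations so the model does not overfit to one camera setup:
# brightness/contrast jitter simulates lighting variation, while rotation and
# random resized crops simulate small changes in camera angle and zoom.
train_transforms = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),   # lighting variation
    transforms.RandomRotation(degrees=10),                   # slight rotation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),     # zoom/scale variation
    transforms.ToTensor(),
])

# Applied per image when building the training dataset, e.g. (path illustrative):
# dataset = torchvision.datasets.ImageFolder("defect_images/", transform=train_transforms)
```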
The other important factor to understand is,
organizationally, how will this be maintained in your factory? Sometimes for
very large factories, there can be two completely different organizations.
There can be an operational technology organization, which includes the
engineers and the mechanically oriented people who are working on the factory
floor, maintaining the machines. You can have an IT organization that’s working
in your on-premises data centers, deploying some of these advanced
capabilities. If the two groups do not have a common set of goals and objectives
for what they want to accomplish with a given AI technology, it can create many
headaches down the road. Getting your IT and OT organizations on board upfront, when you're selecting your partners or the solution you want to deploy, improves
the likelihood of implementation success. In that way, the whole organization
will be more invested in that solution, and they will work together to try to
improve it over time.
What are some best practices when implementing AI in manufacturing?
RTInsights: What are some best practices to follow to
ensure success when implementing AI in manufacturing?
McCarson: First off, when you’re architecting your
overall AI solution that’s going to be added into your factory, you should
think about how that solution is going to interact with other applications in
your manufacturing environment. For example, if I want to deploy an application
for defect monitoring at an inline inspection station, I need to know how that
application is going to interact with my manufacturing execution system that
all the machines rely on. How will it work with the
inventory management system that my technicians and my operations managers rely
on? If everything is hard-coded, you risk creating an environment where any change could result in factory automation downtime.
If you take the approach of trying to design in a
microservices type architecture, then what you can end up with is a scenario
where it’s as easy as downloading or removing an app on your phone. Your
smartphone has a microservices architecture. You go to the app store, add an
application, customize it, log in, set your features and settings, and add your email address and billing information. If you decide the next day, “I
don’t like that app, I’m going to remove it,” you just simply delete it.
That’s some of the value of a microservices architecture. You can bring in
applications, and you can remove applications. And you can decide how those
applications interface with one another in a common API infrastructure with a
common, or at least interoperable, data bus and message bus. What we are trying
to do at Intel is influence the market to move towards a microservices
framework because of the scale advantages, ease of use, and ease of deployment.
If you think about deploying AI from that perspective, you
have the flexibility to incorporate new technology and innovate. If
everything’s hard-coded, everything must be un-hard-coded, and then something new must be recoded into it. In contrast, in a microservices framework, you simply add, remove, or disable applications, and you can innovate on those applications in real time. That’s one factor.
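As a loose illustration of the "add or remove an app" idea, the sketch below wraps a defect-detection step as a small standalone service with a REST endpoint, so other factory applications can call it through a common API rather than through hard-coded integrations. The framework (Flask), endpoint name, and payload fields are illustrative assumptions, not part of any stack described in the interview.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/inspect", methods=["POST"])
def inspect():
    """Accept a JSON inspection request and return a pass/fail verdict.

    Expected payload (assumed for illustration):
    {"station": "line3-cam1", "defect_score": 0.87}
    A real service would run the AI model here instead of a threshold check.
    """
    payload = request.get_json(force=True)
    verdict = "fail" if payload.get("defect_score", 0.0) > 0.5 else "pass"
    return jsonify({"station": payload.get("station"), "verdict": verdict})

if __name__ == "__main__":
    # Other systems (MES, inventory management) reach this service over HTTP,
    # and it can be deployed, replaced, or removed independently of them.
    app.run(host="0.0.0.0", port=8080)
```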
Another best practice for implementing AI, specifically when there's a vision-based solution, is to consider lighting control. If your camera will be ingesting data, you need to make the data as lighting-normalized as possible. What do I mean by that? We've seen a lot of factories try to implement AI and fail. When we come in and investigate to help them, the first thing we find is that they didn't have good lighting control.
They were trying to deploy a visual-based AI solution, but they didn't factor in the time of day. They have a lot of skylights, and different machines have different lighting scenarios. Some machines are in shadows while some are under bright lights. For these reasons, trying to deploy one algorithm that's tuned to a specific use case across all those machines is particularly challenging. Such factories end up giving up because the AI vision solution would only work on a couple of machines and not the others. In cases like this, we've come in and helped control the lighting at the camera to normalize the situation across machines.
It's often just the simple act of adding a backlight behind the camera that helps supply a constant quantity of photons, so most photons entering the camera always come from a source that's always on. That means that whether it's a cloudy day and you have skylights, or the camera is in a dark region of the factory versus a bright region, it sees a consistent scene. This allows you to scale much more effectively across different use cases.
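Physical lighting control is the fix McCarson describes; as a complementary software-side illustration, the sketch below normalizes brightness across frames from differently lit stations before they reach a vision model. OpenCV is an assumed tooling choice, not something named in the interview.

```python
import cv2

def normalize_lighting(frame_bgr):
    """Equalize luminance so frames from dim and bright stations look similar.

    frame_bgr: color image as returned by cv2.imread or a camera capture.
    Returns a BGR image with its luminance histogram equalized.
    """
    # Work in YCrCb so only luminance (Y) is adjusted and color is preserved.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.equalizeHist(y)
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)

# Example usage (path is illustrative):
# frame = cv2.imread("station_7_capture.png")
# normalized = normalize_lighting(frame)
```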
A third best practice is being aware of the language or
languages that your sensors are going to be speaking to your database. Many
companies want to look at products coming off the machine with image analysis,
but they also want to look at the time series data coming off that machine.
They want to see how the machine is performing from its machine parameters.
Creating a database structure where you can seamlessly ingest the product quality information coming off a machine and store it in the same database as the time series data coming off that machine allows you to find new insights. It allows you to perform correlations between the quality of the products and the state of the machine.
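A minimal sketch of that idea follows: quality results and machine telemetry land in one store keyed by machine and timestamp, so they can be joined for correlation. SQLite and the table and column names are illustrative assumptions; a production deployment would more likely use a time series or historian database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a shared factory database
conn.executescript("""
CREATE TABLE machine_telemetry (machine_id TEXT, ts TEXT, temperature REAL, vibration REAL);
CREATE TABLE quality_results  (machine_id TEXT, ts TEXT, unit_id TEXT, defect_found INTEGER);
""")

conn.executemany("INSERT INTO machine_telemetry VALUES (?, ?, ?, ?)", [
    ("press-4", "2020-09-01T10:00", 71.2, 0.02),
    ("press-4", "2020-09-01T10:05", 88.9, 0.11),
])
conn.executemany("INSERT INTO quality_results VALUES (?, ?, ?, ?)", [
    ("press-4", "2020-09-01T10:00", "unit-001", 0),
    ("press-4", "2020-09-01T10:05", "unit-002", 1),
])

# Join quality outcomes to the machine state recorded at the same time --
# the kind of product-quality-to-machine-state correlation described above.
for row in conn.execute("""
    SELECT q.unit_id, q.defect_found, t.temperature, t.vibration
    FROM quality_results q
    JOIN machine_telemetry t ON q.machine_id = t.machine_id AND q.ts = t.ts
"""):
    print(row)
```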
In summary, those are probably the three most important areas: using a microservices approach and designing your models or algorithms for scalability, controlling the environment for any image data collection, for example by using backlighting for vision AI applications, and concentrating on how your database is structured so that you can take full advantage of all the data coming off your system.
Learn more about implementing AI in manufacturing in Intel’s new eBook: Eight Key Considerations When Implementing AI In Manufacturing.
Learn more about the technologies that are building the future of industry at the online Intel Industrial Summit 2020, September 23-24. Register now.