
Why Companies and Workers Shouldn’t Be Afraid to Use AI in Robotics Efforts


The role artificial intelligence will play in robotics will largely be determined by use cases and the resolution of privacy and computational issues.

Robots have the potential to change the way companies do business by automating many tasks. When combined with artificial intelligence (AI) technology, robots can become autonomous. But the combination of these technologies raises many issues. Will AI-aided robots eliminate jobs or assist workers, making them more productive? As AI gives robots capabilities such as machine vision, there also are privacy issues. And then there is the challenge of how to work with the large volumes of data that might be used. Where do you process that data? In the cloud? At the edge?

To get a better understanding of these issues, use cases, and the role of edge, we sat down with Joe Speed, CTO of IoT Solutions and Technology at ADLINK, a provider of leading-edge computing solutions that support the transition to connected Industrial IoT systems across industries. Also taking part in our discussion was Nick Fragale, Founder of Rover Robotics, which develops rugged, industrial-grade robots using ROS, the Robot Operating System.

Are There Concerns about Using AI?

RTInsights: It seems companies are reluctant to use AI due to different fears. What types of concerns do you hear from potential users when it comes to their adoption of AI?

Speed: Most of the concerns I hear around AI have to do with privacy. You hear people raise concerns when you talk about facial recognition and some of the other aspects, like AI applied to mass surveillance. People get a bit nervous. I do not necessarily see that much fear or concern about AI in the kinds of spaces where we tend to focus. Most of our technology is usually in, on, or near something, like equipment, a process, the work cell, or the facility. That's usually where our AI is being used.

In these cases, the application of AI is taking an existing process and ensuring that it operates reliably. It helps with machine health and other things. It lets companies take a work cell and make it work more efficiently. And companies can take an existing legacy system, existing machinery, or an existing process, instrument it, and make it safer.

In a lot of these applications, we're not really encountering the privacy concerns related to facial recognition and mass surveillance. The systems are used within your company, versus systems that would surveil people in public. In a company setting, the use of AI focuses on making a process or operation better and helping workers do their jobs better. What we see is that AI, specifically machine learning applied to computer vision, is very hot. Another very popular use is sensor fusion. In such use cases, the question is how to combine vision with other kinds of sensor data or telemetry from existing legacy equipment and put those together to get a better understanding of what is going on.

Fragale: I’d say from our perspective, our customers have come from the research and academic space, and so they are very open to using AI. The average age of our customer is probably somewhere around 30. Now that we’re going to be moving into the logistics market with our new product, the Rover AMR 100, that’ll change. But so far, we haven’t seen any resistance to implementing AI.

How is AI Being Used in Robotics?

RTInsights: That’s a great segue. Obviously, one area of interest with AI is robotics. How is AI being used in robotics?

Speed: One of the biggest areas is perception. Just think cameras, though it's a bit more than that. You have many different technologies that can be used to give the robot perception. The obvious one is cameras, but even within cameras, is it a single camera, a stereo camera, or a 3D depth-sensing camera? Is it visible spectrum or infrared? Then you also have some other technologies that give perception, which you might almost think of as visual, but they're a bit different. It's things like LIDAR. (Light Detection and Ranging is a remote sensing method that uses pulsed laser light to measure distances.) With LIDAR, basically, think of it as radar. The way I explain it to my family: you know what radar is? Sure. Well, it's the same thing, but with lasers instead of radio waves.

You've got the laser spinning around, and it bounces off things. Using it for autonomous vehicles, you don't actually see a car. What you do see is a car-shaped cloud of dots, and because of the Doppler effect, it also tells you other things. You can tell if that point cloud is in motion. Is it moving towards me or away, and at what rate?
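
To make the ranging idea concrete, here is a minimal sketch of how a pulse's round-trip time becomes a distance, and how motion toward or away from the sensor can be estimated from successive readings. The function names and numbers are illustrative assumptions, not any LIDAR vendor's API.

```python
# Minimal sketch: turning a LIDAR pulse's round-trip time into a distance,
# and estimating whether a return is moving toward or away from the sensor.
# Illustrative only; real LIDAR drivers and point-cloud libraries do far more.

C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(seconds: float) -> float:
    """Distance to the target, given the pulse's round-trip time."""
    return C * seconds / 2.0

def radial_speed(range_now_m: float, range_prev_m: float, dt_s: float) -> float:
    """Positive means the target is moving away; negative means approaching."""
    return (range_now_m - range_prev_m) / dt_s

if __name__ == "__main__":
    r1 = range_from_round_trip(100e-9)  # a 100 ns round trip is roughly 15 m
    r2 = range_from_round_trip(98e-9)   # the next reading, slightly closer
    print(f"range: {r1:.2f} m, radial speed: {radial_speed(r2, r1, 0.1):.2f} m/s")
```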

See also: Why Edge Computing Can Help IoT Reach Full Potential

Then you also have ultrasound and radar and some other things that you might not necessarily think about today. When you start getting autonomous operations, these ranging technologies will play a role. Like in the case of Rover, you’ve got a 40-pound robot, and it’s operating autonomously. But if you take that from being a 40-pound robot to being 400 pounds, 4,000 pounds, now you’re in the class of [equipment] that starts to become dangerous. How do you do things like safely operate heavy machinery, and how do you do that in a more autonomous or automated fashion without injuring people or causing property damage? Some of these other technologies can be used.

For example, you might use ultrasound for the very close ranges where you might not necessarily have camera coverage. In robotics, there are two places I see AI applying, and they (AI and robotics) end up blending together. The first is autonomous operation, especially mobile robots and things that move.

LIDAR, radar, and ultrasound can aid navigation. They can be used to answer questions like: How does a robotic or autonomous system know where it is? How does it know where it's going? How does it do that without running into things or people? Then you also have robots that actually interact with their environment. The classic example is an industrial robot, such as an arm. How does the arm perceive what's around it? An example of this is robotic parts picking, where an arm picks parts out of a bin and puts them into the thing you're assembling or into another bin. This is a very popular application. Then you can obviously combine the two. You also can have mobile robots with actuators and grippers that are able to interact with their environment.
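
As a rough illustration of the "don't run into things" part, here is a hedged sketch of a single safety rule applied to one 2D LIDAR scan. The function names and thresholds are assumptions; real navigation stacks use maps, localization, and planners rather than one rule.

```python
# Illustrative sketch: halt if any LIDAR return inside the forward field of
# view is closer than the stop distance. Thresholds are assumptions.

import math
from typing import List

def too_close(ranges_m: List[float], angles_rad: List[float],
              stop_distance_m: float = 0.5,
              field_of_view_rad: float = math.radians(60)) -> bool:
    half_fov = field_of_view_rad / 2.0
    for r, a in zip(ranges_m, angles_rad):
        # Ignore near-zero readings, which are usually noise or no return.
        if abs(a) <= half_fov and 0.05 < r < stop_distance_m:
            return True
    return False

# Example: a scan with one obstacle 0.4 m straight ahead triggers a stop.
angles = [math.radians(d) for d in range(-90, 91, 10)]
ranges = [3.0] * len(angles)
ranges[len(angles) // 2] = 0.4
print(too_close(ranges, angles))  # True
```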

That's really the whole field of AI and machine learning. That's where we see this being applied.

Fragale: Our perspective at Rover Robotics is pretty similar to Joe’s perspective. But I’d say, by and large, the number-one thing we see people using AI for is cameras, analyzing camera data, and specifically doing inspections. Any company that’s wanting to continue monitoring for something can use this technology. The application could be an oil company wanting to monitor if their pipes are getting rust on them. They can do that now 24/7 with a mobile robot. Or you might have a warehouse facility where you need to inspect RFID tags to take inventory. Anything that you want to inspect for in your facility, you can now do with a robot and a camera.

Which Industries are Using AI and Robotics?

RTInsights: Are there particular industries, such as manufacturing, logistics, elder care, or customer service, where we’re already seeing AI and robotics used?

Fragale: Yes. I'd say the biggest industries are logistics, manufacturing, and construction. Those are ones where robots are already using AI to do things. In the case of construction, there are a lot of companies trying to reclaim the money that's lost every year to inefficiencies. For instance, making sure that you install all the correct piping, sprinklers, and safety equipment before you pour the concrete is very important for any construction project. But with a lot of subcontractors involved, there can often be a problem. If you have a robot go around your construction site and look for things like that, things that are critical to the overall construction project, you can recover a lot of that cost that's typically lost.

Speed: Yes, definitely, inspection is a big one. We do a ton of business around inspection, specifically visual inspection. Out in the field, there are 400,000 cameras connected to our vision systems doing this and other kinds of use cases. Where it gets really interesting for me, though, is when you take the two themes, AI-based visual inspection and autonomous robotics, and combine them. Instead of fixed cameras on an assembly line, a workbench, or a conveyor inspecting things as they go by, think about the robot: instead of physical goods being brought to the camera, the camera goes to the thing that needs inspection. You have the construction example with mobile robots performing inspections by roaming the site. There's supposed to be an air duct. Is it in place? Are we ahead of schedule or behind schedule?

Another example is a retailer using AI and a robot to look at what’s in stock. The retailer could then compare what a robot physically observes to what the store management and warehouse logistics
systems say is in stock. That’s an area that I’m super excited about, and we’re all in with open robotics.

These days, robotics is pretty much spelled R-O-S, the Robot Operating System. Despite the name, it's neither a robot nor an operating system; it's an open-source framework for developing robotics. We're working with and contributing to that. Then you put this together with AI vision, again a field with a lot of emphasis on open source. As these things are combined, it is going to be a really interesting time.
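
For readers who have not touched ROS, here is a minimal sketch of what a ROS 2 node looks like in Python (rclpy): it subscribes to a camera topic, runs a placeholder classification step, and publishes the result. The topic names and the classify() stub are assumptions for illustration, not part of any product discussed here.

```python
# A minimal ROS 2 node sketch in Python (rclpy): subscribe to a camera topic,
# run a placeholder "inference" step, and publish the result as a message.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String

class InspectionNode(Node):
    def __init__(self):
        super().__init__('inspection_node')
        self.sub = self.create_subscription(Image, 'camera/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(String, 'inspection/result', 10)

    def on_image(self, msg: Image) -> None:
        label = self.classify(msg)              # stand-in for a trained model
        self.pub.publish(String(data=label))

    def classify(self, msg: Image) -> str:
        # Placeholder: a real node would hand the frame to an inference engine.
        return 'ok'

def main():
    rclpy.init()
    node = InspectionNode()
    rclpy.spin(node)

if __name__ == '__main__':
    main()
```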

What are the Top Robotic-Assisted Functions Being Performed? 

RTInsights: Let’s drill down into what robotic-assisted functions are being performed across all application areas. What do you see in the marketplace?

Speed: With AI vision and robotics, a lot of folks ask, is this going to replace a worker? However, there are many use cases where the technology helps workers rather than replacing them. There is a whole field of collaborative robots, which are robots and people working together, collaborating on a task. For example, look at the things that Rover does. Suppose you have a human who is doing a function and has to roam. Say they need to move around a farm to perform a task.

What if you had the Rover, for the sake of argument, work as an autonomous wheelbarrow that follows the worker around and is always right where it needs to be with whatever the worker needs? Those kinds of use cases, like a robot holding a piece in place while a human does a task, are ones where the robot assists the worker. I've got a real passion for, and have done a lot of work around, assistive technology for helping the elderly and disabled. I see amazing potential for these things sensing and interacting with people.

See also: Why IoT Still Lags in Practical Business Applications

Fragale: I'd say for us, our customers fall into two categories. Either they're carrying things with the robot, so carrying goods across the warehouse or across the farm, or they're putting sensors on the robot and collecting data. Those are, by and large, the two largest functions companies use our robots for.

How Do Edge Computing and AI Fit Together?

RTInsights: That is a perfect lead into my last question. Such systems can collect large amounts of data from many sensors and IoT devices. With all that data being generated, and the need for fast analysis, is this the perfect storm for using edge computing and AI together?

Speed: I definitely think so. Our friends at AWS talk about why you do edge. They talk about the law of physics: are you able to get data, in the volumes being generated, to the cloud? That depends on many things, like RF, network topology, and other factors. Then there's the law of economics. Is it economically feasible? Probably not when you've got mobile operators that charge by the byte. Even if you have the network infrastructure and the bandwidth to get all the data to the cloud, is it economical? And once you get it to the cloud, depending on the kinds of workloads, is it economical to work with those data volumes? There is an interesting university study that compared and contrasted, for example, doing video processing and audio processing workloads using AWS technologies in the cloud and at the edge.

They looked at things like AWS IoT Greengrass, which runs machine learning inference at the edge using models developed in the cloud. What they (the university researchers) came up with is that the economics are basically eight times better doing these workloads at the edge. But for me, even more important than the economics is the latency. Many times, you move these things to the edge because you need them to happen right then, to be very fast in the moment. If I send the video to the cloud, do analytics, and then bring a decision back, it may be too late, too slow. A person has been injured, a piece of equipment has been broken, or the building has burned down. That's one example of using edge.
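
The study's figures are not reproduced here, but a simple back-of-envelope calculation shows why the economics push toward the edge. Every number in the sketch below is an illustrative assumption, not data from the study.

```python
# Back-of-envelope sketch: shipping raw video to the cloud versus sending only
# inference results from the edge. All figures are illustrative assumptions.

stream_mbps = 8.0        # assumed 1080p camera stream, megabits per second
per_gb_cost = 0.50       # assumed cellular data cost, dollars per GB
hours_per_day = 24

gb_per_day = stream_mbps / 8 * 3600 * hours_per_day / 1000   # MB/s -> GB/day
cloud_cost_per_day = gb_per_day * per_gb_cost

# Edge alternative: send one small JSON event per detection instead of frames.
events_per_day = 500
event_kb = 1
edge_gb_per_day = events_per_day * event_kb / 1_000_000
edge_cost_per_day = edge_gb_per_day * per_gb_cost

print(f"cloud: {gb_per_day:.1f} GB/day, ${cloud_cost_per_day:.2f}/day in transport alone")
print(f"edge:  {edge_gb_per_day:.6f} GB/day, ${edge_cost_per_day:.4f}/day")
```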

Then also you get into these issues of the law of the land. We do believe in developing and training the models in the cloud. Developing a model is computationally expensive; if training it on a small piece of equipment at the edge took a day, you could spin up that same model in the cloud and have it done in an hour. But when analyzing the data itself, you need to think about some of the privacy issues you were talking about before. How do you securely handle data with personally identifiable information, or do facial recognition? In the factory, you know who the worker is.

However, with all that raw data, there may be some sensitivity issues. There may be legal, societal, or cultural issues around taking that data out of the workplace to somewhere else. That's where you get into these issues of the law of the land. Working at the edge neatly satisfies a lot of these kinds of needs.

Fragale: The explosion in data that needs to be analyzed quickly is, indeed, the perfect storm for using edge computing. We see a lot of customers that are excited about cloud computing, especially with robotics. They think that, if they can stream Netflix in 4K or stream all this video data back and forth, it'll be easy to stream image data to the cloud and do the processing there, where you have a lot more resources. But what often gets overlooked is that a robot roaming around has to bounce between different access points. We see a lot of customers who are excited about cloud computing run into that roadblock, get hung up on it for months, and then switch to edge computing. Even in a warehouse, if you're trying to integrate robots, you'll be bouncing from one access point to another and losing the connection often.

See also: Center for Edge Computing

Then you're faced with telling your customer, "Hey, you need to upgrade to better routers because your current router isn't 802.11ac compliant." Then they ask, "What the heck do those numbers mean?" Then you say, "Okay, forget it. We'll put more compute on the robot so that we can do these tasks." That problem only gets worse when you move outside. With safety-critical robots roaming around outside, as Joe said, you can't have the images sent to the cloud and back to the robot in order to decide whether to stop before crossing the street. It just doesn't work for safety-critical applications. The latency is too large.

Speed: The cloud is very important in all of this, but not necessarily in the way that a lot of people think. If I can do the analytics and machine learning at the edge, I do not need to send large amounts of data to the cloud, which eliminates the latency issues. If I'm on an oil rig site up in Alaska and I see a bear, instead of sending video of the bear, what you do is send information: there is a bear. What is the particular event or inference that you found? We see a lot of that. Instead of streaming data to the cloud, stream information, the output of the analytics.
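
Here is a small hedged sketch of that "stream information, not data" pattern, using the open-source Eclipse Paho MQTT client; the broker address, topic, and payload fields are illustrative assumptions, not part of any system described in this article.

```python
# Sketch: publish one small inference event instead of raw video.
import json
import time

import paho.mqtt.publish as publish   # open-source Eclipse Paho MQTT client

event = {
    "site": "rig-alaska-07",           # hypothetical site identifier
    "ts": time.time(),
    "inference": "bear_detected",      # the output of the edge analytics
    "confidence": 0.94,
}

# A few hundred bytes of JSON per event, instead of megabits per second of video.
publish.single(
    topic="sites/rig-alaska-07/events",
    payload=json.dumps(event),
    qos=1,
    hostname="broker.example.com",     # hypothetical edge-to-cloud broker
)
```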

You also must put these systems together to be reliable. This is something I used to deal with when working on connected cars and topics like cloud-augmented analytics and automotive safety. You need to think of it as something that is usually connected, often connected, occasionally connected. How do you put these systems together to work with basically an assumption that you're going to have an unreliable network? If you can get it to work right in that environment, you'll basically be okay. But if you must have always-pristine connectivity with latencies within a certain SLA, you're going to run into problems in the real world.
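
One common way to build for an occasionally connected network is a simple store-and-forward buffer. The sketch below is a minimal illustration under that assumption, with a hypothetical send callable standing in for whatever transport (MQTT, HTTPS, etc.) is actually used.

```python
# Sketch: queue events locally and flush them when the uplink comes back.

from collections import deque
from typing import Callable

class StoreAndForward:
    def __init__(self, send: Callable[[dict], None], max_buffered: int = 10_000):
        self.send = send                          # expected to raise ConnectionError while offline
        self.buffer = deque(maxlen=max_buffered)  # oldest events drop if the buffer fills

    def publish(self, event: dict) -> None:
        self.buffer.append(event)
        self.flush()

    def flush(self) -> None:
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except ConnectionError:
                return                            # still offline; keep events queued
            self.buffer.popleft()                 # drop only after a successful send
```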

About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
