As the manufacturing industry shifts toward smart technologies and automation, the threat of bad actors compromising industrial capability rises. SecureAmerica Institute (SAI) partners at The Ohio State University are developing an external system that predicts when robots are behaving untrustworthily and identifies behavior that may indicate a cyber intrusion. The work is part of SAI’s nationwide project call to empower a secure domestic manufacturing enterprise.
“As emerging technology is embraced on a global scale, robots are becoming more intertwined with our daily lives,” said Dr. Ted Allen, associate professor of industrial and systems engineering and computer science and engineering at Ohio State. “It’s imperative we monitor artificial intelligence and have the ability to shut down these systems independently should they become compromised.”
Robots can fail in ways that aren’t readily apparent, and detecting those faults is integral to keeping operations secure and efficient. The Ohio State team leveraged low-cost camera systems and human biofeedback to predict and validate whether robots were behaving in a trustworthy manner.
“We launched an artificially intelligent manufacturing lab in 2019 where we implemented sensors and cameras to monitor operations and gather data to see if robots were behaving correctly,” said Vimal Buck, senior researcher at Ohio State’s Center for Design and Manufacturing Excellence. “It’s important to look at the software and determine whether these automated systems are performing normally or whether their behavior is indicative of the system being compromised.”
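The article does not publish the lab’s software, but the general idea of watching a robot through a low-cost camera and flagging behavior that drifts from a known-good baseline can be sketched briefly. In the illustrative Python below, the camera index, motion threshold and baseline statistics are assumptions made for the example, not details from the Ohio State system.

```python
# Hypothetical illustration: flag anomalous robot motion from a low-cost camera
# by comparing frame-to-frame motion energy against a baseline learned from
# known-good cycles. Camera index, thresholds and baseline values are assumed.
import cv2

def motion_energy(prev_gray, gray):
    """Fraction of pixels that changed noticeably between two frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) / mask.size

def monitor(camera_index=0, baseline_mean=0.02, baseline_std=0.005, z_limit=4.0):
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("camera not available")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        energy = motion_energy(prev_gray, gray)
        # Flag frames whose motion deviates far from the known-good baseline.
        z = abs(energy - baseline_mean) / baseline_std
        if z > z_limit:
            print(f"possible untrustworthy behavior: motion z-score {z:.1f}")
        prev_gray = gray
    cap.release()
```

In practice a baseline like this would be learned from recordings of known-good production cycles rather than hard-coded.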
Buck and his team sought to incorporate a human element into the project alongside the digital assessments. “We analyzed human feedback to determine if it’s possible for human beings to detect these types of sophisticated intrusions,” he said. “Is it possible to predict the failure of robots by analyzing the biosignals of operators around the world? This is one of the questions we are hoping to answer.
“Computer systems can appear inscrutable, and evaluating how researchers interact with robots was very important to us,” Buck continued. “Understanding the human physiological aspects of losing trust in these systems is important for us to think about moving forward.”
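Again purely as illustration, predicting trouble from operator biosignals amounts to spotting moments when signals such as heart rate or skin conductance depart sharply from that operator’s own baseline. The sketch below assumes such signals are already available as arrays; the window size, threshold and choice of signals are hypothetical, not the team’s published method.

```python
# Hypothetical illustration: flag moments where operator biosignals (heart rate,
# skin conductance) spike relative to that operator's own rolling baseline,
# as a possible indicator of lost trust in the robot.
import numpy as np

def flag_distrust_events(heart_rate, skin_conductance, window=30, z_limit=3.0):
    """Return sample indices where either signal deviates sharply from its
    rolling baseline; both inputs are 1-D arrays sampled at the same rate."""
    hr = np.asarray(heart_rate, dtype=float)
    sc = np.asarray(skin_conductance, dtype=float)
    flags = []
    for i in range(window, len(hr)):
        for signal in (hr, sc):
            base = signal[i - window:i]
            std = base.std() or 1e-6          # avoid division by zero
            if abs(signal[i] - base.mean()) / std > z_limit:
                flags.append(i)
                break
    return flags
```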
The research team also deployed tactics to probe robotic cybersecurity boundaries using two types of threat testing: tabletop attacks and replay attacks. During a replay attack, hackers tap into the physical lines connecting a computer to a robot to monitor traffic, then capture legitimate commands and retransmit them later so the robot acts on data it should not trust. A tabletop attack, by contrast, assesses vulnerabilities through a structured walkthrough without actually executing an attack on the system.
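The article does not describe how the team defends against these intrusions, but one widely used safeguard against replayed command traffic is to authenticate every message and reject anything stale or repeated. The sketch below, with an assumed message format and a placeholder shared key, shows the idea.

```python
# Hypothetical illustration of a common replay-attack defense: each command
# carries a monotonically increasing sequence number and an HMAC, and the
# receiver rejects anything stale or tampered with. The key handling and
# message format are assumptions, not details from the project.
import hmac, hashlib, struct

SECRET_KEY = b"shared-secret-provisioned-out-of-band"  # placeholder key

def sign_command(seq, payload):
    header = struct.pack(">Q", seq)                    # 8-byte sequence number
    tag = hmac.new(SECRET_KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify_command(message, last_seq):
    header, payload, tag = message[:8], message[8:-32], message[-32:]
    expected = hmac.new(SECRET_KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered command")           # data was modified in transit
    seq = struct.unpack(">Q", header)[0]
    if seq <= last_seq:
        raise ValueError("replayed or out-of-order command")
    return seq, payload
```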
“The main driver of this project is an awareness of what is happening in your factory,” Allen said. “Digitization has catalyzed the democratization of programming talent. This will be incredibly important as more small-to-medium manufacturers and small businesses adopt smart manufacturing practices. They will be able to bridge the digital divide and keep their operations safe and efficient.”