By: MS&T Special Correspondent Andy Fawkes
The digital twin is now a commonplace term and is gaining prominence in an increasingly connected, sensor-rich, and data-driven world. A digital twin serves as the real-time digital counterpart of a physical object, process, or person. When leveraged across a lifecycle, it can create a digital thread of data and decisions. Further, because a digital twin can be freely manipulated and recorded in its virtual world, large “synthetic” datasets of the twin and its environment can be computer generated. Such synthetic data can help teach a computer, through machine learning, how to react to certain situations or criteria, replacing real-world-captured training data that was previously expensive or inaccessible. Synthetic image data is particularly common, but synthetic data can also be generated for other sources, such as lidar, and combined for sensor fusion projects.
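The idea of replacing real-world data capture with twin-generated training data can be sketched in a few lines of Python. Everything here is an illustrative assumption, not a detail of any project described in this article: the sensor signatures, class names, and the deliberately minimal nearest-centroid classifier are invented purely to show the workflow of synthesizing labeled data from a simulated model and training against it.

```python
import random

random.seed(42)

# Hypothetical "digital twin" of a thermal sensor: each class of object has a
# simulated signature (mean reading and spread). These numbers are assumptions
# chosen only for illustration.
SIGNATURES = {
    "vehicle": (8.0, 2.0),
    "background": (3.0, 1.5),
}

def synthesize(label, n):
    """Generate n synthetic labeled sensor readings from the twin model."""
    mean, spread = SIGNATURES[label]
    return [(random.gauss(mean, spread), label) for _ in range(n)]

# Build a synthetic training set -- no expensive field data collection needed.
train = synthesize("vehicle", 500) + synthesize("background", 500)

# "Train" a minimal classifier: the per-class mean of the synthetic readings.
centroids = {
    label: sum(x for x, y in train if y == label) / sum(1 for _, y in train if y == label)
    for label in SIGNATURES
}

def classify(reading):
    """Assign a reading to the class with the nearest centroid."""
    return min(centroids, key=lambda c: abs(reading - centroids[c]))

# Evaluate on a fresh synthetic batch generated from the same twin.
test = synthesize("vehicle", 200) + synthesize("background", 200)
accuracy = sum(classify(x) == y for x, y in test) / len(test)
```

In a real pipeline the toy Gaussian model would be a rendered 3D scene (producing images or lidar point clouds) and the classifier a neural network, but the loop is the same: simulate, label automatically, train, and validate.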
To explore the uses of digital twins and synthetic data, we spoke to the SecureAmerica Institute (SAI), Amentum, and Unity about their progress in developing a real-time modeling and simulation framework that uses machine learning to model and predict multiple sensor fusion scenarios. With the continued exponential growth of sensors and sensor fusion, the project is providing deep insights into their utility and into the power of simulation more generally. Although undertaken specifically to enhance the resiliency of U.S. defense manufacturing, the project is also influencing wider technological advances in military training and operations.