LLNL investigates machine learning for defect prevention in metal Additive Manufacturing
September 17, 2018
Researchers at Lawrence Livermore National Laboratory, California, USA, are exploring machine learning to process data obtained during metal Additive Manufacturing builds in real time, rapidly detecting whether a build will be of satisfactory quality. The team reports that it is developing convolutional neural networks (CNNs), a popular type of algorithm primarily used to process images and videos, to predict whether a part will be satisfactory by looking at as little as 10 milliseconds of video from within the build chamber.
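To illustrate the general idea, the following is a minimal sketch, in PyTorch, of a CNN that classifies a short in-chamber video clip as acceptable or defective. The architecture, layer sizes and input dimensions are illustrative assumptions for this article, not LLNL's published model.

```python
import torch
import torch.nn as nn

class MeltTrackCNN(nn.Module):
    """Toy 3D CNN that classifies a short melt-track video clip
    (e.g. a few frames spanning ~10 ms) as acceptable or defective.
    Layer sizes are illustrative, not LLNL's architecture."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # (B, 1, T, H, W) -> (B, 16, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halve time and spatial dimensions
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling over time and space
        )
        self.classifier = nn.Linear(32, 1)               # logit: is the track acceptable?

    def forward(self, clip):
        x = self.features(clip)
        return self.classifier(x.flatten(1))

# A single grayscale clip: batch=1, channel=1, 8 frames of 64x64 pixels.
clip = torch.randn(1, 1, 8, 64, 64)
logit = MeltTrackCNN()(clip)
print(torch.sigmoid(logit))  # predicted probability the track is acceptable
```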
Brian Giera, Principal Investigator on the project, stated, “This is a revolutionary way to look at the data that you can label video by video, or better yet, frame by frame. The advantage is that you can collect video while you’re printing something and ultimately make conclusions as you’re printing it. A lot of people can collect this data, but they don’t know what to do with it on the fly, and this work is a step in that direction.”
Sensor analysis of metal AM parts carried out post-build can be expensive. Giera stated that CNNs could offer valuable insight into the Additive Manufacturing process and the quality of each part, enabling users to correct or adjust the build in real time if necessary.
LLNL researchers developed the neural networks using about 2,000 video clips of melted laser tracks under varying conditions, such as speed or power. They scanned the part surfaces with a tool that generated 3D height maps, using that information to train the algorithms to analyse sections of video frames (each area called a convolution). This process would be too difficult and time-consuming for a human to do manually.
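As a rough illustration of how a height map might be turned into training labels automatically, the sketch below thresholds each cross-section of a synthetic height map to estimate track width and flag breaks. The threshold value, units and data layout are assumptions made for this example, not LLNL's labelling pipeline.

```python
import numpy as np

def label_from_height_map(height_map, z_thresh=0.05):
    """Derive per-cross-section track labels from a 2D array of surface
    heights (mm) along the scanned track. Illustrative thresholds only."""
    labels = []
    for row in height_map:
        above = row > z_thresh   # pixels where deposited material rises above the substrate
        width = above.sum()      # track width in pixels at this cross-section
        broken = width == 0      # no material here -> track is broken
        labels.append({"width_px": int(width), "broken": bool(broken)})
    return labels

# Synthetic 5x8 height map standing in for a real scan of a build track.
hm = np.array([[0.0, 0.1, 0.2, 0.2, 0.1, 0.0, 0.0, 0.0]] * 4 + [[0.0] * 8])
for lab in label_from_height_map(hm):
    print(lab)
```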
Bodi Yuan, a University of California student and LLNL researcher, developed algorithms able to automatically label the height maps of each build, and used the same model to predict the width of the build track, whether the track was broken, and the standard deviation of its width. Using these algorithms, the researchers were able to take video of in-progress builds and determine whether a part exhibited acceptable quality.
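Predicting several quantities from one model is a standard multi-task setup. The sketch below shows a hypothetical set of output heads for the three quantities the article names, fed by a shared CNN feature vector; the layer names and sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TrackQualityHeads(nn.Module):
    """Hypothetical multi-task head: one shared feature vector feeds three
    outputs mirroring the quantities described in the article (track width,
    broken/continuous flag, standard deviation of width)."""

    def __init__(self, feat_dim=32):
        super().__init__()
        self.width = nn.Linear(feat_dim, 1)      # regression: mean track width
        self.broken = nn.Linear(feat_dim, 1)     # classification logit: is the track broken?
        self.width_std = nn.Linear(feat_dim, 1)  # regression: standard deviation of width

    def forward(self, features):
        return {
            "width": self.width(features),
            "broken_logit": self.broken(features),
            "width_std": self.width_std(features),
        }

feats = torch.randn(4, 32)   # e.g. pooled CNN features for four video clips
out = TrackQualityHeads()(feats)
print({k: v.shape for k, v in out.items()})
```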
Researchers reported that the neural networks were able to detect whether a part would be continuous with 93% accuracy, while also producing strong predictions of track width.
“Because convolutional neural networks show great performance on image and video recognition-related tasks, we chose to use them to address our problem,” Yuan explained. “The key to our success is that CNNs can learn lots of useful features of videos during the training by itself. We only need to feed a huge amount of data to train it and make sure it learns well.”
Ibo Matthews, a co-author on the paper, leads a group which has for some years been collecting various forms of real-time data on the Laser Powder Bed Fusion (LPBF) metal AM process, including video, optical tomography and acoustic sensors. While working with Matthews and his group to analyse build tracks, Giera concluded it wouldn't be possible to do all the data analysis manually, and turned to neural networks to simplify the work.
“We were collecting video anyway, so we just connected the dots,” he stated. “Just like the human brain uses vision and other senses to navigate the world, machine learning algorithms can use all that sensor data to navigate the 3D printing process.”
The neural networks described in the paper could theoretically be used in other 3D printing systems, Giera said. Other researchers should be able to follow the same formula: creating parts under different conditions, collecting video, and scanning the finished parts to generate height maps, yielding a labelled video set that could be used with standard machine learning techniques.
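Schematically, that recipe might be outlined as below. Every callable here is a hypothetical placeholder for machine-specific tooling, not a real API.

```python
def build_labelled_dataset(build_conditions, print_and_record, scan_height_map, label_fn):
    """Outline of the recipe suggested for other AM systems: print under
    varied conditions, record video, height-map scan the part, and pair
    each clip with labels derived from the scan."""
    dataset = []
    for cond in build_conditions:
        video = print_and_record(cond)             # in-chamber video captured during the build
        labels = label_fn(scan_height_map(cond))   # labels derived from the 3D surface scan
        dataset.append((cond, video, labels))
    return dataset

# Stub callables stand in for machine-specific tooling.
conditions = [{"power_W": 150, "speed_mm_s": 500}, {"power_W": 300, "speed_mm_s": 1500}]
data = build_labelled_dataset(
    conditions,
    print_and_record=lambda c: f"video@{c['power_W']}W",
    scan_height_map=lambda c: [0.1, 0.2, 0.0],
    label_fn=lambda hm: {"broken": 0.0 in hm},
)
print(data)
```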
He added that work is still required to detect voids within parts that can’t be predicted with height map scans, but could reportedly be measured using ex situ X-ray radiography. The researchers will also look to create algorithms which incorporate multiple sensing modalities besides image and video.
“Right now, any type of detection is considered a huge win. If we can fix it on the fly, that is the greater end goal,” he stated. “Given the volumes of data we’re collecting that machine learning algorithms are designed to handle, machine learning is going to play a central role in creating parts right the first time.”
The project was funded by the Laboratory Directed Research and Development program. Further co-authors on the paper included LLNL scientists and engineers Gabe Guss, Aaron Wilson, Stefan Hau-Riege, Phillip DePond and Sara McMains.