
Use Case: Defect Detection in Metal Surfaces

This blog post was originally published at PerceptiLabs’ website. It is reprinted here with the permission of PerceptiLabs.


Raw materials like plywood, metal, and plastics often undergo rigorous quality-control measures to ensure they meet the needs and requirements of the industries in which they’re used. In some cases, such materials might be classified into different grades based on the type and quantity of defects present, while in other cases materials may be discarded for having defects.

With the rise of Industrial IoT (IIoT) and Industry 4.0, ML is playing an ever-increasing role in enabling automation. This includes the use of computer vision to analyze and detect defects with products like raw materials on assembly lines, as well as out in the field (e.g., to detect metal fatigue).

With this in mind, we built an ML model that can classify defects on metal materials using image recognition. This involved preparing and wrangling the training data, building a .csv file to map that data to the classifications, and iterating with the model in PerceptiLabs.

Dataset

We used a subset of images from this Kaggle dataset which represent 10 different metal defects:

  • Crease: vertical or transverse folds across a metal strip caused during the uncoiling process.
  • Crescent Gap: half-circle-shaped defects caused during cutting.
  • Inclusion: surface defects in various shapes (e.g., fish-scale shapes) that may be loosely attached (and fall off easily) or pressed into the metal.
  • Oil Spot: contamination from mechanical lubricant that affects the product’s appearance.
  • Punching: additional, unwanted punched holes in a steel strip, caused by mechanical failure.
  • Rolled Pit: periodic bulges or pits on the metal’s surface, often caused by damage to the work roll or tension roll.
  • Silk Spot: wave-like plaques on the surface, often caused by uneven roller pressure or temperature.
  • Welding Line: a seam left on the surface where two steel strips are welded together during a strip change.
  • Water Spot: spots that occur when the metal is dried during production.
  • Waist Folding: wrinkle-like folds that can occur in low-carbon steel strip.

Note: see Kaggle for additional information about these defects.


Figure 1: Images from the training dataset showing various metal creases.

We used PerceptiLabs’ Data Wizard to resize the images to a resolution of 224×224 pixels and created a .csv file to map the images to their respective classifications. Below is a partial example of how the .csv file looks:

image_path,target
silk_spot/silkspot_640.jpeg,6
water_spot/waterspot_20.jpeg,8

Example of a .csv file to load data into PerceptiLabs by mapping images to classifications.

The target column specifies the nominal values that represent each image’s classification. These classifications are shown here:

Defect          Nominal Classification Value
Crease          0
Crescent Gap    1
Inclusion       2
Oil Spot        3
Punching        4
Rolled Pit      5
Silk Spot       6
Waist Folding   7
Water Spot      8
Welding Line    9
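
For reference, below is a minimal Python sketch of how such a .csv file could be assembled, assuming a directory layout with one folder per defect class (as in the image_path values above). The folder names, file pattern, and label_map dictionary are illustrative assumptions rather than part of PerceptiLabs; the Data Wizard handles resizing for you, so the explicit resize step here is only for completeness.

import csv
from pathlib import Path
from PIL import Image

# Illustrative mapping from class folder name to nominal classification value,
# matching the table above.
label_map = {
    "crease": 0, "crescent_gap": 1, "inclusion": 2, "oil_spot": 3,
    "punching": 4, "rolled_pit": 5, "silk_spot": 6, "waist_folding": 7,
    "water_spot": 8, "welding_line": 9,
}

data_dir = Path("metal_defects")  # assumed root folder with one subfolder per class

with open("metal_defects.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_path", "target"])
    for class_name, target in label_map.items():
        for img_path in sorted((data_dir / class_name).glob("*.jpeg")):
            # Resize each image to 224x224, mirroring the Data Wizard step.
            Image.open(img_path).resize((224, 224)).save(img_path)
            writer.writerow([f"{class_name}/{img_path.name}", target])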

Model Summary

Our model was built with just three Components:

Component 1: ResNet50 include_top=No, input_shape=(224,224)
Component 2: Dense Activation=ReLU, Neurons=128
Component 3: Dense Activation=Softmax, Neurons=10


Figure 2: Topology of the model in PerceptiLabs.
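
For readers who prefer code, a rough Keras equivalent of this three-Component topology might look like the sketch below. This is only an approximation under assumed settings (e.g., global average pooling after the backbone); PerceptiLabs generates and manages its own TensorFlow code.

import tensorflow as tf

# Component 1: ResNet50 backbone without its classification head.
backbone = tf.keras.applications.ResNet50(
    include_top=False, input_shape=(224, 224, 3), pooling="avg"
)

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(128, activation="relu"),    # Component 2
    tf.keras.layers.Dense(10, activation="softmax"),  # Component 3
])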

Training and Results


Figure 3: PerceptiLabs’ Statistics View during training.

We configured the model to use a cross-entropy loss function and the Adam optimizer with a learning rate of 0.001, training for 10 epochs with a batch size of 64.
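
Continuing the Keras sketch above, that configuration corresponds roughly to the following (train_ds and val_ds are assumed tf.data datasets yielding (image, target) batches of 64; they are not part of the original setup):

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",  # cross entropy over the 10 integer labels
    metrics=["accuracy"],
)

model.fit(train_ds, validation_data=val_ds, epochs=10)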

With a training time of 22 minutes and 33 seconds, we achieved a training accuracy of 98.51% and a validation accuracy of 80.4%. In the following screenshot from PerceptiLabs, you can see that both training and validation accuracy ramped up the most during the first epoch. Training accuracy continued to climb until around the fifth epoch, while validation accuracy remained fairly stable from around the second epoch, with a temporary dip between the fifth and seventh epochs:


Figure 4: Accuracy Plot.

Figure 5 shows the loss during training:


Figure 5: Loss Plot.

Training loss started relatively high, decreased the most during the first epoch, and continued to gradually decline throughout training. Validation loss started relatively low and remained fairly stable with little decrease throughout training.

Vertical Applications

For IIoT and Industry 4.0 applications, a model like this could be used to automate visual quality assurance processes that identify defects. For example, images or video streams captured by a camera on an assembly line could be analyzed to identify material defects periodically or in real time. This data could also be compared with data from other cameras to help pinpoint where problems in the manufacturing process are occurring (e.g., to identify machines that need maintenance).

The model itself could also be used as the basis for transfer learning to create models that detect other material defects such as those found in the manufacturing of plywood. In this case, defects may come from the wood itself rather than from a manufacturing process, in which case the product might be assigned a quality grade and priced accordingly.
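
As a hedged illustration of that idea, and continuing the Keras sketch above, one could freeze the trained backbone and retrain only a new classification head on a different defect dataset. The plywood-related names and the class count below are hypothetical.

# Freeze the trained ResNet50 backbone so only the new head is trained.
backbone.trainable = False

num_plywood_classes = 4  # hypothetical number of plywood defect categories

plywood_model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_plywood_classes, activation="softmax"),
])

plywood_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)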

Summary

This use case is a simple example of how ML can be used to identify defects in materials using image recognition. If you want to build a deep learning model similar to this, run PerceptiLabs and grab a copy of our pre-processed dataset from GitHub.
