
Use Case: Air Conditioner Piston Check

This blog post was originally published at PerceptiLabs’ website. It is reprinted here with the permission of PerceptiLabs.


Materials used in manufacturing or construction can have all sorts of defects, ranging from physical anomalies like breaks to chemical issues like oily surfaces. These issues can occur during manufacturing, installation, or post-installation (e.g., due to wear, environmental exposure, etc.). They often need to be detected as soon as possible so that, depending on the situation, the affected part can be repaired, reclassified to a lower grade of quality, or otherwise kept from causing subsequent problems.

In cases involving high-precision components, detecting such defects is even more important. One example is the manufacturing of pistons for air conditioner (AC) units, which must be built to tight tolerances so that the units operate reliably in the field.

With the growing use of computer vision in Industrial IoT (IIoT) and Industry 4.0 to analyze and detect defects, we were inspired to build an image recognition model that can classify images of AC pistons as either normal (i.e., no defects), oily/greasy, or defective (i.e., broken, out of shape, or dropped). This involved preparing and wrangling the training data, building a .csv file to map that data to the classifications, and iterating on the model in PerceptiLabs.

Data

For this model, we used images from this Kaggle dataset, which represent three classifications of AC pistons:

  • Defect 1: pistons that are broken, out of shape, or have been dropped.
  • Defect 2: pistons with oily, greasy, or rusty stains.
  • Normal: non-defective pistons.

Figure 1 shows examples of some of the normal AC piston images:


Figure 1: Images from the training dataset depicting normal AC pistons.

We pre-processed the images by resizing each one to 80×80 pixels and created a .csv file to map the images to nominal classification values: 0 for Defect 1, 1 for Defect 2, and 2 for Normal. Below is a partial example of how the .csv file looks:

image_path                  target
defect_1/defect_0.jpeg      0
defect_2/defect2_13.jpeg    1
normal/normal_5.jpeg        2

Example of a .csv file used to load data into PerceptiLabs, where 0 is Defect 1, 1 is Defect 2, and 2 is Normal.
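For reference, the resizing and .csv generation step can be scripted in a few lines of Python. The sketch below assumes the Kaggle images sit in defect_1/, defect_2/, and normal/ subfolders (matching the paths in the table above); the ac_pistons root folder name and the use of Pillow are our own illustrative choices, not part of the original workflow.

import csv
from pathlib import Path
from PIL import Image

LABELS = {"defect_1": 0, "defect_2": 1, "normal": 2}
DATA_DIR = Path("ac_pistons")  # hypothetical root folder holding the three subfolders

rows = []
for folder, label in LABELS.items():
    for img_path in sorted((DATA_DIR / folder).glob("*.jpeg")):
        # Resize each image in place to the 80x80 resolution used for training.
        Image.open(img_path).convert("RGB").resize((80, 80)).save(img_path)
        rows.append((f"{folder}/{img_path.name}", label))

# Write the image_path -> target mapping that gets loaded into PerceptiLabs.
with open(DATA_DIR / "pistons.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_path", "target"])
    writer.writerows(rows)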

Model Summary

Our model was built with just three Components:

Component 1: Convolution Patch_size=3, stride=2, feature_maps=16
Component 2: Dense Activation=ReLU, Neurons=128
Component 3: Dense Activation=Softmax, Neurons=3


Figure 2: Final model in PerceptiLabs.
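For readers who prefer code, a rough TensorFlow/Keras equivalent of these three components might look like the sketch below. The 80×80×3 input shape follows the pre-processed images; the ReLU on the convolution and the Flatten layer before the first Dense component are assumptions, since PerceptiLabs builds its own graph.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(80, 80, 3)),
    # Component 1: Convolution, patch size 3, stride 2, 16 feature maps
    tf.keras.layers.Conv2D(16, kernel_size=3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    # Component 2: Dense, 128 neurons, ReLU
    tf.keras.layers.Dense(128, activation="relu"),
    # Component 3: Dense, 3 neurons, Softmax (one output per class)
    tf.keras.layers.Dense(3, activation="softmax"),
])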

Training and Results


Figure 3: PerceptiLabs’ Statistics View during training.

We trained the model for 25 epochs with a batch size of 32, using the Adam optimizer, a learning rate of 0.001, and a cross-entropy loss function.
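Continuing the hypothetical Keras sketch above, those training settings translate roughly as follows; x_train, y_train, x_val, and y_val are placeholder arrays of 80×80×3 images and integer labels (0–2), not variables from the original project.

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",  # cross-entropy over the integer targets 0/1/2
    metrics=["accuracy"],
)
history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=25,
    batch_size=32,
)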

With a training time of around 21 seconds, we were able to achieve a training accuracy of 100% and a validation accuracy of 98.25%. In the following screenshot from PerceptiLabs, you can see how the accuracy ramped up to these percentages over the 25 epochs, with much of the increase occurring within just the first six epochs:


Figure 4: Accuracy Plot.

At the same time, the loss decreased the most during the first three epochs:


Figure 5: Loss Plot.

Summary

This use case is a simple example of how ML can be used to identify material defects using image recognition. If you want to build a similar deep learning model in just minutes, run PerceptiLabs and grab a copy of our pre-processed dataset from GitHub.

