Detecting Real-time Waste Contamination Using Edge Computing and Video Analytics

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

The past few decades have witnessed a surge in rates of waste generation, closely linked to economic development and urbanization. This escalation in waste production poses substantial challenges for governments worldwide in terms of efficient processing and management. Despite the implementation of waste classification systems in developed countries, a significant portion of waste still ends up in landfills or incineration due to contamination issues, resulting in the unsustainable wastage of recyclable materials.

This post describes our edge computing and computer vision solution that detects plastic bag contamination in waste collection trucks. The solution uses the NVIDIA Metropolis application framework, including NVIDIA Jetson, NVIDIA TAO Toolkit, and NVIDIA DeepStream SDK.

Why waste management practices must change

Conventional waste management methods, such as landfilling and incineration, not only fail to address the growing waste problem but also pose grave environmental and health risks. It’s become increasingly imperative for countries to enhance waste recycling and management practices to ensure a sustainable future.

In local waste management, contamination within household waste is a major hurdle that significantly impedes recycling. Local governments have responded with strategies such as bin-tagging and waste auditing to tackle the issue and gather the contamination data needed for informed decision-making.

Yet the prevalent practice of bin-tagging relies heavily on manual intervention, often executed by waste collection truck drivers who visually inspect waste containers using onboard cameras. This labor-intensive method adds to the drivers' workload, introduces subjectivity, and produces inconsistent data that demands additional analysis and time.

Waste management practices urgently need to become more efficient and sustainable. An automated waste contamination detection system, built on modern computer vision and edge computing, is a key step toward that goal.

Edge computing solution

To help address the waste contamination problem, we developed an edge computing video analytics solution based on NVIDIA Jetson and the NVIDIA Metropolis framework using the latest technologies:

  • Computer vision
  • Intelligent video analytics
  • Edge computing
  • AI

The proposed system is based on the idea of capturing video of waste from the truck hopper, processing it using the NVIDIA Jetson edge AI platform to detect plastic bag contamination, and storing the contamination-related information for further analysis. The YOLOv4 deep learning model is trained using our Remondis Contamination Dataset (RCD) and deployed on the edge computing solution using NVIDIA DeepStream.

Remondis Contamination Dataset

Training convolutional neural network (CNN) models for computer vision tasks requires a substantial dataset of relevant images. However, waste contamination detection remains underexplored, especially under realistic operating conditions. Most existing research simplifies the problem by using data containing only one type of contaminant, often in high-resolution images. As a result, these models struggle in authentic scenarios where contamination coexists with many other waste constituents, under challenging lighting and reduced image quality.

In response, we created RCD, a novel training dataset curated from historical records of the recycling company Remondis. The dataset contains images of plastic bag contamination across a diverse range of lighting conditions, capture angles, and low resolutions, making it a far more representative depiction of real-world complexity.

The final dataset consists of 1,125 samples (968 for training, 157 for validation) with a total of 1,851 bounding-box annotations (1,588 for training, 263 for validation). Figure 1 shows a few of the annotated samples from the RCD.
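The 968/157 split above can be reproduced with a small helper. This is an illustrative sketch, not the actual RCD tooling; the post does not describe the split procedure, so the shuffle seed and held-out count are assumptions.

```python
import random

def split_dataset(sample_ids, n_val, seed=42):
    """Shuffle sample IDs and hold out n_val of them for validation.
    Hypothetical helper; the seed and split method are assumptions."""
    ids = list(sample_ids)
    random.Random(seed).shuffle(ids)
    return ids[n_val:], ids[:n_val]

# 1,125 RCD samples -> 968 training / 157 validation, as reported
train_ids, val_ids = split_dataset(range(1125), n_val=157)
print(len(train_ids), len(val_ids))  # 968 157
```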


Figure 1. Samples from the Remondis Contamination Dataset

System development

We developed an automated solution for detecting plastic bag contamination in waste trucks, using onboard analog cameras to capture images and NVIDIA accelerated computing to run computer vision models on them. This concept is illustrated in Figure 2.

The developed system consists of:

  • An analog camera (Mitsubishi C4010) installed on the truck to capture the truck hopper, where the waste is collected from bins.
  • NVIDIA Jetson TX2 system-on-module to process and infer waste images using trained computer vision models.
  • Computer vision model (YOLOv4 with a CSPDarkNet_tiny backbone) to detect plastic bag contamination in the images.


Figure 2. Conceptual illustration of the proposed system

The YOLOv4 model with a CSPDarkNet_tiny backbone was used for plastic bag detection. The model was trained using NVIDIA TAO Toolkit, powered by Python, TensorFlow, and Keras. The NVIDIA DGX platform was used to train the model on images from the RCD.

A three-stage approach was adopted for development of the proposed solution (Figure 3):

Stage I: Data Preparation

  1. Raw dataset was collected from Remondis historical records and online sources.
  2. Collected data was processed and filtered to identify potential training candidates.
  3. Processed dataset was labeled for plastic bags by drawing bounding boxes using LabelImg software.
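LabelImg writes Pascal VOC XML annotations, while TAO's object detection tasks consume KITTI-format labels, so a conversion step is typically needed. The sketch below is an assumption about how that step might look (the class name `plastic_bag` and the sample annotation are illustrative):

```python
# Hedged sketch: convert a LabelImg Pascal VOC XML annotation into
# KITTI-format label lines for NVIDIA TAO object detection training.
import xml.etree.ElementTree as ET

def voc_to_kitti(xml_text):
    """Return one KITTI label line per object in a VOC XML annotation."""
    root = ET.fromstring(xml_text)
    lines = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (float(box.findtext(k))
                                  for k in ("xmin", "ymin", "xmax", "ymax"))
        # KITTI: class, truncated, occluded, alpha, bbox(4), then 7 unused 3D fields
        lines.append(f"{name} 0.00 0 0.00 "
                     f"{xmin:.2f} {ymin:.2f} {xmax:.2f} {ymax:.2f} "
                     "0.00 0.00 0.00 0.00 0.00 0.00 0.00")
    return lines

sample = """<annotation><object><name>plastic_bag</name>
<bndbox><xmin>34</xmin><ymin>50</ymin><xmax>120</xmax><ymax>210</ymax></bndbox>
</object></annotation>"""
print(voc_to_kitti(sample)[0])
```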

Stage II: Model Training

  1. From the existing model zoo, suitable models were selected for training.
  2. NVIDIA TAO Toolkit was used to train the models for plastic bag contamination detection.
  3. Training performance was closely monitored to ensure training progressed normally.

Stage III: Testing and Validation

  1. The trained models were exported as NVIDIA TensorRT engines and deployed on the NVIDIA Jetson TX2.
  2. Hardware performance was validated in terms of GPU usage, temperature, CPU usage, and FPS.
  3. The hardware was deployed in the field and additional data was collected to fine-tune the model.


Figure 3. Process flow diagram for developing the proposed system
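Step 2 of Stage III (validating GPU usage and temperature on the Jetson) is commonly done by logging `tegrastats` output. A minimal parsing sketch follows; the exact line format varies across JetPack releases, so the sample line here is an assumption:

```python
# Hedged sketch: pull GPU utilisation and temperature out of one
# tegrastats line on a Jetson module (format varies by JetPack version).
import re

def parse_tegrastats(line):
    """Extract GPU utilisation (%) and GPU temperature (C) from one line."""
    gpu = re.search(r"GR3D_FREQ (\d+)%", line)
    temp = re.search(r"GPU@([\d.]+)C", line)
    return {
        "gpu_util": int(gpu.group(1)) if gpu else None,
        "gpu_temp": float(temp.group(1)) if temp else None,
    }

sample = ("RAM 2345/7860MB (lfb 4x2MB) CPU [12%@1420,off,34%@1420] "
          "GR3D_FREQ 45% GPU@41.5C")
stats = parse_tegrastats(sample)
print(stats)  # {'gpu_util': 45, 'gpu_temp': 41.5}
```

In practice, `tegrastats` output would be piped into a parser like this and logged alongside the DeepStream pipeline's FPS counter.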

The best trained model (YOLOv4 with a CSPDarkNet_tiny backbone) was exported as a TensorRT engine and deployed on the Jetson TX2 module using the NVIDIA DeepStream SDK. The hardware setup (Figure 4) was tested in the laboratory using the same camera model as installed on the garbage collection truck. After validation in the lab, the hardware setup was deployed on the garbage collection truck for field testing and additional data collection.


Figure 4. Hardware setup for real-time plastic bag contamination detection
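A DeepStream deployment of this kind is typically driven by an `nvinfer` configuration file that points the pipeline at the TensorRT engine. The fragment below is a hypothetical sketch: the file names, thresholds, and single-class setup are assumptions, and a TAO-trained YOLOv4 model additionally needs a custom bounding-box parser library, which is omitted here.

```ini
# Hypothetical nvinfer configuration; file names are illustrative.
[property]
gpu-id=0
model-engine-file=yolov4_cspdarknet_tiny.engine
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16 (FP16 is a common choice on Jetson TX2)
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1

[class-attrs-all]
pre-cluster-threshold=0.4
```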

In terms of computer vision model performance, the first deployed base model achieved a mAP@50 of 63% at 24.8 FPS on the NVIDIA Jetson TX2. The model was then retrained using data collected from the field after the first deployment, which improved results: mAP@50 increased by 10%, and performance in terms of false positives (FP), false negatives (FN), and true positives (TP) also improved (Table 1). Figure 5 shows several examples of the computer vision model successfully detecting plastic bag contamination in the images.

                       Base model   Retrained model   Percentage change
False positives (FP)   176          112               36.4% decrease
False negatives (FN)   239          218               8.8% decrease
True positives (TP)    338          359               6.2% increase

Table 1. The performance comparison of base model and retrained model on field-collected test data
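The counts in Table 1 can be turned into precision and recall to quantify the retraining gain. A small sketch using the published numbers (the derived metrics below are computed from the table, not reported in the post):

```python
# Derive precision and recall from the detection counts in Table 1.
def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

base = {"tp": 338, "fp": 176, "fn": 239}
retrained = {"tp": 359, "fp": 112, "fn": 218}

for name, c in (("base", base), ("retrained", retrained)):
    p, r = precision_recall(**c)
    print(f"{name}: precision={p:.3f} recall={r:.3f}")
# base: precision=0.658 recall=0.586
# retrained: precision=0.762 recall=0.622
```

Both precision and recall improve after retraining with field data, consistent with the reported mAP@50 gain.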


Figure 5. Computer vision model predictions

Future research directions

Computer vision holds promising potential for enhancing understanding of waste contamination by extracting information related to contaminants. Some highlighted future directions include:

  • Multi-class detection: The developed solution can be extended to detect multiple classes of plastic bags and packaging materials. This would give a better understanding of which types of plastic bags commonly appear in contamination, so that control measures can be put in place accordingly.
  • Pothole detection: The edge computer can run multiple trained computer vision models on the same computational resources. In this context, another model could be trained to detect potholes, helping councils identify and repair damaged roads quickly.
  • Roadside trash detection: Following the same concept as pothole detection, the other cameras installed on the trucks could be used to detect roadside trash. This can help in better managing the environment and educating the community.

Summary

This post has detailed our edge computing video analytics solution for detecting plastic bag contamination in waste collection trucks. Solutions of this sort help improve waste recycling, increase sustainability, and educate the community.

A computer vision solution powered by the NVIDIA Jetson platform for edge AI and robotics was used to detect the presence of any plastic bag in the footage captured by the camera installed on the garbage collection truck. To support training of the computer vision model in this unique application domain, we developed a novel, challenging dataset, the Remondis Contamination Dataset (RCD).

The successful deployment and encouraging results of the computer vision model in contamination detection suggest significant scope for improving waste management. Such a system can be extended to detect multiple classes of contamination and multiple classes of plastic bags for better understanding. Furthermore, other cameras installed on the truck could be used to detect potholes and roadside trash.


Umar Iqbal
Senior Research Scientist, NVIDIA Research

Johan Barthelemy
Developer Relations, NVIDIA

Tim Davies
eResearch Lead, SMART Infrastructure Facility, University of Wollongong, Australia
