Automatic Visual Inspection - Defects Detection using CNN

Bibliographic Details
Published in: 2022 6th International Conference on Electronics, Communication and Aerospace Technology, pp. 584-589
Main Authors: V, Srilakshmi, Kiran, G Uday, Guntupalli, Yashwanth, Gayathri, Ch Navya, Raju, A Sivarama
Format: Conference Proceeding
Language: English
Published: IEEE 01-12-2022
Description
Summary: Despite their high accuracy, neural networks are not widely adopted in fields such as medicine, finance, and education, where the explainability of predictions is essential. The objective of this work is to build and train a model, using a PyTorch pipeline, that classifies images into "Good" and "Anomaly" classes and, if an image is categorized as an "Anomaly," returns a bounding box for the defect. Although this task appears straightforward and similar to other object detection tasks, the dataset provides no bounding-box labels. Fortunately, this can be overcome: the model, trained without labels for defective regions, is still able to predict a bounding box for the defective region at inference time by processing feature maps from its deep convolutional layers. This work discusses the strategy and how to apply it to real-world defect detection. A 400-image dataset has been used that includes pictures of both defect-free objects (classed as "Good") and defective objects (classed as "Anomaly"). The dataset is imbalanced, with more good images than defective ones. The images show various kinds of objects, such as bottles, cables, pills, tiles, pieces of leather, and zippers.
DOI:10.1109/ICECA55336.2022.10009402
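A minimal PyTorch sketch of the kind of approach the summary describes is given below: a classifier trained only on "Good"/"Anomaly" image-level labels, with a bounding box derived at inference time from the last convolutional feature maps (class-activation-map-style localization). This is not the authors' code; the ResNet-18 backbone, the class index mapping, and the CAM threshold are assumptions for illustration.

import torch
import torch.nn.functional as F
from torchvision import models

# Assumed backbone: ResNet-18 with a 2-class head ("Good" = 0, "Anomaly" = 1).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

def predict_with_box(image, cam_threshold=0.5):
    """Classify a (1, 3, H, W) tensor and, if it is an anomaly,
    return a bounding box derived from a class activation map."""
    feats = {}

    def hook(_, __, output):          # capture the last conv feature maps
        feats["maps"] = output        # shape (1, C, h, w)

    handle = model.layer4.register_forward_hook(hook)
    with torch.no_grad():
        logits = model(image)
    handle.remove()

    pred = logits.argmax(dim=1).item()
    if pred == 0:                     # "Good": no box needed
        return "Good", None

    # Class activation map: weight the feature maps by the fc weights
    # of the "Anomaly" class and sum over channels.
    weights = model.fc.weight[1].detach()            # (C,)
    cam = torch.einsum("c,chw->hw", weights, feats["maps"][0])
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    # Upsample the CAM to image size and threshold it to get a box.
    h, w = image.shape[2], image.shape[3]
    cam = F.interpolate(cam[None, None], size=(h, w), mode="bilinear",
                        align_corners=False)[0, 0]
    ys, xs = torch.nonzero(cam > cam_threshold, as_tuple=True)
    if len(xs) == 0:
        return "Anomaly", None
    box = (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())
    return "Anomaly", box

# Example: run on a random 224x224 tensor (untrained model, shape check only).
label, box = predict_with_box(torch.randn(1, 3, 224, 224))
print(label, box)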