Automated detection of textured-surface defects using UNet-based semantic segmentation network


Bibliographic Details
Published in: 2020 IEEE International Conference on Prognostics and Health Management (ICPHM), pp. 1-5
Main Authors: Enshaei, Nastaran; Ahmad, Safwan; Naderkhani, Farnoosh
Format: Conference Proceeding
Language: English
Published: IEEE, 01-06-2020
Description
Summary: In recent years, developing a reliable automated visual inspection system for manufacturing and industry sectors that are moving toward smart manufacturing operations has faced significant challenges. Traditional visual inspection techniques, which are built on manually extracted features, rarely generalize and have shown weak performance in real applications across different industries. In this paper, we propose a novel, automated visual inspection system that outperforms statistical methods in detecting and quantifying anomalies in image data for critical industrial tasks such as detecting micro-scratches on products. In particular, an end-to-end UNet-based fully convolutional neural network for automated defect detection on industrial surfaces is designed and developed. The proposed network accepts raw images as input and outputs pixel-wise masks. To avoid overfitting and improve model generalization, we apply real-time data augmentation during the training phase. To evaluate the performance of the proposed model, we use a publicly available data set containing ten different types of textured surfaces with their associated weakly annotated masks. The findings indicate that, despite working with roughly annotated labels, our results are in agreement with previous works and show improvements in detection time.
DOI: 10.1109/ICPHM49022.2020.9187023
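
The summary describes an end-to-end UNet-style encoder-decoder that maps raw surface images to pixel-wise defect masks and uses real-time (on-the-fly) data augmentation during training. The sketch below illustrates such a setup in Keras; the input size, number of levels, filter counts, and augmentation parameters are illustrative assumptions and do not reproduce the authors' exact network or training configuration.

# Minimal UNet-style sketch for pixel-wise defect segmentation.
# Assumes grayscale 256x256 inputs and a binary (defect / background) mask;
# these sizes are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the original UNet design.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while increasing the number of filters.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate matching encoder feature maps
    # (skip connections) to recover spatial detail for the pixel-wise mask.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    u2 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(u2)
    u1 = conv_block(layers.concatenate([u1, c1]), 32)

    # Single-channel sigmoid output: per-pixel defect probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(u1)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")

# Real-time data augmentation: random flips and small shifts are applied
# per batch during training rather than precomputed on disk. The specific
# transforms and ranges here are assumptions, not those used in the paper.
aug = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.05,
    height_shift_range=0.05,
    horizontal_flip=True,
)
# The same generator settings (with a shared random seed) would be applied
# to the mask images so that labels stay aligned with the augmented inputs.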