Fully automated magnetic resonance detection and segmentation of brain using convolutional neural network
Published in: Ibn Al-Haitham Journal for Pure and Applied Sciences, Vol. 34, No. 4, pp. 130-141
Main Author:
Format: Journal Article
Language: English
Published: Baghdad, Iraq: University of Baghdad, College of Education Ibn Al-Haitham, 20-10-2021
Subjects:
Summary: This paper provides an overview of several related areas of research. Since no other algorithms have been proposed that exactly match our goal, many similar tasks are examined, especially detection and text extraction from brain images, the detection of objects in scanned images, and the classification of forms. In general, these algorithms detect brain-like objects and then extract information in the form of human-readable text, similar to what we aim to do. The new image, which may contain only part of the brain, is matched to the existing one, and the MRI is executed again. This multi-view approach ensures robust text extraction when occlusions or reflections are present, as mentioned in [1]. The final output of the algorithm consists of the detected brain type, the recognized text, its position on the brain, and its confidence. The goal of the practical task is to create a prototype of a deep learning application that allows the user to acquire images on which the previously listed steps are performed to extract the content. Deep learning is a machine learning method that solves problems in a manner similar to the human brain. Over the past decade it has become a powerful instrument for tasks such as speech recognition, language processing, and numerous imaging tasks, and it has opened up many possibilities for more accurate prediction, segmentation, and analysis of medical images [2]. Deep learning methods are also relatively easy to deploy, and a deep learning architecture built for one task can be trained to work on another, as we show in Table 1.
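The summary describes a convolutional network whose output combines a brain segmentation with a detected brain type and a confidence score. The article record does not include the authors' architecture, so the following is only a minimal sketch under assumed names, shapes, and layer sizes (PyTorch, hypothetical `BrainDetectSegNet`), illustrating how one model can emit a per-pixel mask plus a class and confidence, not the method actually evaluated in the paper.

```python
# Minimal illustrative sketch, NOT the authors' model: a tiny CNN that jointly
# produces a brain segmentation mask and a brain-type class with a confidence,
# mirroring the outputs listed in the summary. Layer sizes, the number of
# classes, and the 128x128 single-channel input are illustrative assumptions.
import torch
import torch.nn as nn

class BrainDetectSegNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared convolutional encoder over a single-channel MR slice.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Segmentation head: per-pixel brain / non-brain probability.
        self.seg_head = nn.Conv2d(32, 1, kernel_size=1)
        # Classification head: brain-type logits pooled over the whole slice.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        mask = torch.sigmoid(self.seg_head(feats))   # segmentation mask in [0, 1]
        logits = self.cls_head(feats)                 # brain-type scores
        conf, brain_type = torch.softmax(logits, dim=1).max(dim=1)
        return mask, brain_type, conf

# Usage example: one 128x128 MR slice -> mask, predicted type index, confidence.
model = BrainDetectSegNet()
mr_slice = torch.randn(1, 1, 128, 128)
mask, brain_type, conf = model(mr_slice)
print(mask.shape, brain_type.item(), round(conf.item(), 3))
```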
ISSN: 1609-4042, 2521-3407
DOI: 10.30526/34.4.2710