The new double-branch 3D Ui-Net: design, development, and validation of brain tumor segmentation
STROPPA, CHIARA
2024/2025
Abstract
In 2020, a total of 308,102 new cases of primary brain and Central Nervous System (CNS) cancers were registered worldwide, together with 251,329 cancer-related deaths. Brain Tumors (BTs) are among the most serious brain conditions. The term BT refers to the abnormal proliferation of cells in the brain, which can be classified as either malignant or benign. During the initial assessment of BTs, structural MRI is mainly used to identify the lesion location and assess the resulting mass effect on the brain. In recent years, methods such as image enhancement, segmentation, object detection, and classification have attracted significant interest for disease diagnosis and early treatment planning. Accurate BT segmentation is crucial in clinical practice for monitoring disease progression and supporting diagnosis. However, manual annotation of large numbers of multimodal MRI images is time-consuming and subjective, making automatic and semi-automatic approaches necessary. Deep Learning (DL) architectures, particularly U-Net and 3D U-Net, have emerged as state-of-the-art solutions, achieving significant improvements in segmentation accuracy even in complex imaging scenarios. The present study introduces a novel double-branch 3D Ui-Net architecture, an extension of the traditional 3D U-Net, designed for 3D medical image segmentation and feature regression; this study, however, focuses exclusively on the BT segmentation task. The model was trained and validated on the BraTS2020 dataset and tested on BraTS2017. The proposed double-branch 3D Ui-Net demonstrates robust segmentation performance, evaluated using the Dice Coefficient (DC) metric: 0.87 ± 0.12 on the training set, 0.86 ± 0.11 on the validation set, and 0.86 ± 0.12 on the testing set. Some limitations remain in difficult cases, although the model still provides clinically reliable information. Additionally, analysis of geometric and spatial features confirms that the segmentation preserves the main morphological characteristics of the tumors. The main strength of the proposed model is its ability to perform full 3D volumetric segmentation, enabling intuitive 3D tumor visualization that is useful for surgical planning. The limitations highlighted in this study, particularly segmentation of difficult cases, can be addressed in future work. Overall, the study demonstrates not only the technical effectiveness of the proposed DL method but also its potential clinical impact.
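For reference, the Dice Coefficient (DC) reported above is the standard overlap measure between a predicted segmentation mask $P$ and the ground-truth mask $G$:

$$
DC(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}
$$

The thesis itself is under embargo, so the exact architecture is not public. The following is only a minimal, hypothetical PyTorch sketch of what a "double-branch" 3D U-Net of the kind described in the abstract typically looks like: a shared 3D encoder feeding both a U-Net-style segmentation decoder and a feature-regression head. The class name `DoubleBranch3DUNet`, all channel counts, and the head designs are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch of a double-branch 3D U-Net: a shared encoder with
# (a) a skip-connected decoder producing a voxel-wise segmentation and
# (b) a small regression head producing a feature vector.
# All sizes are illustrative assumptions; the thesis model is not public.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU, the usual U-Net unit.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class DoubleBranch3DUNet(nn.Module):
    def __init__(self, in_channels=4, n_classes=4, n_features=8):
        super().__init__()
        # Shared encoder (4 input channels for the 4 BraTS MRI modalities).
        self.enc1 = conv_block(in_channels, 16)
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.pool = nn.MaxPool3d(2)
        # Branch 1: segmentation decoder with skip connections.
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.seg_head = nn.Conv3d(16, n_classes, 1)
        # Branch 2: global pooling plus a linear layer for feature regression.
        self.reg_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, n_features)
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        bottleneck = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(bottleneck), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), self.reg_head(bottleneck)

# Example: a batch of two 4-modality MRI volumes of size 64^3.
model = DoubleBranch3DUNet()
seg, feats = model(torch.randn(2, 4, 64, 64, 64))
print(seg.shape, feats.shape)  # (2, 4, 64, 64, 64) and (2, 8)
```

The sketch mirrors the abstract's description of two outputs, a voxel-wise segmentation and a regressed feature vector, although in this study only the segmentation branch is evaluated.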
| File | Description | Size | Format |
|---|---|---|---|
| Thesis_masterdegree_ChiaraStroppa_PDFA.pdf (embargo until 12/06/2027) | Thesis master's degree Chiara Stroppa | 6.79 MB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.12075/24544