
A Tversky Loss-based Convolutional Neural Network for Liver Vessels Segmentation

Nicola Altini1, Berardino Prencipe1, Giacomo Donato Cascarano1, Antonio Brunetti1, Gioacchino Brunetti3, Leonarda Carnimeo1, Francescomaria Marino1, Andrea Guerriero1, Laura Villani2, Arnaldo Scardapane2 and Vitoantonio Bevilacqua1*

1 Polytechnic University of Bari, 70126 Bari, Italy
2 Bari Medical School, Bari, Italy
3 Masmec Biomed SpA, Via delle Violette, 14 - 70026 Modugno (BA), Italy
* Author to whom correspondence should be addressed
[email protected]

Abstract. The volumetric estimation of organs is a crucial issue both for the diagnosis or assessment of pathologies and for surgical planning. Three-dimensional imaging techniques, e.g. Computed Tomography (CT), are widely used for this task, allowing 3D analysis to be performed based on the segmentation of each bi-dimensional slice. In this paper, a fully automatic set-up based on Convolutional Neural Networks (CNNs) for the semantic segmentation of human liver parenchyma and vessels in CT scans is considered. Vessels segmentation is a crucial task for surgical planning because the correct identification of vessels allows separating the liver into anatomical segments, each with its own vascularization. The proposed liver segmentation CNN model is trained by minimizing the Dice loss function, whereas a Tversky-based loss function is exploited in designing the CNN model for liver vessels segmentation, aiming at penalizing false negatives more than false positives. The adopted loss functions allow us both to speed up the network convergence during the learning process and to improve the segmentation accuracy. In this work, the training set of the Liver Tumor Segmentation (LiTS) Challenge, composed of 131 CT scans, is used for training and tuning the architectural hyperparameters of the liver parenchyma segmentation model, and 20 CT scans of the SLIVER07 dataset are used as a test set for a final evaluation of the proposed method. Moreover, twenty CT scans from 3D-IRCADb are used as a training set for the liver vessels segmentation model, and four CT scans from the Polyclinic of Bari are used as an independent test set. The obtained results are very promising, with a Dice Coefficient greater than 94% for the liver parenchyma model on the considered test set and an accuracy greater than 99% for the liver vessels model.

Keywords: Convolutional Neural Network, Dice, Tversky, Liver Segmentation, Vessels Segmentation.


1 Introduction

The volume quantification of organs is of fundamental importance in the clinical field, for diagnosing pathologies and monitoring their progression over time. Imaging techniques offer fast and accurate methods for performing this task in a non-invasive way. In fact, starting from volumetric imaging acquisitions, such as Computed Tomography (CT) or Magnetic Resonance (MR), it is possible to perform the three-dimensional segmentation of organs, thus obtaining their volumetric information. This task turns out to be time-consuming, even though expert medical doctors could manually accomplish it, since manual labelling is required for each bi-dimensional slice of the volumetric acquisition. This labelling procedure is also susceptible to inter- and intra-operator differences [1]. Taking into account these premises, researchers have made considerable efforts in developing semi-automatic or automatic segmentation methods, especially for those body areas containing organs whose morphology could vary over time or because of pathologies, such as the kidneys or the liver.

Furthermore, the segmentation of organs can be accomplished in different ways from a user-interaction point of view; in fact, the literature distinguishes among automatic, semi-automatic and interactive systems [2]: automatic systems do not need any input from the user; semi-automatic systems require some interaction, such as setting a seed point, tuning parameters or applying special pre- or post-processing operations depending on the specific case; interactive systems require extensive editing procedures performed by the user.

Recent works investigated the use of Convolutional Neural Networks (CNNs), and Deep Learning strategies in general, to design and implement automatic clinical decision support systems starting from medical images [3], [4]. Such support systems also include the segmentation of organs from volumetric imaging acquisitions [5], [6], [7]. For example, De Vos et al. localized anatomical structures in 3D medical images using CNNs, with the purpose of easing further tasks such as image analysis and segmentation [8]. Regarding the abdominal area, there has been a growing interest in CT scan analysis for diagnosis purposes and therapy planning of the included organs. In fact, the segmentation of the liver is crucial for several clinical procedures, including radiotherapy, volume measurement and computer-assisted surgery [9]. Rafiei et al. developed a 3D-to-2D Fully Convolutional Network (3D-2D-FCN) to perform automatic liver segmentation for accelerating the detection of trauma areas in emergencies [10]. Lu et al. developed and validated an automatic approach integrating multi-dimensional features into graph cut refinement for the liver segmentation task [9].

According to the Couinaud model [11], hepatic vessels represent the anatomic borders of the hepatic segments; consequently, segmentectomies, based on the precise identification of these vascular landmarks, are crucial in modern hepatic surgery, as they avoid unnecessary removal of normal liver parenchyma and reduce the complications of more extensive resections [12]. In the literature, there are different approaches for liver vessels segmentation. Oliveira et al. proposed a segmentation method exploiting a region-based approach [13], where a Gaussian mixture model was used to identify the threshold to be selected for adequately separating parenchyma from hepatic veins. Yang et al. proposed a semi-automatic method for vessels extraction.


They used a connected-threshold region-growing method from the ITK library [14] to initially segment the veins. To find the threshold, they exploited the histogram of the masked liver; however, this process has to be supervised by an expert user through a graphic interface [15]. Goceri et al. proposed a method called Adaptive Vein Segmentation (AVS): they exploited k-means clustering for initial mask generation, then applied post-processing procedures for mask refinement, followed by morphological operations to reconstruct the vessels [16]. Chi et al. used a context-based voting algorithm to conduct a full vessels segmentation and recognition of multiple vasculatures; their approach considers context information of voxels related to vessel intensity, saliency, direction, and connectivity [17]. Zeng et al. proposed a liver-vessels segmentation and identification approach based on the combination of oriented flux symmetry and graph cuts [18].

In this work, a Deep Learning approach is proposed aiming at a proper segmentation of liver parenchyma and vessels. The suggested implementation is based on the V-Net architecture, a CNN widely used in volumetric medical image segmentation [19]. In particular, for the CNN model of the liver parenchyma, the Dice loss function is optimized. The proposed liver parenchyma segmentation model lays the basis for the subsequent vessels segmentation, allowing vessels outside the liver region to be excluded. Moreover, the penalization of false negatives (i.e., vessel voxels classified as background) over false positives (i.e., background voxels classified as vessels) is investigated to highlight how it could speed up network convergence and improve the overall segmentation performance. For this purpose, a Tversky-based loss function is exploited in designing the CNN model for liver vessels segmentation.

2 Materials

2.1 Liver Parenchyma Segmentation

In order to design and implement the proposed automatic liver parenchyma segmentation approach, we considered the training set of the Liver Tumor Segmentation (LiTS) Challenge, containing the abdominal CT acquisitions of 131 subjects [20]. Regarding the test set, we evaluated the results of our experiments on a different, independent set composed of 20 scans from the SLIVER07 dataset [2].

In the CT scans of the LiTS Challenge, the pixel spacing ranges from 0.56 mm to 1.0 mm in the x/y-direction, whilst the slice distance ranges between 0.45 mm and 6.0 mm, with the number of slices varying from 42 to 1026 [20]. In the CT scans of SLIVER07, the pixel spacing ranges from 0.55 mm to 0.8 mm in the x/y-direction, and the slice distance ranges from 1 mm to 3 mm, depending on the machine and protocol used [2].

We pre-processed all the images by windowing the HU values into the range [-150, 350]. For the CNN model, the values were then scaled into the range [0, 1].
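A minimal NumPy sketch of this pre-processing step is given below; the function name and the use of NumPy (rather than ITK) are our choices for illustration.

```python
import numpy as np

def preprocess_ct(volume_hu, lo=-150.0, hi=350.0):
    """Clip the HU values to the [-150, 350] window and rescale them to [0, 1]."""
    windowed = np.clip(volume_hu, lo, hi)
    return (windowed - lo) / (hi - lo)
```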

2.2 Liver Vessels Segmentation

We used the 20 CT scans of 3D-IRCADb for training and internally cross-validating our CNN model, and a dataset from the Polyclinic of Bari, composed of 4 CT scans, as an independent test set for external validation.


The 3D-IRCADb dataset contains CT scans whose axial-plane resolution varies from 0.56 mm to 0.81 mm, whilst the resolution along the z-axis spans from 1 mm to 4 mm. The number of slices ranges between 74 and 260. The 20 CT scans of 3D-IRCADb come from 10 women and 10 men; patients with hepatic tumours make up 75% of the dataset [21].

The dataset from the Polyclinic of Bari contains 4 CT scans with an axial-plane resolution varying from 0.76 mm to 0.86 mm and a z-axis resolution spanning from 0.7 mm to 0.8 mm. The number of slices is between 563 and 694. The pre-processing adopted for liver vessels segmentation was the same as that used for liver parenchyma segmentation.

3 Methods

In the biomedical image segmentation area, a well-known and widely used semantic segmentation CNN architecture is U-Net [22], whose model is based on an encoder-decoder architecture and performs the segmentation of bi-dimensional images. Later, a 3D implementation of U-Net was proposed by Çiçek et al. [23].

Variants of the 3D U-Net model have been successfully employed in the liver and tumour segmentation task from CT scans. Among them, an interesting architecture is RA-UNet, proposed by Jin et al. [24], which explores the possibility of employing attention modules and residual learning. Milletari et al., instead, proposed a variation of the standard U-Net, called V-Net, for 3D medical image segmentation [19]. Among the peculiarities of V-Net, we note the use of down-convolutions with stride 2 × 2 × 2 and kernel size 2 × 2 × 2 instead of 2 × 2 × 2 max-pooling, the use of PReLU [25] non-linearities and the adoption of residual connections. A 2.5D variant of the V-Net architecture is depicted in Fig. 1, where all the 3D layers are replaced by the corresponding 2D ones, with a first layer which processes 5 slices as 5 channels.

Fig. 1. The proposed 2.5D V-Net architecture.

In the adopted architecture, "down" convolutional layers have stride 2 × 2 and kernel size 2 × 2; normal convolutional layers have kernel size 5 × 5, and the transposed convolutional layers used as "up" convolutions have 2 × 2 kernels. Moreover, we added a Batch-Normalization (BN) layer after each convolutional layer; the use of BN layers has also been considered by the authors of the original V-Net [26]. Instead of PReLU non-linearities, we adopted standard ReLU ones. Before each "Up" block (in the decoder path), we added a dropout layer as regularization.
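A minimal PyTorch sketch of one encoder ("down") stage built from these ingredients, i.e. a strided 2 × 2 down-convolution followed by 5 × 5 convolutions with Batch Normalization, ReLU and a residual connection, is reported below. The class name, the number of convolutions per stage and the channel handling are illustrative assumptions, not the exact configuration of Fig. 1.

```python
import torch.nn as nn

class DownBlock(nn.Module):
    """One encoder stage: strided 2x2 down-convolution, then 5x5 convolutions,
    each followed by Batch Normalization and ReLU, with a residual connection."""

    def __init__(self, in_ch, out_ch, n_convs=2):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        convs = []
        for _ in range(n_convs):
            convs += [
                nn.Conv2d(out_ch, out_ch, kernel_size=5, padding=2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
        self.convs = nn.Sequential(*convs)

    def forward(self, x):
        x = self.down(x)
        return x + self.convs(x)  # residual connection, as in V-Net
```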

We trained our 2.5D V-Net by taking random patches of 5 slices from the training set, assigning a greater probability to patches containing at least one voxel belonging to the liver or to the vessels. The optimizer for the training process was Adam [27], with a starting learning rate of 0.01. We trained the network for 1000 epochs, dividing the learning rate by 10 every 200 epochs; 500 samples were processed per epoch.
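An indicative PyTorch sketch of this training schedule follows; `criterion` stands for the Dice or Tversky loss, and `sample_training_patch` is a hypothetical sampler that favours patches containing liver or vessel voxels.

```python
import torch

def train(model, criterion, sample_training_patch, device="cuda"):
    """Adam with initial learning rate 0.01, divided by 10 every 200 epochs,
    for 1000 epochs with 500 random patches per epoch."""
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.1)
    model.train()
    for epoch in range(1000):
        for _ in range(500):
            patch, target = sample_training_patch()   # e.g. (1, 5, H, W) and (1, 1, H, W)
            patch, target = patch.to(device), target.to(device)
            optimizer.zero_grad()
            loss = criterion(model(patch), target)
            loss.backward()
            optimizer.step()
        scheduler.step()
```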

In order to ensure a low convergence time, it is crucial to select a proper loss function. Common loss functions for the semantic segmentation task are the Binary Cross-Entropy (BCE) loss function, as in Eq. (1), and the Weighted BCE (WBCE) loss function, as in Eq. (2). For the following definitions, let 𝑝𝑖 ∈ 𝑃 be the probability of the i-th voxel of belonging to the liver and 𝑔𝑖 ∈ 𝐺 its binary label, with 𝑖 = 1, …, 𝑁, where 𝑃 and 𝐺 are respectively the predicted segmented volume and the ground truth volume.

BCE = -\frac{1}{N} \sum_{i=1}^{N} \left( g_i \cdot \log(p_i) + (1 - g_i) \cdot \log(1 - p_i) \right)    (1)

WBCE = -\frac{1}{N} \sum_{i=1}^{N} \left( \omega_1 \cdot g_i \cdot \log(p_i) + \omega_0 \cdot (1 - g_i) \cdot \log(1 - p_i) \right)    (2)

In Eq. (2), 𝜔1 and 𝜔0 are introduced to give different weights to positives and negatives. These functions act as a proxy for the optimization of the true measures used later for the evaluation, which usually include the Dice Coefficient 𝐷, as in Eq. (3). Thus, another plausible choice for the optimization function consists of directly adopting an objective function based on the Dice Coefficient [19] or, more generally, on the Tversky index. Salehi et al. exploited the Tversky loss function for lesion segmentation by means of 3D CNNs [28]. The adopted formulation is reported in Eq. (4), where 𝑝0𝑖 is the probability of the i-th voxel of being a positive, 𝑔0𝑖 its binary label (i.e., 1 for positives and 0 for negatives), 𝑝1𝑖 its probability of being a negative, and 𝑔1𝑖 its negated binary label, directly obtained by applying a logical NOT to 𝑔0𝑖 (i.e., 0 for positives and 1 for negatives).

D = \frac{2 \sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i}    (3)

T_{\alpha,\beta} = \frac{\sum_{i=1}^{N} p_{0i} g_{0i}}{\sum_{i=1}^{N} p_{0i} g_{0i} + \alpha \sum_{i=1}^{N} p_{0i} g_{1i} + \beta \sum_{i=1}^{N} p_{1i} g_{0i}}    (4)

We decided to employ a Tversky index-based loss function because of the unbalanced-voxels problem. In fact, the voxels belonging to the liver region are only a fraction of the whole CT scan: in the LiTS dataset, the unbalancing ratio is approximately 40:1 in favour of negative voxels. This problem is even more relevant for vessels segmentation, where the unbalancing ratio is roughly 200:1 in favour of negative voxels. The Dice loss does not give different weights to False Positives (FPs) and False Negatives (FNs), thus it does not focus the learning on the maximization of the recall of the voxels of interest. With a Tversky loss, thanks to the 𝛼 and 𝛽 coefficients, it is possible to give a larger weight to FNs.
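A minimal PyTorch sketch of a loss of the form 1 − T_{α,β}, following Eq. (4), is shown below. The tensor shapes, the smoothing term and the default coefficients (the α = 0.1, β = 0.9 configuration evaluated in Section 4) are our assumptions.

```python
import torch

def tversky_loss(probs, target, alpha=0.1, beta=0.9, eps=1e-6):
    """Tversky loss: alpha weighs false positives, beta weighs false negatives.

    probs  -- network output after a sigmoid, shape (B, 1, H, W)
    target -- binary ground truth with the same shape
    """
    p0 = probs.reshape(probs.shape[0], -1)
    g0 = target.reshape(target.shape[0], -1).float()
    p1 = 1.0 - p0                      # probability of being background
    g1 = 1.0 - g0                      # negated label
    tp = (p0 * g0).sum(dim=1)
    fp = (p0 * g1).sum(dim=1)
    fn = (p1 * g0).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky.mean()
```

With α = β = 0.5 the Tversky index reduces to the Dice Coefficient, so the same code also covers the Dice loss used for the liver parenchyma model.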

We considered different data augmentation techniques, such as slice-wise right-left flipping of volume patches, Gaussian blur, elastic transform with 𝛼 = 2 and 𝜎 = 3, multiplicative noise, random rotations in the range [-10°, 10°], and random brightness and contrast perturbations. The Albumentations Python library was used to perform the augmentations [29].
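A possible Albumentations pipeline along these lines is sketched below; the application probabilities, the noise multiplier range and the direct mapping of α and σ to the ElasticTransform parameters are illustrative assumptions, not values from the paper.

```python
import albumentations as A

train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),                              # slice-wise right-left flipping
    A.GaussianBlur(p=0.2),                                # Gaussian blur
    A.ElasticTransform(alpha=2, sigma=3, p=0.2),          # elastic deformation
    A.MultiplicativeNoise(multiplier=(0.9, 1.1), p=0.2),  # multiplicative noise
    A.Rotate(limit=10, p=0.3),                            # rotations in [-10, 10] degrees
    A.RandomBrightnessContrast(p=0.2),                    # brightness/contrast perturbations
])

# Applied jointly to a 5-slice patch (H x W x 5) and the mask of its central slice:
# augmented = train_transform(image=patch, mask=mask)
```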

In the inference phase, we processed the volumetric images in a 3D sliding-window fashion, processing sub-volumes of 512 × 512 × 5 voxels. Since we adopted a 2.5D approach, the five processed slices were used for predicting the central one only. In order to create patches of five slices also at the beginning and at the end of the CT scan, the first and last slices were replicated.

In fact, due to the similarity between spleen and liver intensity values and texture, the

two organs could be misclassified. Then, we applied connected components labelling,

retaining only the largest one, since the liver is the largest organ in the abdomen. Fi-

nally, we applied morphological closing and morphological hole-filling to the seg-

mented masks. A similar procedure has been carried out for the vessels segmentation,

without the morphological and connected components labelling post-processing. In

Section 4, we report the segmentation results for both the liver parenchyma and the

liver vessels.
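The liver post-processing chain can be sketched as follows; note that the paper relies on ITK for this step, whereas this example uses SciPy, and the structuring-element sizes are assumptions.

```python
import numpy as np
from scipy import ndimage

def postprocess_liver(mask):
    """Opening, largest connected component, closing and hole filling on a binary 3D mask."""
    m = ndimage.binary_opening(mask, structure=np.ones((3, 3, 3)))   # detach the spleen, if needed
    labels, n = ndimage.label(m)
    if n > 0:
        sizes = ndimage.sum(m, labels, range(1, n + 1))
        m = labels == (np.argmax(sizes) + 1)                         # keep the largest component
    m = ndimage.binary_closing(m, structure=np.ones((3, 3, 3)))
    return ndimage.binary_fill_holes(m)
```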

To develop the CNN model, we used PyTorch [30]. Pre- and post-processing phases were conducted using the Insight Toolkit (ITK) [14].

4 Experimental Results

4.1 Segmentation Quality Measures

To evaluate the performance of the implemented segmentation algorithms, we refer to the indexes adopted in the SLIVER07 and LiTS challenges [2], [20]. It is possible to make a distinction between quality measures based on volumetric overlap and those based on surface distances.

Important quality measures based on volumetric overlap are the Volumetric Overlap Error (VOE), defined in Eq. (6), and the Sørensen–Dice Coefficient (DSC), defined in Eq. (7). The VOE definition depends on the ratio between intersection and union, namely the Jaccard Index J, as defined in Eq. (5). In all the definitions involved in the quality measures, we denote by 𝐵 the binarized predicted segmented volume (obtained by thresholding 𝑃) and by 𝐺 the ground truth volume; the cardinality operator for a set is denoted as |⋅|.

J(B, G) = \frac{|B \cap G|}{|B \cup G|}    (5)

VOE(B, G) = 1 - J(B, G)    (6)

DSC(B, G) = \frac{2 |B \cap G|}{|B| + |G|}    (7)


A more general formulation of both the DSC and the Jaccard Index is the Tversky Index T_{\alpha,\beta}(B, G), defined as:

T_{\alpha,\beta}(B, G) = \frac{|B \cap G|}{|B \cap G| + \alpha |B - G| + \beta |G - B|}    (8)

We note that T_{0.5,0.5}(B, G) corresponds to DSC(B, G), while T_{1,1}(B, G) corresponds to J(B, G).

Besides calculating the overlap error, it is also possible to quantify the Relative Volume Difference (RVD), defined as:

RVD(B, G) = \frac{|B| - |G|}{|G|}    (9)
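For reference, the overlap measures of Eqs. (5)-(9) can be computed on binary volumes with a few NumPy operations; this is a plain sketch and the function name is ours.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Jaccard, VOE, DSC and RVD between a binarized prediction and the ground truth."""
    b, g = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(b, g).sum()
    union = np.logical_or(b, g).sum()
    jaccard = inter / union
    return {
        "J": jaccard,
        "VOE": 1.0 - jaccard,
        "DSC": 2.0 * inter / (b.sum() + g.sum()),
        "RVD": (b.sum() - g.sum()) / g.sum(),
    }
```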

Interesting quality measures based on external surface distances are the Maximum Symmetric Surface Distance (MSSD) and the Average Symmetric Surface Distance (ASSD). These measures are particularly useful for applications like surgical planning, where an accurate prediction of the organ mesh is vital.

In order to properly define these distances, let us define a metric space (X, d), where X is a 3D Euclidean space and d is the Euclidean distance over the space. Then, let S(B), S(G) in X be, respectively, the external surfaces of the B and G volumes. We can define a distance function between any two non-empty sets S(B) and S(G) of X, also known as the one-sided Hausdorff distance, h(S(B), S(G)), as in Eq. (10).

h(S(B), S(G)) = \sup_{s_B \in S(B)} \left\{ \inf_{s_G \in S(G)} d(s_B, s_G) \right\}    (10)

Then, the MSSD, also known as the bidirectional Hausdorff distance, is defined as in Eq. (11), whilst the ASSD is defined in Eq. (12).

MSSD(B, G) = \max \{ h(S(B), S(G)), h(S(G), S(B)) \}    (11)

ASSD(B, G) = \frac{1}{|S(B)| + |S(G)|} \left( \sum_{s_B \in S(B)} d(s_B, S(G)) + \sum_{s_G \in S(G)} d(s_G, S(B)) \right)    (12)
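A possible SciPy-based computation consistent with Eqs. (10)-(12) is sketched below, approximating the surfaces S(B) and S(G) with boundary voxels and the point-to-surface distances with a Euclidean distance transform; the handling of voxel spacing is our assumption.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boundary voxels of a binary mask: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def surface_distances(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Return (ASSD, MSSD) between two binary volumes, in the units of `spacing`."""
    s_b = surface_voxels(pred.astype(bool))
    s_g = surface_voxels(gt.astype(bool))
    # Distance of every voxel to the nearest surface voxel of the other mask.
    dt_g = ndimage.distance_transform_edt(~s_g, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~s_b, sampling=spacing)
    d_b_to_g = dt_g[s_b]   # d(s_B, S(G)) for every s_B in S(B)
    d_g_to_b = dt_b[s_g]   # d(s_G, S(B)) for every s_G in S(G)
    assd = (d_b_to_g.sum() + d_g_to_b.sum()) / (d_b_to_g.size + d_g_to_b.size)
    mssd = max(d_b_to_g.max(), d_g_to_b.max())
    return assd, mssd
```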

We also need to introduce the accuracy, as defined in Eq. (13), the recall, as defined in Eq. (14), and the specificity, as defined in Eq. (15).

Accuracy = \frac{TP + TN}{TP + TN + FP + FN}    (13)

Recall = \frac{TP}{TP + FN}    (14)

Specificity = \frac{TN}{TN + FP}    (15)

where TP, TN, FP and FN are respectively the numbers of True Positives, True Negatives, False Positives and False Negatives.


4.2 Results and Discussion

The results obtained for the liver parenchyma segmentation and the liver vessels segmentation are reported in Table 1 and Table 2, respectively. As previously reported, the liver parenchyma segmentation model and the liver vessels segmentation model have been evaluated using SLIVER07 and our independent dataset, respectively.

Table 1. Liver Parenchyma Segmentation Results, expressed as "mean ± standard deviation".

| Model | DSC [%] | VOE [%] | RVD [%] | ASSD [mm] | MSSD [mm] |
| Proposed 2.5D V-Net (before post-processing) | 94.76 ± 5.23 | 9.56 ± 8.39 | -4.18 ± 9.27 | 2.80 ± 4.10 | 64.00 ± 60.15 |
| Proposed 2.5D V-Net (after post-processing) | 94.44 ± 5.06 | 10.16 ± 8.21 | -0.12 ± 10.22 | 2.15 ± 1.92 | 35.36 ± 17.35 |
| Lu et al. [9], multi-dimensional graph cut | N/A | 9.21 ± 2.64 | 1.27 ± 3.85 | 1.75 ± 1.41 | 36.17 ± 15.90 |
| Rafiei et al. [10], 3D-2D-FCN + CRF | 93.52 | N/A | N/A | N/A | N/A |

Table 2. Liver Vessels Segmentation Results, expressed as "mean ± standard deviation".

| Model | Accuracy [%] | Recall [%] | Specificity [%] | ASSD [mm] |
| Proposed 2.5D V-Net with Dice loss | 99.94 ± 0.01 | 53.97 ± 18.82 | 99.96 ± 0.02 | 7.54 ± 2.43 |
| Proposed 2.5D V-Net with Tversky loss (α = 0.3, β = 0.7) | 99.95 ± 0.01 | 43.61 ± 19.34 | 99.97 ± 0.02 | 8.67 ± 4.63 |
| Proposed 2.5D V-Net with Tversky loss (α = 0.1, β = 0.9) | 99.92 ± 0.03 | 58.09 ± 23.24 | 99.93 ± 0.04 | 9.55 ± 1.47 |
| Goceri et al. [16], AVS | 89.57 ± 0.57 | N/A | N/A | 23.1 ± 16.4 |
| Chi et al. [17], context-based voting | 98 ± 1 | 70 ± 1 | 99 ± 1 | 2.28 ± 1.38 |
| Zeng et al. [18], oriented flux symmetry and graph cuts | 97.7 | 79.8 | 98.6 | N/A |

The 2.5D V-Net allowed us to obtain a mean Dice Coefficient of 94.76% and a mean MSSD of 64.00 mm for the liver parenchyma segmentation task. Since in a surgical planning setup it is vital to reduce the Hausdorff distance, we note that the adopted post-processing is very beneficial: it reduced the mean MSSD to 35.36 mm, and its standard deviation from 60.15 mm to 17.35 mm. The proposed approach is comparable to other methods proposed in the literature, as reported in Table 1.

Regarding the vessels segmentation task, the proposed method shows a high accuracy, also compared to other approaches proposed in the literature, as can be seen from Table 2. The adoption of the Tversky loss to penalize false negatives more than false positives yields a significantly higher recall in the configuration with 𝛼 = 0.1, 𝛽 = 0.9. Thus, in the case of an extremely unbalanced dataset, the Tversky loss function proved valuable. Nevertheless, we note that our test set is very small, and the results suffer from high variability.


Examples of liver segmentation results obtained with the proposed method are depicted in Fig. 2 and Fig. 3, while examples of vessels segmentation results are reported in Fig. 4 and Fig. 5.

Fig. 2. Two slices of the liver segmentation task: (left) ground truth; (center) Dice loss-based 2.5D V-Net prediction; (right) difference between ground truth and prediction, where false negatives and false positives are highlighted in green and in yellow, respectively.

Fig. 3. Two different views of the same mesh of the liver segmentation task: (left) ground truth; (right) Dice loss-based 2.5D V-Net prediction.


Fig. 4. Two slices of the vessels segmentation task: (left) ground truth; (center) Tversky loss-based 2.5D V-Net prediction; (right) difference between ground truth and prediction, where false negatives and false positives are highlighted in green and in blue, respectively.

Fig. 5. Two different views of the same mesh of the vessels segmentation task: (left) ground truth; (right) Tversky loss-based 2.5D V-Net prediction.


5 Conclusion and Future Works

In this work, we proposed a fully automatic CNN-based approach for the segmentation of liver parenchyma and vessels in CT scans.

The liver parenchyma segmentation has been evaluated on the abdominal CT scans of 20 subjects considering different metrics. The proposed CNN approach allowed us to obtain high voxel-level performance, with a Dice Coefficient greater than 94% on the test set. For a surgical planning setup, it is vital to have a small Hausdorff distance, and we note that proper post-processing reduced the MSSD from 64.00 mm to 35.36 mm.

The model adopted for the liver vessels segmentation has been evaluated on an independent test set of 4 CT scans, with an accuracy greater than 99%. The obtained results show that the 2.5D V-Net, trained with a Tversky loss, is a very promising approach for vessels segmentation in CT scans, allowing accurate volumetric reconstructions of the segmented region to be obtained.

The proposed system will help radiologists in accomplishing the laborious task of segmenting the liver and vessels from a CT scan, laying the foundation for further image analysis algorithms on the segmented region.

Future works will include further validation on datasets coming from different cohorts of subjects, and the investigation of novel analyses of the segmented liver region, aimed at obtaining the Couinaud hepatic segment classification.

References

1. L. Hoyte, W. Ye, L. Brubaker, J. R. Fielding, M. E. Lockhart, M. E. Heilbrun, M. B. Brown, S. K. Warfield, and the Pelvic Floor Disorders Network, "Segmentations of MRI images of the female pelvic floor: A study of inter- and intra-reader reliability," Journal of Magnetic Resonance Imaging, vol. 33, no. 3, pp. 684–691, 2011.
2. T. Heimann et al., "Comparison and evaluation of methods for liver segmentation from CT datasets," IEEE Transactions on Medical Imaging, vol. 28, no. 8, pp. 1251–1265, 2009.
3. A. Brunetti, L. Carnimeo, G. F. Trotta, and V. Bevilacqua, "Computer-assisted frameworks for classification of liver, breast and blood neoplasias via neural networks: A survey based on medical images," Neurocomputing, vol. 335, pp. 274–298, 2019.
4. N. Altini, G. D. Cascarano, A. Brunetti, F. Marino, M. T. Rocchetti, S. Matino, U. Venere, M. Rossini, F. Pesce, L. Gesualdo, and V. Bevilacqua, "Semantic Segmentation Framework for Glomeruli Detection and Classification in Kidney Histological Sections," Electronics, vol. 9, no. 3, 503, 2020. doi:10.3390/electronics9030503.
5. V. Bevilacqua, A. Brunetti, G. D. Cascarano, F. Palmieri, A. Guerriero, and M. Moschetta, "A Deep Learning Approach for the Automatic Detection and Segmentation in Autosomal Dominant Polycystic Kidney Disease Based on Magnetic Resonance Images," Lecture Notes in Computer Science, vol. 10955 LNCS, pp. 643–649, 2018.
6. V. Bevilacqua, A. Brunetti, G. D. Cascarano, A. Guerriero, F. Pesce, M. Moschetta, and L. Gesualdo, "A comparison between two semantic deep learning frameworks for the autosomal dominant polycystic kidney disease segmentation based on magnetic resonance images," BMC Medical Informatics and Decision Making, vol. 19, no. 9, pp. 1–12, 2019.


7. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, and C. I. Sánchez, "A survey on deep learning in medical image analysis," Medical Image Analysis, vol. 42, pp. 60–88, 2017.
8. B. D. de Vos, J. M. Wolterink, P. A. de Jong, T. Leiner, M. A. Viergever, and I. Isgum, "ConvNet-Based Localization of Anatomical Structures in 3-D Medical Images," IEEE Transactions on Medical Imaging, vol. 36, no. 7, pp. 1470–1481, 2017.
9. X. Lu, Q. Xie, Y. Zha, and D. Wang, "Fully automatic liver segmentation combining multi-dimensional graph cut with shape information in 3D CT images," Scientific Reports, vol. 8, no. 1, pp. 1–9, 2018.
10. S. Rafiei, E. Nasr-Esfahani, K. Najarian, N. Karimi, S. Samavi, and S. M. Soroushmehr, "Liver Segmentation in CT Images Using Three Dimensional to Two Dimensional Fully Convolutional Network," Proceedings - International Conference on Image Processing (ICIP), pp. 2067–2071, 2018.
11. C. Couinaud, "Lobes et segments hépatiques: notes sur architecture anatomique et chirurgicale du foie," Presse Médicale, vol. 62, pp. 709–712, 1954.
12. T. S. Helling and B. Blondeau, "Anatomic segmental resection compared to major hepatectomy in the treatment of liver neoplasms," HPB (Oxford), vol. 7, no. 3, pp. 222–225, 2005. doi:10.1080/13651820510028828.
13. D. A. B. Oliveira, R. Q. Feitosa, and M. M. Correia, "Segmentation of liver, its vessels and lesions from CT images for surgical planning," BioMedical Engineering OnLine, vol. 10, no. 1, 30, 2011.
14. T. S. Yoo, M. J. Ackerman, W. E. Lorensen, W. Schroeder, V. Chalana, S. Aylward, D. Metaxas, and R. Whitaker, "Engineering and algorithm design for an image processing API: A technical report on ITK – The Insight Toolkit," in Studies in Health Technology and Informatics, 2002.
15. X. Yang et al., "Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation," Computer Methods and Programs in Biomedicine, vol. 158, pp. 41–52, 2018.
16. E. Goceri, Z. K. Shah, and M. N. Gurcan, "Vessel segmentation from abdominal magnetic resonance images: adaptive and reconstructive approach," International Journal for Numerical Methods in Biomedical Engineering, vol. 33, no. 4, e2811, 2017.
17. Y. Chi et al., "Segmentation of liver vasculature from contrast enhanced CT images using context-based voting," IEEE Transactions on Biomedical Engineering, vol. 58, no. 8, pp. 2144–2153, 2010.
18. Y. Zeng et al., "Liver vessel segmentation and identification based on oriented flux symmetry and graph cuts," Computer Methods and Programs in Biomedicine, vol. 150, pp. 31–39, 2017.
19. F. Milletari, N. Navab, and S. A. Ahmadi, "V-Net: Fully convolutional neural networks for volumetric medical image segmentation," Proceedings - 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571, 2016.
20. P. Bilic, P. F. Christ, E. Vorontsov, G. Chlebus, H. Chen, Q. Dou, C.-W. Fu, X. Han, P.-A. Heng, J. Hesser, S. Kadoury, T. Konopczynski, M. Le, C. Li, X. Li, J. Lipková, J. Lowengrub, H. Meine, J. H. Moltz, C. Pal, M. Piraud, X. Qi, J. Qi, M. Rempfler, K. Roth, A. Schenk, A. Sekuboyina, E. Vorontsov, P. Zhou, C. Hülsemeyer, M. Beetz, F. Ettlinger, F. Gruen, G. Kaissis, F. Lohöfer, R. Braren, J. Holch, F. Hofmann, W. Sommer, V. Heinemann, C. Jacobs, G. E. H. Mamani, B. van Ginneken, G. Chartrand, A. Tang, M. Drozdzal, A. Ben-Cohen, E. Klang, M. M. Amitai, E. Konen, H. Greenspan, J. Moreau, A. Hostettler, L. Soler, R. Vivanti, A. Szeskin, N. Lev-Cohain, J. Sosna, L. Joskowicz, and B. H. Menze, "The Liver Tumor Segmentation Benchmark (LiTS)," pp. 1–43, 2019.


21. 3D-IRCADb-01 Dataset. Available online: https://www.ircad.fr/research/3d-ircadb-01/. Accessed 13 May 2020.
22. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," Lecture Notes in Computer Science, vol. 9351, pp. 234–241, 2015.
23. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: Learning dense volumetric segmentation from sparse annotation," Lecture Notes in Computer Science, vol. 9901 LNCS, pp. 424–432, 2016.
24. Q. Jin, Z. Meng, C. Sun, L. Wei, and R. Su, "RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans," pp. 1–13, 2018.
25. K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," CoRR, vol. abs/1502.01852, 2015.
26. C. Shen, F. Milletari, H. R. Roth, H. Oda, M. Oda, Y. Hayashi, K. Misawa, and K. Mori, "Improving V-Nets for multi-class abdominal organ segmentation," Proc. SPIE 10949, Medical Imaging 2019: Image Processing, 109490B, 15 March 2019. https://doi.org/10.1117/12.2512790.
27. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2014.
28. S. S. M. Salehi, D. Erdogmus, and A. Gholipour, "Tversky loss function for image segmentation using 3D fully convolutional deep networks," in Lecture Notes in Computer Science, 2017.
29. A. Buslaev, V. I. Iglovikov, E. Khvedchenya, A. Parinov, M. Druzhinin, and A. A. Kalinin, "Albumentations: Fast and Flexible Image Augmentations," Information, vol. 11, 125, 2020.
30. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," 2017.