
The 5th International Symposium on Multidisciplinary Computational Anatomy

This article has been printed without reviewing and editing as received from the authors, and the

copyright of the article belongs to the authors.

A02-3

Function Integrated Diagnostic Assistance Based on

Multidisciplinary Computational Anatomy Models - Progress Overview FY2014-FY2018 -

Hiroshi Fujita#1, Takeshi Hara #2, Xiangrong Zhou #3, Kagaku Azuma†4, Daisuke Fukuoka‡5, Yuji Hatanaka##6,

Naoki Kamiya*7, Tetsuro Katafuchi††8, Tomoko Matsubara‡‡9, Masayuki Matsuo###10, Tosiaki Miyati**11,

Chisako Muramatsu#12, Atsushi Teramoto†††13, Yoshikazu Uchiyama‡‡‡14

# Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University,

1-1 Yanagido, Gifu, Japan 1,2,3,12 {fujita, hara, zxr, chisa}@fjt.info.gifu-u.ac.jp

† Department of Anatomy, School of Medicine, University of Occupational and Environmental Health,

1-1 Iseigaoka, Yahata-nishi-ku, Kitakyushu, Fukuoka, Japan 4 [email protected]

‡ Faculty of Education, Gifu University, 1-1 Yanagido, Gifu, Japan 5 [email protected]

## Department of Electronic Systems Engineering, The University of Shiga Prefecture,

2500 Hassaka-cho, Hikone-shi, Shiga, Japan 6 [email protected]

* School of Information Science and Technology, Aichi Prefectural University

1522-3 Ibaragabasama, Nagakute, Aichi, Japan 7 [email protected]

††Department of Radiological Technology, Faculty of Health Science, Gifu University of Medical Science

795-1 Ichihiraga Azanagamine, Seki-shi, Gifu, Japan 8 [email protected]

‡‡Department of Information Media, Nagoya Bunri University, 365 Maeda, Inazawa-cho, Inazawa, Japan 9 [email protected]

### Department of Radiology, Graduate School of Medicine, Gifu University, 1-1 Yanagido, Gifu, Japan 10 [email protected]

** Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University

5-11-80 Kodatsuno, Kanazawa, Ishikawa, Japan 11 [email protected]

††† Faculty of Radiological Technology, School of Health Sciences, Fujita Health University

1-98 Dengakugakubo, Kutsukake-cho, Toyoake, Aichi, Japan 13 [email protected]

‡‡‡Department of Medical Physics, Faculty of Life Science, Kumamoto University, 4-24-1 Kuhonji, Kumamoto, Japan 14 [email protected]

Abstract— This paper describes the purpose and a summary of our achievements in the research work that forms part of the research project “Multidisciplinary Computational Anatomy and Its Application to Highly Intelligent Diagnosis and Therapy,” funded by a Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan. The main purposes of our research in this project are to establish a scientific principle of multidisciplinary computational anatomy and to develop computer-aided diagnosis (CAD) systems based on such anatomical models for organ and tissue functions. From FY2014 to FY2018, we made research achievements in each of the subprojects. The promising results indicate the success of this project, which may lead to improved clinical practice in the future.

I. INTRODUCTION

Modern medical imaging devices have enabled the visualization of metabolic and functional information of the body, which has raised enormous interest in quantitative image analysis. Multimodality image diagnosis with a large volume of anatomical and functional information leads to an increasing demand for intelligent support systems that allow for advanced visualization, information integration, and disease highlighting. Such systems can assist accurate and efficient diagnosis, reporting, and treatment planning. In order to develop such advanced systems, our goals in this project are to establish scientific principles of multidisciplinary computational anatomy and to develop anatomical and functional models for the diagnosis of organ and tissue functions.

In the previous project, named computational anatomy, we developed computational anatomy models of various organs in CT images [1]. In the current project, we not only continue to improve model construction and application but also focus on the establishment of computational models for functional imaging. These models can be effectively combined to process multidisciplinary information.

As group A02-3, our roles in this project are to investigate image analysis methods based on the fusion of anatomical and functional information and to establish methodologies of computer-aided diagnosis (CAD) systems for organ and tissue functions. Figure 1 shows an overview of our research project.

In the following sections, our overall achievements in this project from FY2014 to FY2018 are reported for each of the major topics. Section II describes the fundamental techniques and the results of constructing anatomical models, which form the basis for the development of functional models, followed by the achievements in the analysis of functional images in Section III. In Section IV, the research achievements on other related CAD applications are discussed.

II. BASIC TECHNIQUES ON CONSTRUCTION OF ANATOMICAL

MODELS

A. Learning the Image Representation for Robust

Localization, Segmentation, Registration of the

Anatomical Structures on 3D CT and PET Images

1) Purpose: Localization, segmentation, and registration are major challenges in medical image analysis, and they are required as fundamental preprocessing steps in our project, which aims to merge multi-modality medical images and to fuse human anatomical and functional information as well as temporal change. This work intended to accomplish these fundamental preprocessing steps based on abstract representations learned from the medical images, in order to improve the robustness and efficiency that were lacking in our previous work [2].

2) Method: The proposed schemes used 2D and 3D deep convolutional neural networks (CNNs) to learn a hierarchical image feature space for accomplishing (1) segmentation of the anatomical structures (including multiple organs) on a CT image (generally in 3D) by using pixel-wise annotations, (2) localization of the bounding boxes that tightly surround the regions of interest in CT and PET images, and (3) image registration that aligns separately scanned CT and PET images into the same spatial coordinate system. All of these methods were based on deep learning (both supervised and unsupervised approaches), and the network structures of the CNNs used for these different tasks were similar and partially shared. The details of these methods can be found in [41,87,91,92].
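As a concrete illustration of the segmentation part of this scheme, the sketch below shows a minimal 3D fully convolutional network trained with voxel-wise cross-entropy against pixel-wise annotations. It is a toy written under assumptions (PyTorch, tiny channel widths, dummy tensors) rather than the network reported in [41,87,91,92]; the class count of 18 simply corresponds to the 17 major organ types mentioned in the results plus a background class.

```python
# Minimal sketch (not the authors' actual architecture) of a 3D fully
# convolutional network for multi-organ segmentation of CT sub-volumes.
import torch
import torch.nn as nn

class Simple3DFCN(nn.Module):
    def __init__(self, n_classes: int = 18):      # 17 organ types + background (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                       # halve the spatial resolution
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2),  # restore resolution
            nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, kernel_size=1),              # per-voxel class scores
        )

    def forward(self, x):                          # x: (batch, 1, D, H, W)
        return self.decoder(self.encoder(x))       # (batch, n_classes, D, H, W)

# One training step with voxel-wise cross-entropy on dummy data.
model = Simple3DFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
ct = torch.randn(1, 1, 32, 64, 64)                 # dummy CT sub-volume
labels = torch.randint(0, 18, (1, 32, 64, 64))     # dummy voxel-wise annotation
loss = nn.CrossEntropyLoss()(model(ct), labels)
loss.backward()
optimizer.step()
```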

Fig. 1. Overview of the research project

3) Results: A shared dataset consisting of 240 3D CT scans with human-annotated ground truth was used for training and testing the anatomical structure (17 types of major organs) segmentation and localization methods with a 4-fold cross-validation. The coincidence (Jaccard index, JI) between the segmentation or localization result and the human annotation was used as the evaluation criterion. In our experiments on organ localization and segmentation, we confirmed that 77.5% of the bounding boxes were localized accurately, and the mean JI was 78.8% (Fig. 2). These performances were comparable or better in accuracy and had an advantage in computational efficiency and robustness compared to conventional methods without a deep learning approach.
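The Jaccard index used as the evaluation criterion above can be computed per organ label as in the following sketch (my own illustration with dummy NumPy label volumes, not the evaluation code used in the experiments).

```python
# Jaccard index (JI) between a predicted and a ground-truth label volume,
# evaluated for one organ label.
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """JI = |A ∩ B| / |A ∪ B| for the voxels carrying one organ label."""
    a = (pred == label)
    b = (truth == label)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # label absent in both volumes
    return float(np.logical_and(a, b).sum() / union)

# Example with dummy 3D label maps for organ label 1:
pred = np.zeros((4, 4, 4), dtype=int); pred[1:3, 1:3, 1:3] = 1
truth = np.zeros((4, 4, 4), dtype=int); truth[1:4, 1:3, 1:3] = 1
print(jaccard_index(pred, truth, label=1))  # 8 / 12 ≈ 0.67
```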

Another dataset consisting of 170 whole-body PET/CT scans was used for training and testing the image registration method based on an unsupervised deep learning technique. The effectiveness of the registration process was evaluated by the normalized cross-correlation (NCC) and mutual information (MI) between the CT and PET images using a ten-fold cross-validation. The experimental results demonstrated that the mean values of NCC and MI improved from 0.40 to 0.58 and from 1.98 to 2.34, respectively, through the registration process. The usefulness of our scheme for PET/CT image registration was confirmed by these promising results (Fig. 3).
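For reference, the two metrics quoted above can be computed as in this sketch; the zero-mean NCC and the joint-histogram MI estimator are my own straightforward formulations and are not necessarily identical to the evaluation code used in the experiments.

```python
# Normalized cross-correlation (NCC) and mutual information (MI) between a CT
# and a PET volume of equal shape.
import numpy as np

def ncc(ct: np.ndarray, pet: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation."""
    a = ct - ct.mean()
    b = pet - pet.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12))

def mutual_information(ct: np.ndarray, pet: np.ndarray, bins: int = 64) -> float:
    """MI estimated from a joint intensity histogram (in nats)."""
    joint, _, _ = np.histogram2d(ct.ravel(), pet.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Dummy example: a PET volume correlated with the CT raises both scores.
ct = np.random.rand(32, 32, 32)
pet = 0.7 * ct + 0.3 * np.random.rand(32, 32, 32)
print(ncc(ct, pet), mutual_information(ct, pet))
```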

4) Conclusion: We proposed a novel approach to realize semantic segmentation, organ localization, and image registration based on abstract representations learned from 3D CT and PET images. Compared to our previous works, which operated on the image signal directly, the robustness and efficiency of these fundamental preprocessing steps were improved significantly. We confirmed that learning an image representation with high-level hierarchical features is the most critical step for the model construction required by multidisciplinary computational anatomy.

III. ANALYSIS OF FUNCTIONAL IMAGING

A. Automated Evaluation of Tumor Activities on FDG-PET/CT Images

1) Purpose: The standardized uptake value (SUV) is a semi-quantitative indicator in FDG-PET. We often use the maximum SUV (SUVmax) for tumor evaluation, but it is difficult to evaluate the size and spread of a tumor because the maximum value is obtained for only one voxel within the tumor region. Metabolic volume (MV) and total lesion glycolysis (TLG) have been used in recent years to evaluate the response to tumor treatments. MV is an indicator of the volume that reflects the size and spread of tissues with high metabolism. TLG is an indicator that expresses the degree and volume of glucose metabolism. In order to determine the MV and TLG, segmented regions of the target tumors that indicate high glucose metabolism are required. Currently, an automated detection method for pulmonary nodules in PET/CT images using a convolutional neural network (CNN) is in use. The Z-score is a statistical index used to represent deviation from a mean value and to quantify the abnormality of various measured features in the medical field by comparison with feature values in normal groups. Z-score images have been widely used in statistical image analysis methods to indicate regional abnormality pixel by pixel once a normal database has been constructed.

We have developed a new approach for tumor analysis in the torso region using FDG-PET/CT images. In statistical image analysis of the torso region, the organ segmentation technique used is important, because the function of the organs is normalized based on the segmented regions. We have adopted a novel approach using a deep learning technique for 3D CT images.

Fig. 2 An example of organ segmentation in a 3D CT case [3]. Left: segmentation result of multiple organs by using a 3D deep CNN; middle: ground truth of the target organs; right: segmentation result by our previous method using a 2D deep CNN with 3D voting [4]. (Row labels: all organ types; line or tube patterns.)

Fig. 3 An example of a PET/CT image registration result compared with a previous method (DIRNet).


The purpose of this study is to examine the differences in the Z-scores obtained with the two organ segmentation results, the bounding box and the surface approaches, for the normalization step, using typical tumors in the lung regions, because tumors with various accumulations and sizes were evaluated in the FDG-PET/CT images. Furthermore, we have developed a system to measure MV, TLG, and effective dose based on the automatically segmented results.

2) Methods: The overall scheme of our procedure consisted of the following five steps: (1) segmentation of the left and right lungs (CT), (2) anatomical standardization (CT and FDG), (3) creation of a normal model (FDG), (4) creation of Z-score images, and (5) automated segmentation of tumors.

The segmentation of the lung regions was performed based on our deep learning approach. Two hundred CT cases were used for the initial training of the deep neural networks; these included manual segmentation results of 10 organs: lung (left and right), heart, liver, stomach, spleen, kidney (left and right), pancreas, and bladder. All regions were used for the training, but only the results for the left and right lungs were employed in this study.

After the determination of the organ regions, landmarks (LMs) were set to perform the anatomical standardization. Two methods for setting LMs, the bounding box method and the surface method, were used to compare the anatomical standardization results. The bounding box method is based on our localization approach for multiple organs; in this method, we set LMs on the vertices and the side faces of the smallest rectangular parallelepiped surrounding the segmented organs. The surface method is based on the segmentation results of the organs; in this method, we set LMs on the surface of the organ. To transfer the locations of the LMs from the CT onto the PET images, the image resolutions were unified to those of the PET images. The CT and PET images were registered based on the location records from the PET/CT device during image acquisition.

The normal model was created by using nine normal cases, excluding the training cases used for the deep learning of organ segmentation. All LMs were aligned and deformed into a single reference lung shape using the thin-plate spline (TPS) method as a non-rigid image deformation technique. LMs on patient images were also aligned to the same reference lung shape by the TPS method.

All normal cases were anatomically standardized by the TPS method. After anatomical standardization, all pixels of the PET images are aligned to standard locations that represent the function at each position in the organ. The mean (M) and standard deviation (SD) express the confidence interval of normal function based on the accumulation, that is, the standardized uptake value (SUV) in this study. The M and SD from the normal cases indicate the ranges of normal glucose metabolic activity in FDG-PET images.
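The following sketch illustrates, with dummy NumPy volumes, how such a voxel-wise normal model (M and SD over the anatomically standardized normal cases) yields a Z-score image for a patient; it is my own illustration of the idea described above, not the authors' implementation.

```python
# Voxel-wise normal model (mean M and standard deviation SD of SUV) and the
# resulting Z-score image for one patient volume.
import numpy as np

def build_normal_model(normal_suv_volumes):
    """Stack anatomically standardized normal SUV volumes; return M and SD maps."""
    stack = np.stack(normal_suv_volumes, axis=0)   # (n_cases, D, H, W)
    return stack.mean(axis=0), stack.std(axis=0)

def z_score_image(patient_suv, mean_map, sd_map, eps=1e-6):
    """Voxel-wise deviation of a patient SUV volume from the normal model."""
    return (patient_suv - mean_map) / (sd_map + eps)

# Dummy example with nine normal cases, as in the text:
normals = [np.random.rand(16, 32, 32) for _ in range(9)]
m, sd = build_normal_model(normals)
patient = np.random.rand(16, 32, 32)
z = z_score_image(patient, m, sd)    # large |z| suggests abnormal uptake
```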

We evaluated 9 normal and 21 abnormal cases. Initially, the tumor regions were segmented by one of the authors using a graph-cut method. The initial regions were then verified independently by a radiologist: 25 obvious and 10 suspected tumors on the PET images were marked as correct regions and were employed as the gold standard (GS). Two features, the center-of-gravity distance (CGD) and the area covered with the GS (ACGS), were obtained from the GS and the automatically detected regions. A region was considered correctly detected when the CGD was three voxels or less and the ACGS was 0.2 or more.
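A small sketch of this matching criterion, under the assumptions that “CG” denotes the center of gravity and that the ACGS is the fraction of the GS region covered by the detected region (both are my reading of the text, not the authors' definitions):

```python
# Center-of-gravity distance (CGD) and area covered with the gold standard
# (ACGS) between a detected region and a GS region, given as binary masks.
import numpy as np

def cg_distance(detected: np.ndarray, gs: np.ndarray) -> float:
    """Euclidean distance (in voxels) between the centers of gravity."""
    cg_det = np.argwhere(detected).mean(axis=0)
    cg_gs = np.argwhere(gs).mean(axis=0)
    return float(np.linalg.norm(cg_det - cg_gs))

def area_cover_with_gs(detected: np.ndarray, gs: np.ndarray) -> float:
    """Fraction of the GS region overlapped by the detected region (assumed)."""
    return float(np.logical_and(detected, gs).sum() / gs.sum())

def is_correct_detection(detected, gs, max_cgd=3.0, min_acgs=0.2) -> bool:
    # Criterion from the text: CGD <= 3 voxels and ACGS >= 0.2.
    return (cg_distance(detected, gs) <= max_cgd
            and area_cover_with_gs(detected, gs) >= min_acgs)
```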

3) Results: Tables I and II show the results for the obvious cases (25 tumors) and for the obvious and suspected cases (35 tumors) using the surface and the bounding box methods. Additionally, the results of the SUV-only method without Z-score images are presented. Fig. 4 shows two examples of obvious cases with the GS regions shown in red. The MV, TLG, and effective dose were also obtained in each detected region based on the determined area.

4) Conclusion: LMs on the organ surface will be useful for

statistical analysis of torso images. Various values from the

analysis will be helpful indices for quantitative analysis of

FDG-PET images. Performance studies including human

observers will be required to prove the usefulness of the

automated methods.

B. Automated Detection of Lung Nodule in PET/CT Images

Using Deep Convolutional Neural Network

Fig. 4 Two examples of obvious cases in which automatic segmentation

was performed by the surface method and the gold standard verified by a radiologist. (a) Case 1 GS, (b) Case 1 automatic detection, (c) Case 2 GS,

and (d) Case 2 automatic detection

TABLE I
RESULTS FOR THE 25 OBVIOUS REGIONS

                                 Surface method   Bounding box method   Only SUV method
True positive fraction [%]       76 (19/25)       64 (16/25)            76 (19/25)
Number of false positives (FP)   1.17             1.06                  1.11
Average volume of FP [voxels]    70.10            101.90                189.85

TABLE II
RESULTS FOR THE 35 OBVIOUS AND SUSPECTED REGIONS

                                 Surface method   Bounding box method   Only SUV method
True positive fraction [%]       77 (27/35)       69 (24/35)            69 (24/35)
Number of false positives (FP)   1.10             1.05                  1.00
Average volume of FP [voxels]    77.61            94.0                  184.52


1) Purpose: The automated detection method for lung nodules in PET/CT images that we previously developed shows good sensitivity; however, reducing the false positives (FPs) remained a challenge. In this study, we propose an improved FP reduction method for the detection of lung nodules in PET/CT images using a deep convolutional neural network (DCNN) [25].

2) Methods: The outline of our overall scheme for the detection of lung nodules is shown in Fig. 5. First, initial nodule candidates were identified separately on the PET and CT images using an algorithm specific to each image type. Subsequently, the candidate regions obtained from the two images were combined. FPs contained in the initial candidates were eliminated by an ensemble method using a convolutional neural network and hand-crafted features.
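The sketch below illustrates the ensemble idea in a generic form: a CNN-derived nodule probability is averaged with the output of a classifier trained on hand-crafted candidate features. The feature names, the gradient-boosting classifier, and the equal weighting are assumptions for illustration only; the actual ensemble design is described in [25].

```python
# Generic ensemble false-positive reduction: average a CNN score with the
# probability from a classifier on hand-crafted features per nodule candidate.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_handcrafted = rng.random((200, 3))   # dummy hand-crafted features per candidate
y = rng.integers(0, 2, 200)            # 1 = true nodule, 0 = false positive
cnn_scores = rng.random(200)           # stands in for probabilities from a trained CNN

clf = GradientBoostingClassifier().fit(X_handcrafted, y)
handcrafted_scores = clf.predict_proba(X_handcrafted)[:, 1]

ensemble_scores = 0.5 * cnn_scores + 0.5 * handcrafted_scores
kept = ensemble_scores >= 0.5          # candidates surviving FP reduction
print(kept.sum(), "candidates kept of", len(kept))
```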

3) Results and Conclusion: We evaluated the detection performance using 104 PET/CT images collected in a cancer-screening program. As a result of the evaluation, the detection sensitivity was 90.1% with 4.9 FPs/case. Our ensemble FP-reduction method eliminated 93% of the FPs; the proposed method using the CNN technique eliminated approximately half of the FPs remaining in the previous study. These results indicate that our method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

C. A Model-Based Approach to Recognize Skeletal Muscle

Region for Muscle Function Analysis

1) Purpose: We have been working on model-based skeletal muscle recognition since the previous project. In the current project, we aimed to advance muscle recognition techniques using these models for functional analysis in whole-body CT images. We also started an international collaboration with the University of Bern, Switzerland, and worked on recognition of the spinal erector muscle using a deep learning technique.

2) Method: In this project, skeletal muscle recognition was realized using whole-body CT images as well as torso CT images. Figure 6 shows the research progress of muscle recognition for functional analysis from FY2014 to FY2018. Each color indicates a research area in the MCA project: the blue regions (Fig. 6 (a), (b), and (c)) lie on the space axis, the green regions (Fig. 6 (b) and (d)) on the functional axis, and the orange region (Fig. 6 (c)) on the pathological axis.

We generated muscle shape models for the sternocleidomastoid [36], supraspinatus, psoas major, and iliacus muscles, and recognized these muscles on torso CT images using the models (Fig. 6 (a)). In order to achieve recognition of complex muscles, automatic recognition of the anatomical features on the bones deeply related to those muscles is necessary. For this reason, skeletal muscle attachment sites corresponding to the anatomical names were recognized on the shoulder blades based on the precise recognition of the bones as well as position and distribution features. In addition, muscle bundle modelling was performed to position the model properly and to create a precise model for functional analysis (Fig. 6 (b)).

For muscle function and muscle disease analysis in whole-body CT images, whole-body muscle recognition using shape models and a 3D texture analysis of the muscles were proposed (Fig. 6 (c)). We modelled the body cavity region from the whole-body CT image, acquired the surface muscle regions, and recognized the deep muscles by applying the site-specific models. Based on the automatic recognition results, we segmented the skeletal muscles and obtained not only the muscle volume but also muscle image characteristics in whole-body CT images.

As a challenging study, we achieved automatic recognition of the spinal erector muscle [53], which has a large and complex structure (Fig. 6 (c)). With the conventional model-based method, the recognition rate in the 3D volume was problematic, although the recognition rate was high in the cross section at the 12th thoracic vertebra. Therefore, in this study, recognition was performed using a deep learning method. In addition, in order to improve the recognition results of the skeletal muscles obtained by deep learning for functional analysis, modelling of the attachment sites on the bones and of the muscle bundles by site was realized (Fig. 6 (b) and (d)).

3) Results: The effectiveness of the proposed method was evaluated using non-contrast torso CT and whole-body CT images. In the automatic recognition of the muscle with the shape model, concordance rates of 60.3% and 65.4% were obtained in the sternocleidomastoid region using 20 torso CT cases and 10 whole-body CT images, respectively.

Fig. 5. Proposed overall scheme for detecting lung nodules in PET/CT images [25].


In the automatic recognition of precise bone features, we used torso CT images of 26 cases, and an average concordance rate of 80.7% was obtained for the infraspinous fossa. As a result, feature recognition on the shoulder blades became possible in seven areas, in addition to the feature recognition based on six anatomical names that we had already achieved.

Automatic recognition and segmentation in whole-body CT images was successful in 36 of 39 cases using the shape models for the inner muscle region, including the psoas major and iliacus muscles. We also succeeded in analyzing the four limbs of ALS cases using the recognition results.

For the spinal erector muscle, the mean recognition accuracies of the muscle itself and of its attachment area were 89.9% and 65.5%, respectively, in terms of the Dice coefficient using 11 cases. The running of the muscle bundles of the spinal erector muscle was drawn, and the regions of the muscle group constituting it were acquired; as a result, an average Dice coefficient of 65.2% was obtained. This makes it possible to obtain muscle running direction as structural information and to analyze the structure within the muscle region recognized by deep learning.
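The Dice coefficient reported above can be computed for binary muscle masks as in this short sketch (my own illustration with dummy arrays).

```python
# Dice coefficient between a recognized muscle region and a reference region.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Example with dummy binary muscle masks:
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 2:6, 2:6] = True
print(dice_coefficient(a, b))  # 2*48 / (64 + 64) = 0.75
```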

4) Conclusion: With the proposed method, we have achieved the recognition and analysis of skeletal muscles in torso and whole-body CT images. We developed a method to implement skeletal muscle modelling from the micro to the macro scale by modelling the distribution shape of muscle fibers and muscle fiber bundles. At the same time, we achieved skeletal muscle recognition for skeletal muscle function analysis by comparing and integrating the deep learning method with the model-based approach.

D. Development of an Estimation System for Knee Extension Strength Using Image Features in Ultrasound Images

1) Purpose: The term “locomotive syndrome” has been proposed to describe the state of requiring nursing care due to musculoskeletal disorders, as well as its high-risk condition. The goal of this study is to evaluate the relationships among aging, muscle strength, and sonographic findings. This study aims to develop a system for estimating the knee extension strength using ultrasound images of the rectus femoris and quadriceps femoris muscles obtained with non-invasive ultrasound diagnostic equipment [3,38,70].

2) Methods: First, we extract the muscle area from the ultrasound images of the rectus femoris muscle. We detect edges in the original image using the Canny edge detector and manually select the edges close to the boundary of the muscle area. The boundary lines of the upper and lower ends of the muscle area are determined by approximating the selected edges with curves, and the area between these boundaries is defined as the muscle area. We then obtain image features using the gray level co-occurrence matrix (GLCM) and the histograms of echogenicity from the muscle area. From the GLCM, we obtain image features such as the angular second moment and entropy. From the echogenicity histograms, the average value, standard deviation, skewness, and kurtosis are calculated. In addition, the vertical length at the center of the muscle area is taken as the thickness of the muscle. We combine these features with physical features, such as the patient's height, and build a regression model of the knee extension strength from the training data. Finally, we substitute the features calculated from the test data into the regression model and estimate the knee extension strength [3,70].

To assess muscle quality, texture analysis using gray level

co-occurrence matrix was performed. The texture features

included the mean, skewness, kurtosis, inverse difference

moment, sum of entropy, and angular second moment. The

knee extension force in the sitting position and thickness of

the quadriceps femoris muscle were also measured [38].
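A minimal sketch of the feature-and-regression pipeline described above, assuming scikit-image and scikit-learn and using dummy data; the exact feature set, GLCM parameters, and regression model of the published system [3,70] may differ.

```python
# GLCM texture features plus echogenicity histogram statistics from a muscle
# region, fed to a linear regression model of knee extension strength.
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LinearRegression

def muscle_features(roi: np.ndarray) -> list:
    """roi: 8-bit grayscale muscle region from a B-mode ultrasound image."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM")[0, 0]             # angular second moment
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # GLCM entropy
    hist_stats = [roi.mean(), roi.std(), skew(roi.ravel()), kurtosis(roi.ravel())]
    return [asm, entropy] + hist_stats

# Dummy training data: image features plus height (a physical feature) -> strength [kg]
rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
heights = rng.normal(165, 10, 20)
X = np.array([muscle_features(r) + [h] for r, h in zip(rois, heights)])
y = rng.normal(35, 8, 20)                            # dummy knee extension strengths
model = LinearRegression().fit(X, y)
print(model.predict(X[:1]))                          # estimated strength for one case
```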

Fig. 6. Schematic diagram of the muscle recognition for functional analysis. (a) Model-based recognition of muscle, (b) precise analysis of scapula for

muscle recognition, (c) segmental recognition for the whole-body muscle analysis, and (d) spinal erector muscle recognition using deep learning with

muscle bundle model.


3) Results: We employed 168 B-mode ultrasound images of the rectus femoris muscles scanned on both legs of 84 subjects (19-86 years old) for the evaluation. As a result of this experiment, the correlation coefficient between the measured and estimated values was 0.82, and the root mean squared error (RMSE) was 7.69 kg [70].

To assess muscle quality, one hundred forty-five healthy volunteers were classified into six groups on the basis of sex and age. The quadriceps femoris thickness, skewness, kurtosis, inverse difference moment, angular second moment, and muscle strength were significantly smaller in the elderly participants than in the younger and middle-aged groups (P < .05). In contrast, the mean and sum of entropy were significantly smaller in the younger group than in the middle-aged and elderly groups [38].

4) Conclusion: We developed a system for estimating the knee extension strength using ultrasound images. The system has potential for quantitatively assessing the muscular morphological changes due to aging and could be a valuable tool for the early detection of musculoskeletal disorders.

IV. OTHER CAD APPLICATIONS

A. Automated Malignancy Analysis in CT Images Using

Generative Adversarial Network and Deep Convolutional

Neural Network

1) Purpose: In the detailed diagnosis of lung nodules using CT images, it is often a hard task for radiologists to discriminate between benign and malignant nodules. In this study, we investigate the automated classification of pulmonary nodules in CT images using a DCNN [57]. We use a generative adversarial network (GAN) to generate additional images when only small amounts of data are available, which is a common problem in medical research, and evaluate whether the classification accuracy is improved by generating a large number of new pulmonary nodule images with the GAN.

2) Methods: Figure 7 shows an outline of the proposed method. The volume of interest centered on the pulmonary nodule is extracted from the CT images, and two-dimensional images are created in the axial section. The DCNN is trained using nodule images generated by a Wasserstein GAN and is then fine-tuned using the actual nodule images so that it can distinguish between benign and malignant nodules. The DCNN model is based on AlexNet; it consists of five convolutional layers, three pooling layers, and three fully connected layers.
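The sketch below shows one possible layout of such a network with five convolutional, three pooling, and three fully connected layers, assuming PyTorch, a 64 x 64 axial nodule patch, and channel widths chosen for illustration; the actual AlexNet-based model and the Wasserstein GAN pre-training are described in [57].

```python
# Benign/malignant nodule classifier with 5 conv, 3 pooling, and 3 FC layers.
import torch
import torch.nn as nn

class NoduleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 96, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(96, 96, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(96, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2),                  # benign vs. malignant scores
        )

    def forward(self, x):                      # x: (batch, 1, 64, 64) axial nodule patch
        return self.classifier(self.features(x))

# Pre-train on GAN-generated nodule patches, then fine-tune on real patches
# with the same cross-entropy objective (dummy tensors shown here).
model = NoduleClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
patches = torch.randn(8, 1, 64, 64)            # stands in for generated or real patches
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward(); optimizer.step()
```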

3) Results: CT images of 60 cases with pathological diagnoses confirmed by biopsy were analyzed. They consisted of 27 cases of benign nodules and 33 cases of malignant nodules. As a result of the evaluation, correct identification rates of 67% for benign nodules and 94% for malignant nodules were obtained. Thus, almost all malignant cases were classified correctly, and two-thirds of the benign cases were also classified correctly. This performance was higher than that of the existing method without GAN-generated images.

4) Conclusion: The evaluation results indicate that the proposed method may reduce, by over 60%, the number of biopsies in patients with benign nodules that are difficult to differentiate in CT images. Furthermore, the classification accuracy was clearly improved by the use of GAN technology, even for medical datasets that contain relatively few images.

B. Automated Classification of Lung Cancer Types in

Cytological Images Using Deep Convolutional Neural

Network

1) Purpose: In the detailed diagnosis of lung lesions, pathological examination is performed based on the morphology of tissues or cells. However, pathologists have to examine a huge number of images per patient, making it difficult to maintain diagnostic accuracy while avoiding reading errors. In order to assist image diagnosis in the thoracic region, we have developed an automated classification scheme for lung cancer types using cytological images [39].

2) Methods: Microscopic images of specimens stained using the Papanicolaou method were first collected and then fed to the input layer of the DCNN. The DCNN for classification consisted of three convolutional layers, three pooling layers, and two fully connected layers. The probabilities of three types of cancer (adenocarcinoma, small cell carcinoma, and squamous cell carcinoma) were estimated from the output layer [39]. Training of the DCNN was conducted using our original database.
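Since the results below rely on augmented data generated from a small set of original images, a typical augmentation step is sketched here; the rotations and flips shown are common choices and not necessarily the authors' exact augmentation set.

```python
# Expand a small cytological image set by rotation and mirroring before training.
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return rotated (0/90/180/270 degrees) and mirrored variants of one image."""
    variants = [np.rot90(image, k) for k in range(4)]
    variants += [np.fliplr(v) for v in variants]   # add mirrored copies
    return variants

# Example: one dummy RGB microscopy patch becomes eight training samples.
patch = np.random.rand(256, 256, 3)
print(len(augment(patch)))  # 8
```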

3) Results: For the classification of the three cancer types, the DCNN was trained and evaluated using augmented data generated from 298 original images. The classification accuracies for adenocarcinoma, squamous cell carcinoma, and small cell carcinoma were 89.0%, 60.0%, and 70.3%, respectively; the overall accuracy was 71.1%.

4) Conclusion: The classification accuracy of 71.1% is comparable to that of a cytotechnologist or pathologist. It is noteworthy that the DCNN is able to learn the cell morphology and the arrangement of cancer cells solely from images, without prior knowledge or experience of biology and pathology. These results indicate that the DCNN is useful for the classification of lung cancer types in cytodiagnosis.

Fig. 7. Outline of the proposed method to distinguish between benign and

malignant nodules [57].


C. Computer-Aided Diagnosis with Radiogenomics

1) Purpose: As we enter the post-genome era, the technologies for genetic analysis have advanced dramatically, along with an astonishing decline in the cost of analysis. Therefore, it is likely that genetic testing will be performed in daily clinical practice in the near future. In this context, a novel research field, ‘radiogenomics,’ has been developed to offer a new viewpoint for the use of genotypes in radiology and medical research. To incorporate the new concept of radiogenomics, we developed two CAD schemes: (1) for the early detection of Alzheimer's disease (AD) and (2) for the subtype classification of breast cancer.

2) Methods: For the detection of AD, we first investigated brain morphological changes related to the individual's genotype [47]. Our data consisted of magnetic resonance (MR) images of patients with mild cognitive impairment (MCI) and AD, as well as their apolipoprotein E (APOE) genotypes. Statistical parametric mapping (SPM) 12 was used for the three-dimensional anatomical standardization of the brain MR images. A total of 30 normal images were used to create a standard normal brain image. Z-score maps were generated to identify the differences between an abnormal image and the standard normal brain.

A quantification method based on the difference from a standard normal brain was used in the previous study. However, this method cannot evaluate cerebral atrophy accurately because it does not take into account the cerebral atrophy due to normal aging. Hence, we developed a model that takes the cerebral atrophy due to normal aging into account [55]. We obtained 60 normal MR images, consisting of 20 images each for the age groups of the 60s, 70s, and 80s. For the anatomical standardization of the images, we used the SPM software and employed a linear grayscale transformation. Principal component (PC) analysis of the voxel values of the 60 normal MR images was performed to calculate eigenvectors and PC scores. All cases were projected onto the eigenspace formed by the 2nd and 5th PC scores.
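A minimal sketch (dummy data, scikit-learn) of the projection described above: PCA is fitted on the standardized normal volumes, and each case is mapped onto the plane spanned by the 2nd and 5th PC scores.

```python
# PCA on voxel vectors of normal MR images and projection onto the PC2/PC5 plane.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal_voxels = rng.random((60, 5000))        # 60 normal cases x flattened voxel values
pca = PCA(n_components=5).fit(normal_voxels)  # eigenvectors from the normal cases only

def project_to_pc2_pc5(volumes: np.ndarray) -> np.ndarray:
    """Return (n_cases, 2) coordinates: the 2nd and 5th PC scores (0-based columns 1, 4)."""
    scores = pca.transform(volumes)
    return scores[:, [1, 4]]

patients = rng.random((10, 5000))             # dummy MCI/AD cases
print(project_to_pc2_pc5(patients).shape)     # (10, 2)
```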

For the differential diagnosis of breast cancer, a taxonomy based on the morphological characteristics of cells in histopathological images has been used. In recent years, however, subtype classification of breast cancer by analyzing the gene expression profile of the cancer cells has become a standard procedure. Subtype classification is more useful than the conventional method because its characteristics are directly connected with the treatment method. However, genetic testing by biopsy is invasive and places a large burden on the patient. Therefore, we developed a CAD scheme for the differentiation of triple-negative breast cancer (TNBC) by estimating the genetic properties of the cancer based on radiogenomics [56]. We collected 81 MR images, which included 30 TNBC and 51 other cases. From the MR slice images, we selected the slice containing the largest area of the cancer and manually marked the cancer region. We subsequently calculated 294 radiomic features in the cancer region and reduced the dimensionality of the radiomic features by using the Lasso. Finally, linear discriminant analysis with the dimensionally compressed 10 image features was used to distinguish TNBC from the others.
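The sketch below illustrates this two-stage pipeline with dummy data: Lasso-based reduction of the 294 radiomic features followed by linear discriminant analysis. The alpha value and the rule of keeping the ten strongest coefficients are illustrative assumptions, not the published settings of [56].

```python
# Lasso-based feature reduction followed by LDA for TNBC vs. other cancers.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((81, 294))                      # 81 cases x 294 radiomic features (dummy)
y = np.array([1] * 30 + [0] * 51)              # 1 = TNBC, 0 = others

lasso = Lasso(alpha=0.01).fit(X, y)            # sparse linear fit to the labels
selected = np.argsort(np.abs(lasso.coef_))[-10:]  # keep the 10 strongest features

lda = LinearDiscriminantAnalysis().fit(X[:, selected], y)
scores = lda.decision_function(X[:, selected])  # discriminant scores for ROC analysis
```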

3) Results: The relationship between the genotypes and cerebral atrophy is shown in Fig. 8. It revealed that, depending on the genotype, (1) the anatomical locations of the cerebral atrophy differ, and (2) the transition in the disease state from MCI to AD differs. Since a person's genotype can be determined by genetic screening, we can use this information to identify patients susceptible to AD. Those patients may undergo periodic image examinations for the early detection of the onset of the disease. If it is possible to identify changes in cerebral atrophy related to the patient's genotype based on image examinations, the onset may be detected earlier than at present. In the quantitative analysis, the experimental results showed that the AUC for the classification between normal cases in their 60s and AD cases was 0.911.

In the differential diagnosis of breast cancer, the AUC was 0.70, which indicates the possibility of identifying TNBC by using radiomic features. Since physicians can consider the continuity between differential diagnosis and treatment, our method would be useful for supporting the personalized medicine of breast cancer.

D. Retinal Nerve and Cardiovascular Diseases CAD

1) Purpose: Funduscopy is useful for estimating functional changes of the systemic circulatory organs. Eye diseases such as hypertensive retinopathy and diabetic retinopathy are diagnosed by funduscopy for screening. Therefore, we developed CAD systems for hypertensive retinopathy and diabetic retinopathy.

Fig. 8. Relationship between genotypes and cerebral atrophies in patients

with MCI and AD. The upper row shows the patients with APOE ε3, and

the lower row shows the patients with APOE ε4. The left shows the patients with MCI, and the right shows the patients with AD.


2) Methods: Measurement of the retinal arteriolar-to-venular diameter ratio (AVR) is needed for the diagnosis of hypertensive retinopathy. Blood vessels were first extracted using grayscale high-order local autocorrelation features [75]. The blood vessel candidates were then classified into arteries and veins by using a three-layer neural network with 12 features. Finally, the AVRs were measured. We trained and tested the proposed method using the public retinal image database INSPIRE-AVR.
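As an illustration of this step, the sketch below trains a three-layer neural network (input, one hidden, and output layer) on 12 features per vessel segment and computes an AVR from the predicted arteries and veins; the data, feature definitions, and diameter measurement are dummies, not the published method [5, 75].

```python
# Artery/vein classification with a three-layer neural network, then AVR.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 12))                 # 12 features per vessel segment (dummy)
y = rng.integers(0, 2, 300)               # 1 = artery, 0 = vein (dummy labels)
diameters = rng.uniform(40, 120, 300)     # dummy vessel diameters

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
pred = net.predict(X)

avr = diameters[pred == 1].mean() / diameters[pred == 0].mean()
print(f"AVR = {avr:.2f}")
```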

Early detection of diabetic retinopathy (DR) is important because it is the second leading cause of visual loss in Japan. Microaneurysms (MAs) are early findings of diabetic retinopathy. An MA appears as a small dark dot in the retinal image, so it is difficult for physicians to detect MAs visually. We proposed three MA detection methods: a method based on density gradient vector concentration (VC) [19], a method combining three MA enhancement filters (TF) [48], and a DCNN-based method [4, 52].

3) Results: For the vessel analysis, our method classified 78% of the arteries and 79% of the veins correctly [5]. An example result is shown in Fig. 9. In the AVR measurement, the mean unsigned error, standard deviation, and mean AVR were 0.12, 0.08, and 0.76, respectively [5].

For MA detection, the average sensitivities at selected false positive rates of the VC, TF, and DCNN methods were 0.399, 0.420, and 0.449, respectively, in the evaluation with DIARETDB1, a public retinal image database. The DCNN improves the detection performance and reduces the workload of parameter setting in MA detection.

V. INTERNATIONAL COLLABORATION

One of the anticipated efforts in this project is to establish the scientific principle of multidisciplinary computational anatomy and to accelerate the advancement of this research field through international collaboration. We have been collaborating with renowned researchers on the following topics: analysis of PET/CT images with Prof. Huiyan Jiang at Northeastern University, China; quantitative and functional analyses of bone and soft tissues in MR images with Dr. Hiroshi Yoshioka at the University of California, Irvine; segmentation of mammary glands on CT images with Prof. Shuo Li at the University of Western Ontario, Canada; segmentation of anatomical structures in 3D CT images with Prof. Song Wang at the University of South Carolina; analysis of the liver on MR images with Prof. Xuejun Zhang at Guangxi University, China; and muscle segmentation using machine-learning methods [53, 90, 100] with Assoc. Prof. Guoyan Zheng at the University of Bern, Switzerland.

VI. CONCLUSION

Our research achievements in developing methods for constructing multidisciplinary models and applying those models to various multimodality image analyses have been described. The results obtained are promising and convince us of the success of our research project.

ACKNOWLEDGMENT

The authors thank all the collaborators for their contributions to this project. This research was supported in part by a Grant-in-Aid for Scientific Research on Innovative Areas (No. 26108005), MEXT, Japanese Government.

REFERENCES

[1] H. Fujita, T. Hara, X. Zhou, C. Muramatsu, N. Kamiya, M. Zhang, D.

Fukuoka, Y. Hatanaka, T. Matsubara, A. Teramoto, Y. Uchiyama, H. Chen, and H. Hoshi, “Model construction of computational anatomy:

Progress overview FY2009-2013,” in Proc. Fifth International Symposium on the Project “Computational Anatomy”, 2014, pp.25-35.

[2] X. Zhou, T. Ito, Xin. Zhou, H. Chen, T. Hara, R. Yokoyama, M.

Kanematsu, H. Hoshi, and H. Fujita, “A universal approach for automatic organ segmentations on 3D CT images based on organ

localization and 3D GrabCut,” in Proc. of SPIE Medical Imaging 2014:

vol. 9035, pp. 90352V-1 - 9035V-8. [3] H. Murakami, T. Watanabe, D. Fukuoka, N. Terabayashi, T. Hara, and

H. Fujita, “Development of estimation system of knee extension

strength using image features in ultrasound image of quadricepts,” in Proc. 34th JAMIT Annual Meeting, 2015, PP23, pp. 1-4 (in Japanese).

[4] M. Miyashita, Y. Hatanaka, K. Ogohara, C. Muramatsu, W. Sunayama,

and H. Fujita, “Automatic detection of microaneurysms in retinal image by using convolutional neural network,” in Proc.JAMIT 2018,

2018, pp. 65–69 (in Japanese)

[5] K. Samo, Y. Hatanaka, K. Ogohara, W. Sunayama, C. Muramatsu, and H. Fujita, “Automated classification of arteries and veins in retinal

images for measurement of arteriolar-to-venular diameter ratio,” in

Proc. 2017 IFMIA, 2017, pp.188–189.

LIST OF ACCEPTED AND PUBLISHED PAPERS

SINCE 1ST JULY 2014

[Peer-reviewed Papers] [6] X. Zhou, R. Xu, T. Hara, Y. Hirano, R. Yokoyama, M. Kanematsu, H.

Hoshi, S. Kido, and H. Fujita, “Development and evaluation of

statistical shape modeling for principal inner organs on torso CT images,” Radiological Physics and Technology, vol. 7, pp. 277-283,

July 2014.

[7] A. Tanigawa, Y. Uchiyama, C. Muramatsu, T. Hara, J. Shiraishi, and H. Fujita, “Improvement of automatic detection method of lacunar infarcts

on MR images: Reduction of false positives by using AdaBoost

template matching,” Medical Imaging and Information Sciences, vol.31, no.2, pp.41-46, Aug 2014 (in Japanese).

[8] K. Horiba, C. Muramatsu, T. Hayashi, T. Fukui, T. Hara, A. Katsumata,

and H. Fujita, “Automated measurement of mandibular cortical width on dental panoramic radiographs for early detection of osteoporosis:

extraction of linear structures,” Medical Imaging Technology, vol.32,

no.5, pp342-346, Nov 2014 (in Japanese) [9] T. Inoue, Y. Hatanaka, S. Okumura, K. Ogohara, C. Muramatsu, and H.

Fujita, “Automatic microaneurysm detection in retinal fundus images

by density gradient vector concentration,” J. the Institute of Image

Fig. 9. Example of arteries (red) and veins (blue) classified automatically.


Electronics Engineers of Japan, vol. 44, pp. 58–66, Jan. 2015 (in

Japanese).

[10] Y. Uchiyama, A. Abe, C. Muramatsu, T. Hara, J. Shiraishi, and H. Fujita, “Eigenspace template matching for detection of lacunar infarcts

on MR images,” J Digit Imaging, vol. 28, pp.116-122, Feb 2015.

[11] M. Zhang, C. Muramatsu, X. Zhou, T. Hara, and H. Fujita, “Blind image quality assessment using the joint statistics of generalized local

binary pattern,” IEEE Signal Processing Letters, vol. 22, pp. 207-210,

Feb 2015. [12] S. Murakawa, A. Tanigawa, Y. Uchiyama, C. Muramatsu, T. Hara, and

H. Fujita, “Kernel eigenspace template matching for detection of

lacunar infarcts on MR images” Nihon Hoshasen Gijutsu Gakkai Zasshi, vol. 71, pp. 85-91, Feb 2015. (in Japanese).

[13] T. Hara, T. Kobayashi, S. Ito, et al., “Quantitative analysis of torso

FDG-PET scans by using anatomical standardization of normal cases from thorough physical examinations,” PLOS ONE, vol.10, no.5,

e0125713, May 2015.

[14] S. Goshima, M. Kanematsu, H. Kondo, H. Watanabe, Y. Noda, H. Fujita, and K.T. Bae, “Computer-aided assessment of hepatic contour

abnormalities as an imaging biomarker for the prediction of

hepatocellular carcinoma development in patients with chronic hepatitis C,” European Journal of Radiology, vol.84, no.5, pp. 811-815,

May 2015.

[15] S. A. Z. B. S.Aluwee, H. Kato, X. Zhou, T. Hara, H. Fujita, M. Kanematsu, T. Furui, R. Yano, N. Miyai, and K. Morishige, “Magnetic

resonance imaging of uterine fibroids: A preliminary investigation into

the usefulness of 3D-rendered images for surgical planning,” SpringerPlus, vo. 4, article 384 (8 pages), July 2015.

[16] A. Teramoto, M. Kobayashi, T. Otsuka, M. Yamazaki, H. Anno, and H.

Fujita, "Preliminary study on the automated measurement of mammary gland ratio in the mammogram: Automated extraction of mammary

structure using Gabor filter," Medical Imaging and Information

Sciences, vol.32, no.3, pp.63-67, Sep. 2015. (in Japanese). [17] Y. Hattori, C. Muramatsu, R. Takahashi, T. Hara, T. Hayashi, X. Zhou,

A. Katsumata, and H. Fujita, “Automated detection of carotid artery

calcifications in dental panoramic radiographs: verification of detection performance using manual ROIs,” Medical Imaging and Information

Sciences, vol.32, no.3, pp. 68-70, Sep. 2015 (in Japanese). [18] M. Yamazaki, T. Otsuka, A. Teramoto, and H. Fujita, "Automatic

detection method for architectural distortion using iris filter together

with Gabor filter in the breast X-ray image," Medical Imaging Technology, vol.33, no.5, pp.197-202, Nov. 2015 (in Japanese).

[19] T. Inoue, Y. Hatanaka, S. Okumura, K. Ogohara, C. Muramatsu, and H.

Fujita, “Automatic microaneurysm detection in retinal fundus images by density gradient vector concentration,” Journal of the Institute of

Image Electronics Engineers of Japan, vol.44, no.1, Nov 2015 (in

Japanese). [20] X.J. Zhang, B. Zhou, K. Ma, X.H. Qu, X.M. Tan, X. Gao, W. Yan, L.L.

Long, and H. Fujita, “Selection of optimal shape features for staging

hepatic fibrosis on CT image,” J Med Imaging Health Inform, vol.5, no.8, pp. 1926-1930, Dec. 2015.

[21] X. Zhang, X. Gao, B.J. Liu, K. Ma, W. Yan, L. Liling, H. Yuhong, and

H. Fujita, “Effective staging of fibrosis by the selected texture features of liver: Which one is better, CT or MR imaging?,” J Comput Med

Imaging Graph, vol. 46, pp.227-236, Dec. 2015.

[22] S. Miyajo, A. Teramoto, O. Yamamuro, K. Omi, M. Nishio, and H. Fujita, "Preliminary study on the automated analysis of tumor in the

dynamic contrast enhanced breast MR images: estimation of invasive

regions using signal value of normal mammary gland and delayed-phase images," Medical Imaging and Information Sciences, vol.32,

no.4, pp.1xi-1xiv, Dec. 2015. (in Japanese).

[23] Y. Miki, T. Hara, C. Muramatsu, T. Hayashi, A. Katsumata, X. Zhou, and H. Fujita, “Advanced automatic method for detecting maxillary

sinusitis on dental panoramic radiographs,” Medical Imaging and

Information Sciences, vol. 32, no. 4, pp. 77-80, Dec. 2015 (in Japanese). [24] T. Otsuka, A. Teramoto, Y. Asada, S. Suzuki, H. Fujita, S. Kamiya,

and H. Anno, "Estimation of the average glandular dose using the

mammary gland image analysis in mammography,” Japanese Journal of Radiological Technology, vol. 72, no. 5, pp. 389-395, May 2016 (in

Japanese).

[25] A. Teramoto, H. Fujita, O. Yamamuro, and T. Tamaki, "Automated detection of pulmonary nodules in PET/CT images: ensemble false-

positive reduction using a convolutional neural network technique,"

Medical Physics, vol.43, pp.2821-2827, May 2016.

[26] C. Muramatsu, T. Hara, T. Endo, and H. Fujita, “Breast mass classification on mammograms using radial local ternary patterns,”

Computers in Biology and Medicine, vol. 72, pp. 43-53, May 2016.

[27] C. Muramatsu, K. Horiba, T. Hayashi, T. Fukui, T. Hara, A. Katsumata, and H. Fujita, “Quantitative assessment of mandibular cortical erosion

on dental panoramic radiographs for screening osteoporosis,” Int J

CARS, vol. 11, pp. 2021-2032, June 2016. [28] T. Nozaki, Y. Kaneko, H.J. Yu, K. Kaneshiro, R. Schwarzkopf, T.

Hara, and H. Yoshioka, “T1rho mapping of entire femoral cartilage

using depth-and angle-dependent analysis,” European Radiology, vol. 26, no. 6, pp. 1952-1962, June 2016.

[29] T. Nozaki, A. Tasaki, S. Horiuchi, J. Ochi, J. Starkeu, T. Hara, Y.

Saida, and H. Yoshioka, “Predicting retear after repair of full-thickness rotator cuff tear: two-point Dixon MR imaging quantification of fatty

muscle degeneration- initial experience with 1-year follow-up,”

Radiology, vol. 280, no. 2, pp. 500-509, Aug 2016. [30] X. Zhou, T. Ito, R. Takayama, S. Wang, T. Hara, and H. Fujita, “First

trial and evaluation of anatomical structure segmentations in 3D CT

images based only on deep learning,” Medical Image and Information Sciences, vol. 33, pp. 69-74, Sep 2016.

[31] Y. Miki, C. Muramatsu, T. Hayashi, X. Zhou, T. Hara, A. Katsumata,

and H. Fujita, “Classification of teeth in cone-bean CT using deep convolutional neural network,” Computers in Biology and Medicine,

vol. 80, pp. 24-29, Jan 2017.

[32] K. Takeda, T. Hara, X. Zhou, T. Katafuchi, M. Kato, S. Ito, K. Ishihara, S. Kumita, and H. Fujita, “Normal model construction for statistical

image analysis of torso FDG-PET images based on anatomical

standardization by CT images from FDG-PET/CT devices,” Int J CARS, vol.12, pp.777–787, May 2017.

[33] N. Minoura, A. Teramoto, K. Takahashi, O. Yamamuro, M. Nishio, T.

Tamaki, and H. Fujita, "Preliminary study on an automated extraction of breast region and automated detection of breast tumors and axillary

metastasis using PET/CT images," Medical Imaging Technology,

vol.35, pp.158-166, May 2017. [34] Y. Ariji, A. Katsumata, R. Kubo, A. Taguchi, H. Fujita, and E. Ariji,

“Factors affecting observer agreement in morphological evaluation of mandibular cortical bone on panoramic radiographs,” Oral Radiology,

vol. 32, pp. 117-123, May 2017.

[35] S. Horiuchi, T. Nozaki, A. Tasaki, A. Yamakawa, Y. Kaneko, T. Hara, and H. Yoshioka, "Reliability of MR quantification of rotator cuff

muscle fatty degeneration using a 2-point dixon technique in

comparison with the goutallier classification: Validation study by multiple readers," Acad Radiol, vol.24, pp.1343-1351, May 2017.

[36] N. Kamiya, K. Ieda, X. Zhou, H. Chen, M. Yamada, H. Kato, C.

Muramatsu, T. Hara, T. Miyoshi, T. Inuzuka, M. Matsuo, and H. Fujita, “Automated segmentation of sternocleidomastoid muscle using atlas-

based method in X-ray CT images: Preliminary study”, Medical

Imaging and Information Sciences, vol.34, pp.87-91, June 2017. (in Japanese)

[37] K. Mori, Y. Uchiyama, T. Hara, T. Iwama, and H. Fujita, “Detection of

unruptured aneurysms in brain MRA images using ring type gradient concentration filter and texture analysis,” Medical Imaging and

Information Sciences, vol. 34, pp. 75-79, June 2017. (in Japanese).

[38] T. Watanabe, H. Murakami, D. Fukuoka, N. Terabayashi, S. Shin, T. Yabumoto, H. Ito, H. Fujita, T. Matsuoka, and M. Seishima,

“Quantitative sonographic assessment of the quadriceps femoris

muscle in healthy Japanese adults,” J Ultrasound Med, vol.36, pp.1383-1395, July 2017.

[39] A. Teramoto, T. Tsukamoto, Y. Kiriyama, and H. Fujita, "Automated

classification of lung cancer types from cytological images using deep convolutional neural networks," BioMed Research International, vol.

2017, Article ID 4067832, pp.1-6, Aug 2017.

[40] S.A.Z.S. Aluwee, X. Zhou. H. Kato, H. Makino, C. Muramatsu, T. Hara, M. Matsuo, and H. Fujita, “Evaluation of pre-surgical models for

uterine surgery by use of three-dimensional printing and mold casting,”

Radiological Physics and Technology, vol. 10, pp. 279-285, Sep 2017. [RPT Doi Awards]

[41] X. Zhou, R. Takayama, S. Wang, T. Hara, and H. Fujita, “Deep

learning of the sectional appearances of 3D CT images for anatomical


structure segmentation based on an FCN voting method,” Med Phys,

vol.44, pp.5221-5233, Oct 2017.

[42] Y. Hatanaka, H. Tachiki, S. Okumura, K. Ogohara, C. Muramatsu, and H. Fujita,“Automated independent extraction of major arteries and

veins on retinal images,” Medical Imaging and Information Sciences,

vol. 34, pp. 136–140, Oct. 2017. (in Japanese) [43] K. Hirose, T. Hara, Y. Tanaka, C. Muramatsu, T. Katafuchi, M.

Matsusako, and H. Fujita, “Automated estimation of time activity

curve for pulmonary artery from axillary vein on dynamic scintigrapy by using three-layered artificial neural network,” IEICE D, vol.J101-D,

pp.40-43, Jan 2018. (in Japanese)

[44] X. Zhang, X. Tan, X. Gao, D. Wu, X. Zhou, and H. Fujita, “Non-rigid registration of multi-phase liver CT data using fully automated

landmark detection and TPS deformation,” Cluster Computing, pp. 1-

15, Mar 2018. [45] M. Tsujimoto, A. Teramoto, S. Ota, H. Toyama, and H. Fujita,

"Automated segmentation and detection of increased uptake regions in

bone scintigraphy using SPECT/CT images,” Annals of Nuclear Medicine, vol. 32, pp. 182-190, Apr 2018

[46] X. Zhang, X. Zhou, T. Hara, and H. Fujita, “Non-Invasive Assessment

of Hepatic Fibrosis by Elastic Measurement of Liver Using Magnetic Resonance Tagging Images,” Applied Sciences, vol. 8, Article# 437,

Mar 2018.

[47] C. Kai, Y. Uchiyama, J. Shiraishi, H. Fujita, and K. Doi, “Computer-aided diagnosis with radiogenomics: Analysis of the relationship

between genotype and morphological changes of the brain magnetic

resonance images,” Radiological Physics and Technology, vol. 11, pp. 265-273, May 2018.

[48] Y. Hatanaka, T. Inoue, K. Ogohara, S. Okumura, C. Muramatsu, and H. Fujita, "Automated microaneurysm detection in retinal fundus images based on the combination of three detectors," J. Medical Imaging and Health Informatics, vol. 8, pp. 1103-1112, June 2018.
[49] A. Yamada, A. Teramoto, K. Kudo, T. Otsuka, H. Anno, and H. Fujita, "Basic study on the automated detection method of skull fracture in head CT images using surface selective black-hat transform," Journal of Medical Imaging and Health Informatics, vol. 8, pp. 1069-1076, June 2018.
[50] H.R. Roth, H. Oda, X. Zhou, N. Shimizu, Y. Yang, Y. Hayashi, M. Oda, M. Fujiwara, K. Misawa, and K. Mori, "An application of cascaded 3D fully convolutional networks for medical image segmentation," Computerized Medical Imaging and Graphics, vol. 66, pp. 90-99, June 2018.
[51] S. Horiuchi, T. Nozaki, A. Tasaki, S. Ohde, G.A. Deshpande, J. Starkey, T. Hara, N. Kitamura, and H. Yoshioka, "Comparison between isotropic 3D fat suppressed T2-weighted FSE (3D-T2FS) and conventional 2D fat suppressed proton-weighted FSE (2D-PDFS) shoulder MRI at 3T in patients with shoulder pain," Journal of Computer Assisted Tomography, vol. 42, pp. 559-565, Jul/Aug 2018.
[52] M. Miyashita, Y. Hatanaka, K. Ogohara, C. Muramatsu, W. Sunayama, and H. Fujita, "Automatic detection of microaneurysms in retinal image by using convolutional neural network," Medical Imaging Technology, vol. 36, pp. 189-195, Oct 2018. (in Japanese)
[53] N. Kamiya, J. Li, M. Kume, H. Fujita, D. Shen, and G. Zheng, "Fully automatic segmentation of paraspinal muscles from 3D torso CT images via multi-scale iterative random forest classifications," International Journal of Computer Assisted Radiology and Surgery, vol. 13, pp. 1697-1706, Nov 2018.
[54] M. Murata, Y. Ariji, Y. Ohashi, T. Kawai, M. Fukuda, T. Funakoshi, Y. Kise, M. Nozawa, A. Katsumata, H. Fujita, and E. Ariji, "Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography," Oral Radiology, Dec 2018 (online first).
[55] C. Kai, Y. Uchiyama, J. Shiraishi, and H. Fujita, "Quantitation of cerebral atrophy due to normal aging: principal component analysis with MR images in patients' age groups," Nihon Hoshasen Gijutsu Gakkai Zasshi, vol. 74, pp. 1389-1395, Dec 2018. (in Japanese)
[56] C. Kai, M. Ishimaru, Y. Uchiyama, J. Shiraishi, N. Shinohara, and H. Fujita, "Selection of radiomic features for the classification of triple-negative breast cancer based on radiogenomics," Nihon Hoshasen Gijutsu Gakkai Zasshi, vol. 75, pp. 24-31, Jan 2019. (in Japanese)
[57] Y. Onishi, A. Teramoto, M. Tsujimoto, T. Tsukamoto, K. Saito, H. Toyama, K. Imaizumi, and H. Fujita, "Automated pulmonary nodule classification in computed tomography images using a deep convolutional neural network trained by generative adversarial networks," BioMed Research International, vol. 2019, Article ID 6051939, pp. 1-9, Jan 2019.
[58] A. Teramoto, M. Tsujimoto, T. Inoue, T. Tsukamoto, K. Imaizumi, H. Toyama, K. Saito, and H. Fujita, "Automated classification of pulmonary nodules through a retrospective analysis of conventional CT and two-phase PET images in patients undergoing biopsy," Asia Oceania Journal of Nuclear Medicine and Biology, vol. 7, pp. 29-37, Jan 2019.
[Peer-reviewed International Conference Proceedings]
[59] Y. Hatanaka, Y. Nagahata, C. Muramatsu, S. Okumura, K. Ogohara, A. Sawada, K. Ishida, T. Yamamoto, and H. Fujita, "Improved automated optic cup segmentation based on detection of blood vessel bends in retinal fundus images," in Proc. IEEE EMBC 2014, 2014, pp. 126-129.
[60] X. Zhou, S. Morita, Xin. Zhou, H. Chen, T. Hara, R. Yokoyama, M. Kanematsu, H. Hoshi, and H. Fujita, "Automatic anatomy partitioning of the torso region on CT images by using multiple organ localizations with a group-wise calibration technique," in Proc. SPIE Medical Imaging 2015: Computer-Aided Diagnosis, vol. 9414, pp. 94143K.
[61] R. Kawai, T. Hara, T. Katafuchi, T. Ishihara, X. Zhou, C. Muramatsu, Y. Abe, and H. Fujita, "Semi-automated measurements of heart-to-mediastinum ratio on 123I-MIBG myocardial scintigrams by using image fusion method with chest X-ray images," in Proc. SPIE Medical Imaging 2015, vol. 9414, pp. 941433-1-941433-6.
[62] A. Teramoto, H. Adachi, M. Tsujimoto, H. Fujita, K. Takahashi, O. Yamamuro, T. Tamaki, M. Nishio, and T. Kobayashi, "Automated detection of lung tumors in PET/CT images using active contour filter," in Proc. SPIE Medical Imaging 2015, vol. 9414, pp. 94142V-1-94142V-6.
[63] H. Adachi, A. Teramoto, S. Miyajo, O. Yamamuro, K. Ohmi, M. Nishio, and H. Fujita, "Preliminary study on the automated detection of breast tumors using the characteristic features from unenhanced MR images," in Proc. SPIE Medical Imaging 2015: Computer-Aided Diagnosis, vol. 9414, pp. 94142A-1-94142A-6.
[64] K. Horiba, C. Muramatsu, T. Hayashi, T. Fukui, T. Hara, A. Katsumata, and H. Fujita, "Automated classification of mandibular cortical bone on dental panoramic radiographs for early detection of osteoporosis," in Proc. SPIE Medical Imaging 2015: Computer-Aided Diagnosis, vol. 9414, pp. 94142A-1-94142A-6.
[65] T. Matsubara, A. Ito, A. Tsunomori, T. Hara, C. Muramatsu, T. Endo, and H. Fujita, "An automated method for detecting architectural distortion on mammograms using direction analysis of linear structure," in Proc. IEEE Eng Med Bio Soc, 2015, pp. 2661-2664.
[66] Y. Hatanaka, K. Samo, M. Tajima, K. Ogohara, C. Muramatsu, S. Okumura, and H. Fujita, "Automated blood vessel extraction using local features on retinal images," in Proc. SPIE Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785, paper 97852F.
[67] N. Kamiya, X. Zhou, K. Azuma, C. Muramatsu, T. Hara, and H. Fujita, "Automated recognition of the iliac muscle and modeling of muscle fiber direction in torso CT images," in Proc. SPIE Medical Imaging: Computer-Aided Diagnosis, 2016, vol. 10134, pp. 1013442-1-1013442-6.
[68] X. Zhou, T. Kano, Y. Cai, S. Li, Xin. Zhou, T. Hara, R. Yokoyama, and H. Fujita, "Automatic quantification of mammary glands on non-contrast X-ray CT by using a novel segmentation approach," in Proc. SPIE Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785, pp. 97851Z.
[69] C. Muramatsu, K. Ishida, A. Sawada, Y. Hatanaka, T. Yamamoto, and H. Fujita, "Automated detection of retinal nerve fiber layer defects on fundus images: false positive reduction based on vessel likelihood," in Proc. SPIE Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785, pp. 97852L.
[70] H. Murakami, T. Watanabe, D. Fukuoka, N. Terabayashi, T. Hara, C. Muramatsu, and H. Fujita, "Development of estimation system of knee extension strength using image features in ultrasound images of rectus femoris," in Proc. SPIE Medical Imaging 2016: Ultrasonic Imaging and Tomography, vol. 9790, pp. 979012.
[71] Y. Yamaguchi, Y. Takeda, T. Hara, X. Zhou, Y. Tanaka, M. Matsusako, K. Hosoya, T. Nihei, T. Katafuchi, and H. Fujita, "Three-modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging," in Proc. SPIE Medical Imaging 2016: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 9788, pp. 97881V.
[72] M. Yamazaki, A. Teramoto, and H. Fujita, "A hybrid detection scheme of architectural distortion in mammograms using iris filter and Gabor filter," in Proc. IWDM, LNCS, vol. 9699, pp. 174-182, June 2016.
[73] A. Teramoto, S. Miyajo, H. Fujita, O. Yamamuro, K. Omi, and M. Nishio, "Automated analysis of breast tumour in the breast DCE-MR images using level set method and selective enhancement of invasive regions," in Proc. IWDM, LNCS, vol. 9699, pp. 439-445, June 2016.
[74] C. Muramatsu, T. Takahashi, T. Morita, T. Endo, and H. Fujita, "Similar image retrieval of breast masses on ultrasonography using subjective data and multidimensional scaling," in Proc. IWDM, LNCS, vol. 9699, pp. 43-50, June 2016.
[75] Y. Hatanaka, H. Tachiki, K. Ogohara, C. Muramatsu, S. Okumura, and H. Fujita, "Artery and vein diameter ratio measurement based on improvement of arteries and veins segmentation on retinal images," in Proc. IEEE EMBC 2016, 2016, pp. 1336-1339.
[76] A. Yamada, A. Teramoto, T. Otsuka, K. Kudo, H. Anno, and H. Fujita, "A basic study on the development of automated skull fracture detection in CT images using black-hat transform," in Proc. IEEE Eng Med Bio Soc, 2016, pp. 6437-6440.
[77] C. Muramatsu, R. Takahashi, T. Hayashi, T. Hara, T. Fukui, A. Katsumata, and H. Fujita, "Quantitative evaluation of alveolar bone resorption on dental panoramic radiographs by standardized dentition image transformation and probability estimation," in Proc. IEEE Eng Med Bio Soc, 2016, pp. 1038-1041.
[78] X. Zhou, T. Ito, R. Takayama, S. Wang, T. Hara, and H. Fujita, "Three-dimensional CT image segmentation by combining 2D fully convolutional network with 3D majority voting," in Proc. 2nd Workshop on Deep Learning in Medical Image Analysis, MICCAI 2016, pp. 107-116.
[79] X. Zhou, T. Kano, S. Li, Xin. Zhou, T. Hara, R. Yokoyama, and H. Fujita, "Automated assessment of breast tissue density in non-contrast 3D CT images without image segmentation based on a deep CNN," in Proc. SPIE Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, pp. 101342Q.
[80] N. Kamiya, K. Ieda, X. Zhou, M. Yamada, H. Kato, C. Muramatsu, T. Hara, T. Miyoshi, T. Inuzuka, M. Matsuo, and H. Fujita, "Automated analysis of whole skeletal muscle for early differential diagnosis of ALS in whole-body CT images: Preliminary study," in Proc. SPIE Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, pp. 1013442-1-1013442-6.
[81] X. Zhou, R. Takayama, S. Wang, T. Hara, and H. Fujita, "Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach," in Proc. SPIE Medical Imaging 2017: Image Processing, vol. 10133, pp. 1013324.
[82] Y. Hiramatsu, C. Muramatsu, H. Kobayashi, T. Hara, and H. Fujita, "Automated detection of masses on whole breast volume ultrasound scanner: false positive reduction using deep convolutional neural network," in Proc. SPIE Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, pp. 101342S.
[83] Y. Miki, C. Muramatsu, T. Hayashi, X. Zhou, T. Hara, A. Katsumata, and H. Fujita, "Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification," in Proc. SPIE Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, pp. 101343E.
[84] R. Watanabe, C. Muramatsu, K. Ishida, A. Sawada, Y. Hatanaka, T. Yamamoto, and H. Fujita, "Automated detection of nerve fiber layer defects on retinal fundus images using fully convolutional neural network for early diagnosis of glaucoma," in Proc. SPIE Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, pp. 1013438.
[85] Y. Hatanaka, K. Samo, K. Ogohara, W. Sunayama, C. Muramatsu, S. Okumura, and H. Fujita, "Automated blood vessel extraction based on high-order local autocorrelation features on retinal images," in VipIMAGE 2017, Lecture Notes in Computational Vision and Biomechanics, vol. 27. Berlin, Germany: Springer, 2017, pp. 803-808.
[86] Y. Hatanaka, M. Tajima, R. Kawasaki, K. Saito, K. Ogohara, C. Muramatsu, W. Sunayama, and H. Fujita, "Retinal biometrics based on iterative closest point algorithm," in Proc. IEEE EMBC 2017, 2017, pp. 373-376.
[87] X. Zhou, K. Yamada, T. Kojima, R. Takayama, S. Wang, X.X. Zhou, T. Hara, and H. Fujita, "Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images," in Proc. SPIE Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575, pp. 105752C.
[88] C. Muramatsu, S. Higuchi, T. Morita, M. Oiwa, and H. Fujita, "Similarity estimation for reference image retrieval in mammograms using convolutional neural network," in Proc. SPIE Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575, pp. 105752U.
[89] C. Muramatsu, S. Higuchi, T. Morita, M. Oiwa, T. Kawasaki, and H. Fujita, "Retrieval of reference images of breast masses on mammograms by similarity space modeling," in Proc. SPIE 10718, 14th International Workshop on Breast Imaging (IWBI 2018), pp. 1071809.
[90] N. Kamiya, M. Kume, G. Zheng, X. Zhou, H. Kato, H. Chen, C. Muramatsu, T. Hara, T. Miyoshi, M. Matsuo, and H. Fujita, "Automated Recognition of Erector Spinae Muscles and Their Skeletal Attachment Region via Deep Learning in Torso CT Images," in Proc. 6th MICCAI Workshop on Computational Methods and Clinical Applications in Musculoskeletal Imaging, 2018, pp. 1-10.
[91] H. Yu, X. Zhou, H. Jiang, H. Kang, Z. Wang, T. Hara, and H. Fujita, "Learning 3D non-rigid deformation based on an unsupervised deep learning for PET/CT image registration," in Proc. SPIE Medical Imaging 2019: Computer-Aided Diagnosis, in press.
[92] X. Zhou, T. Kojima, S. Wang, X.X. Zhou, T. Hara, T. Nozaki, M. Matsusako, and H. Fujita, "Automatic anatomy partitioning of the torso region on CT images by using a deep convolutional network with a majority voting," in Proc. SPIE Medical Imaging 2019: Computer-Aided Diagnosis, in press.
[Book Chapters]
[93] Y. Uchiyama and H. Fujita, "Detection of Cerebrovascular Diseases," in Computer-Aided Detection and Diagnosis in Medical Imaging, Q. Li and R. M. Nishikawa, Eds. Boca Raton: CRC Press, pp. 241-259, Mar 2015.
[94] C. Muramatsu and H. Fujita, "Detection of Eye Diseases," in Computer-Aided Detection and Diagnosis in Medical Imaging, Q. Li and R. M. Nishikawa, Eds. Boca Raton: CRC Press, pp. 279-296, Mar 2015.
[95] H. Fujita, T. Hara, X. Zhou, D. Fukuoka, N. Kamiya, and C. Muramatsu, in Computational Anatomy Based on Whole Body Imaging, H. Kobatake and Y. Masutani, Eds. Tokyo, Japan: Springer Japan KK, 2017.
[96] A. Teramoto and H. Fujita, "Automated lung nodule detection using positron emission tomography/computed tomography," in Intelligent Decision Support Systems for Diagnosis of Medical Images, K. Suzuki and Y. Chen, Eds. Berlin, Germany: Springer, pp. 87-110, 2018.
[97] C. Muramatsu, T. Hayashi, T. Hara, A. Katsumata, and H. Fujita, "CAD on dental imaging," in Medical Image Analysis and Informatics: Computer-Aided Diagnosis and Therapy, P. Mazzoncini de Azevedo-Marques, A. Mencattini, M. Salmeri, and R. M. Rangayyan, Eds. CRC Press/Taylor & Francis, pp. 103-127, 2018.
[98] Y. Hatanaka and H. Fujita, "Computer-aided diagnosis with retinal fundus images," in Medical Image Analysis and Informatics: Computer-Aided Diagnosis and Therapy, P. Mazzoncini de Azevedo-Marques, A. Mencattini, M. Salmeri, and R. M. Rangayyan, Eds. CRC Press/Taylor & Francis, pp. 29-55, 2018.
[99] C. Muramatsu and H. Fujita, "Computer Analysis of Mammograms," in Handbook of X-ray Imaging: Physics and Technology, P. Russo, Ed. Boca Raton: CRC Press, pp. 1231-1247, 2018.
[100] N. Kamiya, "Muscle Segmentation for Orthopedic Interventions," in Intelligent Orthopaedics, Advances in Experimental Medicine and Biology, vol. 1093, pp. 81-91. Singapore: Springer, Nov 2018.
[101] N. Kamiya, M. Kume, G. Zheng, X. Zhou, H. Kato, H. Chen, C. Muramatsu, T. Hara, T. Miyoshi, M. Matsuo, and H. Fujita, "Automated Recognition of Erector Spinae Muscles and Their Skeletal Attachment Region via Deep Learning in Torso CT Images," in Computational Methods and Clinical Applications in Musculoskeletal Imaging (MSKI 2018), Lecture Notes in Computer Science, vol. 11404, pp. 1-10. Cham: Springer, Jan 2019.
[Award-Winning International Conference Proceedings]
[102] A. Teramoto, T. Tsukamoto, Y. Kiriyama, R. Kato, and H. Fujita, "A preliminary study on the automated classification of lung cancers in microscopic images using deep convolutional neural networks," in Proc. International Forum on Medical Imaging in Asia, paper P2-14, Jan 2017. [Best Poster Award]
[103] N. Kamiya, A. Oshima, E. Asano, X. Zhou, M. Yamada, H. Kato, K. Azuma, C. Muramatsu, T. Hara, T. Miyoshi, T. Inuzuka, M. Matsuo, and H. Fujita, "Initial study on the classification of amyotrophic diseases using texture analysis and deep learning in whole-body CT images," in Proc. International Forum on Medical Imaging 2019, P1-64, in press. [Best Paper Award]
[Review Article for Award-Winning Presentation]
[104] T. Kobota, T. Hara, D. Fukuoka, K. Miki, T. Ishihara, H. Tago, Y. Abe, T. Katafuchi, and H. Fujita, "Current status of software development for measuring heart-to-mediastinum ratio in 123I-MIBG myocardial scintigrams using chest x-ray images with the aid of image fusion methods," Japanese Society of Nuclear Cardiology Journal, vol. 19, pp. 27-32, April 2017. (in Japanese) [Society Award in Technical Category at the 26th Annual Meeting of the Japanese Society of Nuclear Cardiology, 2016]
[Review Articles]
[105] A. Katsumata and H. Fujita, "Progress of computer-aided detection/diagnosis (CAD) in dentistry," Japanese Dental Science Review, vol. 50, pp. 63-68, Aug 2014.
[106] N. Niki, H. Fujita, and K. Mori, "Computer-aided diagnosis and computer-aided surgery systems based on multidisciplinary computational anatomy," Med Imag Tech, vol. 34, no. 3, pp. 144-150, May 2016.
[107] C. Muramatsu and H. Fujita, "Current status and future trend on image analysis techniques for computer-aided diagnosis of breast ultrasound," Ultrasound Technology, pp. 4-9, May-June 2017. (in Japanese)
[108] H. Fujita, "CAD system revolution brought by AI," Innervision, vol. 7, pp. 10-13, 2017. (in Japanese)
[109] H. Fujita, "Current status and future trend on application of artificial intelligence to medical image diagnosis," JCR News, pp. 3-5, 2017. (in Japanese)
[110] C. Muramatsu, "Application of deep learning techniques to dental personal identification," JCR News, pp. 10-11, 2017. (in Japanese)
[111] A. Teramoto, "Automated detection scheme of lung nodule using deep learning technique," Medical Imaging and Information Sciences, vol. 34, pp. 54-56, 2017. (in Japanese)
[112] C. Muramatsu, "Tooth classification on dental cone-beam CT using deep convolutional neural network," Medical Imaging and Information Sciences, vol. 34, pp. 57-59, 2017. (in Japanese)
[113] X. Zhou and H. Fujita, "Simultaneous segmentation of multiple anatomical structures on CT images using deep learning technique," Medical Imaging and Information Sciences, vol. 34, pp. 63-65, 2017. (in Japanese)
[114] H. Fujita, X. Zhou, A. Teramoto, and C. Muramatsu, "Application of deep learning to computer-aided diagnosis," RadFan, vol. 15, pp. 30-39, 2017. (in Japanese)
[115] X. Zhou and H. Fujita, "Simultaneous recognition and segmentation of multiple anatomical structures on CT images by using deep learning approach," Med Imag Tech, vol. 35, Sep 2017. (in Japanese)
[116] T. Watanabe, N. Terabayashi, D. Fukuoka, H. Fujita, T. Matsuoka, and M. Seishima, "Development of the sonographic evaluation technique in the musculoskeletal field," The Official Journal of Japanese Society of Laboratory Medicine, vol. 65, pp. 1263-1268, Dec 2017.
[117] Y. Hatanaka, "Automated microaneurysms detection on retinal images by using density gradient vector concentration," Image Laboratory, vol. 29, pp. 17-23, Jan 2018. (in Japanese)