Research Article
Automatic Analysis of Lateral Cephalograms Based on Multiresolution Decision Tree Regression Voting
Shumeng Wang,1 Huiqi Li,1 Jiazhi Li,1,3 Yanjun Zhang,1 and Bingshuang Zou2
1 School of Information and Electronics, Beijing Institute of Technology, Beijing, China
2 Department of Oral Health Sciences, Faculty of Dentistry, University of British Columbia, Vancouver, Canada
3 Viterbi School of Engineering, University of Southern California, Los Angeles, USA
Correspondence should be addressed to Huiqi Li (huiqili@bit.edu.cn) and Bingshuang Zou (doctorzou@gmail.com)
Received 22 June 2018; Accepted 22 October 2018; Published 19 November 2018
Guest Editor: Ziyad S. Haidar
Copyright © 2018 Shumeng Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cephalometric analysis is a standard tool for assessment and prediction of craniofacial growth, orthodontic diagnosis, and oral-maxillofacial treatment planning. The aim of this study is to develop a fully automatic system of cephalometric analysis, including cephalometric landmark detection and cephalometric measurement in lateral cephalograms, for malformation classification and assessment of dental growth and soft tissue profile. First, a novel method of multiscale decision tree regression voting using SIFT-based patch features is proposed for automatic landmark detection in lateral cephalometric radiographs. Then, clinical measurements are calculated by using the detected landmark positions. Finally, two databases are tested in this study: one is the benchmark database of 300 lateral cephalograms from the 2015 ISBI Challenge, and the other is our own database of 165 lateral cephalograms. Experimental results show that the performance of our proposed method is satisfactory for landmark detection and measurement analysis in lateral cephalograms.
1. Introduction
Cephalometric analysis is a scientific research approach for assessment and prediction of craniofacial growth, orthodontic diagnosis, and oral-maxillofacial treatment planning for patients with malocclusion in clinical practice [1]. We focus on 2D lateral cephalometric analysis, which is performed on cephalometric radiographs in lateral view. Cephalometric analysis has undergone three stages of development: the manual stage, the computer-aided stage, and the computer-automated stage. Cephalometric analysis based on radiographs was introduced for the first time by Broadbent [2] and Hofrath [3] in 1931. In the first stage, obtaining the cephalometric analysis consists of five steps: (a) placing a sheet of acetate over the cephalometric radiograph, (b) manual tracing of craniofacial anatomical structures, (c) manual marking of cephalometric landmarks, (d) measuring angular and linear parameters using the landmark locations, and (e) analysis/classification of craniomaxillofacial hard tissue and soft tissue [4]. This process is tedious, time consuming, and subjective. In the second stage, the first step of the traditional analysis is skipped since the cephalometric radiograph is digitized; furthermore, the next two steps can be operated by computer, and the measurements can be automatically calculated by software. However, this computer-aided analysis is still time consuming, and the results are not reproducible due to large inter- and intraobserver variability in landmark annotation. In the third stage, the most crucial step, i.e., identifying landmarks, can be automated by image processing algorithms [5]. The automatic analysis has high reliability and repeatability, and it can save a lot of time for orthodontists. However, fully automatic cephalometric analysis is challenging due to overlapping structures and inhomogeneous intensity in cephalometric radiographs, as well as anatomical differences among subjects.
Cephalometric landmarks include corners, line intersections, center points, and other salient features of anatomical structures. They always have stable geometrical locations in anatomical structures in cephalograms. In this
Hindawi Journal of Healthcare Engineering, Volume 2018, Article ID 1797502, 15 pages. https://doi.org/10.1155/2018/1797502
study, 45 landmarks are used in lateral cephalograms (refer to Table 1). The cephalometric landmarks play a major role in calculating the cephalometric planes, which are the lines between two landmarks in 2D cephalometric radiographs, as shown in Table 2. Furthermore, different measurements are calculated using different cephalometric landmarks and planes, and they are used to analyze different anatomical structures [4]. Measurements can be angles or distances, which are used to analyze skeletal and dental anatomical structures as well as the soft tissue profile. To conduct a clinical diagnosis, many analytic approaches have been developed, including Downs analysis, Wylie analysis, Riedel analysis, Steiner analysis, Tweed analysis, Sassouni analysis, Björk analysis, and so on [6].
Methods of automatic cephalometric landmark detection are mainly separated into three categories: (1) bottom-up methods, (2) deformation model-based methods, and (3) classifier/regressor-based methods. The first category is bottom-up methods, while the other two categories are learning-based methods.
For bottom-up methods, two techniques were usually employed: edge detection and template matching. Edge-based methods extract the anatomical contours in cephalograms, and then the relevant landmarks are identified on the contours using prior knowledge. Levy-Mandel et al. [7] first proposed the edge tracking method for identifying craniofacial landmarks. First, input images were smoothed by a median filter, and then edges were extracted by the Mero-Vassy operator. Second, contours were obtained by the edge tracking technique based on the constraints of locations, endings, and breaking segments, as well as edge linking conditions. Finally, the landmarks were detected on the contours according to their definitions in cephalometry. This method was only tested on two high-quality cephalometric radiographs; furthermore, 13 landmarks not lying on contours were not detected. A two-stage landmark detection method was proposed by Grau et al. [8], which first extracted the major high-contrast line features, such as the contours of the jaws and nose, by a line-detection module and then detected landmarks by pattern recognition based on mathematical morphology. Edge-based methods could only detect the landmarks on contours and were not robust to noise, low contrast, and occlusion. Template-matching-based methods find the most likely region with the least distance to the template of a specific landmark, and the center of that region is taken as the estimated position of the landmark. Therefore, not only the landmarks on contours but also the landmarks not on contours can be detected. Ashish et al. [9] proposed a template-matching method using a coarse-to-fine strategy for cephalometric landmark detection. Mondal et al. [10] proposed an improved method based on Canny edge extraction to detect the craniofacial anatomical structures. Kaur and Singh [11] proposed automatic cephalometric landmark detection using Zernike moments and template matching. Template-matching-based methods had difficulty in choosing a representative template and were not robust to anatomical variability among individuals. All these methods strongly depended on the quality of the images.
Subsequently, deformation model-based methods were proposed to address these limitations by using shape constraints. Forsyth and Davis [12] reviewed and evaluated the approaches to automatic cephalometric analysis systems reported from 1986 to 1996, and they highlighted a cephalometric analysis system presented by Davis and Taylor, which introduced an appearance model into landmark detection. Improved algorithms of active shape/appearance models were then used to refine cephalometric landmark detection in combination with template matching, and higher accuracy was obtained [13-17]. Here, shape or appearance models were learned from training data
Table 1: Description of cephalometric landmarks.

Symbol   Description
N        Nasion
Ns       Soft tissue nasion
Prn      Pronasale
Cm       Columella
ANS      Anterior nasal spine
A        Subspinale
Sn       Subnasale
Ss (A′)  Upper pit of lips
UL       Upper lip
Stoms    Stomion superius
Stomi    Stomion inferius
LL       Lower lip
Si (B′)  Lower pit of lips
Pos      Soft tissue pogonion
Gs       Soft tissue gnathion
Mes      Soft tissue menton
Go       Gonion
Me       Menton
Gn       Gnathion
Pg       Pogonion
LIA      Mandibular joint midpoint
B        Supramental
Id       Infradentale
LI       Lower incisal incision
UI       Upper incisal incision
FA       Maxillary incisor's facial axis
SPr      Superior prosthion
UIA      Root point of incisor
Or       Orbitale
Se       Internode of sphenoid wing with anterior cranial fossa
L6A      Root apex of mandibular first molar
UL5      Midpoint of tip of the second molar
L6E      Buccal apex of mandibular first molar
UL6      Midpoint of tip of first molar
U6E      Buccal apex of maxillary first molar
U6A      Root apex of maxillary first molar
PNS      Posterior nasal spine
Ptm      Pterygomaxillary fissure
S        Sella
Co       Condylion
Ar       Articulare
Ba       Basion
Bolton   Concave point of posterior incision of occipital condyle
P        Porion
ULI      Midpoint of lower incisor and upper incisor
to regularize the search over all landmarks in the testing data. However, it was difficult to initialize landmark positions for searching because the initialization was always achieved by traditional methods.
Recently, significant progress has been made in automatic landmark detection in cephalograms by using supervised machine-learning approaches. These machine-learning approaches can be further separated into two classes: classification and regression models. A support vector machine (SVM) classifier was used to predict the locations of landmarks in cephalometric radiographs, while the projected principal-edge distribution was proposed to describe edges as the feature vector [18]. El-Feghi et al. [19] proposed a coarse-to-fine landmark detection algorithm, which first used a fuzzy neural network to predict the locations of landmarks and then refined the locations by template matching. Leonardi et al. [20] employed the cellular neural network approach for automatic cephalometric landmark detection on softcopies of direct digital cephalometric X-rays, which was tested on 41 cephalograms and detected 10 landmarks. Favaedi et al. [21] proposed a probability relaxation method based on shape features for cephalometric landmark detection. Farshbaf and Pouyan [22] proposed a coarse-to-fine SVM classifier to predict the locations of landmarks in cephalograms, which used histograms of oriented gradients for coarse detection and histograms of gray profiles for refinement. Classifier-based methods were successfully used to identify landmarks from whole cephalograms, but positive and negative samples were difficult to balance in the training data, and computational complexity is increased due to pixel-based searching.
Many algorithms have been reported for automatic cephalometric landmark detection, but results were difficult to compare due to the different databases and landmarks. The situation has improved since two challenges were held at the 2014 and 2015 IEEE International Symposium on Biomedical Imaging (ISBI). During the challenges, regressor-based methods were first introduced to automatic landmark detection in lateral cephalograms. The challenges aimed at automatic cephalometric landmark detection and at using the landmark positions to measure cephalometric linear and angular parameters for automatic assessment of anatomical abnormalities to assist clinical diagnosis. Benchmarks were achieved at the challenges by using random forest (RF) [23] regression voting based on shape model matching. In particular, Lindner et al. [24, 25] presented the algorithm of random forest regression voting (RFRV) in the constrained local model (CLM) framework for automatic landmark detection, which obtained a mean error of 1.67 mm and a successful detection rate of 73.68% in the precision range of 2 mm. The RFRV-CLM algorithm won the challenges, and the RF-based classification algorithm combined with Haar-like appearance features and game theory [26] ranked second. Comprehensive performance analyses of the algorithms in the challenges were reported in [27, 28]. Later, Vandaele et al. [29] proposed an ensemble tree-based method using multiresolution features in bioimages, and the method was tested on three databases, including the cephalogram database of the 2015 ISBI challenge. Now that a public database and evaluation are available to improve the performance of automatic analysis systems for cephalometric landmark detection and parameter measurement, many research outcomes have been achieved. However, automatic cephalometric landmark detection and parameter measurement for clinical practice is still challenging.
Recently, efforts have been made to develop automatic cephalometric analysis systems for clinical use. In this paper, we present a new automatic cephalometric analysis system, including landmark detection and parameter measurement in lateral cephalograms; the block diagram of our system is shown in Figure 1. The core of this system is automatic landmark detection, which is realized by a new method based on multiresolution decision tree regression voting. Our main contributions can be summarized in four aspects: (1) we propose a new landmark detection framework of multiscale decision tree regression voting; (2) the SIFT-based patch feature is employed for the first time to extract the local features of cephalometric landmarks, and the proposed approach is flexible when extending to the detection of more landmarks because feature selection and shape constraints are not used; (3) two clinical databases are used to evaluate the extensibility of the proposed method, and experimental results show that our method achieves robust detection when extending from 19 landmarks to 45 landmarks; (4) automatic measurement of clinical parameters is implemented in our system based on the detected landmarks, which can facilitate clinical diagnosis and research.
The rest of this paper is organized as follows. Section 2 describes our proposed method of automatic landmark detection based on multiscale decision tree regression voting and parameter measurement. Experimental results for the 2015 ISBI Challenge cephalometric benchmark database and our own database are presented in Section 3. The discussion of the proposed system is given in Section 4. Finally, we
Table 2: Description of cephalometric planes.

Name | Involving landmarks | Description in database1 | Description in database2

Reference planes:
SN (anterior cranial base plane) | S-N | L1, L2 | L39, L1
FH (Frankfort horizontal plane) | P-O | L4, L3 | L44, L29
Bolton plane | Bolton-N | — | L43, L1

Measurement planes:
Palatal plane | ANS-PNS | L18, L17 | L5, L37
Cranial base plane | Ba-N | — | L42, L1
MP (mandibular plane) | Go-Me | L10, L8 | L17, L18
RP (ramal plane) | Ar-Go | L19, L10 | L41, L17
NP (facial plane) | N-Pg | L2, L7 | L1, L20
NA plane | N-A | L2, L5 | L1, L6
AB (subspinale to infradentale plane) | A-B | L5, L6 | L6, L22
AP plane | A-Pg | L5, L7 | L6, L20

Soft tissue measurement planes:
Facial plane of soft tissue | Ns-Pos | — | L2, L14
Ricketts esthetic plane | Prn-Pos | — | L3, L14
H plane | UL-Pos | L13, L16 | L9, L14
conclude this study with expectations for future work in Section 5.
2. Method
2.1. Landmark Detection. It is well known that cephalometric landmark detection is the most important step in cephalometric analysis. In this paper, we propose a new framework for automatic cephalometric landmark detection, which is based on multiresolution decision tree regression voting (MDTRV) using patch features.
2.1.1. SIFT-Based Patch Feature Extraction
(1) SIFT Feature. The scale-invariant feature transform (SIFT) was first proposed for key point detection by Lowe [30]. The basic idea of the SIFT feature extraction algorithm consists of four steps: (1) local extrema detection in scale-space by constructing difference-of-Gaussian image pyramids; (2) accurate key point localization, including location, scale, and orientation; (3) orientation assignment for invariance to image rotation; and (4) computation of the key point descriptor as the local image feature. The advantage of SIFT features for representing key points is that they are affine invariant and robust to illumination changes. This method of feature extraction has been commonly used in the fields of image matching and registration.
(2) SIFT Descriptor for Image Patch. Key points with discretized descriptors can be used as visual words in the field of image retrieval. A histogram of visual words can then be used by a classifier to map images to abstract visual classes. The most successful image representations are based on affine-invariant features derived from image patches. First, features are extracted on sampled patches of the image, either using a multiresolution grid, in a randomized manner, or using interest point detectors. Each patch is then described using a feature vector, e.g., SIFT [30]. In this paper, we use SIFT feature vectors [31] to represent image patches, which can be used by a regressor to map patches to the displacements from each landmark.
The diagram of the SIFT feature descriptor of an image patch is illustrated in Figure 2. An image patch centered at location (x, y) is described by a square window of side length 2W + 1, where W is a parameter. For each square image patch P centered at position (x, y), the gradients P_x and P_y of P are computed using finite differences. The gradient magnitude g_m is computed by

$$g_m = \sqrt{P_x^2 + P_y^2}, \quad (1)$$

and the gradient angle g_a (measured in radians, clockwise starting from the X axis) is calculated by

$$g_a = \arctan\frac{P_y}{P_x}. \quad (2)$$

Each square window is separated into 4 × 4 adjacent small windows, and an 8-bin histogram of gradients (direction angles from 0 to 2π) is extracted in each small window. For each image patch, the 4 × 4 histograms of gradients are concatenated into the resulting feature vector f (dimension 128). Finally, the feature f of image patch P is described by this SIFT descriptor. An example of SIFT-based patch feature extraction in a cephalogram is shown in Figure 3.
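As a concrete illustration, the descriptor above can be sketched in a few lines of NumPy. This is only a simplified approximation: the function name and the use of np.gradient for the finite differences are ours, and the paper uses an existing SIFT implementation [31], which additionally applies Gaussian weighting and trilinear interpolation.

```python
import numpy as np

def patch_descriptor(image, x, y, W=8, cells=4, bins=8):
    """SIFT-like descriptor of the (2W+1)x(2W+1) patch centered at (x, y)."""
    patch = image[y - W:y + W + 1, x - W:x + W + 1].astype(float)
    # Finite-difference gradients P_y (rows) and P_x (columns).
    py, px = np.gradient(patch)
    gm = np.sqrt(px ** 2 + py ** 2)          # gradient magnitude, Eq. (1)
    ga = np.arctan2(py, px) % (2 * np.pi)    # gradient angle in [0, 2*pi)
    step = patch.shape[0] // cells
    feat = []
    for i in range(cells):                   # 4 x 4 adjacent small windows
        for j in range(cells):
            sl = (slice(i * step, (i + 1) * step),
                  slice(j * step, (j + 1) * step))
            # 8-bin orientation histogram weighted by gradient magnitude.
            hist, _ = np.histogram(ga[sl], bins=bins,
                                   range=(0, 2 * np.pi), weights=gm[sl])
            feat.append(hist)
    return np.concatenate(feat)              # 4 * 4 * 8 = 128 dimensions
```

The returned vector has the 128 dimensions stated in the text, one 8-bin histogram per sub-window, stacked in row-major order.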
2.1.2. Decision Tree Regression. The decision tree is a classical and efficient statistical learning algorithm and has been widely used to solve classification and regression problems [32, 33]. Decision trees predict responses to data: to predict a response, the new data are queried through the decisions from the root node down to a leaf node, and the leaf node contains the response. Classification trees give nominal responses, while regression trees give numeric responses. In this paper, CART (classification and regression trees) [34], a binary tree, is used as the regressor R_l to learn the mapping between the SIFT-based patch feature vectors f and the displacement vectors d = (d_x, d_y) from the centers of the patches to the position of each landmark in the training images. The regressors R_l are then used to predict the displacements d from the patch feature vectors of the test image. Finally, the predicted displacements are used to obtain the optimal location of each landmark via voting. One advantage of the regression approach is that it avoids balancing positive and negative examples, as required when training classifiers. Another advantage of using regression rather than classification is that good results can be obtained by evaluating the region of interest at randomly sampled pixels rather than at every pixel.
(1) Training. In the training process, a regression tree R_l is constructed for each landmark l via splitting. Here, an optimization criterion and stopping rules are used to determine how a decision tree grows. The decision tree can be improved by pruning or by selecting appropriate parameters.
The optimization criterion is to choose a split that minimizes the mean-squared error (MSE) of the predictions compared to the ground truths in the training data. Splitting is the main process of creating decision trees. In general, four steps are performed to split node t. First, for each observation f_j (i.e., the extracted feature in the training data), the weighted MSE ε_t of the responses (displacements d) in node t is computed as

$$\varepsilon_t = \frac{1}{N}\sum_{j\in T}\left(d_j - \bar{d}_t\right)^2, \quad j = 1, \ldots, N, \quad (3)$$

where T is the set of all observation indices in node t, \bar{d}_t is the mean response in node t, and N is the sample size. Second, the probability of an observation being in node t is calculated by
Figure 1: Block diagram of the automatic cephalometric analysis system (cephalogram → landmark detection → parameter measurement → result analysis).
$$P(T) = \sum_{j\in T} w_j, \quad (4)$$

where w_j is the weight of observation j; in this paper, we set w_j = 1/N. Third, all elements of the observation are sorted in ascending order, and every element is regarded as a splitting candidate; T_U is the set of all observation indices corresponding to missing values. Finally, the best way to split node t using f_j is determined by maximizing the reduction in MSE, Δε_t, among all splitting candidates. For all splitting
Figure 2: Diagram of the SIFT feature descriptor of an image patch (image, geometry descriptor, and bin indexes when stacked).
Figure 3: SIFT-based patch feature extraction. (a) Original image showing a patch. (b) Geometry descriptor. (c) The extracted feature vector.
candidates in the observation f_j, the following steps are performed:

(1) Split the observations in node t into left and right child nodes (t_L and t_R, respectively).

(2) Compute the reduction in MSE, Δε_t. For a particular splitting candidate, T_L and T_R represent the observation index sets of t_L and t_R, respectively. If f_j does not contain any missing values, then the reduction in MSE for the current splitting candidate is

$$\Delta\varepsilon_t = P(T)\varepsilon_t - P\left(T_L\right)\varepsilon_{t_L} - P\left(T_R\right)\varepsilon_{t_R}. \quad (5)$$

If f_j contains missing values, then the reduction in MSE is

$$\Delta\varepsilon_t' = P\left(T - T_U\right)\varepsilon_t - P\left(T_L\right)\varepsilon_{t_L} - P\left(T_R\right)\varepsilon_{t_R}, \quad (6)$$

where T − T_U is the set of all observation indices in node t that are not missing.

(3) Choose the splitting candidate that yields the largest MSE reduction. In this way, the observations in node t are split at the candidate that maximizes the MSE reduction.
To stop splitting nodes of the decision tree, two rules can be followed: (1) a node is pure when the MSE of the observed responses in the node is less than the MSE of the observed responses in the entire data multiplied by the tolerance on quadratic error per node; and (2) the decision tree reaches the set depth limit, for example, the maximum number of splitting nodes, max_splits.
The simplicity and the performance of a decision tree should be considered together. A deep tree usually achieves high accuracy on the training data; however, it may not achieve equally high accuracy on test data. This means that a deep tree tends to overfit, i.e., its test accuracy is much lower than its training accuracy. On the contrary, a shallow tree does not achieve high training accuracy but can be more robust, i.e., its training accuracy can be similar to its test accuracy. Moreover, a shallow tree is easy to interpret and saves prediction time. In addition, tree accuracy can be estimated by cross-validation when there are not enough data for separate training and testing.
There are two ways to improve the performance of decision trees by minimizing the cross-validated loss. One is to select the optimal parameter value to control the depth of decision trees; the other is post-pruning after creating decision trees. In this paper, we use the parameter max_splits to control the depth of the resulting decision trees. Setting a large value for max_splits leads to a deep tree, while setting a small value for max_splits yields a shallow tree with larger leaves. To select the appropriate value for max_splits, the following steps are performed: (1) set a spaced set of values from 1 to the total sample size for max_splits per tree; (2) create cross-validated regression trees for the data using the set values of max_splits and calculate the cross-validated errors; and (3) obtain the appropriate value of max_splits by minimizing the cross-validated errors.
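The three steps above can be sketched with scikit-learn, as one possible stand-in for the tooling used by the authors. Here max_leaf_nodes plays the role of max_splits, and the data, candidate grid, and target function are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)   # noisy synthetic target

# Step (1): a spaced set of candidate tree sizes.
candidates = [2, 4, 8, 16, 32, 64, 128]
cv_mse = []
for m in candidates:
    tree = DecisionTreeRegressor(max_leaf_nodes=m, random_state=0)
    # Step (2): cross-validated error for this tree size.
    scores = cross_val_score(tree, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    cv_mse.append(-scores.mean())

# Step (3): keep the value that minimizes the cross-validated error.
best = candidates[int(np.argmin(cv_mse))]
```

Very large values typically lose to moderate ones on the cross-validated error, which is exactly the overfitting behavior of deep trees described above.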
(2) Prediction. In the prediction process, responses for new data can easily be predicted after creating a regression tree R_l. Suppose f_new is the new data (i.e., a feature vector extracted from a new patch P_new in the test cephalogram). According to the rules of the regression tree, the nodes select the specific attributes of the new observation f_new and reach a leaf step by step, which stores the mean displacement d_new. In this way, we predict the displacement from the center of patch P_new to the landmark l from the SIFT feature vector by the regressor R_l.
2.1.3. MDTRV Using SIFT-Based Patch Features
(1) Decision Tree Regression Voting (DTRV). As illustrated in Figure 4, the algorithm of automatic cephalometric landmark detection using decision tree regression voting (DTRV) at a single scale consists of training and testing processes, as in any supervised learning algorithm. The training process begins with feature extraction for patches sampled from the training images. Then, a regression tree is constructed for each landmark from the input feature vectors and displacements (f, d). Here, f represents the observations and d the targets of this regression problem. The testing process begins with the same feature extraction step. Then, the resulting f_new is used to predict the displacement d_new by the regressor R_l. In the end, the optimal landmark position is obtained via voting. Voting styles include single unit voting and single weighted voting; as mentioned in [35], these two styles perform equally well for voting optimal landmark positions in medical images. In this paper, we use single unit voting.
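The single-scale DTRV loop can be sketched with scikit-learn's DecisionTreeRegressor standing in for CART. This is a deliberately toy setup: the "feature" of each patch is just its center coordinates rather than a SIFT vector, and the landmark position, image size, and sample counts are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
H = W = 200
landmark = np.array([120.0, 80.0])               # true position of landmark l

# Training: K patches sampled in the image; targets are displacements d
# from the patch center to the landmark.
train_centers = rng.uniform(0, H, size=(500, 2))
train_disp = landmark - train_centers
reg = DecisionTreeRegressor(random_state=0).fit(train_centers, train_disp)

# Testing: K' patches each predict a displacement and cast one vote.
test_centers = rng.uniform(0, H, size=(400, 2))
votes = test_centers + reg.predict(test_centers)

# Single unit voting on an accumulator grid; the peak is the estimate.
acc = np.zeros((H, W))
for vx, vy in votes:
    ix, iy = int(round(vx)), int(round(vy))
    if 0 <= ix < W and 0 <= iy < H:
        acc[iy, ix] += 1
iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
estimate = (ix, iy)                              # clusters near (120, 80)
```

Because each patch votes independently, individual prediction errors spread out while the true position accumulates votes, which is the robustness argument behind regression voting.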
(2) MDTRV Using SIFT-Based Patch Features. Finally, a new framework of MDTRV using SIFT-based patch features is proposed, which uses a simple and efficient strategy for accurate landmark detection in lateral cephalograms. There are four stages in the proposed MDTRV algorithm, which are iteratively performed at the scales of 0.125, 0.25, 0.5, and 1. At the scale of 0.125, K training patches with K displacements are randomly sampled in the whole cephalogram for a specific landmark l (l = 1, ..., L). The decision tree regressor R_l^1 is created by using these training samples. For prediction, SIFT features are extracted for K′ testing patches sampled in the whole cephalogram, and K′ displacements are predicted by the regressor R_l^1 using the extracted features. The optimal position of the specific landmark l is obtained via voting over all predicted displacements. At the scales of 0.25, 0.5, and 1, only the patch sampling rule differs from the procedure at the scale of 0.125; that is, the training and testing patches are randomly sampled in the (2S + 1) × (2S + 1) neighborhood of the true and initial landmark positions, respectively. The estimated landmark position is used as the initial landmark position at the next scale of the testing process. The optimal position of the specific landmark l at the scale of 1 is refined by using a Hough forest [36], which is regarded as the
final resulting location. This multiresolution coarse-to-fine strategy greatly improves the accuracy of cephalometric landmark detection.
2.2. Parameter Measurement. Measurements are either angular or linear parameters calculated from cephalometric landmarks and planes (refer to Tables 1 and 2). According to the geometrical structure, measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected to a plane. All measurements can be calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among the three points A, B, and C; then the angle formed by these three points is calculated by

$$\angle ABC = \arccos\left(\frac{a^2 + c^2 - b^2}{2ac}\right)\times\left(\frac{180}{\pi}\right), \quad (7)$$

with

$$a = \|A - B\|,\qquad b = \|A - C\|,\qquad c = \|B - C\|, \quad (8)$$

where $\|A - B\| = \sqrt{(x_A - x_B)^2 + (y_A - y_B)^2}$ represents the Euclidean distance between A and B.
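Equations (7) and (8) amount to the law of cosines; a direct sketch (the function name is ours):

```python
import math

def angle_three_points(A, B, C):
    """Angle at vertex B, in degrees, following Eqs. (7)-(8)."""
    a = math.dist(A, B)   # a = ||A - B||
    b = math.dist(A, C)   # b = ||A - C||
    c = math.dist(B, C)   # c = ||B - C||
    # Law of cosines, converted from radians to degrees.
    return math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))
```

For example, with A = (1, 0), B = (0, 0), and C = (0, 1), the angle at B is 90 degrees.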
2.2.2. The Angle between Two Planes. As the cephalometric radiographs in this study are 2D images, the planes are projected as straight lines; thus, each plane is determined by two points. The angle between two planes AB and CD is calculated by

$$\angle AB = \arctan\left(\frac{y_A - y_B}{x_A - x_B}\right)\times\left(\frac{180}{\pi}\right),\qquad \angle CD = \arctan\left(\frac{y_C - y_D}{x_C - x_D}\right)\times\left(\frac{180}{\pi}\right), \quad (9)$$

$$\angle AB,CD = |\angle AB - \angle CD|. \quad (10)$$
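A sketch of Eqs. (9)-(10) follows. The function name is ours, and we deviate slightly from the formulas: atan2 replaces the slope-based arctan so that vertical planes (where Eq. (9) divides by zero) are handled, and the difference is folded into [0°, 90°] since the planes are undirected lines.

```python
import math

def plane_angle(A, B, C, D):
    """Angle between the projected planes AB and CD, in degrees."""
    # Line directions folded into [0, 180): atan2 avoids division by zero.
    ang_ab = math.degrees(math.atan2(A[1] - B[1], A[0] - B[0])) % 180.0
    ang_cd = math.degrees(math.atan2(C[1] - D[1], C[0] - D[0])) % 180.0
    d = abs(ang_ab - ang_cd)          # Eq. (10)
    return min(d, 180.0 - d)          # undirected lines: fold into [0, 90]
```

For instance, the plane through (1, 1) and (0, 0) makes a 45-degree angle with the plane through (1, 0) and (0, 0).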
2.2.3. The Distance between Two Points. The distance between two points is calculated by

$$|AB| = \|A - B\| \times \text{ratio}, \quad (11)$$

where ratio is a scale indicator that can be calculated from the calibrated gauge in the lateral cephalograms; its unit is mm/pixel.
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as

$$y + k_1 x + k_0 = 0. \quad (12)$$

Thus, the distance from point C to the plane AB is calculated by

$$|C \cdot AB| = \left|\frac{y_C + k_1 x_C + k_0}{\sqrt{1 + k_1^2}}\right| \times \text{ratio}. \quad (13)$$
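A sketch of Eqs. (12)-(13) (the function name is ours; note that writing the plane as y + k1·x + k0 = 0 assumes AB is not vertical):

```python
import math

def point_to_plane(C, A, B, ratio=1.0):
    """Distance from point C to the projected plane through A and B, Eq. (13)."""
    slope = (A[1] - B[1]) / (A[0] - B[0])    # undefined if AB is vertical
    k1 = -slope                              # y + k1*x + k0 = 0, Eq. (12)
    k0 = slope * A[0] - A[1]
    return abs(C[1] + k1 * C[0] + k0) / math.sqrt(1.0 + k1 * k1) * ratio
```

For example, the distance from (5, 3) to the plane through (0, 0) and (1, 0) is 3 (times ratio).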
2.2.5. The Distance between Two Points Projected to a Plane. The calculation of the distance between two points projected to a plane is illustrated in Figure 5. First, we use Equation (12) to determine the plane AB. Then, we calculate the distances from the two points C and D to the plane AB, denoted c_1 and c_2, by Equation (13). Third, the distance c_0 between the two points C and D is calculated by Equation (11). Finally, the distance |C · AB · D| between the two points C and D projected to the plane AB, denoted c, is calculated by

$$c = \sqrt{c_0^2 - \left(c_1 - c_2\right)^2} \times \text{ratio}. \quad (14)$$
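Eq. (14) can be sketched as follows (names are ours; as in Eq. (12), plane AB must not be vertical, and the c1 − c2 term assumes C and D lie on the same side of AB):

```python
import math

def projected_distance(C, D, A, B, ratio=1.0):
    """Distance between C and D projected onto plane AB, Eq. (14)."""
    def dist_to_ab(P):                       # Eq. (13) in pixel units
        slope = (A[1] - B[1]) / (A[0] - B[0])
        k1, k0 = -slope, slope * A[0] - A[1]
        return abs(P[1] + k1 * P[0] + k0) / math.sqrt(1.0 + k1 * k1)
    c0 = math.dist(C, D)                     # Eq. (11) in pixel units
    c1, c2 = dist_to_ab(C), dist_to_ab(D)    # distances of C and D to AB
    return math.sqrt(c0 ** 2 - (c1 - c2) ** 2) * ratio
```

For example, projecting C = (0, 2) and D = (3, 5) onto the plane through (0, 0) and (1, 0) gives a distance of 3 (times ratio), the horizontal separation of the two points.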
3. Experimental Evaluation
3.1. Data Description
3.1.1. Database of the 2015 ISBI Challenge. The benchmark database (database1) included 300 cephalometric radiographs (150 in TrainingData and 150 in Test1Data), as described in Wang et al. [28]. All of the cephalograms were collected from 300 patients aged from 6 to 60 years old. The
Figure 4: Algorithm of decision tree regression voting. Training process: training patches and ground truth of landmarks → feature extraction (f, d) → regression tree construction R_l. Testing process: testing patches → feature extraction f_new → prediction d_new → voting → landmark positions P(x, y).
image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustrations in Figure 6); the ground truth was the average of the annotations by both doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. The database2 included 165 cephalometric radiographs, as illustrated in Table 3 (the IRB approval number is PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions varied in the range from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for further cephalometric analysis.
3.1.3. Experimental Settings. In the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. In the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions differ, preprocessing is required to rescale all the images in database2 to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross-validation.
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with the associated standard deviation. The radial error E_i, i.e., the distance between the predicted position and the true position of each landmark, is defined as

E_i = ‖A_i − B_i‖ × ratio, a_i ∈ A, b_i ∈ B, 1 ≤ i ≤ M, (15)
where A, B represent the sets of estimated and true positions of each landmark for all cephalograms in the dataset, A_i, B_i are the corresponding positions in sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as

MRE = mean(E) = (1/M) ∑_{i=1}^{M} E_i,
SD = √[ ∑_{i=1}^{M} (E_i − MRE)² / (M − 1) ]. (16)
The second evaluation criterion is the success detection rate with respect to the 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm precision ranges. If E_i is less than a precision range, the detection of the landmark is considered a successful detection in that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) p_z with precision less than z is defined as

p_z = |{i : E_i < z}| / M × 100%, 1 ≤ i ≤ M, (17)

where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
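The three criteria above can be summarized in a short sketch; the function below (illustrative, not the authors' code) computes the MRE, SD, and per-range SDR from a list of radial errors already converted to millimeters:

```python
import math

def mre_sd_sdr(errors, precisions=(2.0, 2.5, 3.0, 4.0)):
    """Mean radial error, its standard deviation (with M-1 in the
    denominator, as in Eq. (16)), and the success detection rate
    for each precision range z (Eq. (17))."""
    m = len(errors)
    mre = sum(errors) / m
    sd = math.sqrt(sum((e - mre) ** 2 for e in errors) / (m - 1))
    sdr = {z: 100.0 * sum(e < z for e in errors) / m for z in precisions}
    return mre, sd, sdr
```

For instance, radial errors of 1.0, 2.0, and 3.0 mm give an MRE of 2.0 mm, an SD of 1.0 mm, and an SDR of 100% in the 4.0 mm range.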
3.2.2. Experimental Results
(1) Results of database1. Experimental results of cephalometric landmark detection using database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm. The other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD of the 19 landmarks are 1.69 mm and 1.43 mm, respectively. This shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); the SDR is between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); the SDR is between 70% and 80% for two landmarks (L6 and L18); and the SDRs for the other seven landmarks are less than 70%, with the SDR of landmark L10 being the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.
Comparison of our method with three state-of-the-art methods using Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM in [24] is less than 1 pixel, which means that their performance is comparable within the image resolution.
Furthermore, we conduct a comparison with the top two methods in the 2015 ISBI Challenge of cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, but the SD of our method is less than that of RFRV-CLM. The results show that our method is accurate for landmark detection in lateral cephalograms and is more robust.

Figure 5. The distance calculation between two points projected to a plane.
(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground-truth locations, which shows the success of our proposed algorithm for landmark detection.
For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method in detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show comparable performance to the results on database1, which indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are usually used. The description of these eight measurements and the methods for classification are explained in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground-truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
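A minimal sketch of this SCR computation, assuming the confusion matrix is given as raw counts with rows as ground-truth types (the function name is ours, not from the paper):

```python
def scr(confusion):
    """Success classification rate: the averaged diagonal of a
    row-normalized confusion matrix (rows = ground-truth types,
    columns = estimated types), as a percentage."""
    k = len(confusion)
    diag = []
    for i in range(k):
        row_total = sum(confusion[i])
        diag.append(confusion[i][i] / row_total if row_total else 0.0)
    return 100.0 * sum(diag) / k
```

For a two-type example with rows [9, 1] and [2, 8], the per-type accuracies are 90% and 80%, so the SCR is 85%.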
To evaluate the performance of the proposed system for more measurement analysis in database2, another criterion, the mean absolute error (MAE), is calculated by

MAE = (1/M) ∑_{i=1}^{M} |V̂_i − V_i|, (18)

where V̂_i is the value of the angular or linear measurement estimated by our system and V_i is the ground-truth measurement obtained using the human-annotated landmarks.
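Equation (18) is a plain mean absolute error over the M cephalograms; a one-function sketch (illustrative naming):

```python
def mae(estimated, ground_truth):
    """Mean absolute error between automatic measurements V_i-hat and
    ground-truth measurements V_i, per Eq. (18)."""
    m = len(estimated)
    return sum(abs(v_hat - v) for v_hat, v in zip(estimated, ground_truth)) / m
```

The same function applies unchanged to angular measurements (in degrees) and linear measurements (in millimeters).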
Figure 6. Cephalogram annotation example showing the 19 landmarks in database1: (a) cephalogram annotation example; (b) description of the 19 landmarks: L1 = S, L2 = N, L3 = Or, L4 = P, L5 = A, L6 = B, L7 = Pg, L8 = Me, L9 = Gn, L10 = Go, L11 = LI, L12 = UI, L13 = UL, L14 = LL, L15 = Sn, L16 = Pos, L17 = PNS, L18 = ANS, L19 = Ar.
Table 3. The description of database2.

                           Female  Male
Class I                      29     26
Class III (pretreatment)     32     23
Class III (posttreatment)    32     23
Total number of cephalograms: 165
Total number of patients: 110
3.3.2. Experimental Results
(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method to the best two methods is given in Table 10, where we have achieved the best classification results for two measurements, APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than the method in [26] and is comparable to the method of RFRV-CLM [24].
(2) Results of database2. From the detected 45 landmarks, 27 measurements are automatically calculated by our system using database2, including 17 angular and 10 linear measurements, which are illustrated in Table 11.
The MAE and MAE* of the 27 measurements are illustrated in Table 12. Here, the MAE represents the performance of the measurement analysis by our automatic system, and the MAE* represents the interobserver variability calculated between the ground-truth values obtained by the two experts. For angular measurements, the difference between MAE and MAE* is within 0.5° for 9/17 measurements; the difference between
Figure 7. Cephalogram annotation example showing landmarks in database2: (a) cephalogram annotation example; (b) description of the 45 cephalometric landmarks: L1 = N, L2 = Ns, L3 = Prn, L4 = Cm, L5 = ANS, L6 = A, L7 = Sn, L8 = Ss, L9 = UL, L10 = Stoms, L11 = Stomi, L12 = LL, L13 = Si, L14 = Pos, L15 = Gs, L16 = Mes, L17 = Go, L18 = Me, L19 = Gn, L20 = Pg, L21 = LIA, L22 = B, L23 = Id, L24 = LI, L25 = UI, L26 = FA, L27 = SPr, L28 = UIA, L29 = Or, L30 = Se, L31 = L6A, L32 = UL5, L33 = L6E, L34 = UL6, L35 = U6E, L36 = U6A, L37 = PNS, L38 = Ptm, L39 = S, L40 = Co, L41 = Ar, L42 = Ba, L43 = Bolton, L44 = P, L45 = ULI.
Table 4. The experimental results of 19-landmark detection in Test1Data.

Landmark  MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
L1        1.31      1.26     86.67   92.67   94.67   97.33
L2        1.92      2.05     68.00   72.00   79.33   88.67
L3        1.81      1.74     66.67   81.33   88.00   95.33
L4        3.11      2.59     50.00   54.00   59.33   67.33
L5        2.24      1.56     56.00   66.00   75.33   86.67
L6        1.59      1.39     72.00   78.67   84.00   94.00
L7        1.23      0.87     80.67   91.33   96.67   99.33
L8        1.08      0.88     88.67   95.33   96.67   98.67
L9        0.87      0.75     91.33   93.33   98.00   99.33
L10       3.98      2.33     23.33   29.33   38.00   53.33
L11       0.97      0.89     87.33   92.00   96.00   98.67
L12       0.90      1.70     94.67   94.67   96.00   96.67
L13       1.02      0.71     92.00   95.33   98.00   100.00
L14       0.89      0.61     93.33   98.67   100.00  100.00
L15       1.18      1.16     85.33   92.00   93.33   98.00
L16       2.14      1.48     50.00   60.67   72.67   90.67
L17       1.16      0.85     86.00   92.67   94.67   98.67
L18       1.77      1.51     72.00   79.33   86.00   92.67
L19       2.97      2.77     50.00   54.00   58.00   67.33
Average   1.69      1.43     73.37   79.65   84.46   90.67
Table 5. Comparison of our method with three methods in terms of MRE using Test1Data.

Method        [24]    [26]    [29]    Ours
MRE (pixels)  16.74   18.46   17.79   16.92
Table 6. Comparison of our method to two methods in terms of MRE, SD, and SDR using Test1Data.

Method  MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
[24]    1.67      1.65     73.68   80.21   85.19   91.47
[26]    1.84      1.76     71.72   77.40   81.93   88.04
Ours    1.69      1.43     73.37   79.65   84.46   90.67
MAE and MAE* is within 1° for 12/17 measurements, and within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than the MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements; the remaining measurement has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed. Table 13 shows that the MRE and SD between the annotations by the two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between the annotations of the two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis are presented in the experiments based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, and there are three main reasons: (i) some landmarks are located at overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On one hand, apart from the algorithm, the data factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8. Examples of automatic cephalometric landmark detection, with the true and predicted locations of the 45 landmarks overlaid: (a) example of detection in Class I; (b) example of detection in Class III.
Table 7. The statistical results of automatic cephalometric landmark detection in database2 (5-fold cross validation).

Fold     MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
1        1.72      1.35     72.49   81.32   87.04   93.37
2        1.72      1.38     71.38   79.94   85.63   92.73
3        1.66      1.32     74.31   82.06   87.44   93.37
4        1.77      1.36     69.67   78.96   85.32   92.93
5        1.68      1.54     72.53   80.88   86.87   92.96
Average  1.71      1.39     72.08   80.63   86.46   93.07
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we have designed a new framework of landmark detection in lateral cephalograms at low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information at different resolutions. At lower resolution, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. At higher resolution, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated by the experimental results, the proposed algorithm achieves good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm obtains comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm also obtains an average success detection rate of 72% within the precision range of 2.0 mm. In particular, 45 landmarks are detected in our database, which is more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8. Description of cephalometric measurements in database1 [28].

1. ANB = ∠L5L2L6: the angle between landmarks L5, L2, and L6.
2. SNB = ∠L1L2L6: the angle between landmarks L1, L2, and L6.
3. SNA = ∠L1L2L5: the angle between landmarks L1, L2, and L5.
4. ODI = ∠(L5L6, L8L10) + ∠(L17L18, L4L3): the arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3).
5. APDI = ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18): the arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18).
6. FHI = |L1L10| / |L2L8|: the ratio of the posterior face height (PFH, the distance from L1 to L10) to the anterior face height (AFH, the distance from L2 to L8).
7. FHA = ∠(L1L2, L10L9): the angle between the SN plane (L1L2) and the mandibular plane (L10L9).
8. MW = |L12L11| if x(L12) < x(L11), otherwise −|L12L11|: when the x coordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|.
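The measurements in Table 8 reduce to two geometric primitives on 2D landmark coordinates: the angle at a vertex formed by three landmarks (e.g., ANB = ∠L5L2L6) and the angle between two planes, i.e., two lines in the lateral view (e.g., the terms of ODI and APDI). A sketch of both primitives follows; the helper names are ours, and since the paper does not spell out a sign convention, the plane-to-plane angle here is taken as the acute one:

```python
import math

def angle_three(p, vertex, q):
    """Angle in degrees at `vertex` between rays vertex->p and
    vertex->q, e.g. ANB would be angle_three(L5, L2, L6)."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def angle_planes(a1, a2, b1, b2):
    """Acute angle in degrees between line a1-a2 and line b1-b2,
    e.g. one plane-to-plane term of ODI or APDI."""
    u = (a2[0] - a1[0], a2[1] - a1[1])
    v = (b2[0] - b1[0], b2[1] - b1[1])
    cos_a = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return min(ang, 180.0 - ang)
```

For example, perpendicular rays give a 90° vertex angle, and a diagonal line against a horizontal one gives a 45° plane angle.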
Table 9. Eight standard cephalometric measurement methods for classification of anatomical types [28].

1. ANB: 3.2°–5.7°, class I (normal); >5.7°, class II; <3.2°, class III.
2. SNB: 74.6°–78.7°, normal mandible; <74.6°, retrognathic mandible; >78.7°, prognathic mandible.
3. SNA: 79.4°–83.2°, normal maxilla; >83.2°, prognathic maxilla; <79.4°, retrognathic maxilla.
4. ODI: normal, 74.5° ± 6.07°; >80.5°, deep bite tendency; <68.4°, open bite tendency.
5. APDI: normal, 81.4° ± 3.8°; <77.6°, class II tendency; >85.2°, class III tendency.
6. FHI: normal, 0.65–0.75; >0.75, short face tendency; <0.65, long face tendency.
7. FHA: normal, 26.8°–31.4°; >31.4°, mandible high-angle tendency; <26.8°, mandible low-angle tendency.
8. MW: normal, 2 mm–4.5 mm; MW = 0 mm, edge to edge; MW < 0 mm, anterior cross bite; MW > 4.5 mm, large overjet.
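As an example of how a rule in Table 9 turns a measurement into an anatomical type, the sketch below encodes only the ANB rule, with thresholds taken from the table (the function name is ours, not from the paper):

```python
def classify_anb(anb_deg):
    """Anatomical type from the ANB angle per Table 9:
    3.2-5.7 deg -> class I (normal), >5.7 deg -> class II,
    <3.2 deg -> class III."""
    if anb_deg > 5.7:
        return "class II"
    if anb_deg < 3.2:
        return "class III"
    return "class I (normal)"
```

The other seven rules follow the same pattern of interval checks against the detected-landmark measurements.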
Table 10. Comparison of our method to two methods in terms of SCR using Test1Data.

Method  SCR (%): ANB  SNB    SNA    ODI    APDI   FHI    FHA    MW     Average
[24]    64.99   84.52  68.45  84.64  82.14  67.92  75.54  82.19  76.30
[26]    59.42   71.09  59.00  78.04  80.16  58.97  77.03  83.94  70.96
Ours    58.61   78.85  59.86  76.59  83.49  82.44  77.18  83.20  75.03
Table 11. Description of cephalometric measurements in database2.

The angles of three points (unit: °):
- SNA = ∠L39L1L6: the angle of landmarks L39, L1, L6
- SNB = ∠L39L1L22: the angle of landmarks L39, L1, L22
- ANB = ∠L6L1L22: the angle of landmarks L6, L1, L22
- SNPg = ∠L39L20L1: the angle of landmarks L39, L20, L1
- NAPg = ∠L1L6L20: the angle of landmarks L1, L6, L20
- NSGn = ∠L1L39L19: the angle of landmarks L1, L39, L19
- Ns-Prn-Pos = ∠L2L3L14: the angle of landmarks L2, L3, L14
- Cm-Sn-UL = ∠L4L7L9: the angle of landmarks L4, L7, L9
- LL-B′-Pos = ∠L12L13L14: the angle of landmarks L12, L13, L14
- Angle of the jaw = ∠L41L17L18: the angle of landmarks L41, L17, L18
- Angle of convexity = ∠L2L7L14: the angle of landmarks L2, L7, L14

The angles between two planes (unit: °):
- FH-SN = ∠(L44L29, L39L1): the angle between the FH plane (L44L29) and the SN plane (L39L1)
- UI-SN = ∠(L28L25, L39L1): the angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
- LI-FH = ∠(L24L21, L44L29): the angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
- UI-LI = ∠(L28L25, L24L21): the angle between the upper and lower incisors (L28L25, L24L21)
- H angle = ∠(L9L14, L8L14): the angle between the H plane (L9L14) and the NB plane (L8L14)
- Z angle = ∠(L44L29, L9L14): the rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm):
- N-Me = |L1L18|: the forward height; the distance between landmarks L1, L18
- N-ANS = |L1L5|: the up-forward height; the distance between landmarks L1, L5
- ANS-Me = |L5L18|: the down-forward height; the distance between landmarks L5, L18
- Stoms-UI = |L10L25|: the vertical distance between landmarks L10, L25

The distances from the point to the plane in the horizontal direction (unit: mm):
- UI-AP: the distance from landmark L25 to the AP plane (L6L20)
- LI-AP: the distance from landmark L24 to the AP plane (L6L20)
- UL-EP: the distance from landmark L9 to the EP plane (L3L14)
Table 12. Results of our method for 27 measurements in terms of MAE and MAE* using database2 (MAE*: interobserver variability).

The angles of three points (unit: °):
Name                MAE   MAE*  MAE − MAE*
SNA                 1.95  1.70  0.24
SNB                 1.57  1.17  0.39
ANB                 1.29  1.08  0.21
SNPg                1.59  1.20  0.39
NAPg                2.83  2.44  0.39
NSGn                1.42  0.96  0.47
Ns-Prn-Pos          1.68  1.10  0.58
Cm-Sn-UL            5.52  3.80  1.72
LL-B′-Pos           3.95  3.44  0.51
Angle of the jaw    3.20  1.66  1.54
Angle of convexity  2.67  1.86  0.81

The angles between two planes (unit: °):
FH-SN    2.00  1.96  0.04
UI-SN    4.71  2.81  1.90
LI-FH    3.34  2.27  1.07
UI-LI    6.90  4.66  2.24
H angle  0.94  0.80  0.14
Z angle  1.69  1.86  −0.17

The distances between two points (unit: mm):
N-Me      1.37  1.10  0.27
N-ANS     1.57  1.34  0.24
ANS-Me    1.06  0.96  0.10
Stoms-UI  0.75  0.42  0.33

The distances from the point to the plane in the horizontal direction (unit: mm):
UI-AP  0.96  0.71  0.25
LI-AP  0.96  0.76  0.20
UL-EP  0.50  0.36  0.14
LL-EP  0.45  0.39  0.05
MaxE   2.06  1.79  0.27

The distances between two points projected to a plane (unit: mm):
Wits  4.70  3.53  1.16
Table 11. Continued.

- LL-EP: the distance from landmark L12 to the EP plane (L3L14)
- MaxE: the maxillary length; the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected to a plane (unit: mm):
- Wits: the distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34)
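The point-to-plane and projected distances in Table 11 are standard 2D constructions; a sketch under the assumption that a "plane" is the line through two landmarks in the lateral view (helper names are ours, not from the paper):

```python
import math

def point_to_plane_distance(p, a, b):
    """Distance from landmark p to the 'plane' (2D line) through
    landmarks a and b, e.g. UI-AP as the distance from L25 to the
    AP plane (L6L20)."""
    ux, uy = b[0] - a[0], b[1] - a[1]
    cross = ux * (p[1] - a[1]) - uy * (p[0] - a[0])
    return abs(cross) / math.hypot(ux, uy)

def projected_distance(p, q, a, b):
    """Distance between landmarks p and q after projecting both onto
    the line a-b, e.g. the Wits value on the functional jaw plane
    (L32L34)."""
    ux, uy = b[0] - a[0], b[1] - a[1]
    norm = math.hypot(ux, uy)
    tp = ((p[0] - a[0]) * ux + (p[1] - a[1]) * uy) / norm
    tq = ((q[0] - a[0]) * ux + (q[1] - a[1]) * uy) / norm
    return abs(tp - tq)
```

Figure 5 illustrates the same projected-distance construction used for the Wits measurement.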
Table 13. The interobserver error of manual marking between doctor1 and doctor2.

            MRE (mm)  SD (mm)
database1   1.38      1.55
database2   1.26      1.27
algorithm is confirmed using this clinical dataset. In addition, automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will devote more effort to improving the performance of automatic analysis of lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for the supply of image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcañiz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcañiz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), Lecture Notes in Computer Science, R. Larsen, Ed., pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
study, 45 landmarks are used in lateral cephalograms (refer to Table 1). The cephalometric landmarks play a major role in calculating the cephalometric planes, which are the lines between two landmarks in 2D cephalometric radiographs, as shown in Table 2. Furthermore, different measurements are calculated using different cephalometric landmarks and planes, and they are used to analyze different anatomical structures [4]. Measurements can be angles or distances, which are used to analyze skeletal and dental anatomical structures as well as the soft tissue profile. To conduct a clinical diagnosis, many analytic approaches have been developed, including Downs analysis, Wylie analysis, Riedel analysis, Steiner analysis, Tweed analysis, Sassouni analysis, Björk analysis, and so on [6].
Methods of automatic cephalometric landmark detection are mainly separated into three categories: (1) bottom-up methods, (2) deformation model-based methods, and (3) classifier/regressor-based methods. The first category comprises bottom-up methods, while the other two categories are learning-based methods.
For bottom-up methods, two techniques were usually employed: edge detection and template matching. Edge-based methods extract the anatomical contours in cephalograms, and then the relevant landmarks are identified on the contours using prior knowledge. Levy-Mandel et al. [7] first proposed the edge-tracking method for identifying craniofacial landmarks. First, input images were smoothed by a median filter, and then edges were extracted by the Mero-Vassy operator. Second, contours were obtained by the edge-tracking technique based on the constraints of locations, endings, breaking segments, and edge-linking conditions. Finally, the landmarks were detected on the contours according to their definitions in cephalometry. This method was only tested on two high-quality cephalometric radiographs. Furthermore, 13 landmarks not lying on contours were not detected. A two-stage landmark detection method was proposed by Grau et al. [8], which first extracted the major high-contrast line features, such as the contours of the jaws and nose, by a line-detection module, and then detected landmarks by pattern recognition based on mathematical morphology. Edge-based methods could only detect the landmarks on contours and were not robust to noise, low contrast, and occlusion. Template-matching-based methods find the most likely region with the least distance to the template of a specific landmark, and the center of the region is considered the estimated position of that landmark. Therefore, not only the landmarks on contours but also the landmarks not on contours can be detected. Ashish et al. [9] proposed a template-matching method using a coarse-to-fine strategy for cephalometric landmark detection. Mondal et al. [10] proposed a method improved from Canny edge extraction to detect the craniofacial anatomical structures. Kaur and Singh [11] proposed automatic cephalometric landmark detection using Zernike moments and template matching. Template-matching-based methods had difficulty in choosing a representative template and were not robust to anatomical variability across individuals. All these methods strongly depended on the quality of the images.
Subsequently, deformation model-based methods were proposed to address these limitations by using shape constraints. Forsyth and Davis [12] reviewed and evaluated the approaches on automatic cephalometric analysis systems reported from 1986 to 1996, and they highlighted a cephalometric analysis system presented by Davis and Taylor, which introduced an appearance model into landmark detection. Improved algorithms of active shape/appearance models were then used to refine cephalometric landmark detection in combination with template matching, and higher accuracy was obtained [13–17]. Here, shape or appearance models were learned from training data
Table 1: Description of cephalometric landmarks.

Symbol    Description
N         Nasion
Ns        Soft tissue nasion
Prn       Pronasale
Cm        Columella
ANS       Anterior nasal spine
A         Subspinale
Sn        Subnasale
Ss (A')   Upper pit of lips
UL        Upper lip
Stoms     Stomion superius
Stomi     Stomion inferius
LL        Lower lip
Si (B')   Lower pit of lips
Pos       Soft tissue pogonion
Gs        Soft tissue gnathion
Mes       Soft tissue menton
Go        Gonion
Me        Menton
Gn        Gnathion
Pg        Pogonion
LIA       Mandibular joint midpoint
B         Supramentale
Id        Infradentale
LI        Lower incisal incision
UI        Upper incisal incision
FA        Maxillary incisor's facial axis
SPr       Superior prosthion
UIA       Root point of incisor
Or        Orbitale
Se        Internode of sphenoid wing with anterior cranial fossa
L6A       Root apex of mandibular first molar
UL5       Midpoint of tip of the second molar
L6E       Buccal apex of mandibular first molar
UL6       Midpoint of tip of first molar
U6E       Buccal apex of maxillary first molar
U6A       Root apex of maxillary first molar
PNS       Posterior nasal spine
Ptm       Pterygomaxillary fissure
S         Sella
Co        Condylion
Ar        Articulare
Ba        Basion
Bolton    Concave point of posterior incision of occipital condyle
P         Porion
ULI       Midpoint of lower incisor and upper incisor
2 Journal of Healthcare Engineering
to regularize the search through all landmarks in the testing data. However, it was difficult to initialize the landmark positions for searching, because the initialization was always achieved by traditional methods.
Recently, significant progress has been made in automatic landmark detection in cephalograms by using supervised machine-learning approaches. These approaches can be further separated into two classes: classification and regression models. A support vector machine (SVM) classifier was used to predict the locations of landmarks in cephalometric radiographs, with the projected principal-edge distribution proposed to describe edges as the feature vector [18]. El-Feghi et al. [19] proposed a coarse-to-fine landmark detection algorithm, which first used a fuzzy neural network to predict the locations of landmarks and then refined them by template matching. Leonardi et al. [20] employed the cellular neural network approach for automatic cephalometric landmark detection on softcopies of direct digital cephalometric X-rays, which was tested on 41 cephalograms and detected 10 landmarks. Favaedi et al. [21] proposed a probability relaxation method based on shape features for cephalometric landmark detection. Farshbaf and Pouyan [22] proposed a coarse-to-fine SVM classifier to predict the locations of landmarks in cephalograms, using histograms of oriented gradients for coarse detection and histograms of gray profiles for refinement. Classifier-based methods were successfully used to identify landmarks from whole cephalograms, but it was difficult to balance positive and negative samples in the training data, and the computational complexity increased due to pixel-based searching.
Many algorithms have been reported for automatic cephalometric landmark detection, but their results were difficult to compare due to differing databases and landmark sets. The situation has improved since two challenges were held at the 2014 and 2015 IEEE International Symposium on Biomedical Imaging (ISBI). During the challenges, regressor-based methods were first introduced to automatic landmark detection in lateral cephalograms. The challenges aimed at automatic cephalometric landmark detection and at using the landmark positions to measure cephalometric linear and angular parameters for automatic assessment of anatomical abnormalities to assist clinical diagnosis. Benchmarks were achieved at the challenges by random forest (RF) [23] regression voting based on shape model matching. In particular, Lindner et al. [24, 25] presented the algorithm of random forest regression voting (RFRV) in the constrained local model (CLM) framework for automatic landmark detection, which obtained a mean error of 1.67 mm and a successful detection rate of 73.68% within the precision range of 2 mm. The RFRV-CLM algorithm won the challenges, and the RF-based classification algorithm combined with Haar-like appearance features and game theory [26] ranked second. Comprehensive performance analyses of the challenge algorithms were reported in [27, 28]. Later, Vandaele et al. [29] proposed an ensemble tree-based method using multiresolution features in bioimages, and the method was tested on three databases, including a cephalogram database of the 2015 ISBI challenge. Now that a public database and evaluation are available to improve the performance of automatic systems for cephalometric landmark detection and parameter measurement, many research outcomes have been achieved. However, automatic cephalometric landmark detection and parameter measurement for clinical practice are still challenging.
Recently, efforts have been made to develop automatic cephalometric analysis systems for clinical use. In this paper, we present a new automatic cephalometric analysis system, including landmark detection and parameter measurement in lateral cephalograms; the block diagram of our system is shown in Figure 1. The core of this system is automatic landmark detection, which is realized by a new method based on multiresolution decision tree regression voting. Our main contributions can be summarized in four aspects: (1) we propose a new landmark detection framework of multiscale decision tree regression voting; (2) SIFT-based patch features are employed for the first time to extract the local features of cephalometric landmarks, and the proposed approach is flexible when extended to the detection of more landmarks because feature selection and shape constraints are not used; (3) two clinical databases are used to evaluate the generalization of the proposed method, and experimental results show that our method achieves robust detection when extended from 19 landmarks to 45 landmarks; and (4) automatic measurement of clinical parameters is implemented in our system based on the detected landmarks, which can facilitate clinical diagnosis and research.
The rest of this paper is organized as follows. Section 2 describes our proposed method of automatic landmark detection based on multiscale decision tree regression voting, as well as parameter measurement. Experimental results on the 2015 ISBI Challenge cephalometric benchmark database and on our own database are presented in Section 3. A discussion of the proposed system is given in Section 4. Finally, we
Table 2: Description of cephalometric planes (name, involved landmarks, description in database1 / database2).

Reference planes:
  SN, anterior cranial base plane (S-N): L1, L2 / L39, L1
  FH, Frankfort horizontal plane (P-Or): L4, L3 / L44, L29
  Bolton plane (Bolton-N): not used / L43, L1
Measurement planes:
  Palatal plane (ANS-PNS): L18, L17 / L5, L37
  Cranial base plane (Ba-N): not used / L42, L1
  MP, mandibular plane (Go-Me): L10, L8 / L17, L18
  RP, ramal plane (Ar-Go): L19, L10 / L41, L17
  NP, facial plane (N-Pg): L2, L7 / L1, L20
  NA plane (N-A): L2, L5 / L1, L6
  AB, subspinale-to-infradentale plane (A-B): L5, L6 / L6, L22
  AP plane (A-Pg): L5, L7 / L6, L20
Soft tissue measurement planes:
  Facial plane of soft tissue (Ns-Pos): not used / L2, L14
  Ricketts esthetic plane (Prn-Pos): not used / L3, L14
  H plane (UL-Pos): L13, L16 / L9, L14
conclude this study with expectations for future work in Section 5.
2. Method
2.1. Landmark Detection. It is well known that cephalometric landmark detection is the most important step in cephalometric analysis. In this paper, we propose a new framework for automatic cephalometric landmark detection based on multiresolution decision tree regression voting (MDTRV) using patch features.
2.1.1. SIFT-Based Patch Feature Extraction
(1) SIFT Feature. The scale-invariant feature transform (SIFT) was first proposed for key point detection by Lowe [30]. The basic idea of the SIFT feature extraction algorithm consists of four steps: (1) local extrema detection in scale-space by constructing difference-of-Gaussian image pyramids; (2) accurate key point localization, including location, scale, and orientation; (3) orientation assignment for invariance to image rotation; and (4) key point descriptor generation as the local image feature. The advantage of SIFT features for representing key points is that they are affine invariant and robust to illumination changes. This method of feature extraction has been commonly used in the fields of image matching and registration.
(2) SIFT Descriptor for an Image Patch. Key points with discretized descriptors can be used as visual words in the field of image retrieval. A histogram of visual words can then be used by a classifier to map images to abstract visual classes. The most successful image representations are based on affine-invariant features derived from image patches. First, features are extracted on sampled patches of the image, either on a multiresolution grid, in a randomized manner, or using interest-point detectors. Each patch is then described by a feature vector, e.g., SIFT [30]. In this paper, we use SIFT feature vectors [31] to represent image patches, which can be used by a regressor to map patches to the displacements from each landmark.
The diagram of the SIFT feature descriptor of an image patch is illustrated in Figure 2. An image patch centered at location (x, y) is described by a square window of side length 2W + 1, where W is a parameter. For each square image patch P of side length 2W + 1 centered at position (x, y), the gradients P_x and P_y of the patch are computed using finite differences. The gradient magnitude g_m is computed by

g_m = \sqrt{P_x^2 + P_y^2},   (1)

and the gradient angle g_a (measured in radians, clockwise starting from the x-axis) is calculated by

g_a = \arctan(P_y / P_x).   (2)

Each square window is divided into 4 x 4 adjacent small windows, and an 8-bin histogram of gradients (direction angles from 0 to 2π) is extracted in each small window. For each image patch, the 4 x 4 histograms of gradients are concatenated into the resulting feature vector f (dimension 128). Finally, the feature f of image patch P is described by this SIFT descriptor. An example of SIFT-based patch feature extraction in a cephalogram is shown in Figure 3.
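The descriptor above can be sketched in a few lines of NumPy. This is a simplified illustration of the 4 x 4 cell, 8-bin layout of Equations (1) and (2); details the paper does not specify (smoothing, interpolation, normalization) are our assumptions:

```python
import numpy as np

def sift_patch_descriptor(patch):
    """128-dim SIFT-like descriptor of a square patch, following
    Eqs. (1)-(2): finite-difference gradients, then a 4 x 4 grid of
    cells with 8-bin orientation histograms weighted by magnitude."""
    patch = np.asarray(patch, dtype=float)
    py, px = np.gradient(patch)                # finite differences
    gm = np.hypot(px, py)                      # Eq. (1): gradient magnitude
    ga = np.arctan2(py, px) % (2 * np.pi)      # Eq. (2): angle in [0, 2*pi)
    n = patch.shape[0]
    cell = n // 4                              # side of one of the 4 x 4 cells
    hists = []
    for i in range(4):
        for j in range(4):
            m = gm[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            a = ga[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            h, _ = np.histogram(a, bins=8, range=(0, 2 * np.pi), weights=m)
            hists.append(h)
    f = np.concatenate(hists)                  # 16 cells x 8 bins = 128 dims
    norm = np.linalg.norm(f)
    return f / norm if norm > 0 else f         # normalization is assumed
```

Calling this on a (2W + 1) x (2W + 1) patch yields the 128-dimensional feature vector f used by the regressors below.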
2.1.2. Decision Tree Regression. The decision tree is a classical and efficient statistical learning algorithm and has been widely used to solve classification and regression problems [32, 33]. Decision trees predict responses to data: to predict a response, the new data are queried by the decisions from the root node down to a leaf node of the tree, and the leaf node contains the response. Classification trees give nominal responses, while regression trees give numeric responses. In this paper, CART (classification and regression trees) [34], a binary tree, is used as the regressor R_l to learn the mapping between the SIFT-based patch feature vectors f and the displacement vectors d = (dx, dy) from the centers of the patches to the position of each landmark in the training images. The regressors R_l are then used to predict the displacements d from the patch feature vectors of the test image. Finally, the predicted displacements are used to obtain the optimal location of each landmark via voting. One advantage of the regression approach is that it avoids balancing positive and negative examples as required when training classifiers. Another advantage of regression over classification is that good results can be obtained by evaluating randomly sampled pixels of the region of interest rather than every pixel.
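As a concrete sketch of this setup, the following uses scikit-learn's CART implementation to learn the mapping from 128-dimensional patch features to 2-D displacements. The data here are random placeholders, and mapping the paper's max_splits to scikit-learn's max_leaf_nodes is our assumption:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
F = rng.normal(size=(500, 128))   # SIFT-based patch features (placeholders)
D = rng.normal(size=(500, 2))     # displacements (dx, dy) to landmark l

# One CART regressor R_l per landmark; tree depth is capped via
# max_leaf_nodes, our stand-in for the paper's max_splits parameter
R_l = DecisionTreeRegressor(max_leaf_nodes=64, random_state=0)
R_l.fit(F, D)

f_new = rng.normal(size=(1, 128)) # feature vector of one test patch
d_new = R_l.predict(f_new)        # predicted displacement, shape (1, 2)
```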
(1) Training. In the training process, a regression tree R_l is constructed for each landmark l via splitting. Here, an optimization criterion and stopping rules are used to determine how the decision tree grows. The decision tree can be improved by pruning or by selecting appropriate parameters.
The optimization criterion is to choose the split that minimizes the mean-squared error (MSE) of the predictions compared with the ground truth in the training data. Splitting is the main process of creating decision trees. In general, four steps are performed to split node t. First, for each observation f_j (i.e., an extracted feature in the training data), the weighted MSE \varepsilon_t of the responses (displacements d) in node t is computed by

\varepsilon_t = \frac{\sum_{j \in T} (d_j - \bar{d}_t)^2}{N}, \quad j = 1, \dots, N,   (3)

where T is the set of all observation indices in node t, \bar{d}_t is the mean response in node t, and N is the sample size. Second, the probability of an observation being in node t is calculated by
Figure 1: Block diagram of the automatic cephalometric analysis system (cephalogram → landmark detection → parameter measurement → result analysis).
P(T) = \sum_{j \in T} w_j,   (4)

where w_j is the weight of observation j; in this paper, we set w_j = 1/N. Third, all elements of the observation are sorted in ascending order, and every element is regarded as a splitting candidate; T_U is the set of all observation indices corresponding to missing values. Finally, the best way to split node t using f_j is determined by maximizing the reduction in MSE, \Delta\varepsilon_t, among all splitting candidates. For all splitting
Figure 2: Diagram of the SIFT feature descriptor of an image patch (image, geometry descriptor, and bin indexes 0-15 when stacked).
Figure 3: SIFT-based patch feature extraction. (a) Original image showing a patch. (b) Geometry descriptor. (c) The extracted feature vector (magnitude versus index).
candidates in the observation f_j, the following steps are performed:

(1) Split the observations in node t into left and right child nodes (t_L and t_R, respectively).

(2) Compute the reduction in MSE, \Delta\varepsilon_t. For a particular splitting candidate, t_L and t_R contain the observation indices in the sets T_L and T_R, respectively. If f_j does not contain any missing values, then the reduction in MSE for the current splitting candidate is

\Delta\varepsilon_t = P(T)\varepsilon_t - P(T_L)\varepsilon_{t_L} - P(T_R)\varepsilon_{t_R}.   (5)

If f_j contains missing values, then the reduction in MSE is

\Delta\varepsilon_t' = P(T - T_U)\varepsilon_t - P(T_L)\varepsilon_{t_L} - P(T_R)\varepsilon_{t_R},   (6)

where T - T_U is the set of all observation indices in node t that are not missing.

(3) Choose the splitting candidate that yields the largest MSE reduction. In this way, the observations in node t are split at the candidate that maximizes the MSE reduction.
To stop splitting nodes of the decision tree, two rules can be followed: (1) a node is pure when the MSE for the observed response in that node is less than the MSE for the observed response in the entire data multiplied by the tolerance on quadratic error per node; and (2) the decision tree reaches the preset depth limit, for example, the maximum number of splitting nodes, max_splits.
Both the simplicity and the performance of a decision tree should be considered when tuning it. A deep tree usually achieves high accuracy on the training data but does not necessarily achieve high accuracy on test data; that is, a deep tree tends to overfit, with test accuracy much lower than training accuracy. On the contrary, a shallow tree does not achieve high training accuracy but can be more robust, with training accuracy close to that on test data. Moreover, a shallow tree is easy to interpret and saves prediction time. In addition, tree accuracy can be estimated by cross-validation when there are not enough data for separate training and testing.
There are two ways to improve the performance of decision trees by minimizing the cross-validated loss. One is to select the optimal parameter value to control the depth of the decision trees; the other is postpruning after creating them. In this paper, we use the parameter max_splits to control the depth of the resulting decision trees. Setting a large value for max_splits leads to a deep tree, while setting a small value yields a shallow tree with larger leaves. To select an appropriate value for max_splits, the following steps are performed: (1) set a spaced set of values from 1 to the total sample size for max_splits per tree; (2) create cross-validated regression trees for the data using these values of max_splits and calculate the cross-validated errors; and (3) take the value of max_splits that minimizes the cross-validated errors.
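The parameter-selection steps above can be sketched with scikit-learn's cross-validation utilities. The data are synthetic, and max_leaf_nodes stands in for max_splits (an assumed correspondence; for a binary tree, leaves = splits + 1):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
F = rng.normal(size=(300, 128))                      # synthetic features
D = 2.0 * F[:, 0] + rng.normal(scale=0.1, size=300)  # synthetic responses

candidates = [2, 4, 8, 16, 32, 64]   # step (1): spaced set of depth values
errors = []
for m in candidates:
    tree = DecisionTreeRegressor(max_leaf_nodes=m, random_state=0)
    # step (2): cross_val_score returns negative MSE; flip the sign
    mse = -cross_val_score(tree, F, D, cv=5,
                           scoring="neg_mean_squared_error").mean()
    errors.append(mse)

# step (3): keep the depth value that minimizes the cross-validated error
best = candidates[int(np.argmin(errors))]
```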
(2) Prediction. In the prediction process, responses for new data can easily be predicted after a regression tree R_l has been created. Suppose f_new is the new data (i.e., a feature vector extracted from a new patch P_new in the test cephalogram). According to the rules of the regression tree, the nodes test specific attributes of the new observation f_new, and the traversal reaches a leaf step by step, which stores the mean displacement d_new. In this way, we predict the displacement from the center of patch P_new to the landmark l from the SIFT feature vector by the regressor R_l.
2.1.3. MDTRV Using SIFT-Based Patch Features
(1) Decision Tree Regression Voting (DTRV). As illustrated in Figure 4, the algorithm of automatic cephalometric landmark detection using decision tree regression voting (DTRV) at a single scale consists of training and testing processes, as in any supervised learning algorithm. The training process begins with feature extraction for patches sampled from the training images. Then, a regression tree is constructed for each landmark with the feature vectors and displacements (f, d) as input; here, f represents the observations and d the targets of this regression problem. The testing process begins with the same feature-extraction step. Then, the resulting f_new is used to predict the displacement d_new by the regressor R_l. In the end, the optimal landmark position is obtained via voting. Voting styles include single unit voting and single weighted voting. As mentioned in [35], these two styles perform equally well for voting optimal landmark positions in medical images. In this paper, we use single unit voting.
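A minimal sketch of the single-unit-voting step: each test patch casts one vote at the landmark position implied by its predicted displacement, and the accumulator maximum is taken as the detected location. Function and variable names are illustrative, not the paper's API:

```python
import numpy as np

def vote_landmark(centers, displacements, shape):
    """centers: (K, 2) patch centers (x, y); displacements: (K, 2)
    predicted (dx, dy); shape: (H, W) of the image. Returns the
    accumulator peak as the estimated landmark position."""
    H, W = shape
    acc = np.zeros((H, W))
    votes = np.round(centers + displacements).astype(int)
    for x, y in votes:
        if 0 <= x < W and 0 <= y < H:
            acc[y, x] += 1                   # single unit voting
    y_best, x_best = np.unravel_index(np.argmax(acc), acc.shape)
    return int(x_best), int(y_best)

# toy check: all three patches agree the landmark is at (50, 40)
centers = np.array([[10., 10.], [60., 70.], [30., 20.]])
disps = np.array([[40., 30.], [-10., -30.], [20., 20.]])
print(vote_landmark(centers, disps, (100, 100)))   # -> (50, 40)
```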
(2) MDTRV Using SIFT-Based Patch Features. Finally, a new framework of MDTRV using SIFT-based patch features is proposed, using a simple and efficient strategy for accurate landmark detection in lateral cephalograms. There are four stages in the proposed MDTRV algorithm, which are performed iteratively at the scales of 0.125, 0.25, 0.5, and 1. At the scale of 0.125, K training patches with K displacements are randomly sampled in the whole cephalogram for a specific landmark l (l = 1, ..., L). The decision tree regressor R_l^1 is created using these training samples. For prediction, SIFT features are extracted for K' testing patches sampled in the whole cephalogram, and K' displacements are predicted by regressor R_l^1 using the extracted features. The optimal position of the specific landmark l is obtained via voting over all predicted displacements. At the scales of 0.25, 0.5, and 1, only the patch-sampling rule differs from the procedure at the scale of 0.125; that is, the training and testing patches are randomly sampled in the (2S + 1) x (2S + 1) neighborhood of the true and initial landmark positions, respectively. The estimated landmark position is used as the initial landmark position at the next scale of the testing process. The optimal position of the specific landmark l at the scale of 1 is refined by using a Hough forest [36], which is regarded as the
final resulting location. This multiresolution coarse-to-fine strategy greatly improves the accuracy of cephalometric landmark detection.
2.2. Parameter Measurement. Measurements are angular or linear parameters calculated using cephalometric landmarks and planes (refer to Tables 1 and 2). According to their geometrical structure, the measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected onto a plane. All measurements can be calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among the three points A, B, and C; then the angle of these three points is calculated by

\angle ABC = \arccos\left(\frac{a^2 + c^2 - b^2}{2ac}\right) \times \frac{180}{\pi},   (7)

with

a = \|A - B\|, \quad b = \|A - C\|, \quad c = \|B - C\|,   (8)

where \|A - B\| = \sqrt{(x_A - x_B)^2 + (y_A - y_B)^2} is the Euclidean distance between A and B.
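Equations (7) and (8) translate directly into code; a small sketch:

```python
import math

def angle_three_points(A, B, C):
    """Angle at vertex B among points A, B, C (Eqs. (7)-(8)),
    in degrees, via the law of cosines."""
    a = math.dist(A, B)
    b = math.dist(A, C)
    c = math.dist(B, C)
    return math.degrees(math.acos((a**2 + c**2 - b**2) / (2 * a * c)))

# perpendicular segments meet at a right angle (about 90 degrees)
print(angle_three_points((1, 0), (0, 0), (0, 1)))
```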
2.2.2. The Angle between Two Planes. As the cephalometric radiographs in this study are 2D images, the planes are projected as straight lines; thus each plane is determined by two points. The angle between two planes AB and CD is calculated by

\angle AB = \arctan\left(\frac{y_A - y_B}{x_A - x_B}\right) \times \frac{180}{\pi}, \quad \angle CD = \arctan\left(\frac{y_C - y_D}{x_C - x_D}\right) \times \frac{180}{\pi},   (9)

\angle(AB, CD) = |\angle AB - \angle CD|.   (10)
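A sketch of Equations (9) and (10); note that the arctan form in Equation (9) would need a guard for vertical lines, which is omitted here for brevity:

```python
import math

def plane_angle(P, Q):
    """Inclination of the line through P and Q (Eq. (9)), in degrees.
    Assumes the line is not vertical (x_P != x_Q)."""
    return math.degrees(math.atan((P[1] - Q[1]) / (P[0] - Q[0])))

def angle_between(A, B, C, D):
    """Angle between planes AB and CD (Eq. (10))."""
    return abs(plane_angle(A, B) - plane_angle(C, D))

# diagonal vs. horizontal line: about 45 degrees
print(angle_between((0, 0), (1, 1), (0, 0), (1, 0)))
```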
2.2.3. The Distance between Two Points. The distance between two points is calculated using

|AB| = \|A - B\| \times \text{ratio},   (11)

where ratio is a scale factor, in mm/pixel, calibrated from the gauge in the lateral cephalograms.
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as

y + k_1 x + k_0 = 0.   (12)

Thus, the distance of point C to the plane AB is calculated by

|C \cdot AB| = \frac{|y_C + k_1 x_C + k_0|}{\sqrt{1 + k_1^2}} \times \text{ratio}.   (13)
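A sketch of Equations (12) and (13), with the line coefficients k_1 and k_0 fitted from the two points defining the plane (the paper does not state how k_1 and k_0 are obtained, so that fitting step is our assumption):

```python
import math

def point_to_plane(C, A, B, ratio=1.0):
    """Distance from point C to the plane (line) through A and B,
    written as y + k1*x + k0 = 0 (Eqs. (12)-(13)); ratio is mm/pixel."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])   # slope term of y + k1*x + k0 = 0
    k0 = -A[1] - k1 * A[0]                # offset so that A lies on the line
    return abs(C[1] + k1 * C[0] + k0) / math.sqrt(1 + k1**2) * ratio

# distance from (0, 1) to the x-axis (line through (0,0) and (1,0))
print(point_to_plane((0, 1), (0, 0), (1, 0)))  # -> 1.0
```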
2.2.5. The Distance between Two Points Projected onto a Plane. The calculation of the distance between two points projected onto a plane is illustrated in Figure 5. First, we use Equation (12) to determine the plane AB. Then, we calculate the distances from the two points C and D to the plane AB as c_1 and c_2 by Equation (13). Third, the distance c_0 between the two points C and D is calculated by Equation (11). Finally, the distance |C \cdot AB \cdot D| between the two points C and D projected onto the plane AB, denoted c, is calculated by

c = \sqrt{c_0^2 - (c_1 - c_2)^2} \times \text{ratio}.   (14)
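Combining the pieces, Equation (14) can be sketched as follows; ratio is fixed to 1 (pixel units) so the calibration factor is not applied twice, and the helpers are redefined so the sketch is self-contained:

```python
import math

def point_to_line(C, A, B):
    """Distance from C to the line y + k1*x + k0 = 0 through A and B."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])
    k0 = -A[1] - k1 * A[0]
    return abs(C[1] + k1 * C[0] + k0) / math.sqrt(1 + k1**2)

def projected_distance(C, D, A, B):
    """Distance between C and D projected onto plane AB (Eq. (14))."""
    c0 = math.dist(C, D)            # distance between C and D, Eq. (11)
    c1 = point_to_line(C, A, B)     # C to plane AB, Eq. (13)
    c2 = point_to_line(D, A, B)     # D to plane AB, Eq. (13)
    return math.sqrt(c0**2 - (c1 - c2)**2)

# C and D project onto the x-axis 3 pixels apart (result is about 3.0)
print(projected_distance((0, 2), (3, 5), (0, 0), (1, 0)))
```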
3. Experimental Evaluation
3.1. Data Description
3.1.1. Database of 2015 ISBI Challenge. The benchmark database (database1) included 300 cephalometric radiographs (150 in TrainingData and 150 in Test1Data) and is described in Wang et al. [28]. All cephalograms were collected from 300 patients aged 6 to 60 years. The
Figure 4: Algorithm of decision tree regression voting. Training process: feature extraction (f) from training patches with the ground truth of landmarks, followed by regression tree construction from (f, d). Testing process: feature extraction (f_new) from testing patches P(x, y), prediction of d_new by R_l, and voting for the landmark positions.
image resolution was 1935 x 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustrations in Figure 6); the ground truth was the average of the annotations by both doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. Database2 included 165 cephalometric radiographs, as summarized in Table 3 (IRB approval number PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical-orthodontic treatment at Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions varied, ranging from 758 x 925 to 2690 x 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for further cephalometric analysis.
3.1.3. Experimental Settings. At the initial scale of landmark detection, K = 50 patches are sampled for training and K' = 400 for prediction. At the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions differ, preprocessing is required to rescale all images in database2 to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross-validation.
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with its associated standard deviation. The radial error E, i.e., the distance between the predicted position and the true position of each landmark, is defined as

E_i = \|A_i - B_i\| \times \text{ratio}, \quad A_i \in A, \; B_i \in B, \; 1 \le i \le M,   (15)

where A and B represent the estimated and true positions of each landmark over all cephalograms in the dataset, A_i and B_i are the corresponding positions in sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as

\mathrm{MRE} = \mathrm{mean}(E) = \frac{\sum_{i=1}^{M} E_i}{M}, \quad \mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{M} (E_i - \mathrm{MRE})^2}{M - 1}}.   (16)
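Equations (15) and (16) in code, for one landmark over M cephalograms (inputs are illustrative):

```python
import numpy as np

def mre_sd(pred, true, ratio=1.0):
    """MRE and SD of radial errors (Eqs. (15)-(16)); pred and true are
    (M, 2) arrays of positions in pixels, ratio is mm/pixel."""
    E = np.linalg.norm(pred - true, axis=1) * ratio   # Eq. (15)
    mre = E.mean()                                    # Eq. (16), MRE
    sd = np.sqrt(((E - mre) ** 2).sum() / (len(E) - 1))  # Eq. (16), SD
    return float(mre), float(sd)

pred = np.array([[0., 0.], [3., 4.]])
true = np.array([[0., 0.], [0., 0.]])
# radial errors are 0 and 5, so MRE = 2.5 and SD = sqrt(12.5)
print(mre_sd(pred, true))
```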
The second evaluation criterion is the success detection rate with respect to the 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm precision ranges. If E_i is less than a precision range, the detection of the landmark is considered successful within that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) p_z with precision less than z is defined as

p_z = \frac{\#\{i : E_i < z\}}{M} \times 100\%, \quad 1 \le i \le M,   (17)

where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
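Equation (17) in code, evaluated at the four precision ranges:

```python
import numpy as np

def sdr(errors_mm, z):
    """Success detection rate (Eq. (17)): percentage of radial errors
    below the precision range z (in mm)."""
    errors_mm = np.asarray(errors_mm)
    return float((errors_mm < z).mean() * 100.0)

errors = [0.5, 1.9, 2.4, 3.5]   # illustrative radial errors in mm
print([sdr(errors, z) for z in (2.0, 2.5, 3.0, 4.0)])
# -> [50.0, 75.0, 75.0, 100.0]
```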
3.2.2. Experimental Results
(1) Results on database1. Experimental results of cephalometric landmark detection on database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm; the other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD over the 19 landmarks are 1.69 mm and 1.43 mm, respectively, which shows that the proposed method detects cephalometric landmarks accurately. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14), between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17), between 70% and 80% for two landmarks (L6 and L18), and less than 70% for the other seven landmarks, with the SDR of landmark L10 being the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.
A comparison of our method with three state-of-the-art methods on Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the image resolution.
Furthermore, we compare with the top two methods of the 2015 ISBI Challenge on cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, but the SD of our method is lower. The results show that
Figure 5: The distance calculation between two points projected onto a plane (points C and D, plane AB, distances c_1, c_2, c_0, and projected distance c).
our method is accurate for landmark detection in lateralcephalograms and is more robust
(2) Results on database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground truth locations, which shows the success of our proposed algorithm for landmark detection.
For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method for detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs over the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1, which indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are usually used. The description of these eight measurements and the methods for classification are given in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground truth type. The SCR is defined as the average of the diagonal of the confusion matrix.
To evaluate the performance of the proposed system for the additional measurement analysis on database2, another criterion, the mean absolute error (MAE), is calculated by

\mathrm{MAE} = \frac{\sum_{i=1}^{M} |\hat{V}_i - V_i|}{M},   (18)

where \hat{V}_i is the value of an angular or linear measurement estimated by our system and V_i is the ground truth measurement obtained from the human-annotated landmarks.
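Equation (18) in code, with illustrative angular measurements:

```python
import numpy as np

def mae(v_hat, v):
    """Mean absolute error (Eq. (18)) between estimated measurements
    v_hat and ground-truth measurements v."""
    return float(np.abs(np.asarray(v_hat) - np.asarray(v)).mean())

# absolute differences are 1.0, 0.5, 0.5 degrees, so MAE = 2/3
print(mae([80.0, 82.5, 79.0], [81.0, 82.0, 79.5]))
```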
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1 S, L2 N, L3 Or, L4 P, L5 A, L6 B, L7 Pg, L8 Me, L9 Gn, L10 Go, L11 LI, L12 UI, L13 UL, L14 LL, L15 Sn, L16 Pos, L17 PNS, L18 ANS, L19 Ar.
Table 3: The description of database2.

                           Female   Male
Class I                      29      26
Class III, pretreatment      32      23
Class III, posttreatment     32      23

Total number of cephalograms: 165
Total number of patients: 110
3.3.2. Experimental Results
(1) Results on database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method with the best two methods is given in Table 10, where we achieve the best classification results for the two measurements APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than the method in [26] and comparable to the RFRV-CLM method [24].
(2) Results on database2. From the detected 45 landmarks, 27 measurements are automatically calculated by our system on database2, including 17 angular and 10 linear measurements, which are listed in Table 11.
The MAE and MAE* of the 27 measurements are given in Table 12. Here, the MAE represents the measurement-analysis performance of our automatic system, and MAE* represents the interobserver variability calculated between the ground truth values obtained by the two experts. For the angular measurements, the difference between MAE and MAE* is within 0.5° for 9 of the 17 measurements, the difference between
Figure 7: Cephalogram annotation example showing landmarks in database2. (a) Cephalogram annotation example. (b) Description of the 45 cephalometric landmarks: L1 N, L2 Ns, L3 Prn, L4 Cm, L5 ANS, L6 A, L7 Sn, L8 Ss, L9 UL, L10 Stoms, L11 Stomi, L12 LL, L13 Si, L14 Pos, L15 Gs, L16 Mes, L17 Go, L18 Me, L19 Gn, L20 Pg, L21 LIA, L22 B, L23 Id, L24 LI, L25 UI, L26 FA, L27 SPr, L28 UIA, L29 Or, L30 Se, L31 L6A, L32 UL5, L33 L6E, L34 UL6, L35 U6E, L36 U6A, L37 PNS, L38 Ptm, L39 S, L40 Co, L41 Ar, L42 Ba, L43 Bolton, L44 P, L45 ULI.
Table 4: The experimental results of 19-landmark detection on Test1Data.

Landmark   MRE (mm)   SD (mm)   SDR 2.0 mm   SDR 2.5 mm   SDR 3.0 mm   SDR 4.0 mm
L1         1.31       1.26      86.67%       92.67%       94.67%       97.33%
L2         1.92       2.05      68.00%       72.00%       79.33%       88.67%
L3         1.81       1.74      66.67%       81.33%       88.00%       95.33%
L4         3.11       2.59      50.00%       54.00%       59.33%       67.33%
L5         2.24       1.56      56.00%       66.00%       75.33%       86.67%
L6         1.59       1.39      72.00%       78.67%       84.00%       94.00%
L7         1.23       0.87      80.67%       91.33%       96.67%       99.33%
L8         1.08       0.88      88.67%       95.33%       96.67%       98.67%
L9         0.87       0.75      91.33%       93.33%       98.00%       99.33%
L10        3.98       2.33      23.33%       29.33%       38.00%       53.33%
L11        0.97       0.89      87.33%       92.00%       96.00%       98.67%
L12        0.90       1.70      94.67%       94.67%       96.00%       96.67%
L13        1.02       0.71      92.00%       95.33%       98.00%       100.00%
L14        0.89       0.61      93.33%       98.67%       100.00%      100.00%
L15        1.18       1.16      85.33%       92.00%       93.33%       98.00%
L16        2.14       1.48      50.00%       60.67%       72.67%       90.67%
L17        1.16       0.85      86.00%       92.67%       94.67%       98.67%
L18        1.77       1.51      72.00%       79.33%       86.00%       92.67%
L19        2.97       2.77      50.00%       54.00%       58.00%       67.33%
Average    1.69       1.43      73.37%       79.65%       84.46%       90.67%
Table 5: Comparison of our method with three methods in terms of MRE on Test1Data.

Method         [24]    [26]    [29]    Ours
MRE (pixels)   16.74   18.46   17.79   16.92
Table 6: Comparison of our method with two methods in terms of MRE, SD, and SDR using Test1Data.

Method   MRE (mm)   SD (mm)   SDR (%)
                              2.0 mm   2.5 mm   3.0 mm   4.0 mm
[24]     1.67       1.65      73.68    80.21    85.19    91.47
[26]     1.84       1.76      71.72    77.40    81.93    88.04
Ours     1.69       1.43      73.37    79.65    84.46    90.67
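The MRE, SD, and SDR figures reported in Tables 4-7 can all be reproduced from per-image radial errors. The following Python sketch (the function and variable names are ours, not from the paper) computes the three metrics for one landmark, assuming predicted and ground-truth coordinates in millimetres:

```python
import numpy as np

def landmark_metrics(pred, true, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """Mean radial error (MRE), standard deviation (SD), and successful
    detection rates (SDR) for one landmark over a set of images.

    pred, true: (n_images, 2) arrays of (x, y) positions in millimetres.
    """
    errors = np.linalg.norm(pred - true, axis=1)      # radial error per image
    mre = errors.mean()                               # MRE in mm
    sd = errors.std(ddof=1)                           # SD in mm
    sdr = {t: 100.0 * np.mean(errors < t) for t in thresholds}  # SDR in %
    return mre, sd, sdr

# Toy example with three hypothetical detections
pred = np.array([[10.0, 12.0], [30.5, 40.0], [55.0, 61.0]])
true = np.array([[10.5, 12.5], [31.0, 42.0], [55.0, 60.0]])
mre, sd, sdr = landmark_metrics(pred, true)
```

The per-table averages are then obtained by repeating this over all landmarks.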
10 Journal of Healthcare Engineering
MAE and MAE* is within 1° for 12/17 measurements, and the difference between MAE and MAE* is within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements. The remaining measurement has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed. Table 13 shows that the MRE and SD between annotations by two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between annotations of two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. The satisfactory results of measurement analysis presented in the experiments build on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value for three main reasons: (i) some landmarks are located at overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormal cases, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the accuracy of manual marking. There are several limitations in this study. On one hand, apart from the algorithm itself, the data-related factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand, the
Figure 8: Examples of automatic cephalometric landmark detection, with true and predicted locations overlaid. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

No. of 5-fold   MRE (mm)   SD (mm)   SDR (%)
                                     2.0 mm   2.5 mm   3.0 mm   4.0 mm
1               1.72       1.35      72.49    81.32    87.04    93.37
2               1.72       1.38      71.38    79.94    85.63    92.73
3               1.66       1.32      74.31    82.06    87.44    93.37
4               1.77       1.36      69.67    78.96    85.32    92.93
5               1.68       1.54      72.53    80.88    86.87    92.96
Average         1.71       1.39      72.08    80.63    86.46    93.07
performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we design a new framework of landmark detection in lateral cephalograms with low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information at different resolutions. In a lower resolution, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. In a higher resolution, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average successful detection rate of 72% within a precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, which is more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.  Measurement  Description in mathematics                          Description in words
1    ANB          ∠L5L2L6                                             The angle between landmarks 5, 2, and 6
2    SNB          ∠L1L2L6                                             The angle between landmarks 1, 2, and 6
3    SNA          ∠L1L2L5                                             The angle between landmarks 1, 2, and 5
4    ODI          ∠(L5L6, L8L10) + ∠(L17L18, L4L3)                    The arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3)
5    APDI         ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18)     The arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18)
6    FHI          |L1L10| / |L2L8|                                    The ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8)
7    FHA          ∠(L1L2, L10L9)                                      The angle between the SN plane (L1L2) and the mandibular plane (L10L9)
8    MW           |L12L11| if x(L12) < x(L11); −|L12L11| otherwise    When the x ordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.  Measurement  Type 1                         Type 2                              Type 3                              Type 4
1    ANB          3.2°-5.7°: Class I (normal)    >5.7°: Class II                     <3.2°: Class III                    —
2    SNB          74.6°-78.7°: normal mandible   <74.6°: retrognathic mandible       >78.7°: prognathic mandible         —
3    SNA          79.4°-83.2°: normal maxilla    >83.2°: prognathic maxilla          <79.4°: retrognathic maxilla        —
4    ODI          Normal: 74.5° ± 6.07°          >80.5°: deep bite tendency          <68.4°: open bite tendency          —
5    APDI         Normal: 81.4° ± 3.8°           <77.6°: Class II tendency           >85.2°: Class III tendency          —
6    FHI          Normal: 0.65-0.75              >0.75: short face tendency          <0.65: long face tendency           —
7    FHA          Normal: 26.8°-31.4°            >31.4°: mandible high angle tendency  <26.8°: mandible low angle tendency  —
8    MW           Normal: 2-4.5 mm               MW = 0 mm: edge to edge             MW < 0 mm: anterior cross bite      MW > 4.5 mm: large overjet
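As an illustration of how the thresholds in Table 9 drive classification, the sketch below encodes the ANB rule in Python (the degree values come from the table; the function itself is an illustrative reimplementation, not the authors' code):

```python
def classify_anb(anb_deg):
    """Skeletal classification from the ANB angle following Table 9:
    3.2-5.7 deg -> Class I (normal), >5.7 -> Class II, <3.2 -> Class III."""
    if anb_deg > 5.7:
        return "Class II"
    if anb_deg < 3.2:
        return "Class III"
    return "Class I (normal)"
```

The other seven measurements can be encoded analogously; the SCR in Table 10 is then the fraction of subjects whose classification from the detected landmarks matches that from the ground-truth landmarks.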
Table 10: Comparison of our method with two methods in terms of SCR using Test1Data.

Method   Success classification rates, SCR (%)
         ANB     SNB     SNA     ODI     APDI    FHI     FHA     MW      Average
[24]     64.99   84.52   68.45   84.64   82.14   67.92   75.54   82.19   76.30
[26]     59.42   71.09   59.00   78.04   80.16   58.97   77.03   83.94   70.96
Ours     58.61   78.85   59.86   76.59   83.49   82.44   77.18   83.20   75.03
Table 11: Description of cephalometric measurements in database2.

Name                 Description in mathematics   Description in words
The angles of three points (unit: °)
SNA                  ∠L39L1L6                     The angle of landmarks L39, L1, L6
SNB                  ∠L39L1L22                    The angle of landmarks L39, L1, L22
ANB                  ∠L6L1L22                     The angle of landmarks L6, L1, L22
SNPg                 ∠L39L20L1                    The angle of landmarks L39, L20, L1
NAPg                 ∠L1L6L20                     The angle of landmarks L1, L6, L20
NSGn                 ∠L1L39L19                    The angle of landmarks L1, L39, L19
Ns-Prn-Pos           ∠L2L3L14                     The angle of landmarks L2, L3, L14
Cm-Sn-UL             ∠L4L7L9                      The angle of landmarks L4, L7, L9
LL-B′-Pos            ∠L12L13L14                   The angle of landmarks L12, L13, L14
Angle of the jaw     ∠L41L17L18                   The angle of landmarks L41, L17, L18
Angle of convexity   ∠L2L7L14                     The angle of landmarks L2, L7, L14
The angles between two planes (unit: °)
FH-SN                ∠(L44L29, L39L1)             The angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN                ∠(L28L25, L39L1)             The angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH                ∠(L24L21, L44L29)            The angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI                ∠(L28L25, L24L21)            The angle between the upper and lower incisors (L28L25, L24L21)
H angle              ∠(L9L14, L8L14)              The angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle              ∠(L44L29, L9L14)             The rear lower angle between the FH plane (L44L29) and the H plane (L9L14)
The distances between two points (unit: mm)
N-Me                 |L1L18|                      The forward height: the distance between landmarks L1 and L18
N-ANS                |L1L5|                       The up-forward height: the distance between landmarks L1 and L5
ANS-Me               |L5L18|                      The down-forward height: the distance between landmarks L5 and L18
Stoms-UI             |L10L25|                     The vertical distance between landmarks L10 and L25
The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP                |L25, L6L20|                 The distance from landmark L25 to the AP plane (L6L20)
LI-AP                |L24, L6L20|                 The distance from landmark L24 to the AP plane (L6L20)
UL-EP                |L9, L3L14|                  The distance from landmark L9 to the EP plane (L3L14)
Table 12: Results of our method for the 27 measurements in terms of MAE and MAE* using database2.

Name                  MAE     MAE*    MAE − MAE*
The angles of three points (unit: °)
SNA                   1.95    1.70    0.24
SNB                   1.57    1.17    0.39
ANB                   1.29    1.08    0.21
SNPg                  1.59    1.20    0.39
NAPg                  2.83    2.44    0.39
NSGn                  1.42    0.96    0.47
Ns-Prn-Pos            1.68    1.10    0.58
Cm-Sn-UL              5.52    3.80    1.72
LL-B′-Pos             3.95    3.44    0.51
Angle of the jaw      3.20    1.66    1.54
Angle of convexity    2.67    1.86    0.81
The angles between two planes (unit: °)
FH-SN                 2.00    1.96    0.04
UI-SN                 4.71    2.81    1.90
LI-FH                 3.34    2.27    1.07
UI-LI                 6.90    4.66    2.24
H angle               0.94    0.80    0.14
Z angle               1.69    1.86    −0.17
The distances between two points (unit: mm)
N-Me                  1.37    1.10    0.27
N-ANS                 1.57    1.34    0.24
ANS-Me                1.06    0.96    0.10
Stoms-UI              0.75    0.42    0.33
The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP                 0.96    0.71    0.25
LI-AP                 0.96    0.76    0.20
UL-EP                 0.50    0.36    0.14
LL-EP                 0.45    0.39    0.05
MaxE                  2.06    1.79    0.27
The distances between two points projected to a plane (unit: mm)
Wits                  4.70    3.53    1.16
MAE*: interobserver variability.
Table 11: Continued.

Name     Description in mathematics   Description in words
LL-EP    |L12, L3L14|                 The distance from landmark L12 to the EP plane (L3L14)
MaxE     |L6, L5L37|                  Maxillary length: the distance from landmark L6 to the palatal plane (L5L37)
The distances between two points projected to a plane (unit: mm)
Wits     |L6L22| projected onto L32L34   The distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34)
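All entries in Table 11 reduce to three geometric primitives on 2-D landmark coordinates: the angle at a vertex given three points, the angle between two lines (planes in the lateral projection), and the perpendicular distance from a point to a line. A minimal Python sketch (helper names and coordinates are ours, for illustration only):

```python
import numpy as np

def angle_three_points(a, b, c):
    """Angle at vertex b formed by points a-b-c, in degrees (e.g., SNA)."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def angle_two_lines(p1, p2, q1, q2):
    """Acute angle between line p1-p2 and line q1-q2, in degrees (e.g., FH-SN)."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0))))

def point_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b
    (e.g., UI-AP as the distance from L25 to the AP plane L6-L20)."""
    u = np.asarray(b, float) - np.asarray(a, float)
    w = np.asarray(p, float) - np.asarray(a, float)
    return float(abs(u[0] * w[1] - u[1] * w[0]) / np.linalg.norm(u))
```

Note that some clinical angles (e.g., UI-SN) are reported as directed angles that can exceed 90°; the acute-angle helper above would need the appropriate sign convention in practice.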
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            Interobserver variability
            MRE (mm)   SD (mm)
database1   1.38       1.55
database2   1.26       1.27
algorithm is confirmed using this clinical dataset. In addition, automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will make further efforts to improve the performance of automatic analysis in lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at the Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for the supply of image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcaniz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of the IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcaniz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), Lecture Notes in Computer Science, R. Larsen, Ed., pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub image matching," in Proceedings of the IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of the 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of the 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
to regularize the search over all landmarks in testing data. However, it was difficult to initialize landmark positions for searching, because the initialization was always achieved by traditional methods.
Recently, significant progress has been made in automatic landmark detection in cephalograms by using supervised machine-learning approaches. These machine-learning approaches can be further separated into two classes: classification and regression models. A support vector machine (SVM) classifier was used to predict the locations of landmarks in cephalometric radiographs, while the projected principal-edge distribution was proposed to describe edges as the feature vector [18]. El-Feghi et al. [19] proposed a coarse-to-fine landmark detection algorithm, which first used a fuzzy neural network to predict the locations of landmarks and then refined them by template matching. Leonardi et al. [20] employed the cellular neural network approach for automatic cephalometric landmark detection on softcopies of direct digital cephalometric X-rays, which was tested on 41 cephalograms and detected 10 landmarks. Favaedi et al. [21] proposed a probability relaxation method based on shape features for cephalometric landmark detection. Farshbaf and Pouyan [22] proposed a coarse-to-fine SVM classifier to predict the locations of landmarks in cephalograms, which used histograms of oriented gradients for coarse detection and histograms of gray profiles for refinement. Classifier-based methods were successfully used to identify landmarks in whole cephalograms, but positive and negative samples were difficult to balance in the training data, and the computational complexity was increased due to pixel-based searching.
Many algorithms have been reported for automatic cephalometric landmark detection, but results were difficult to compare due to the different databases and landmarks. The situation has improved since two challenges were held at the 2014 and 2015 IEEE International Symposium on Biomedical Imaging (ISBI). During the challenges, regressor-based methods were first introduced to automatic landmark detection in lateral cephalograms. The challenges aimed at automatic cephalometric landmark detection and at using landmark positions to measure the cephalometric linear and angular parameters for automatic assessment of anatomical abnormalities to assist clinical diagnosis. Benchmarks have been achieved by using random forest (RF) [23] regression voting based on shape model matching at the challenges. In particular, Lindner et al. [24, 25] presented the algorithm of random forest regression voting (RFRV) in the constrained local model (CLM) framework for automatic landmark detection, which obtained a mean error of 1.67 mm and a successful detection rate of 73.68% within a precision range of 2 mm. The RFRV-CLM algorithm won the challenges, and the RF-based classification algorithm combined with Haar-like appearance features and game theory [26] was ranked second. A comprehensive performance analysis of the challenge algorithms was reported in [27, 28]. Later, Vandaele et al. [29] proposed an ensemble tree-based method using multiresolution features in bioimages, and the method was tested on three databases, including a cephalogram database of the 2015 ISBI challenge. Now, a public database and evaluation are available to improve the performance of automatic analysis systems for cephalometric landmark detection and parameter measurement, and many research outcomes have been achieved. However, automatic cephalometric landmark detection and parameter measurement for clinical practice is still challenging.
Recently, efforts have been made to develop automatic cephalometric analysis systems for clinical use. In this paper, we present a new automatic cephalometric analysis system, including landmark detection and parameter measurement in lateral cephalograms; the block diagram of our system is shown in Figure 1. The core of this system is automatic landmark detection, which is realized by a new method based on multiresolution decision tree regression voting. Our main contributions can be summarized in four aspects: (1) we propose a new landmark detection framework of multiscale decision tree regression voting; (2) the SIFT-based patch feature is employed for the first time to extract the local features of cephalometric landmarks, and the proposed approach is flexible when extending to the detection of more landmarks because feature selection and shape constraints are not used; (3) two clinical databases are used to evaluate the extensibility of the proposed method, and experimental results show that our method achieves robust detection when extending from 19 landmarks to 45 landmarks; (4) automatic measurement of clinical parameters is implemented in our system based on the detected landmarks, which can facilitate clinical diagnosis and research.
The rest of this paper is organized as follows. Section 2 describes our proposed method of automatic landmark detection based on multiscale decision tree regression voting and parameter measurement. Experimental results on the 2015 ISBI Challenge cephalometric benchmark database and our own database are presented in Section 3. The discussion of the proposed system is given in Section 4. Finally, we
Table 2: Description of cephalometric planes.

Name                                     Involving landmarks   Description in database1   Description in database2
Reference planes
SN: anterior cranial base plane          S-N                   L1L2                       L39L1
FH: Frankfort horizontal plane           P-O                   L4L3                       L44L29
Bolton plane                             Bolton-N              —                          L43L1
Measurement planes
Palatal plane                            ANS-PNS               L18L17                     L5L37
Cranial base plane                       Ba-N                  —                          L42L1
MP: mandibular plane                     Go-Me                 L10L8                      L17L18
RP: ramal plane                          Ar-Go                 L19L10                     L41L17
NP: facial plane                         N-Pg                  L2L7                       L1L20
NA plane                                 N-A                   L2L5                       L1L6
AB: subspinale to infradentale plane     A-B                   L5L6                       L6L22
AP plane                                 A-Pg                  L5L7                       L6L20
Soft tissue measurement planes
Facial plane of soft tissue              Ns-Pos                —                          L2L14
Ricketts esthetic plane                  Prn-Pos               —                          L3L14
H plane                                  UL-Pos                L13L16                     L9L14
conclude this study with an outlook on future work in Section 5.
2. Method
2.1. Landmark Detection. It is well known that cephalometric landmark detection is the most important step in cephalometric analysis. In this paper, we propose a new framework of automatic cephalometric landmark detection, which is based on multiresolution decision tree regression voting (MDTRV) using patch features.
2.1.1. SIFT-Based Patch Feature Extraction
(1) SIFT Feature. The scale-invariant feature transform (SIFT) was first proposed for key point detection by Lowe [30]. The basic idea of the SIFT feature extraction algorithm consists of four steps: (1) local extrema detection in scale-space by constructing difference-of-Gaussian image pyramids; (2) accurate key point localization, including location, scale, and orientation; (3) orientation assignment for invariance to image rotation; and (4) computation of the key point descriptor as the local image feature. The advantage of SIFT features for representing key points is that they are affine invariant and robust to illumination changes in images. This method of feature extraction has been commonly used in the fields of image matching and registration.
(2) SIFT Descriptor for an Image Patch. Key points with discretized descriptors can be used as visual words in the field of image retrieval. A histogram of visual words can then be used by a classifier to map images to abstract visual classes. The most successful image representations are based on affine-invariant features derived from image patches. First, features are extracted on sampled patches of the image, either using a multiresolution grid, in a randomized manner, or using interest point detectors. Each patch is then described by a feature vector, e.g., SIFT [30]. In this paper, we use SIFT feature vectors [31] to represent image patches, which can be used by a regressor to map patches to the displacements from each landmark.
The diagram of the SIFT feature descriptor of an image patch is illustrated in Figure 2. An image patch centered at location (x, y) is described by a square window of side length 2W + 1, where W is a parameter. For each square image patch P centered at position (x, y), the gradients P_x and P_y of image patch P are computed using finite differences. The gradient magnitude g_m is computed by
g_m = √(P_x² + P_y²), (1)
and the gradient angle g_a (measured in radians, clockwise, starting from the x-axis) is calculated by

g_a = arctan(P_y / P_x). (2)
Each square window is divided into 4 × 4 adjacent small windows, and an 8-bin histogram of gradients (direction angles from 0 to 2π) is extracted in each small window. For each image patch, the 4 × 4 histograms of gradients are concatenated into the resulting feature vector f (dimension 128). Finally, the feature f of image patch P is described by the SIFT descriptor. An example of SIFT-based patch feature extraction in a cephalogram is shown in Figure 3.
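A minimal Python sketch of the 4 × 4 × 8-bin descriptor described above follows; it omits the Gaussian weighting and trilinear interpolation of the full SIFT descriptor in [30, 31], so it is a simplified illustration rather than the implementation used in the paper.

```python
import numpy as np

def patch_descriptor(patch, n_cells=4, n_bins=8):
    """Simplified SIFT-like descriptor of a square image patch.

    The patch is split into n_cells x n_cells sub-windows, and an
    n_bins histogram of gradient orientations, weighted by gradient
    magnitude, is built per sub-window (4 * 4 * 8 = 128 values).
    """
    gy, gx = np.gradient(patch.astype(float))       # finite differences
    mag = np.hypot(gx, gy)                          # gradient magnitude, eq. (1)
    ang = np.arctan2(gy, gx) % (2 * np.pi)          # gradient angle in [0, 2pi)
    h, w = patch.shape
    f = []
    for i in range(n_cells):
        for j in range(n_cells):
            sl = (slice(i * h // n_cells, (i + 1) * h // n_cells),
                  slice(j * w // n_cells, (j + 1) * w // n_cells))
            hist, _ = np.histogram(ang[sl], bins=n_bins,
                                   range=(0, 2 * np.pi), weights=mag[sl])
            f.extend(hist)
    f = np.asarray(f)
    norm = np.linalg.norm(f)
    return f / norm if norm > 0 else f              # L2-normalised 128-d vector
```

The L2 normalisation at the end makes the descriptor robust to global contrast changes, in the spirit of the original SIFT descriptor.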
2.1.2. Decision Tree Regression. The decision tree is a classical and efficient statistical learning algorithm and has been widely used to solve classification and regression problems [32, 33]. Decision trees predict responses to data: to predict a response, the new data are queried by the decisions from the root node down to a leaf node in the tree, and the leaf node contains the response. Classification trees give nominal responses, while regression trees give numeric responses. In this paper, CART (classification and regression trees) [34], a binary tree, is used as the regressor R_l to learn the mapping relationship between the SIFT-based patch feature vectors f and the displacement vectors d(d_x, d_y) from the centers of the patches to the position of each landmark in the training images. The regressors R_l are then used to predict the displacements d using the patch feature vectors of the test image. Finally, the predicted displacements are used to obtain the optimal location of each landmark via voting. One advantage of the regression approach is that it avoids balancing positive and negative examples as in the training of classifiers. Another advantage of using regression rather than classification is that good results can be obtained by evaluating the region of interest at randomly sampled pixels rather than at every pixel.
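The regression-voting idea can be sketched end to end with a small, self-contained CART-style tree (our own minimal implementation, not the paper's code): each sampled patch predicts a displacement to the landmark, casts a vote at its centre plus that displacement, and the votes are aggregated (here by a simple mean; the paper accumulates votes in a map and takes its maximum).

```python
import numpy as np

class RegressionTree:
    """Minimal CART regressor for 2-D displacement targets, using
    axis-aligned splits chosen by variance (MSE) reduction."""

    def __init__(self, max_depth=8, min_leaf=4):
        self.max_depth, self.min_leaf = max_depth, min_leaf
        self.split = None

    def fit(self, X, y, depth=0):
        self.value = y.mean(axis=0)                 # leaf prediction
        if depth < self.max_depth and len(X) >= 2 * self.min_leaf:
            best = self._best_split(X, y)
            if best is not None:
                j, thr = best
                m = X[:, j] <= thr
                self.split = (j, thr)
                self.left = RegressionTree(self.max_depth, self.min_leaf).fit(X[m], y[m], depth + 1)
                self.right = RegressionTree(self.max_depth, self.min_leaf).fit(X[~m], y[~m], depth + 1)
        return self

    def _best_split(self, X, y):
        base = ((y - y.mean(axis=0)) ** 2).sum()
        best, best_gain = None, 1e-12
        for j in range(X.shape[1]):
            for thr in np.quantile(X[:, j], (0.25, 0.5, 0.75)):
                m = X[:, j] <= thr
                if m.sum() < self.min_leaf or (~m).sum() < self.min_leaf:
                    continue
                sse = (((y[m] - y[m].mean(axis=0)) ** 2).sum()
                       + ((y[~m] - y[~m].mean(axis=0)) ** 2).sum())
                if base - sse > best_gain:
                    best, best_gain = (j, thr), base - sse
        return best

    def predict_one(self, x):
        node = self
        while node.split is not None:
            j, thr = node.split
            node = node.left if x[j] <= thr else node.right
        return node.value

# Demo: learn displacements from patch centres to one hypothetical landmark.
rng = np.random.default_rng(0)
landmark = np.array([50.0, 40.0])
centres = rng.uniform(0, 100, size=(400, 2))        # sampled patch centres
tree = RegressionTree().fit(centres, landmark - centres)

# Voting: each test patch votes at centre + predicted displacement.
test_centres = rng.uniform(0, 100, size=(100, 2))
votes = np.array([c + tree.predict_one(c) for c in test_centres])
estimate = votes.mean(axis=0)                       # aggregated landmark estimate
```

In the paper the inputs are 128-D SIFT patch descriptors rather than raw coordinates, and one tree per landmark and per resolution is trained; the coordinate features above only keep the demonstration compact.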
(1) Training. In the training process, a regression tree R_l can be constructed for each landmark l via splitting. Here, an optimization criterion and stopping rules are used to determine how a decision tree grows. The decision tree can be improved by pruning or by selecting appropriate parameters.
The optimization criterion is to choose the split that minimizes the mean-squared error (MSE) of the predictions compared with the ground truths in the training data. Splitting is the main process of creating decision trees. In general, four steps are performed to split node t. First, for each observation f_j (i.e., the extracted feature in the training data), the weighted MSE ε_t of the responses (displacements d) in node t is computed by using

ε_t = (1/N) Σ_{j∈T} (d_j − d̄_t)², j = 1, …, N, (3)

where T is the set of all observation indices in node t and N is the sample size. Second, the probability of an observation being in node t is calculated by
Figure 1: Block diagram of the automatic cephalometric analysis system: a cephalogram passes through landmark detection and parameter measurement to result analysis.
P(T) = Σ_{j∈T} w_j, (4)

where w_j is the weight of observation j; in this paper, w_j = 1/N. Third, all elements of the observation are sorted in ascending order, and every element is regarded as a splitting candidate. T_U is the set of all observation indices corresponding to missing values. Finally, the best way to split node t using f_j is determined by maximizing the reduction in MSE, Δε_t, among all splitting candidates. For all splitting
Figure 2: Diagram of the SIFT feature descriptor of an image patch (image, geometry descriptor, and bin indexes when stacked).
Figure 3: SIFT-based patch feature extraction. (a) Original image showing a patch. (b) Geometry descriptor. (c) The extracted feature vector.
candidates in the observation f_j, the following steps are performed:
(1) Split the observations in node t into left and right child nodes (t_L and t_R, respectively).
(2) Compute the reduction in MSE, Δε_t. For a particular splitting candidate, t_L and t_R represent the observation indices in the sets T_L and T_R, respectively. If f_j does not contain any missing values, then the reduction in MSE Δε_t for the current splitting candidate is
Δε_t = P(T) ε_t − P(T_L) ε_{t_L} − P(T_R) ε_{t_R}.   (5)
If f_j contains any missing values, then the reduction in MSE Δε′_t is

Δε′_t = P(T − T_U) ε_t − P(T_L) ε_{t_L} − P(T_R) ε_{t_R},   (6)

where T − T_U is the set of all observation indices in node t that are not missing.
(3) Choose the splitting candidate that yields the largest MSE reduction. In this way, the observations in node t are split at the candidate that maximizes the MSE reduction.
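The three steps above can be sketched for a single scalar feature. This is a hedged numpy illustration of Equations (3)–(5) under uniform weights w_j = 1/N and no missing values, reading ε as the within-node MSE and P(·) as the node's share of the weight mass; the toy feature/response arrays are invented for the demonstration.

```python
import numpy as np

def node_eps(d):
    # Within-node MSE of the responses about the node mean (one reading of Eq. (3)).
    return float(np.mean((d - d.mean()) ** 2)) if len(d) else 0.0

def best_split(f, d):
    """Steps (1)-(3): try every element of f as a threshold and keep the
    split maximizing the MSE reduction of Eq. (5)."""
    N = len(d)
    eps_t = node_eps(d)
    best_thr, best_gain = None, -np.inf
    for thr in np.sort(f)[:-1]:          # every element is a splitting candidate
        dL, dR = d[f <= thr], d[f > thr]
        # Delta-eps = P(T)*eps_t - P(T_L)*eps_tL - P(T_R)*eps_tR, with P(.)
        # the fraction of the N uniformly weighted (w_j = 1/N) observations.
        gain = eps_t - (len(dL) / N) * node_eps(dL) - (len(dR) / N) * node_eps(dR)
        if gain > best_gain:
            best_thr, best_gain = float(thr), float(gain)
    return best_thr, best_gain

f = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])   # one scalar feature
d = np.array([0.0, 0.1, 0.0, 5.0, 5.1, 5.0])      # responses
thr, gain = best_split(f, d)
print(thr)  # the chosen threshold falls between the two response clusters
```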
To stop splitting the nodes of the decision tree, two rules can be followed: (1) the node is pure, i.e., the MSE for the observed responses in the node is less than the MSE for the observed responses in the entire data multiplied by the tolerance on quadratic error per node; or (2) the decision tree reaches the set limit on its depth, for example, the maximum number of splitting nodes, max_splits.
The simplicity and the performance of a decision tree should be considered together when improving it. A deep tree usually achieves high accuracy on the training data but does not necessarily achieve high accuracy on test data; that is, a deep tree tends to overfit, its test accuracy being much lower than its training accuracy. On the contrary, a shallow tree does not achieve high training accuracy but can be more robust, i.e., its training accuracy is close to its accuracy on test data. Moreover, a shallow tree is easier to interpret and faster at prediction. In addition, tree accuracy can be estimated by cross validation when there are not enough data for separate training and testing.
There are two ways to improve the performance of decision trees by minimizing the cross-validated loss. One is to select the optimal parameter value controlling the depth of the decision trees; the other is postpruning after the trees have been created. In this paper, we use the parameter max_splits to control the depth of the resulting decision trees. Setting a large value for max_splits leads to a deep tree, while setting a small value for max_splits yields a shallow tree with larger leaves. To select the appropriate value for max_splits, the following steps are performed: (1) set a spaced set of values from 1 to the total sample size for max_splits per tree; (2) create cross-validated regression trees for the data using these values of max_splits and calculate the cross-validated errors; and (3) choose the value of max_splits that minimizes the cross-validated errors.
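The selection loop might look as follows. This is a sketch under stated assumptions: scikit-learn's max_leaf_nodes is used as a stand-in for max_splits, and the regression data are synthetic placeholders rather than cephalogram features.

```python
# Sketch: pick the depth-controlling parameter by minimizing
# cross-validated error over a spaced grid of candidate values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 4))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=300)

grid = [2, 4, 8, 16, 32, 64, 128]      # spaced values up to ~sample size
cv_err = []
for m in grid:
    tree = DecisionTreeRegressor(max_leaf_nodes=m, random_state=0)
    scores = cross_val_score(tree, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    cv_err.append(-scores.mean())      # cross-validated MSE for this m

best_m = grid[int(np.argmin(cv_err))]  # the error-minimizing setting
print(best_m)
```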
(2) Prediction. In the prediction process, responses for new data can easily be predicted once the regression tree R_l has been created. Suppose f_new is the new data (i.e., a feature vector extracted from a new patch P_new in the test cephalogram). According to the rules of the regression tree, the nodes select the specific attributes from the new observation f_new and it reaches a leaf step by step, which stores the mean displacement d_new. In this way, we predict the displacement from the center of patch P_new to the landmark l from the SIFT feature vector by the regressor R_l.
2.1.3. MDTRV Using SIFT-Based Patch Features
(1) Decision Tree Regression Voting (DTRV). As illustrated in Figure 4, the algorithm of automatic cephalometric landmark detection using decision tree regression voting (DTRV) at a single scale consists of training and testing processes, as in any supervised learning algorithm. The training process begins with feature extraction for patches sampled from the training images. Then a regression tree is constructed for each landmark with the feature vectors and displacements (f, d) as input; here f represents the observations and d the targets of this regression problem. The testing process begins with the same feature-extraction step. Then the resulting f_new is used to predict the displacement d_new by the regressor R_l. In the end, the optimal landmark position is obtained via voting. The voting styles include single unit voting and single weighted voting; as mentioned in [35], the two styles perform equally well for voting optimal landmark positions in medical images. In this paper, we use single unit voting.
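The voting step can be sketched as a simple accumulator: each test patch casts one unit vote at the position implied by its predicted displacement, and the landmark is taken at the vote-map maximum. The patch centers and displacement predictions below are hypothetical values chosen only for the demonstration.

```python
import numpy as np

def vote_landmark(centers, displacements, shape):
    """Single unit voting: centers and displacements are (K, 2) arrays of
    (x, y); shape is the (H, W) vote-map size. Returns the (x, y) argmax."""
    votes = np.zeros(shape, dtype=int)
    for (cx, cy), (dx, dy) in zip(centers, displacements):
        x, y = int(round(cx + dx)), int(round(cy + dy))
        if 0 <= x < shape[1] and 0 <= y < shape[0]:
            votes[y, x] += 1                  # one unit vote per patch
    y_best, x_best = np.unravel_index(votes.argmax(), shape)
    return int(x_best), int(y_best)

centers = np.array([[10.0, 10.0], [30.0, 5.0], [5.0, 25.0]])
# Hypothetical predicted displacements, all pointing near (20, 20):
disps = np.array([[10.0, 10.0], [-10.0, 15.0], [15.0, -5.0]])
print(vote_landmark(centers, disps, (40, 40)))  # -> (20, 20)
```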
(2) MDTRV Using SIFT-Based Patch Features. Finally, a new framework of MDTRV using SIFT-based patch features is proposed, using a simple and efficient strategy for accurate landmark detection in lateral cephalograms. The proposed MDTRV algorithm has four stages, performed iteratively at the scales of 0.125, 0.25, 0.5, and 1. At the scale of 0.125, K training patches with K displacements are randomly sampled over the whole cephalogram for a specific landmark l (l = 1, …, L), and the decision tree regressor R_l^1 is created from these training samples. For prediction, SIFT features are extracted for K′ testing patches sampled over the whole cephalogram, and K′ displacements are predicted by the regressor R_l^1 using the extracted features. The optimal position of the specific landmark l is obtained by voting over all predicted displacements. At the scales of 0.25, 0.5, and 1, only the patch-sampling rule differs from the procedure at the scale of 0.125: the training and testing patches are randomly sampled in the (2S + 1) × (2S + 1) neighborhood of the true and initial landmark positions, respectively. The estimated landmark position is used as the initial landmark position at the next scale of the testing process. The optimal position of the specific landmark l at the scale of 1 is refined by using a Hough forest [36] and regarded as the final resulting location. This multiresolution coarse-to-fine strategy greatly improves the accuracy of cephalometric landmark detection.
2.2. Parameter Measurement. Measurements are angular or linear parameters calculated from cephalometric landmarks and planes (refer to Tables 1 and 2). According to geometrical structure, the measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected to a plane. All measurements are calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among the three points A, B, and C; then the angle of these three points is calculated by
∠ABC = arccos((a² + c² − b²) / (2ac)) × (180/π),   (7)

a = ‖A − B‖,  b = ‖A − C‖,  c = ‖B − C‖,   (8)

where ‖A − B‖ = √((x_A − x_B)² + (y_A − y_B)²) represents the Euclidean distance between A and B.
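Equations (7)–(8) translate directly into a small helper, with B as the vertex:

```python
import math

def angle_three_points(A, B, C):
    """Angle at vertex B of triangle ABC, in degrees (Eqs. (7)-(8))."""
    a = math.dist(A, B)   # ||A - B||
    b = math.dist(A, C)   # ||A - C||
    c = math.dist(B, C)   # ||B - C||
    return math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))

print(angle_three_points((0, 1), (0, 0), (1, 0)))  # a right angle (~90 degrees)
```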
2.2.2. The Angle between Two Planes. As the cephalometric radiographs in this study are 2D images, the planes are projected as straight lines; thus each plane is determined by two points. The angle between two planes AB and CD is calculated by
∠AB = arctan((y_A − y_B) / (x_A − x_B)) × (180/π),
∠CD = arctan((y_C − y_D) / (x_C − x_D)) × (180/π),   (9)

∠(AB, CD) = |∠AB − ∠CD|.   (10)
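A sketch of Equations (9)–(10); note that, as in the equations themselves, the arctan form is undefined for vertical lines:

```python
import math

def line_angle(P, Q):
    """Inclination of the line through P and Q, in degrees (Eq. (9))."""
    return math.degrees(math.atan((P[1] - Q[1]) / (P[0] - Q[0])))

def angle_between_planes(A, B, C, D):
    """Angle between projected planes AB and CD (Eq. (10))."""
    return abs(line_angle(A, B) - line_angle(C, D))

# A 45-degree line against a horizontal line:
print(angle_between_planes((1, 1), (0, 0), (1, 0), (0, 0)))
```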
2.2.3. The Distance between Two Points. The distance between two points is calculated using

|AB| = ‖A − B‖ × ratio,   (11)

where ratio is a scale indicator that can be calculated from the calibrated gauge in the lateral cephalograms; its unit is mm/pixel.
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as

y + k₁x + k₀ = 0.   (12)

Thus the distance of point C to the plane AB is calculated by

|C·AB| = |(y_C + k₁x_C + k₀) / √(1 + k₁²)| × ratio.   (13)
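Equations (12)–(13) can be sketched as follows; the coefficients k₁ and k₀ are recovered from the two points defining the plane, and ratio defaults to 1 (pixel units):

```python
import math

def point_to_plane(C, A, B, ratio=1.0):
    """Distance from C to the line through A and B, written as
    y + k1*x + k0 = 0 (Eqs. (12)-(13))."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])   # -slope of the line AB
    k0 = -A[1] - k1 * A[0]                # so that A satisfies the equation
    return abs(C[1] + k1 * C[0] + k0) / math.sqrt(1 + k1 * k1) * ratio

# Distance from (0, 1) to the x-axis line through (0, 0) and (1, 0):
print(point_to_plane((0, 1), (0, 0), (1, 0)))  # -> 1.0
```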
2.2.5. The Distance between Two Points Projected to a Plane. The calculation of the distance between two points projected to a plane is illustrated in Figure 5. First, Equation (12) is used to determine the plane AB. Then the distances c₁ and c₂ from the two points C and D to the plane AB are calculated by Equation (13). Third, the distance c₀ between the two points C and D is calculated by Equation (11). Finally, the distance |C·AB·D| between the two points C and D projected to the plane AB, denoted c, is calculated by

c = √(c₀² − (c₁ − c₂)²) × ratio.   (14)
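Equation (14) as a helper, assuming c₀, c₁, and c₂ are given in pixel units:

```python
import math

def projected_distance(c0, c1, c2, ratio=1.0):
    """Distance between two points projected onto a plane (Eq. (14)):
    c0 is their direct distance, c1 and c2 their point-to-line distances."""
    return math.sqrt(c0 * c0 - (c1 - c2) ** 2) * ratio

# If C and D are 3 and 1 units from the line and 2*sqrt(2) apart,
# their projections onto the line are 2 units apart:
print(round(projected_distance(2 * math.sqrt(2), 3.0, 1.0), 6))
```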
3. Experimental Evaluation
3.1. Data Description
3.1.1. Database of 2015 ISBI Challenge. The benchmark database (database1) includes 300 cephalometric radiographs (150 in TrainingData, 150 in Test1Data), as described in Wang et al. [28]. All cephalograms were collected from 300 patients aged 6 to 60 years. The
Figure 4: Algorithm of decision tree regression voting. (Training process: feature extraction from the training patches and the ground truth of landmarks, then regression tree construction from (f, d). Testing process: feature extraction f_new from the testing patches, prediction of d_new by R_l, and voting for the landmark positions P(x, y).)
image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustration in Figure 6); the ground truth was the average of the annotations of the two doctors, and eight clinical measurements were used for the classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. Database2 includes 165 cephalometric radiographs, as summarized in Table 3 (IRB approval number PKUSSIRB-201415063). Fifty-five cephalograms were collected from 55 Chinese adult subjects (29 females, 26 males) diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°; 32 females, 23 males) who underwent combined surgical-orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions varied from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations of the two doctors, and 27 clinical measurements were used for further cephalometric analysis.
3.1.3. Experimental Settings. At the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. At the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions differ, preprocessing is required to rescale all images in database2 to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross validation.
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with the associated standard deviation. The radial error E, i.e., the distance between the predicted position and the true position of each landmark, is defined as

E_i = ‖A_i − B_i‖ × ratio,  A_i ∈ A, B_i ∈ B, 1 ≤ i ≤ M,   (15)
where A and B represent the sets of estimated and true positions of each landmark over all cephalograms in the dataset, A_i and B_i are the corresponding positions in the sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as
MRE = mean(E) = (∑_{i=1}^{M} E_i) / M,
SD = √(∑_{i=1}^{M} (E_i − MRE)² / (M − 1)).   (16)
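Equations (15)–(16) for one landmark over M cephalograms can be sketched as follows; the position arrays below are synthetic:

```python
import numpy as np

def mre_sd(est, true, ratio=1.0):
    """MRE and SD (Eqs. (15)-(16)) from (M, 2) arrays of estimated and
    true landmark positions."""
    E = np.linalg.norm(est - true, axis=1) * ratio   # radial errors, Eq. (15)
    M = len(E)
    mre = E.sum() / M
    sd = np.sqrt(((E - mre) ** 2).sum() / (M - 1))   # sample SD, Eq. (16)
    return mre, sd

est = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 0.0]])
true = np.array([[0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
mre, sd = mre_sd(est, true)
print(round(mre, 4))  # radial errors are 1, 5, 1, so MRE = 7/3
```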
The second evaluation criterion is the success detection rate with respect to the 2 mm, 2.5 mm, 3 mm, and 4 mm precision ranges. If E_i is less than a precision range, the detection of the landmark is considered successful in that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) p_z with precision less than z is defined as

p_z = |{i : E_i < z}| / M × 100%,  1 ≤ i ≤ M,   (17)

where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3 mm, and 4 mm.
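Equation (17) as a helper; the radial-error values below are synthetic:

```python
import numpy as np

def sdr(errors, z):
    """Success detection rate (Eq. (17)): percentage of radial errors
    below the precision range z."""
    errors = np.asarray(errors)
    return float((errors < z).mean() * 100.0)

E = [0.5, 1.9, 2.4, 2.9, 3.9]            # radial errors in mm
for z in (2.0, 2.5, 3.0, 4.0):           # the four precision ranges
    print(z, sdr(E, z))
```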
3.2.2. Experimental Results
(1) Results of database1. Experimental results of cephalometric landmark detection on database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm; the other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD over the 19 landmarks are 1.69 mm and 1.43 mm, respectively, which shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); between 70% and 80% for two landmarks (L6 and L18); and less than 70% for the other seven landmarks, among which the SDR of landmark L10 is the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.
A comparison of our method with three state-of-the-art methods on Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the resolution of the image.
Furthermore, we compare our method with the top two methods of the 2015 ISBI Challenge on cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, comparable to that of RFRV-CLM, while the SD of our method is less than that of RFRV-CLM. The results show that
Figure 5: The calculation of the distance between two points projected to a plane.
our method is accurate for landmark detection in lateral cephalograms and is more robust.
(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground-truth locations, which shows the success of our proposed algorithm for landmark detection.
For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method for detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3 mm, and 4 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1, which indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are commonly used. The descriptions of these eight measurements and the methods for classification are given in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground-truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
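Under the reading that the averaged diagonal is taken on the row-normalized (per-ground-truth-type) confusion matrix, the SCR can be sketched as follows; the 3-type matrix below is hypothetical:

```python
import numpy as np

def scr(confusion):
    """SCR as the mean of the diagonal of the row-normalized confusion
    matrix (rows: ground truth, columns: estimate), in percent."""
    C = np.asarray(confusion, dtype=float)
    per_type = np.diag(C) / C.sum(axis=1)   # per-type classification accuracy
    return float(per_type.mean() * 100.0)

# Hypothetical confusion matrix for a 3-type classification:
C = [[8, 1, 1],
     [2, 6, 2],
     [0, 2, 8]]
print(round(scr(C), 2))  # mean of 80%, 60%, 80%
```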
To evaluate the performance of the proposed system on the additional measurements in database2, another criterion, the mean absolute error (MAE), is calculated by

MAE = (∑_{i=1}^{M} |V̂_i − V_i|) / M,   (18)

where V̂_i is the value of the angular or linear measurement estimated by our system and V_i is the ground-truth measurement obtained using the human-annotated landmarks.
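Equation (18) as a helper; the measurement values below are synthetic:

```python
import numpy as np

def mae(v_hat, v):
    """Mean absolute error (Eq. (18)) between estimated and ground-truth
    measurement values."""
    return float(np.abs(np.asarray(v_hat) - np.asarray(v)).mean())

print(round(mae([80.1, 78.9, 3.2], [79.4, 79.2, 2.9]), 4))
```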
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1 S, L2 N, L3 Or, L4 P, L5 A, L6 B, L7 Pg, L8 Me, L9 Gn, L10 Go, L11 LI, L12 UI, L13 UL, L14 LL, L15 Sn, L16 Pos, L17 PNS, L18 ANS, L19 Ar.
Table 3: The description of database2.

                             Female   Male
Class I                        29      26
Class III (pretreatment)       32      23
Class III (posttreatment)      32      23

Total number of cephalograms: 165; total number of patients: 110.
3.3.2. Experimental Results
(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method with the two best methods is given in Table 10, where we achieve the best classification results for the two measurements APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than the method in [26] and comparable to the method of RFRV-CLM [24].
(2) Results of database2. From the detected 45 landmarks, 27 measurements, including 17 angular and 10 linear measurements, are automatically calculated by our system on database2; they are described in Table 11.
The MAE and MAE* of the 27 measurements are given in Table 12. Here, the MAE represents the performance of the measurement analysis of our automatic system, and MAE* represents the interobserver variability calculated between the ground-truth values obtained by the two experts. For the angular measurements, the difference between MAE and MAE* is within 0.5° for 9/17 measurements; the difference between
Figure 7: Cephalogram annotation example showing the landmarks in database2. (a) Cephalogram annotation example. (b) Description of the 45 cephalometric landmarks: L1 N, L2 Ns, L3 Prn, L4 Cm, L5 ANS, L6 A, L7 Sn, L8 Ss, L9 UL, L10 Stoms, L11 Stomi, L12 LL, L13 Si, L14 Pos, L15 Gs, L16 Mes, L17 Go, L18 Me, L19 Gn, L20 Pg, L21 LIA, L22 B, L23 Id, L24 LI, L25 UI, L26 FA, L27 SPr, L28 UIA, L29 Or, L30 Se, L31 L6A, L32 UL5, L33 L6E, L34 UL6, L35 U6E, L36 U6A, L37 PNS, L38 Ptm, L39 S, L40 Co, L41 Ar, L42 Ba, L43 Bolton, L44 P, L45 ULI.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark   MRE (mm)   SD (mm)   SDR (%): 2.0 mm   2.5 mm   3.0 mm   4.0 mm
L1          1.31       1.26      86.67    92.67    94.67    97.33
L2          1.92       2.05      68.00    72.00    79.33    88.67
L3          1.81       1.74      66.67    81.33    88.00    95.33
L4          3.11       2.59      50.00    54.00    59.33    67.33
L5          2.24       1.56      56.00    66.00    75.33    86.67
L6          1.59       1.39      72.00    78.67    84.00    94.00
L7          1.23       0.87      80.67    91.33    96.67    99.33
L8          1.08       0.88      88.67    95.33    96.67    98.67
L9          0.87       0.75      91.33    93.33    98.00    99.33
L10         3.98       2.33      23.33    29.33    38.00    53.33
L11         0.97       0.89      87.33    92.00    96.00    98.67
L12         0.90       1.70      94.67    94.67    96.00    96.67
L13         1.02       0.71      92.00    95.33    98.00   100.00
L14         0.89       0.61      93.33    98.67   100.00   100.00
L15         1.18       1.16      85.33    92.00    93.33    98.00
L16         2.14       1.48      50.00    60.67    72.67    90.67
L17         1.16       0.85      86.00    92.67    94.67    98.67
L18         1.77       1.51      72.00    79.33    86.00    92.67
L19         2.97       2.77      50.00    54.00    58.00    67.33
Average     1.69       1.43      73.37    79.65    84.46    90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method         [24]     [26]     [29]     Ours
MRE (pixels)   16.74    18.46    17.79    16.92
Table 6: Comparison of our method with two methods in terms of MRE, SD, and SDR using Test1Data.

Method   MRE (mm)   SD (mm)   SDR (%): 2.0 mm   2.5 mm   3.0 mm   4.0 mm
[24]      1.67       1.65      73.68    80.21    85.19    91.47
[26]      1.84       1.76      71.72    77.40    81.93    88.04
Ours      1.69       1.43      73.37    79.65    84.46    90.67
MAE and MAE* is within 1° for 12/17 measurements; and the difference between MAE and MAE* is within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining angular measurement has a difference of 2.24°. For the linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements; the remaining one has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed first. Table 13 shows that the MRE and SD between the annotations of the two orthodontists are 1.38 mm and 1.55 mm, respectively, for database1 [27]; for database2, the MRE and SD between the annotations of the two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection achieves an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis are obtained in the experiments based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, for three main reasons: (i) some landmarks are located at overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On the one hand, apart from the algorithm itself, the data effects on the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8: Examples of automatic cephalometric landmark detection, showing the true and predicted locations of the 45 landmarks. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

Fold      MRE (mm)   SD (mm)   SDR (%): 2.0 mm   2.5 mm   3.0 mm   4.0 mm
1          1.72       1.35      72.49    81.32    87.04    93.37
2          1.72       1.38      71.38    79.94    85.63    92.73
3          1.66       1.32      74.31    82.06    87.44    93.37
4          1.77       1.36      69.67    78.96    85.32    92.93
5          1.68       1.54      72.53    80.88    86.87    92.96
Average    1.71       1.39      72.08    80.63    86.46    93.07
the performance of the system depends on the consistency between the training data and the testing data, as for any supervised learning-based method.
5. Conclusion
In conclusion, we have designed a new framework of landmark detection in lateral cephalograms with low-to-high resolutions. At each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of the image information at different resolutions. At lower resolution, the primary local structure information, rather than the local detail, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. At higher resolution, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated by the experimental results, the proposed algorithm achieves good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm obtains comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm also obtains an average success detection rate of 72% within the precision range of 2.0 mm. In particular, 45 landmarks are detected in our database, which is over two times the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.   Measurement   Description in mathematics                         Description in words
1     ANB           ∠L5L2L6                                            The angle between landmarks 5, 2, and 6
2     SNB           ∠L1L2L6                                            The angle between landmarks 1, 2, and 6
3     SNA           ∠L1L2L5                                            The angle between landmarks 1, 2, and 5
4     ODI           ∠(L5L6, L8L10) + ∠(L17L18, L4L3)                   The arithmetic sum of the angle of the AB plane (L5L6) to the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3)
5     APDI          ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18)    The arithmetic sum of the angle of the FH plane (L3L4) to the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18)
6     FHI           |L1L10| / |L2L8|                                   The ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8)
7     FHA           ∠(L1L2, L10L9)                                     The angle between the SN plane (L1L2) and the mandibular plane (L10L9)
8     MW            |L12L11| if x(L12) < x(L11); −|L12L11| otherwise   When the x ordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.   Measurement   Type 1                         Type 2                                  Type 3                                 Type 4
1     ANB           3.2°–5.7°: class I (normal)    >5.7°: class II                         <3.2°: class III                       —
2     SNB           74.6°–78.7°: normal mandible   <74.6°: retrognathic mandible           >78.7°: prognathic mandible            —
3     SNA           79.4°–83.2°: normal maxilla    >83.2°: prognathic maxilla              <79.4°: retrognathic maxilla           —
4     ODI           Normal: 74.5° ± 6.07°          >80.5°: deep bite tendency              <68.4°: open bite tendency             —
5     APDI          Normal: 81.4° ± 3.8°           <77.6°: class II tendency               >85.2°: class III tendency             —
6     FHI           Normal: 0.65–0.75              >0.75: short face tendency              <0.65: long face tendency              —
7     FHA           Normal: 26.8°–31.4°            >31.4°: mandible high angle tendency    <26.8°: mandible low angle tendency    —
8     MW            Normal: 2–4.5 mm               MW = 0 mm: edge to edge                 MW < 0 mm: anterior cross bite         MW > 4.5 mm: large overjet
Table 10: Comparison of our method with two methods in terms of SCR using Test1Data.

Method   SCR (%): ANB   SNB     SNA     ODI     APDI    FHI     FHA     MW      Average
[24]      64.99   84.52   68.45   84.64   82.14   67.92   75.54   82.19   76.30
[26]      59.42   71.09   59.00   78.04   80.16   58.97   77.03   83.94   70.96
Ours      58.61   78.85   59.86   76.59   83.49   82.44   77.18   83.20   75.03
Table 11: Description of cephalometric measurements in database2.

Name                 Description in mathematics   Description in words

The angles of three points (unit: °)
SNA                  ∠L39L1L6                     The angle of landmarks L39, L1, L6
SNB                  ∠L39L1L22                    The angle of landmarks L39, L1, L22
ANB                  ∠L6L1L22                     The angle of landmarks L6, L1, L22
SNPg                 ∠L39L20L1                    The angle of landmarks L39, L20, L1
NAPg                 ∠L1L6L20                     The angle of landmarks L1, L6, L20
NSGn                 ∠L1L39L19                    The angle of landmarks L1, L39, L19
Ns-Prn-Pos           ∠L2L3L14                     The angle of landmarks L2, L3, L14
Cm-Sn-UL             ∠L4L7L9                      The angle of landmarks L4, L7, L9
LL-B′-Pos            ∠L12L13L14                   The angle of landmarks L12, L13, L14
Angle of the jaw     ∠L41L17L18                   The angle of landmarks L41, L17, L18
Angle of convexity   ∠L2L7L14                     The angle of landmarks L2, L7, L14

The angles between two planes (unit: °)
FH-SN                ∠(L44L29, L39L1)             The angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN                ∠(L28L25, L39L1)             The angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH                ∠(L24L21, L44L29)            The angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI                ∠(L28L25, L24L21)            The angle between the upper and lower incisors (L28L25, L24L21)
H angle              ∠(L9L14, L8L14)              The angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle              ∠(L44L29, L9L14)             The rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm)
N-Me                 |L1L18|                      The forward height; the distance between landmarks L1, L18
N-ANS                |L1L5|                       The up-forward height; the distance between landmarks L1, L5
ANS-Me               |L5L18|                      The down-forward height; the distance between landmarks L5, L18
Stoms-UI             |L10L25|                     The vertical distance between landmarks L10, L25

The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP                |L25·L6L20|                  The distance from landmark L25 to the AP plane (L6L20)
LI-AP                |L24·L6L20|                  The distance from landmark L24 to the AP plane (L6L20)
UL-EP                |L9·L3L14|                   The distance from landmark L9 to the EP plane (L3L14)
Table 12: Result of our method for the 27 measurements in terms of MAE and MAE* using database2.

Name                 MAE     MAE*    MAE − MAE*

The angles of three points (unit: °)
SNA                  1.95    1.70    0.24
SNB                  1.57    1.17    0.39
ANB                  1.29    1.08    0.21
SNPg                 1.59    1.20    0.39
NAPg                 2.83    2.44    0.39
NSGn                 1.42    0.96    0.47
Ns-Prn-Pos           1.68    1.10    0.58
Cm-Sn-UL             5.52    3.80    1.72
LL-B′-Pos            3.95    3.44    0.51
Angle of the jaw     3.20    1.66    1.54
Angle of convexity   2.67    1.86    0.81

The angles between two planes (unit: °)
FH-SN                2.00    1.96    0.04
UI-SN                4.71    2.81    1.90
LI-FH                3.34    2.27    1.07
UI-LI                6.90    4.66    2.24
H angle              0.94    0.80    0.14
Z angle              1.69    1.86    −0.17

The distances between two points (unit: mm)
N-Me                 1.37    1.10    0.27
N-ANS                1.57    1.34    0.24
ANS-Me               1.06    0.96    0.10
Stoms-UI             0.75    0.42    0.33

The distances from the point to the plane in the horizontal direction (unit: mm)
UI-AP                0.96    0.71    0.25
LI-AP                0.96    0.76    0.20
UL-EP                0.50    0.36    0.14
LL-EP                0.45    0.39    0.05
MaxE                 2.06    1.79    0.27

The distances between two points projected to a plane (unit: mm)
Wits                 4.70    3.53    1.16

MAE*: interobserver variability.
Table 11: Continued.

Name     Description in mathematics   Description in words
LL-EP    |L12·L3L14|                  The distance from landmark L12 to the EP plane (L3L14)
MaxE     |L6·L5L37|                   Maxillary length; the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected to a plane (unit: mm)
Wits     |L6·L32L34·L22|              The distance between the two landmarks L6 and L22 projected to the functional jaw plane (L32L34)
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

             Interobserver variability
             MRE (mm)   SD (mm)
database1    1.38       1.55
database2    1.26       1.27
algorithm is confirmed on this clinical dataset. In addition, the automatic measurement of clinical parameters also achieves satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis of lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will also be conducted to reduce the computational complexity of the algorithm.
Data Availability

Database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. Database2 of the Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at the Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for supplying image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcaniz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcaniz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), Lecture Notes in Computer Science, R. Larsen, Ed., pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
conclude this study with expectations for future work in Section 5.
2. Method
2.1. Landmark Detection. It is well known that cephalometric landmark detection is the most important step in cephalometric analysis. In this paper, we propose a new framework for automatic cephalometric landmark detection based on multiresolution decision tree regression voting (MDTRV) using patch features.
2.1.1. SIFT-Based Patch Feature Extraction
(1) SIFT Feature. Scale-invariant feature transform (SIFT) was first proposed for key point detection by Lowe [30]. The basic idea of the SIFT feature extraction algorithm consists of four steps: (1) local extrema detection in scale-space by constructing difference-of-Gaussian image pyramids; (2) accurate key point localization, including location, scale, and orientation; (3) orientation assignment for invariance to image rotation; and (4) computation of the key point descriptor as the local image feature. The advantage of SIFT features for representing key points is that they are affine invariant and robust to illumination changes. This method of feature extraction has been commonly used in the fields of image matching and registration.
(2) SIFT Descriptor for Image Patch. Key points with discretized descriptors can be used as visual words in the field of image retrieval. A histogram of visual words can then be used by a classifier to map images to abstract visual classes. The most successful image representations are based on affine-invariant features derived from image patches. First, features are extracted on sampled patches of the image, either using a multiresolution grid, in a randomized manner, or using interest point detectors. Each patch is then described by a feature vector, e.g., SIFT [30]. In this paper, we use SIFT feature vectors [31] to represent image patches, which can be used by a regressor to map patches to the displacements from each landmark.
The diagram of the SIFT feature descriptor of an image patch is illustrated in Figure 2. An image patch centered at location (x, y) is described by a square window of side length 2W + 1, where W is a parameter. For each square image patch P centered at position (x, y), the gradients Px and Py of the image patch are computed using finite differences. The gradient magnitude gm is computed by

gm = √(Px² + Py²),  (1)

and the gradient angle ga (measured in radians, clockwise, starting from the X axis) is calculated by

ga = arctan(Py / Px).  (2)
Each square window is divided into 4 × 4 adjacent small windows, and an 8-bin histogram of gradients (direction angles from 0 to 2π) is extracted in each small window. For each image patch, the 4 × 4 histograms of gradients are concatenated into the resulting feature vector f (dimension 128). Finally, the feature f of image patch P is described by this SIFT descriptor. An example of SIFT-based patch feature extraction in a cephalogram is shown in Figure 3.
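As a sketch, the descriptor construction above can be coded as follows. This is a simplified stand-in for the VLFeat SIFT descriptor [31] (finite-difference gradients, 4 × 4 cells, 8 orientation bins, magnitude weighting) without the Gaussian windowing and normalization of full SIFT; the function name and the random toy patch are illustrative.

```python
import numpy as np

def patch_descriptor(patch):
    """Simplified SIFT-like descriptor of a square patch of side 2W + 1.

    Gradients are taken by finite differences; the window is divided into
    4 x 4 cells and an 8-bin gradient-orientation histogram, weighted by
    gradient magnitude, is built per cell, giving 4 * 4 * 8 = 128 values.
    """
    p = patch.astype(float)
    py, px = np.gradient(p)                # finite-difference gradients
    gm = np.sqrt(px ** 2 + py ** 2)        # gradient magnitude, Eq. (1)
    ga = np.arctan2(py, px) % (2 * np.pi)  # gradient angle in [0, 2*pi), Eq. (2)

    n = p.shape[0]
    edges = np.linspace(0, n, 5, dtype=int)  # 4 x 4 cell boundaries
    f = []
    for i in range(4):
        for j in range(4):
            cell = (slice(edges[i], edges[i + 1]), slice(edges[j], edges[j + 1]))
            hist, _ = np.histogram(ga[cell], bins=8, range=(0, 2 * np.pi),
                                   weights=gm[cell])
            f.extend(hist)
    return np.asarray(f)  # 128-dimensional feature vector f

desc = patch_descriptor(np.random.default_rng(0).random((97, 97)))  # W = 48
print(desc.shape)  # (128,)
```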
2.1.2. Decision Tree Regression. The decision tree is a classical and efficient statistical learning algorithm that has been widely used to solve classification and regression problems [32, 33]. Decision trees predict responses to data: to predict a response, query the new data by following the decisions from the root node to a leaf node in the tree, where the leaf node contains the response. Classification trees give nominal responses, while regression trees give numeric responses. In this paper, CART (classification and regression trees) [34], a binary tree, is used as the regressor Rl to learn the mapping between the SIFT-based patch feature vectors f and the displacement vectors d = (dx, dy) from the centers of the patches to the position of each landmark in the training images. The regressors Rl are then used to predict the displacements d from the patch feature vectors of the test image. Finally, the predicted displacements are used to obtain the optimal location of each landmark via voting. One advantage of the regression approach is that it avoids balancing positive and negative examples, as required when training classifiers. Another advantage of using regression rather than classification is that good results can be obtained by evaluating the region of interest at randomly sampled pixels rather than at every pixel.
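The feature-to-displacement regression can be sketched as follows. The paper uses CART [34] but names no implementation, so scikit-learn's DecisionTreeRegressor and the random toy data are assumptions here; the targets are two-dimensional displacements (dx, dy).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Toy stand-ins for SIFT patch features f (one row per patch) and
# displacement targets d = (dx, dy) from patch centers to one landmark.
f = rng.random((500, 128))
d = rng.normal(size=(500, 2)) * 20

# CART regressor R_l for landmark l; max_leaf_nodes plays the role of the
# paper's max_splits (a binary tree with k internal splits has k + 1 leaves).
R_l = DecisionTreeRegressor(max_leaf_nodes=64, random_state=0).fit(f, d)

f_new = rng.random((10, 128))
d_new = R_l.predict(f_new)  # one predicted (dx, dy) per test patch
print(d_new.shape)          # (10, 2)
```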
(1) Training. In the training process, a regression tree Rl is constructed for each landmark l via splitting. An optimization criterion and stopping rules determine how the decision tree grows. The decision tree can be improved by pruning or by selecting appropriate parameters.
The optimization criterion is to choose the split that minimizes the mean-squared error (MSE) of the predictions compared with the ground truths in the training data. Splitting is the main process of creating decision trees. In general, four steps are performed to split node t. First, for each observation fj (i.e., an extracted feature in the training data), the weighted MSE εt of the responses (displacements d) in node t is computed using

εt = Σj∈T (dj − d̄t)² / N,  (3)

where T is the set of all observation indices in node t, d̄t is the mean response in node t, and N is the sample size. Second, the probability of an observation being in node t is calculated by
Figure 1: Block diagram of the automatic cephalometric analysis system: cephalogram, landmark detection, parameter measurement, and result analysis.
P(T) = Σj∈T wj,  (4)

where wj is the weight of observation j; in this paper, wj = 1/N. Third, all elements of the observation are sorted in ascending order, and every element is regarded as a splitting candidate. TU denotes the set of all observation indices corresponding to missing values. Finally, the best way to split node t using fj is determined by maximizing the reduction in MSE, Δεt, among all splitting candidates. For all splitting
Figure 2: Diagram of the SIFT feature descriptor of an image patch (bin indexes 0-15 when stacked).
Figure 3: SIFT-based patch feature extraction. (a) Original image showing a patch. (b) Geometry descriptor. (c) The extracted feature vector.
candidates in the observation fj, the following steps are performed:

(1) Split the observations in node t into left and right child nodes (tL and tR, respectively).

(2) Compute the reduction in MSE, Δεt. For a particular splitting candidate, TL and TR represent the observation indices in the left and right child nodes, respectively. If fj does not contain any missing values, then the reduction in MSE for the current splitting candidate is

Δεt = P(T)εt − P(TL)εtL − P(TR)εtR.  (5)

If fj contains missing values, then the reduction in MSE is

Δεt′ = P(T − TU)εt − P(TL)εtL − P(TR)εtR,  (6)

where T − TU is the set of all observation indices in node t that are not missing.

(3) Choose the splitting candidate that yields the largest MSE reduction. In this way, the observations in node t are split at the candidate that maximizes the MSE reduction.
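A minimal sketch of this split search for one feature dimension and one response dimension, assuming unit weights wj = 1/N and no missing values (so Eq. (5) applies); the function name and toy data are illustrative.

```python
import numpy as np

def best_split(x, d):
    """Threshold on one feature x maximizing the reduction in weighted MSE
    of the responses d, following Eqs. (3) and (5) with w_j = 1/N."""
    n = len(x)
    order = np.argsort(x)
    xs, ds = x[order], d[order]

    def node_cost(r):
        # P(T) * eps_t with w_j = 1/N is proportional to sum((r - mean)^2);
        # the constant factor does not change which split wins.
        return ((r - r.mean()) ** 2).sum() / n if len(r) else 0.0

    parent = node_cost(ds)
    best_gain, best_thr = 0.0, None
    for k in range(1, n):               # candidate split positions
        if xs[k] == xs[k - 1]:
            continue                    # equal values cannot be separated
        gain = parent - node_cost(ds[:k]) - node_cost(ds[k:])
        if gain > best_gain:
            best_gain, best_thr = gain, 0.5 * (xs[k - 1] + xs[k])
    return best_thr, best_gain

x = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 1.0])
d = np.array([5.0, 6.0, 5.5, 20.0, 21.0, 20.5])  # two clear response groups
thr, gain = best_split(x, d)
print(round(thr, 2))  # 0.55
```

The search correctly places the threshold between the two response groups, which is where the MSE reduction of Eq. (5) is largest.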
To stop splitting the nodes of the decision tree, two rules can be followed: (1) the node is pure, i.e., the MSE for the observed responses in this node is less than the MSE for the observed responses in the entire data multiplied by the tolerance on quadratic error per node; and (2) the decision tree reaches the set limits on its depth, for example, the maximum number of splitting nodes max_splits.
The simplicity and the performance of a decision tree should be considered at the same time. A deep tree usually achieves high accuracy on the training data; however, it may not achieve high accuracy on test data as well. This means that a deep tree tends to overfit, i.e., its test accuracy is much lower than its training accuracy. On the contrary, a shallow tree does not achieve high training accuracy but can be more robust, i.e., its training accuracy can be close to that on test data. Moreover, a shallow tree is easy to interpret and saves prediction time. In addition, tree accuracy can be estimated by cross validation when there are not enough data for separate training and testing.
There are two ways to improve the performance of decision trees by minimizing the cross-validated loss. One is to select the optimal parameter value to control the depth of the decision trees; the other is postpruning after creating the decision trees. In this paper, we use the parameter max_splits to control the depth of the resulting decision trees. Setting a large value for max_splits leads to growing a deep tree, while setting a small value for max_splits yields a shallow tree with larger leaves. To select an appropriate value for max_splits, the following steps are performed: (1) set a spaced set of values from 1 to the total sample size for max_splits per tree; (2) create cross-validated regression trees for the data using these values of max_splits and calculate the cross-validated errors; and (3) choose the value of max_splits that minimizes the cross-validated errors.
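The three-step selection above can be sketched with cross-validation. scikit-learn is an assumption here (it exposes tree size as max_leaf_nodes, roughly max_splits + 1, rather than max_splits itself), and the toy regression data are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 16))
y = X[:, 0] * 10 + rng.normal(scale=0.5, size=300)  # toy regression targets

# Step (1): a spaced set of candidate tree sizes.
candidates = [2, 4, 8, 16, 32, 64, 128]

# Step (2): cross-validated error for each candidate.
cv_mse = []
for m in candidates:
    tree = DecisionTreeRegressor(max_leaf_nodes=m, random_state=0)
    scores = cross_val_score(tree, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    cv_mse.append(-scores.mean())

# Step (3): pick the size that minimizes the cross-validated error.
best = candidates[int(np.argmin(cv_mse))]
print(best)
```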
(2) Prediction. In the prediction process, responses for new data can easily be predicted once a regression tree Rl has been created. Suppose f_new is the new data (i.e., a feature vector extracted from a new patch P_new in the test cephalogram). Following the rules of the regression tree, the nodes test specific attributes of the new observation f_new until a leaf is reached step by step; the leaf stores the mean displacement d_new. In this way, the regressor Rl predicts the displacement from the center of patch P_new to landmark l from the SIFT feature vector.
2.1.3. MDTRV Using SIFT-Based Patch Features
(1) Decision Tree Regression Voting (DTRV). As illustrated in Figure 4, the algorithm of automatic cephalometric landmark detection using decision tree regression voting (DTRV) at a single scale consists of training and testing processes, as in any supervised learning algorithm. The training process begins with feature extraction for patches sampled from the training images. Then a regression tree is constructed for each landmark from the input feature vectors and displacements (f, d); here f represents the observations and d the targets of this regression problem. The testing process begins with the same feature extraction step. The resulting f_new is then used to predict the displacement d_new by the regressor Rl. In the end, the optimal landmark position is obtained via voting. The voting styles include single unit voting and single weighted voting; as mentioned in [35], these two styles perform equally well for voting optimal landmark positions in medical images. In this paper, we use single unit voting.
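The testing half of DTRV, with single unit voting, can be sketched as follows; the accumulator grid, the toy patch centers, and the simulated regressor outputs are illustrative assumptions.

```python
import numpy as np

def vote_landmark(centers, displacements, image_shape):
    """Single unit voting: each test patch casts one vote at
    (patch center + predicted displacement); the landmark position is the
    accumulator cell that collects the most votes."""
    acc = np.zeros(image_shape, dtype=int)
    votes = np.rint(centers + displacements).astype(int)
    for x, y in votes:
        if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
            acc[y, x] += 1
    y, x = np.unravel_index(np.argmax(acc), acc.shape)
    return x, y

rng = np.random.default_rng(2)
centers = rng.integers(0, 100, size=(400, 2)).astype(float)  # (x, y) per patch
true_lm = np.array([60.0, 40.0])
# Simulated regressor output: displacement to the landmark plus noise.
displacements = true_lm - centers + rng.normal(scale=1.0, size=(400, 2))
print(vote_landmark(centers, displacements, (100, 100)))
```

The votes concentrate around the true landmark, so the accumulator maximum recovers its position to within about a pixel despite the per-patch noise.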
(2) MDTRV Using SIFT-Based Patch Features. Finally, a new framework of MDTRV using SIFT-based patch features is proposed, using a simple and efficient strategy for accurate landmark detection in lateral cephalograms. There are four stages in the proposed MDTRV algorithm, performed iteratively at the scales of 0.125, 0.25, 0.5, and 1. At the scale of 0.125, K training patches with K displacements are randomly sampled over the whole cephalogram for a specific landmark l (l = 1, ..., L), and the decision tree regressor R1_l is created from these training samples. For prediction, SIFT features are extracted for K′ testing patches sampled over the whole cephalogram, and K′ displacements are predicted by the regressor R1_l from the extracted features. The optimal position of the specific landmark l is obtained via voting over all predicted displacements. At the scales of 0.25, 0.5, and 1, only the patch sampling rule differs from the procedure at the scale of 0.125: the training and testing patches are randomly sampled in the (2S + 1) × (2S + 1) neighborhood of the true and initial landmark positions, respectively. The estimated landmark position is used as the initial landmark position at the next scale of the testing process. The optimal position of the specific landmark l at the scale of 1 is refined using a Hough forest [36], which is regarded as the
final resulting location. This multiresolution coarse-to-fine strategy greatly improves the accuracy of cephalometric landmark detection.
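The four-stage flow can be sketched structurally as follows, under the assumption that a stub detect_stage stands in for patch sampling, SIFT extraction, regression, and voting at one scale; the sketch shows only how each estimate is propagated as the prior for the next, finer scale.

```python
import numpy as np

SCALES = (0.125, 0.25, 0.5, 1.0)
S = 40  # half-size of the search neighborhood at the finer scales

def detect_stage(image, scale, prior, regressor):
    # Placeholder: a real stage samples patches (over the whole image at
    # scale 0.125, otherwise in a (2S+1) x (2S+1) window around `prior`),
    # predicts displacements with the scale's regressor, and votes.
    # Here it simply returns the prior unchanged.
    return prior

def mdtrv_detect(image, regressors):
    estimate = None
    for scale in SCALES:
        if estimate is None:                 # scale 0.125: whole image
            prior = np.array(image.shape[::-1], float) / 2
        else:                                # finer scales: rescale the prior
            prev = SCALES[SCALES.index(scale) - 1]
            prior = estimate / prev * scale
        estimate = detect_stage(image, scale, prior, regressors.get(scale))
    # Estimate at scale 1; the paper further refines it with a Hough
    # forest [36].
    return estimate

print(mdtrv_detect(np.zeros((100, 200)), {}))  # [800. 400.]
```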
2.2. Parameter Measurement. Measurements are either angular or linear parameters calculated from cephalometric landmarks and planes (refer to Tables 1 and 2). According to their geometrical structure, the measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected onto a plane. All measurements can be calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among the three points A, B, and C; then the angle formed by these three points is calculated by

∠ABC = arccos((a² + c² − b²) / (2ac)) × (180 / π),  (7)

with

a = |A − B|, b = |A − C|, c = |B − C|,  (8)

where |A − B| = √((xA − xB)² + (yA − yB)²) represents the Euclidean distance between A and B.
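A direct transcription of Eqs. (7) and (8), assuming 2D points given as (x, y) tuples:

```python
import math

def angle_three_points(A, B, C):
    """Angle at vertex B formed by points A, B, C, in degrees (Eqs. (7)-(8))."""
    a = math.dist(A, B)
    b = math.dist(A, C)
    c = math.dist(B, C)
    return math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))

print(angle_three_points((1.0, 0.0), (0.0, 0.0), (0.0, 1.0)))  # right angle at B
```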
2.2.2. The Angle between Two Planes. As cephalometric radiographs are 2D images, in this study the planes are projected as straight lines; thus each plane is determined by two points. The angle between two planes AB and CD is calculated by

∠AB = arctan((yA − yB) / (xA − xB)) × (180 / π),
∠CD = arctan((yC − yD) / (xC − xD)) × (180 / π),  (9)

∠ABCD = |∠AB − ∠CD|.  (10)
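Eqs. (9) and (10) can be transcribed as follows, assuming neither line is vertical (the arctan of the slope is undefined there):

```python
import math

def angle_between_planes(A, B, C, D):
    """Angle between lines AB and CD in degrees (Eqs. (9)-(10)); each
    'plane' projects to the line through its two defining points."""
    ang_ab = math.degrees(math.atan((A[1] - B[1]) / (A[0] - B[0])))  # Eq. (9)
    ang_cd = math.degrees(math.atan((C[1] - D[1]) / (C[0] - D[0])))
    return abs(ang_ab - ang_cd)                                      # Eq. (10)

# Line through (0,0)-(1,1) versus the horizontal line through (0,0)-(1,0).
print(angle_between_planes((0, 0), (1, 1), (0, 0), (1, 0)))  # 45.0
```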
2.2.3. The Distance between Two Points. The distance between two points is calculated using

|AB| = |A − B| × ratio,  (11)

where ratio is a scale indicator calculated from the calibrated gauge in the lateral cephalograms; its unit is mm/pixel.
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as

y + k1x + k0 = 0.  (12)

Thus, the distance of point C to the plane AB is calculated by

|C·AB| = |(yC + k1xC + k0) / √(1 + k1²)| × ratio.  (13)
2.2.5. The Distance between Two Points Projected onto a Plane. The calculation of the distance between two points projected onto a plane is illustrated in Figure 5. First, Equation (12) is used to determine the plane AB. Then the distances c1 and c2 from the two points C and D to the plane AB are calculated by Equation (13). Third, the distance c0 between the two points C and D is calculated by Equation (11). Finally, the distance |C·AB·D| between the two points C and D projected onto the plane AB, denoted c, is calculated by

c = √(c0² − (c1 − c2)²) × ratio.  (14)
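Eqs. (12)-(14) can be transcribed as follows for non-vertical lines; as a simplification, the helpers work in pixel units and the pixel-to-mm ratio is applied once at the end rather than inside each equation.

```python
import math

def line_coeffs(A, B):
    """Coefficients (k1, k0) with y + k1*x + k0 = 0 for the line AB, Eq. (12)."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])  # negated slope
    k0 = -A[1] - k1 * A[0]
    return k1, k0

def point_line_distance(C, k1, k0, ratio=1.0):
    """Distance from point C to the line y + k1*x + k0 = 0, Eq. (13)."""
    return abs((C[1] + k1 * C[0] + k0) / math.sqrt(1 + k1 * k1)) * ratio

def projected_distance(C, D, A, B, ratio=1.0):
    """Distance between C and D after projection onto line AB, Eq. (14)."""
    k1, k0 = line_coeffs(A, B)
    c1 = point_line_distance(C, k1, k0)
    c2 = point_line_distance(D, k1, k0)
    c0 = math.dist(C, D)
    return math.sqrt(c0 * c0 - (c1 - c2) ** 2) * ratio

# Projecting C = (0, 1) and D = (3, 5) onto the x axis (the line through
# (0, 0) and (1, 0)) leaves a horizontal separation of 3.
print(projected_distance((0, 1), (3, 5), (0, 0), (1, 0)))  # 3.0
```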
3. Experimental Evaluation

3.1. Data Description

3.1.1. Database of 2015 ISBI Challenge. The benchmark database (database1) includes 300 cephalometric radiographs (150 in TrainingData and 150 in Test1Data) and is described in Wang et al. [28]. All of the cephalograms were collected from 300 patients aged from 6 to 60 years. The
Figure 4: Algorithm of decision tree regression voting. The training process extracts features f with displacements d from training patches and the ground truth of landmarks and constructs a regression tree Rl; the testing process extracts f_new from testing patches, predicts d_new with Rl, and votes for the landmark positions P(x, y).
image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustrations in Figure 6); the ground truth was the average of the annotations by both doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. The database2 includes 165 cephalometric radiographs, as illustrated in Table 3 (IRB approval number PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical-orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions differed, ranging from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for further cephalometric analysis.
3.1.3. Experimental Settings. In the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. In the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions in database2 differ, preprocessing is required to rescale all of its images to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross validation.
3.2. Landmark Detection

3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with its associated standard deviation. The radial error E, i.e., the distance between the predicted position and the true position of each landmark, is defined as

Ei = |Ai − Bi| × ratio, Ai ∈ A, Bi ∈ B, 1 ≤ i ≤ M,  (15)

where A and B represent the estimated and true positions of each landmark over all cephalograms in the dataset, Ai and Bi are the corresponding positions in sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as

MRE = mean(E) = (1 / M) Σi=1..M Ei,

SD = √(Σi=1..M (Ei − MRE)² / (M − 1)).  (16)

The second evaluation criterion is the success detection rate with respect to a set of precision ranges. If Ei is less than a precision range, the detection of the landmark is considered a successful detection within that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) pz with precision less than z is defined as

pz = |{Ei < z}| / M × 100%, 1 ≤ i ≤ M,  (17)

where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
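The criteria of Eqs. (15)-(17) can be sketched as follows; the ratio value and the simulated predictions are illustrative.

```python
import numpy as np

def evaluate(pred, truth, ratio=0.1):
    """MRE and SD (Eqs. (15)-(16)) and SDR (Eq. (17)) for one landmark.

    pred, truth: (M, 2) arrays of (x, y) positions in pixels over M
    cephalograms; ratio converts pixels to mm.
    """
    E = np.linalg.norm(pred - truth, axis=1) * ratio  # radial errors in mm
    mre = E.mean()
    sd = np.sqrt(((E - mre) ** 2).sum() / (len(E) - 1))
    sdr = {z: (E < z).mean() * 100 for z in (2.0, 2.5, 3.0, 4.0)}
    return mre, sd, sdr

rng = np.random.default_rng(3)
truth = rng.random((150, 2)) * 2000
pred = truth + rng.normal(scale=12.0, size=truth.shape)  # ~1.2 mm noise per axis
mre, sd, sdr = evaluate(pred, truth)
print(round(mre, 2), sorted(sdr))
```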
3.2.2. Experimental Results

(1) Results of database1. Experimental results of cephalometric landmark detection on database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm. The other 14 landmarks are all within an MRE of 2 mm, and 3 of them are within an MRE of 1 mm. The average MRE and SD over the 19 landmarks are 1.69 mm and 1.43 mm, respectively, which shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); between 70% and 80% for two landmarks (L6 and L18); and less than 70% for the other seven landmarks, with the SDR of landmark L10 being the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.

Comparison of our method with three state-of-the-art methods on Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the resolution of the image.

Furthermore, we compare our method with the top two methods of the 2015 ISBI Challenge on cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, but the SD of our method is smaller. The results show that
Figure 5: The distance calculation between two points projected onto a plane.
our method is accurate for landmark detection in lateral cephalograms and is more robust.

(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground truth locations, which shows the success of our proposed algorithm for landmark detection.

For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method in detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1, which indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis

3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are usually used. The description of these eight measurements and the methods for classification are explained in Tables 8 and 9, respectively.

One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground truth type. The SCR is defined as the average of the diagonal of the confusion matrix.
To evaluate the performance of the proposed system on further measurement analysis in database2, another criterion, the mean absolute error (MAE), is calculated by

MAE = (1 / M) Σi=1..M |V̂i − Vi|,  (18)

where V̂i is the value of the angular or linear measurement estimated by our system and Vi is the ground truth measurement obtained using the human-annotated landmarks.
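Eq. (18) in code form; the sample measurement values are illustrative.

```python
import numpy as np

def mae(estimated, ground_truth):
    """Mean absolute error between system measurement values and the
    ground truth values, Eq. (18)."""
    estimated = np.asarray(estimated, float)
    ground_truth = np.asarray(ground_truth, float)
    return np.abs(estimated - ground_truth).mean()

print(mae([82.1, 79.5, 3.2], [81.0, 80.0, 3.0]))  # (1.1 + 0.5 + 0.2) / 3
```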
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1 S, L2 N, L3 Or, L4 P, L5 A, L6 B, L7 Pg, L8 Me, L9 Gn, L10 Go, L11 LI, L12 UI, L13 UL, L14 LL, L15 Sn, L16 Pos, L17 PNS, L18 ANS, L19 Ar.
Table 3: The description of database2.

                            Female   Male
Class I                     29       26
Class III (pretreatment)    32       23
Class III (posttreatment)   32       23

Total number of cephalograms: 165; total number of patients: 110.
3.3.2. Experimental Results

(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for classification of anatomical types in database1. The comparison of our method with the best two methods is given in Table 10, where we have achieved the best classification results for the two measurements of APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than that of the method in [26] and comparable to that of RFRV-CLM [24].

(2) Results of database2. According to the detected 45 landmarks, 27 measurements are automatically calculated by our system on database2, including 17 angular and 10 linear measurements, which are illustrated in Table 11.

The MAE and MAE* of the 27 measurements are illustrated in Table 12. Here, the MAE represents the performance of measurement analysis of our automatic system, while MAE* represents the interobserver variability calculated between the ground truth values obtained by the two experts. For angular measurements, the difference between MAE and MAE* is within 0.5° for 9/17 measurements; the difference between
(a) [Cephalogram with landmark positions 1-45 annotated]
(b) L1: N, L2: Ns, L3: Prn, L4: Cm, L5: ANS, L6: A, L7: Sn, L8: Ss, L9: UL, L10: Stoms, L11: Stomi, L12: LL, L13: Si, L14: Pos, L15: Gs, L16: Mes, L17: Go, L18: Me, L19: Gn, L20: Pg, L21: LIA, L22: B, L23: Id, L24: LI, L25: UI, L26: FA, L27: SPr, L28: UIA, L29: Or, L30: Se, L31: L6A, L32: UL5, L33: L6E, L34: UL6, L35: U6E, L36: U6A, L37: PNS, L38: Ptm, L39: S, L40: Co, L41: Ar, L42: Ba, L43: Bolton, L44: P, L45: ULI
Figure 7: Cephalogram annotation example showing the landmarks in database2. (a) Cephalogram annotation example. (b) The description of the 45 cephalometric landmarks.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark   MRE (mm)   SD (mm)   SDR (%) at 2.0 mm / 2.5 mm / 3.0 mm / 4.0 mm
L1         1.31       1.26      86.67 / 92.67 / 94.67 / 97.33
L2         1.92       2.05      68.00 / 72.00 / 79.33 / 88.67
L3         1.81       1.74      66.67 / 81.33 / 88.00 / 95.33
L4         3.11       2.59      50.00 / 54.00 / 59.33 / 67.33
L5         2.24       1.56      56.00 / 66.00 / 75.33 / 86.67
L6         1.59       1.39      72.00 / 78.67 / 84.00 / 94.00
L7         1.23       0.87      80.67 / 91.33 / 96.67 / 99.33
L8         1.08       0.88      88.67 / 95.33 / 96.67 / 98.67
L9         0.87       0.75      91.33 / 93.33 / 98.00 / 99.33
L10        3.98       2.33      23.33 / 29.33 / 38.00 / 53.33
L11        0.97       0.89      87.33 / 92.00 / 96.00 / 98.67
L12        0.90       1.70      94.67 / 94.67 / 96.00 / 96.67
L13        1.02       0.71      92.00 / 95.33 / 98.00 / 100.00
L14        0.89       0.61      93.33 / 98.67 / 100.00 / 100.00
L15        1.18       1.16      85.33 / 92.00 / 93.33 / 98.00
L16        2.14       1.48      50.00 / 60.67 / 72.67 / 90.67
L17        1.16       0.85      86.00 / 92.67 / 94.67 / 98.67
L18        1.77       1.51      72.00 / 79.33 / 86.00 / 92.67
L19        2.97       2.77      50.00 / 54.00 / 58.00 / 67.33
Average    1.69       1.43      73.37 / 79.65 / 84.46 / 90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method        [24]    [26]    [29]    Ours
MRE (pixels)  16.74   18.46   17.79   16.92
Table 6: Comparison of our method to two methods in terms of MRE, SD, and SDR using Test1Data.

Method   MRE (mm)   SD (mm)   SDR (%) at 2.0 mm / 2.5 mm / 3.0 mm / 4.0 mm
[24]     1.67       1.65      73.68 / 80.21 / 85.19 / 91.47
[26]     1.84       1.76      71.72 / 77.40 / 81.93 / 88.04
Ours     1.69       1.43      73.37 / 79.65 / 84.46 / 90.67
MAE and MAE* is within 1° for 12/17 measurements, and the difference between MAE and MAE* is within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements; the remaining measurement has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed first. Table 13 shows that the MRE and SD between annotations by two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between annotations by two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis are presented in the experiments based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, for three main reasons: (i) some landmarks are located at overlaying anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On the one hand, apart from the algorithm itself, the data factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
(a) [Class I example: true and predicted locations of landmarks 1-45]
(b) [Class III example: true and predicted locations of landmarks 1-45]
Figure 8: Examples of automatic cephalometric landmark detection. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

Fold      MRE (mm)   SD (mm)   SDR (%) at 2.0 mm / 2.5 mm / 3.0 mm / 4.0 mm
1         1.72       1.35      72.49 / 81.32 / 87.04 / 93.37
2         1.72       1.38      71.38 / 79.94 / 85.63 / 92.73
3         1.66       1.32      74.31 / 82.06 / 87.44 / 93.37
4         1.77       1.36      69.67 / 78.96 / 85.32 / 92.93
5         1.68       1.54      72.53 / 80.88 / 86.87 / 92.96
Average   1.71       1.39      72.08 / 80.63 / 86.46 / 93.07
the performance of the system depends on the consistency between the training data and the testing data, as for any supervised learning-based method.
5. Conclusion
In conclusion, we design a new framework of landmark detection in lateral cephalograms across low-to-high resolutions. At each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of the image information at different resolutions. At lower resolutions, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structure involving the specific landmarks. At higher resolutions, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average successful detection rate of 72% within a precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.  Measurement  Description in mathematics and in words
1    ANB    ∠L5L2L6: the angle between landmarks 5, 2, and 6.
2    SNB    ∠L1L2L6: the angle between landmarks 1, 2, and 6.
3    SNA    ∠L1L2L5: the angle between landmarks 1, 2, and 5.
4    ODI    ∠(L5L6, L8L10) + ∠(L17L18, L4L3): the arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3).
5    APDI   ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18): the arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18).
6    FHI    |L1L10| / |L2L8|: the ratio of the posterior face height (PFH, the distance from L1 to L10) to the anterior face height (AFH, the distance from L2 to L8).
7    FHA    ∠(L1L2, L10L9): the angle between the SN plane (L1L2) and the mandibular plane (L10L9).
8    MW     |L12L11| if x(L12) < x(L11), otherwise −|L12L11|: when the x-coordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|.
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.  Measurement  Type 1                          Type 2                                 Type 3                                Type 4
1    ANB          3.2°-5.7°: class I (normal)     >5.7°: class II                        <3.2°: class III                      --
2    SNB          74.6°-78.7°: normal mandible    <74.6°: retrognathic mandible          >78.7°: prognathic mandible           --
3    SNA          79.4°-83.2°: normal maxilla     >83.2°: prognathic maxilla             <79.4°: retrognathic maxilla          --
4    ODI          normal: 74.5° ± 6.07°           >80.5°: deep bite tendency             <68.4°: open bite tendency            --
5    APDI         normal: 81.4° ± 3.8°            <77.6°: class II tendency              >85.2°: class III tendency            --
6    FHI          normal: 0.65-0.75               >0.75: short face tendency             <0.65: long face tendency             --
7    FHA          normal: 26.8°-31.4°             >31.4°: mandible high angle tendency   <26.8°: mandible lower angle tendency  --
8    MW           normal: 2 mm-4.5 mm             MW = 0 mm: edge to edge                MW < 0 mm: anterior cross bite        MW > 4.5 mm: large overjet
Table 10: Comparison of our method to two methods in terms of SCR using Test1Data.

Method   Success classification rate, SCR (%)
         ANB     SNB     SNA     ODI     APDI    FHI     FHA     MW      Average
[24]     64.99   84.52   68.45   84.64   82.14   67.92   75.54   82.19   76.30
[26]     59.42   71.09   59.00   78.04   80.16   58.97   77.03   83.94   70.96
Ours     58.61   78.85   59.86   76.59   83.49   82.44   77.18   83.20   75.03
Table 11: Description of cephalometric measurements in database2.

The angles of three points (unit: °):
SNA (∠L39L1L6): the angle of landmarks L39, L1, L6.
SNB (∠L39L1L22): the angle of landmarks L39, L1, L22.
ANB (∠L6L1L22): the angle of landmarks L6, L1, L22.
SNPg (∠L39L20L1): the angle of landmarks L39, L20, L1.
NAPg (∠L1L6L20): the angle of landmarks L1, L6, L20.
NSGn (∠L1L39L19): the angle of landmarks L1, L39, L19.
Ns-Prn-Pos (∠L2L3L14): the angle of landmarks L2, L3, L14.
Cm-Sn-UL (∠L4L7L9): the angle of landmarks L4, L7, L9.
LL-B′-Pos (∠L12L13L14): the angle of landmarks L12, L13, L14.
Angle of the jaw (∠L41L17L18): the angle of landmarks L41, L17, L18.
Angle of convexity (∠L2L7L14): the angle of landmarks L2, L7, L14.

The angles between two planes (unit: °):
FH-SN (∠(L44L29, L39L1)): the angle between the FH plane (L44L29) and the SN plane (L39L1).
UI-SN (∠(L28L25, L39L1)): the angle between the upper incisor axis (L28L25) and the SN plane (L39L1).
LI-FH (∠(L24L21, L44L29)): the angle between the lower incisor axis (L24L21) and the FH plane (L44L29).
UI-LI (∠(L28L25, L24L21)): the angle between the upper and lower incisors (L28L25, L24L21).
H angle (∠(L9L14, L8L14)): the angle between the H plane (L9L14) and the NB plane (L8L14).
Z angle (∠(L44L29, L9L14)): the rear lower angle between the FH plane (L44L29) and the H plane (L9L14).

The distances between two points (unit: mm):
N-Me (|L1L18|): the forward height, the distance between landmarks L1 and L18.
N-ANS (|L1L5|): the up-forward height, the distance between landmarks L1 and L5.
ANS-Me (|L5L18|): the down-forward height, the distance between landmarks L5 and L18.
Stoms-UI (|L10L25|): the vertical distance between landmarks L10 and L25.

The distances from a point to a plane in the horizontal direction (unit: mm):
UI-AP (|L25·L6L20|): the distance from landmark L25 to the AP plane (L6L20).
LI-AP (|L24·L6L20|): the distance from landmark L24 to the AP plane (L6L20).
UL-EP (|L9·L3L14|): the distance from landmark L9 to the EP plane (L3L14).
Table 12: Results of our method for the 27 measurements in terms of MAE and MAE* using database2 (MAE*: interobserver variability).

Name                  MAE    MAE*   MAE − MAE*
The angles of three points (unit: °):
SNA                   1.95   1.70   0.24
SNB                   1.57   1.17   0.39
ANB                   1.29   1.08   0.21
SNPg                  1.59   1.20   0.39
NAPg                  2.83   2.44   0.39
NSGn                  1.42   0.96   0.47
Ns-Prn-Pos            1.68   1.10   0.58
Cm-Sn-UL              5.52   3.80   1.72
LL-B′-Pos             3.95   3.44   0.51
Angle of the jaw      3.20   1.66   1.54
Angle of convexity    2.67   1.86   0.81
The angles between two planes (unit: °):
FH-SN                 2.00   1.96   0.04
UI-SN                 4.71   2.81   1.90
LI-FH                 3.34   2.27   1.07
UI-LI                 6.90   4.66   2.24
H angle               0.94   0.80   0.14
Z angle               1.69   1.86   −0.17
The distances between two points (unit: mm):
N-Me                  1.37   1.10   0.27
N-ANS                 1.57   1.34   0.24
ANS-Me                1.06   0.96   0.10
Stoms-UI              0.75   0.42   0.33
The distances from the point to the plane in the horizontal direction (unit: mm):
UI-AP                 0.96   0.71   0.25
LI-AP                 0.96   0.76   0.20
UL-EP                 0.50   0.36   0.14
LL-EP                 0.45   0.39   0.05
MaxE                  2.06   1.79   0.27
The distances between two points projected to a plane (unit: mm):
Wits                  4.70   3.53   1.16
Table 11: Continued.

LL-EP (|L12·L3L14|): the distance from landmark L12 to the EP plane (L3L14).
MaxE (|L6·L5L37|): the maxillary length, the distance from landmark L6 to the palatal plane (L5L37).

The distances between two points projected to a plane (unit: mm):
Wits (|L6·L32L34·L22|): the distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34).
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            Interobserver variability
            MRE (mm)   SD (mm)
database1   1.38       1.55
database2   1.26       1.27
algorithm is confirmed using this clinical dataset. In addition, automatic measurement of the clinical parameters has also achieved satisfactory results. In the future, we will devote more effort to improving the performance of automatic analysis in lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for the supply of image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcaniz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcaniz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), pp. 159–166, Lecture Notes in Computer Science, R. Larsen, Ed., Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
P(T) = ∑_{j∈T} w_j,    (4)

where w_j is the weight of observation j; in this paper, w_j = 1/N. Third, all elements of the observation are sorted in ascending order, and every element is regarded as a splitting candidate. T_U is the unsplit set of all observation indices corresponding to missing values. Finally, the best way to split node t using f_j is determined by maximizing the reduction in MSE, Δε_t, among all splitting candidates. For all splitting
[Figure 2 graphic: an image patch with the SIFT geometry descriptor grid overlaid, bin indexes 0-15 when stacked, shown in the image axes (x, y) and the descriptor axes (x̂, ŷ).]
Figure 2: Diagram of the SIFT feature descriptor of an image patch.
[Figure 3 graphics: (a) original image showing a patch; (b) geometry descriptor; (c) the extracted feature vector, magnitude (0-120) versus index (0-140).]
Figure 3: SIFT-based patch feature extraction. (a) Original image showing a patch. (b) Geometry descriptor. (c) The extracted feature vector.
candidates in the observation f_j, the following steps are performed:

(1) Split the observations in node t into left and right child nodes (t_L and t_R, respectively).

(2) Compute the reduction in MSE, Δε_t. For a particular splitting candidate, t_L and t_R represent the observation indices in the sets T_L and T_R, respectively. If f_j does not contain any missing values, then the reduction in MSE, Δε_t, for the current splitting candidate is

Δε_t = P(T)ε_t − P(T_L)ε_{t_L} − P(T_R)ε_{t_R}.    (5)

If f_j contains missing values, then the reduction in MSE, Δε′_t, is

Δε′_t = P(T − T_U)ε_t − P(T_L)ε_{t_L} − P(T_R)ε_{t_R},    (6)

where T − T_U is the set of all observation indices in node t that are not missing.

(3) Choose the splitting candidate that yields the largest MSE reduction. In this way, the observations in node t are split at the candidate that maximizes the MSE reduction.
To stop splitting nodes of the decision tree, two rules can be followed: (1) the node is pure, i.e., the MSE for the observed response in this node is less than the MSE for the observed response in the entire data multiplied by the tolerance on quadratic error per node; and (2) the decision tree reaches the set limit on its depth, for example, the maximum number of splitting nodes, max_splits.
The simplicity and the performance of a decision tree should be considered together when improving it. A deep tree usually achieves high accuracy on the training data; however, it may not achieve high accuracy on test data. This means that a deep tree tends to overfit, i.e., its test accuracy is much less than its training accuracy. On the contrary, a shallow tree does not achieve high training accuracy but can be more robust, i.e., its training accuracy can be similar to its accuracy on a test set. Moreover, a shallow tree is easy to interpret and saves time in prediction. In addition, tree accuracy can be estimated by cross validation when there are not enough data for separate training and testing.
There are two ways to improve the performance of decision trees by minimizing the cross-validated loss. One is to select the optimal parameter value to control the depth of the decision trees; the other is postpruning after creating the decision trees. In this paper, we use the parameter max_splits to control the depth of the resulting decision trees. Setting a large value for max_splits leads to growing a deep tree, while setting a small value for max_splits yields a shallow tree with larger leaves. To select the appropriate value for max_splits, the following steps are performed: (1) set a spaced set of values from 1 to the total sample size for max_splits per tree; (2) create cross-validated regression trees for the data using the set values for max_splits and calculate the cross-validated errors; and (3) obtain the appropriate value of max_splits by minimizing the cross-validated errors.
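As a concrete illustration of steps (1)-(3) above, the sketch below tunes the number of splits by cross-validation. It uses scikit-learn's DecisionTreeRegressor as a stand-in for the paper's regression tree (with max_leaf_nodes = number of splits + 1); the synthetic data and the candidate grid are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # feature vectors
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # regression targets

# Step (1): a spaced set of candidate values for the number of splits.
candidates = [2, 4, 8, 16, 32, 64]

# Steps (2)-(3): cross-validate each candidate and keep the value that
# minimizes the cross-validated error.
errors = {}
for s in candidates:
    tree = DecisionTreeRegressor(max_leaf_nodes=s + 1, random_state=0)
    mse = -cross_val_score(tree, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    errors[s] = mse
best_splits = min(errors, key=errors.get)
```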
(2) Prediction. In the prediction process, responses for new data can easily be predicted after creating a regression tree R_l. Suppose f_new is the new data (i.e., a feature vector extracted from a new patch P_new in the test cephalogram). Following the rules of the regression tree, the nodes select the specific attributes from the new observation f_new and reach a leaf step by step, which stores the mean displacement d_new. In this way, we predict the displacement from the center of patch P_new to the landmark l from the SIFT feature vector by the regressor R_l.
2.1.3. MDTRV Using SIFT-Based Patch Features
(1) Decision Tree Regression Voting (DTRV). As illustrated in Figure 4, the algorithm of automatic cephalometric landmark detection using decision tree regression voting (DTRV) at a single scale consists of training and testing processes, as in any supervised learning algorithm. The training process begins with feature extraction for patches sampled from the training images. Then a regression tree is constructed for each landmark with the input feature vectors and displacements (f, d). Here, f represents the observations and d represents the targets for this regression problem. The testing process begins with the same feature extraction step. Then the resulting f_new is used to predict the displacement d_new by the regressor R_l. In the end, the optimal landmark position is obtained via voting. Voting styles include single unit voting and single weighted voting. As mentioned in [35], these two styles perform equally well for voting optimal landmark positions in medical images. In this paper, we use single unit voting.
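The training/testing pipeline described above can be sketched as a toy, single-scale example. The landmark position, the use of patch centers as stand-in "features" (instead of real SIFT descriptors), and the accumulator size are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
landmark = np.array([50.0, 60.0])             # true position in a 100x100 image
centers = rng.uniform(0, 100, size=(500, 2))  # training patch centers
features = centers                            # stand-in observations f
displacements = landmark - centers            # regression targets d

reg = DecisionTreeRegressor(max_depth=8, random_state=0)
reg.fit(features, displacements)

# Testing: predict a displacement per patch, then let each patch cast one
# unit vote at its predicted landmark position (single unit voting).
test_centers = rng.uniform(0, 100, size=(200, 2))
votes = test_centers + reg.predict(test_centers)
acc = np.zeros((100, 100))                    # vote accumulator
for x, y in votes:
    ix, iy = int(round(x)), int(round(y))
    if 0 <= ix < 100 and 0 <= iy < 100:
        acc[iy, ix] += 1
iy_hat, ix_hat = np.unravel_index(acc.argmax(), acc.shape)  # detected position
```

The accumulator's argmax lands near the true landmark because all predicted votes cluster around it.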
(2) MDTRV Using SIFT-Based Patch Features. Finally, a new framework of MDTRV using SIFT-based patch features is proposed by using a simple and efficient strategy for accurate landmark detection in lateral cephalograms. There are four stages in the proposed algorithm of MDTRV, which are iteratively performed at the scales of 0.125, 0.25, 0.5, and 1. At the scale of 0.125, K training patches with K displacements are randomly sampled over the whole cephalogram for a specific landmark l (l = 1, ..., L). The decision tree regressor R¹_l is created by using these training samples. For prediction, SIFT features are extracted for K′ testing patches sampled over the whole cephalogram, and K′ displacements are predicted by the regressor R¹_l using the extracted features. The optimal position of the specific landmark l is obtained via voting over all predicted displacements. At the scales of 0.25, 0.5, and 1, only the patch sampling rule differs from the procedure at the scale of 0.125; that is, the training and testing patches are randomly sampled in the (2S + 1) × (2S + 1) neighborhood of the true and initial landmark positions, respectively. The estimated landmark position is used as the initial landmark position at the next scale of the testing process. The optimal position of the specific landmark l at the scale of 1 is refined by using a Hough forest [36], which is regarded as the final resulting location. This multiresolution coarse-to-fine strategy has greatly improved the accuracy of cephalometric landmark detection.
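The four-stage schedule can be sketched as a skeleton. Here run_mdtrv and the predict_displacements callback are hypothetical names; the per-scale regressors and the Hough-forest refinement at scale 1 are abstracted into the callback, and median voting stands in for unit voting over an accumulator:

```python
import numpy as np

SCALES = (0.125, 0.25, 0.5, 1.0)
S = 40            # neighborhood half-size used at the finer scales
K_TEST = 400      # number of test patches sampled per scale

def run_mdtrv(image_shape, predict_displacements, rng=np.random):
    """Coarse-to-fine landmark estimation for one landmark."""
    h, w = image_shape
    estimate = None
    for scale in SCALES:
        if estimate is None:
            # Coarsest scale: patches sampled over the whole cephalogram.
            xs = rng.uniform(0, w, K_TEST)
            ys = rng.uniform(0, h, K_TEST)
            centers = np.column_stack([xs, ys])
        else:
            # Finer scales: patches sampled in the (2S+1)x(2S+1)
            # neighborhood of the estimate carried over from the
            # previous scale.
            centers = estimate + rng.uniform(-S, S, size=(K_TEST, 2))
        votes = centers + predict_displacements(scale, centers)
        estimate = np.median(votes, axis=0)   # stand-in for unit voting
    return estimate
```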
2.2. Parameter Measurement. Measurements are either angular or linear parameters calculated by using cephalometric landmarks and planes (refer to Tables 1 and 2). According to geometrical structure, measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected to a plane. All measurements can be calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among three points A, B, and C; then the angle of these three points is calculated by

∠ABC = arccos((a² + c² − b²) / (2ac)) × (180/π),    (7)

with

a = ‖A − B‖, b = ‖A − C‖, c = ‖B − C‖,    (8)

where ‖A − B‖ = sqrt((x_A − x_B)² + (y_A − y_B)²) represents the Euclidean distance between A and B.
2.2.2. The Angle between Two Planes. As the cephalometric radiographs are 2D images in this study, the planes are projected as straight lines; thus, each plane is determined by two points. The angle between two planes AB and CD is calculated by

∠AB = arctan((y_A − y_B) / (x_A − x_B)) × (180/π),
∠CD = arctan((y_C − y_D) / (x_C − x_D)) × (180/π),    (9)

∠(AB, CD) = |∠AB − ∠CD|.    (10)
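A direct transcription of Eqs. (9)-(10) follows. Note that arctan of the slope assumes a non-vertical plane (x_A ≠ x_B); handling a vertical plane would need a special case the text does not cover:

```python
import math

def plane_angle_deg(A, B):
    """Inclination of the plane (line) through A and B, per Eq. (9)."""
    return math.degrees(math.atan((A[1] - B[1]) / (A[0] - B[0])))

def angle_between_planes(A, B, C, D):
    """Eq. (10): absolute difference of the two inclinations."""
    return abs(plane_angle_deg(A, B) - plane_angle_deg(C, D))
```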
2.2.3. The Distance between Two Points. The distance between two points is calculated using the following equation:

|AB| = ‖A − B‖ × ratio,    (11)

where ratio is a scale indicator that can be calculated from the calibrated gauge in the lateral cephalograms; its unit is mm/pixel.
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as

y + k₁x + k₀ = 0.    (12)

Thus, the distance of point C to the plane AB is calculated by

|C·AB| = |(y_C + k₁x_C + k₀) / sqrt(1 + k₁²)| × ratio.    (13)
2.2.5. The Distance between Two Points Projected to a Plane. The calculation of the distance between two points projected to a plane is illustrated in Figure 5. First, we use Equation (12) to determine the plane AB. Then we calculate the distances from the two points C and D to the plane AB as c₁ and c₂ by Equation (13). Third, the distance c₀ between the two points C and D is calculated by Equation (11). Finally, the distance |C·AB·D| between the two points C and D projected to the plane AB is represented as c and is calculated by

c = sqrt(c₀² − (c₁ − c₂)²) × ratio.    (14)
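A sketch of Eq. (14) follows. Two deliberate choices in this version: signed perpendicular distances are used for c₁ and c₂ so that C and D on opposite sides of plane AB are handled (the text's absolute values implicitly assume both points lie on the same side), and pixel units are kept internally with ratio applied once at the end:

```python
import math

def projected_distance(C, D, A, B, ratio=1.0):
    """Eq. (14): distance between C and D after projection onto plane AB."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])   # Eq. (12), non-vertical plane
    k0 = -A[1] - k1 * A[0]

    def perp(P):                           # Eq. (13) without the ratio
        return (P[1] + k1 * P[0] + k0) / math.sqrt(1 + k1**2)

    c0 = math.dist(C, D)                   # Eq. (11) without the ratio
    c1, c2 = perp(C), perp(D)
    return math.sqrt(c0**2 - (c1 - c2)**2) * ratio
```

For example, projecting (0, 3) and (4, 7) onto the x-axis plane through (0, 0) and (1, 0) gives a distance of 4.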
3. Experimental Evaluation

3.1. Data Description

3.1.1. Database of 2015 ISBI Challenge. The benchmark database (database1) includes 300 cephalometric radiographs (150 in TrainingData and 150 in Test1Data), as described in Wang et al. [28]. All of the cephalograms were collected from 300 patients aged from 6 to 60 years. The
[Figure 4 flowchart. Training process: training patches and the ground truth of landmarks feed feature extraction, yielding (f, d) for regression tree construction, which produces R_l. Testing process: testing patches feed feature extraction, yielding f_new; prediction gives d_new, and voting outputs the landmark positions P(x, y).]
Figure 4: Algorithm of decision tree regression voting.
image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustrations in Figure 6); the ground truth was the average of the annotations by the two doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. The database2 included 165 cephalometric radiographs, as illustrated in Table 3 (the IRB approval number is PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from the 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical-orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions were different, ranging from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for further cephalometric analysis.
3.1.3. Experimental Settings. In the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. In the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions in database2 are different, preprocessing is required to rescale all the images to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross validation.
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with the associated standard deviation. The radial error E, i.e., the distance between the predicted position and the true position of each landmark, is defined as

$$E_i = \left\| A_i - B_i \right\| \times ratio, \quad A_i \in A,\; B_i \in B,\; 1 \le i \le M, \qquad (15)$$
where A and B represent the sets of estimated and true positions of each landmark over all cephalograms in the dataset, A_i and B_i are the corresponding positions in sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as
$$\mathrm{MRE} = \mathrm{mean}(E) = \frac{\sum_{i=1}^{M} E_i}{M}, \qquad \mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{M} \left( E_i - \mathrm{MRE} \right)^2}{M - 1}}. \qquad (16)$$
The second evaluation criterion is the success detection rate with respect to the 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm precision ranges. If E_i is less than a precision range, the detection of the landmark is considered a successful detection in that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) p_z with precision less than z is defined as

$$p_z = \frac{\#\left\{ E_i < z \right\}}{M} \times 100\%, \quad 1 \le i \le M, \qquad (17)$$
where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
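The three criteria of Equations (15)–(17) can be sketched for one landmark as follows (the function name and array layout are our own convention; ratio is the pixel-to-millimetre factor):

```python
import numpy as np

def evaluate_landmark(pred, true, ratio=1.0):
    """Radial errors (Eq. (15)), MRE/SD (Eq. (16)), and SDR (Eq. (17)).

    pred, true: (M, 2) arrays of predicted/ground-truth positions of one
    landmark over M cephalograms; ratio converts pixels to millimetres.
    """
    E = np.linalg.norm(pred - true, axis=1) * ratio       # radial errors E_i
    mre = E.mean()
    sd = np.sqrt(((E - mre) ** 2).sum() / (len(E) - 1))   # sample SD over M - 1
    sdr = {z: (E < z).mean() * 100.0 for z in (2.0, 2.5, 3.0, 4.0)}
    return mre, sd, sdr
```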
3.2.2. Experimental Results
(1) Results of database1. The experimental results of cephalometric landmark detection using database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm. The other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD over the 19 landmarks are 1.69 mm and 1.43 mm, respectively, which shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); the SDR is between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); the SDR is between 70% and 80% for two landmarks (L6 and L18); and the SDRs for the other seven landmarks are less than 70%, where the SDR of landmark L10 is the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.
A comparison of our method with three state-of-the-art methods using Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the image resolution.
Furthermore, we compare our method with the top two methods in the 2015 ISBI Challenge of cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, but the SD of our method is less than that of RFRV-CLM. The results show that
Figure 5: The distance calculation between two points projected to a plane (points C and D, plane AB, point-to-plane distances c1 and c2, point-to-point distance c0, and projected distance c).
our method is accurate for landmark detection in lateralcephalograms and is more robust
(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground truth locations, which shows the success of our proposed algorithm for landmark detection.

For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method in detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show comparable performance to the results on database1, which indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are commonly used. The descriptions of these eight measurements and the methods for classification are given in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
To evaluate the performance of the proposed system for further measurement analysis on database2, another criterion, the mean absolute error (MAE), is calculated by

$$\mathrm{MAE} = \frac{\sum_{i=1}^{M} \left| \hat{V}_i - V_i \right|}{M}, \qquad (18)$$

where $\hat{V}_i$ is the value of the angular or linear measurement estimated by our system and $V_i$ is the ground truth measurement obtained using the human-annotated landmarks.
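Both criteria can be sketched as follows (a minimal sketch; we assume the confusion matrix holds raw counts, so each row is normalised by its ground-truth total before averaging the diagonal):

```python
import numpy as np

def scr(confusion):
    """Success classification rate: average of the per-type diagonal, where
    each row (ground truth type) of a count matrix is normalised by its total."""
    conf = np.asarray(confusion, dtype=float)
    per_type = conf.diagonal() / conf.sum(axis=1)
    return per_type.mean() * 100.0

def mae(estimated, truth):
    """Mean absolute error of Equation (18) over M cephalograms."""
    estimated, truth = np.asarray(estimated, float), np.asarray(truth, float)
    return np.abs(estimated - truth).mean()
```

For example, a two-type count matrix [[8, 2], [1, 9]] gives per-type rates of 80% and 90%, hence an SCR of 85%.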
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1 S, L2 N, L3 Or, L4 P, L5 A, L6 B, L7 Pg, L8 Me, L9 Gn, L10 Go, L11 LI, L12 UI, L13 UL, L14 LL, L15 Sn, L16 Pos, L17 PNS, L18 ANS, L19 Ar.
Table 3: The description of database2.

                            Female   Male
Class I                       29      26
Class III (pretreatment)      32      23
Class III (posttreatment)     32      23
Total number of cephalograms: 165
Total number of patients: 110
3.3.2. Experimental Results
(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method with the best two methods is given in Table 10, where we have achieved the best classification results for the two measurements APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than that of the method in [26] and comparable to that of RFRV-CLM [24].
(2) Results of database2. Based on the detected 45 landmarks, 27 measurements are automatically calculated by our system on database2, including 17 angular and 10 linear measurements, as illustrated in Table 11.
The MAE and MAE* of the 27 measurements are illustrated in Table 12. Here, the MAE represents the performance of the measurement analysis of our automatic system, while MAE* represents the interobserver variability calculated between the ground truth values obtained by the two experts. For angular measurements, the difference between MAE and MAE* is within 0.5° for 9/17 measurements; the difference between
Figure 7: Cephalogram annotation example showing landmarks in database2. (a) Cephalogram annotation example. (b) Description of the 45 cephalometric landmarks: L1 N, L2 Ns, L3 Prn, L4 Cm, L5 ANS, L6 A, L7 Sn, L8 Ss, L9 UL, L10 Stoms, L11 Stomi, L12 LL, L13 Si, L14 Pos, L15 Gs, L16 Mes, L17 Go, L18 Me, L19 Gn, L20 Pg, L21 LIA, L22 B, L23 Id, L24 LI, L25 UI, L26 FA, L27 SPr, L28 UIA, L29 Or, L30 Se, L31 L6A, L32 UL5, L33 L6E, L34 UL6, L35 U6E, L36 U6A, L37 PNS, L38 Ptm, L39 S, L40 Co, L41 Ar, L42 Ba, L43 Bolton, L44 P, L45 ULI.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark  MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
L1        1.31      1.26     86.67   92.67   94.67   97.33
L2        1.92      2.05     68.00   72.00   79.33   88.67
L3        1.81      1.74     66.67   81.33   88.00   95.33
L4        3.11      2.59     50.00   54.00   59.33   67.33
L5        2.24      1.56     56.00   66.00   75.33   86.67
L6        1.59      1.39     72.00   78.67   84.00   94.00
L7        1.23      0.87     80.67   91.33   96.67   99.33
L8        1.08      0.88     88.67   95.33   96.67   98.67
L9        0.87      0.75     91.33   93.33   98.00   99.33
L10       3.98      2.33     23.33   29.33   38.00   53.33
L11       0.97      0.89     87.33   92.00   96.00   98.67
L12       0.90      1.70     94.67   94.67   96.00   96.67
L13       1.02      0.71     92.00   95.33   98.00  100.00
L14       0.89      0.61     93.33   98.67  100.00  100.00
L15       1.18      1.16     85.33   92.00   93.33   98.00
L16       2.14      1.48     50.00   60.67   72.67   90.67
L17       1.16      0.85     86.00   92.67   94.67   98.67
L18       1.77      1.51     72.00   79.33   86.00   92.67
L19       2.97      2.77     50.00   54.00   58.00   67.33
Average   1.69      1.43     73.37   79.65   84.46   90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method        [24]    [26]    [29]    Ours
MRE (pixels)  16.74   18.46   17.79   16.92
Table 6: Comparison of our method with two methods in terms of MRE, SD, and SDR using Test1Data.

Method  MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
[24]    1.67      1.65     73.68   80.21   85.19   91.47
[26]    1.84      1.76     71.72   77.40   81.93   88.04
Ours    1.69      1.43     73.37   79.65   84.46   90.67
MAE and MAE* is within 1° for 12/17 measurements; and the difference between MAE and MAE* is within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements; the remaining measurement has a difference of 1.16 mm. These results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed first. Table 13 shows that the MRE and SD between the annotations by the two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between the annotations of the two orthodontists are 1.26 mm and 1.27 mm, respectively. The experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis are presented in the experiments based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, mainly for three reasons: (i) some landmarks are located at overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On the one hand, apart from the algorithm itself, the data factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8: Examples of automatic cephalometric landmark detection, showing the true and predicted locations of the 45 landmarks. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

Fold     MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
1        1.72      1.35     72.49   81.32   87.04   93.37
2        1.72      1.38     71.38   79.94   85.63   92.73
3        1.66      1.32     74.31   82.06   87.44   93.37
4        1.77      1.36     69.67   78.96   85.32   92.93
5        1.68      1.54     72.53   80.88   86.87   92.96
Average  1.71      1.39     72.08   80.63   86.46   93.07
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we have designed a new framework of landmark detection in lateral cephalograms with low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information in different resolutions. In a lower resolution, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structure involving the specific landmarks. In a higher resolution, the local structure information involves more detail information, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average 72% successful detection rate within the precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, which is over two times the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

1. ANB: ∠L5L2L6. The angle between landmarks L5, L2, and L6.
2. SNB: ∠L1L2L6. The angle between landmarks L1, L2, and L6.
3. SNA: ∠L1L2L5. The angle between landmarks L1, L2, and L5.
4. ODI: ∠(L5L6, L8L10) + ∠(L17L18, L4L3). The arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3).
5. APDI: ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18). The arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18).
6. FHI: |L1L10|/|L2L8|. The ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8).
7. FHA: ∠(L1L2, L10L9). The angle between the SN plane (L1L2) and the mandibular plane (L10L9).
8. MW: |L12L11| if x(L12) < x(L11), otherwise −|L12L11|. When the x coordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|.
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

1. ANB: 3.2°–5.7° class I (normal); >5.7° class II; <3.2° class III.
2. SNB: 74.6°–78.7° normal mandible; <74.6° retrognathic mandible; >78.7° prognathic mandible.
3. SNA: 79.4°–83.2° normal maxilla; >83.2° prognathic maxilla; <79.4° retrognathic maxilla.
4. ODI: normal 74.5° ± 6.07°; >80.5° deep bite tendency; <68.4° open bite tendency.
5. APDI: normal 81.4° ± 3.8°; <77.6° class II tendency; >85.2° class III tendency.
6. FHI: normal 0.65–0.75; >0.75 short face tendency; <0.65 long face tendency.
7. FHA: normal 26.8°–31.4°; >31.4° mandible high angle tendency; <26.8° mandible low angle tendency.
8. MW: normal 2 mm–4.5 mm; MW = 0 mm edge to edge; MW < 0 mm anterior cross bite; MW > 4.5 mm large overjet.
Table 10: Comparison of our method with two methods in terms of SCR using Test1Data.

Method  Success classification rates, SCR (%): ANB    SNB    SNA    ODI    APDI   FHI    FHA    MW     Average
[24]    64.99  84.52  68.45  84.64  82.14  67.92  75.54  82.19  76.30
[26]    59.42  71.09  59.00  78.04  80.16  58.97  77.03  83.94  70.96
Ours    58.61  78.85  59.86  76.59  83.49  82.44  77.18  83.20  75.03
Table 11: Description of cephalometric measurements in database2.

The angles of three points (unit: °):
SNA: ∠L39L1L6 (the angle of landmarks L39, L1, L6)
SNB: ∠L39L1L22 (the angle of landmarks L39, L1, L22)
ANB: ∠L6L1L22 (the angle of landmarks L6, L1, L22)
SNPg: ∠L39L20L1 (the angle of landmarks L39, L20, L1)
NAPg: ∠L1L6L20 (the angle of landmarks L1, L6, L20)
NSGn: ∠L1L39L19 (the angle of landmarks L1, L39, L19)
Ns-Prn-Pos: ∠L2L3L14 (the angle of landmarks L2, L3, L14)
Cm-Sn-UL: ∠L4L7L9 (the angle of landmarks L4, L7, L9)
LL-B′-Pos: ∠L12L13L14 (the angle of landmarks L12, L13, L14)
Angle of the jaw: ∠L41L17L18 (the angle of landmarks L41, L17, L18)
Angle of convexity: ∠L2L7L14 (the angle of landmarks L2, L7, L14)

The angles between two planes (unit: °):
FH-SN: ∠(L44L29, L39L1) (the angle between the FH plane (L44L29) and the SN plane (L39L1))
UI-SN: ∠(L28L25, L39L1) (the angle between the upper incisor axis (L28L25) and the SN plane (L39L1))
LI-FH: ∠(L24L21, L44L29) (the angle between the lower incisor axis (L24L21) and the FH plane (L44L29))
UI-LI: ∠(L28L25, L24L21) (the angle between the upper and lower incisor axes (L28L25, L24L21))
H angle: ∠(L9L14, L8L14) (the angle between the H plane (L9L14) and the NB plane (L8L14))
Z angle: ∠(L44L29, L9L14) (the rear lower angle between the FH plane (L44L29) and the H plane (L9L14))

The distances between two points (unit: mm):
N-Me: |L1L18| (the forward height; the distance between landmarks L1 and L18)
N-ANS: |L1L5| (the up-forward height; the distance between landmarks L1 and L5)
ANS-Me: |L5L18| (the down-forward height; the distance between landmarks L5 and L18)
Stoms-UI: |L10L25| (the vertical distance between landmarks L10 and L25)

The distances from a point to a plane in the horizontal direction (unit: mm):
UI-AP: |L25 · L6L20| (the distance from landmark L25 to the AP plane (L6L20))
LI-AP: |L24 · L6L20| (the distance from landmark L24 to the AP plane (L6L20))
UL-EP: |L9 · L3L14| (the distance from landmark L9 to the EP plane (L3L14))
Table 12: Results of our method for the 27 measurements in terms of MAE and MAE* using database2.

Name                  MAE    MAE*   MAE − MAE*
The angles of three points (unit: °):
SNA                   1.95   1.70   0.24
SNB                   1.57   1.17   0.39
ANB                   1.29   1.08   0.21
SNPg                  1.59   1.20   0.39
NAPg                  2.83   2.44   0.39
NSGn                  1.42   0.96   0.47
Ns-Prn-Pos            1.68   1.10   0.58
Cm-Sn-UL              5.52   3.80   1.72
LL-B′-Pos             3.95   3.44   0.51
Angle of the jaw      3.20   1.66   1.54
Angle of convexity    2.67   1.86   0.81
The angles between two planes (unit: °):
FH-SN                 2.00   1.96   0.04
UI-SN                 4.71   2.81   1.90
LI-FH                 3.34   2.27   1.07
UI-LI                 6.90   4.66   2.24
H angle               0.94   0.80   0.14
Z angle               1.69   1.86   −0.17
The distances between two points (unit: mm):
N-Me                  1.37   1.10   0.27
N-ANS                 1.57   1.34   0.24
ANS-Me                1.06   0.96   0.10
Stoms-UI              0.75   0.42   0.33
The distances from the point to the plane in the horizontal direction (unit: mm):
UI-AP                 0.96   0.71   0.25
LI-AP                 0.96   0.76   0.20
UL-EP                 0.50   0.36   0.14
LL-EP                 0.45   0.39   0.05
MaxE                  2.06   1.79   0.27
The distances between two points projected to a plane (unit: mm):
Wits                  4.70   3.53   1.16
MAE*: interobserver variability.
Table 11: Continued.

LL-EP: |L12 · L3L14| (the distance from landmark L12 to the EP plane (L3L14))
MaxE: |L6 · L5L37| (maxillary length; the distance from landmark L6 to the palatal plane (L5L37))

The distances between two points projected to a plane (unit: mm):
Wits: |L6 · L32L34 · L22| (the distance between the two landmarks L6 and L22 projected to the functional jaw plane (L32L34))
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            Interobserver variability: MRE (mm)  SD (mm)
database1   1.38   1.55
database2   1.26   1.27
algorithm is confirmed using this clinical dataset. In addition, the automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis in lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
Data Availability

The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure

This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments

The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for the supply of image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung des Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcañiz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcañiz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), Lecture Notes in Computer Science, R. Larsen, Ed., pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of Conference on ACM Multimedia, pp. 1469–1472, Firenze, Italy, October, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
candidates in the observation f_j, the following steps are performed:

(1) Split the observations in node t into left and right child nodes (t_L and t_R, respectively).

(2) Compute the reduction in MSE, Δε_t. For a particular splitting candidate, t_L and t_R represent the observation indices in the sets T_L and T_R, respectively. If f_j does not contain any missing values, then the reduction in MSE, Δε_t, for the current splitting candidate is
$$\Delta\varepsilon_t = P(T)\varepsilon_t - P(T_L)\varepsilon_{t_L} - P(T_R)\varepsilon_{t_R}. \qquad (5)$$
If f_j contains any missing values, then the reduction in MSE, Δε′_t, is
$$\Delta\varepsilon_t' = P(T - T_U)\varepsilon_t - P(T_L)\varepsilon_{t_L} - P(T_R)\varepsilon_{t_R}, \qquad (6)$$

where T − T_U is the set of all observation indices in node t that are not missing.
(3) Choose the splitting candidate that yields the largest MSE reduction. In this way, the observations in node t are split at the candidate that maximizes the MSE reduction.
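The split-selection rule above can be sketched as follows (a minimal sketch for the no-missing-values case of Equation (5); the helper name and the convention that P(T_L) is P(T) scaled by the child's share of observations are our assumptions):

```python
import numpy as np

def mse_reduction(y, left_mask, p_total=1.0):
    """Reduction in MSE (Equation (5)) for one candidate split of node t.

    y: responses of the observations in node t; left_mask: boolean membership
    of t_L; p_total: probability mass P(T) of node t (1.0 at the root).
    """
    def eps(v):                                # node MSE around its own mean
        return ((v - v.mean()) ** 2).mean() if len(v) else 0.0
    yl, yr = y[left_mask], y[~left_mask]
    p_l = p_total * len(yl) / len(y)           # P(T_L), proportional to size
    p_r = p_total * len(yr) / len(y)           # P(T_R)
    return p_total * eps(y) - p_l * eps(yl) - p_r * eps(yr)
```

A candidate that separates the responses perfectly drives both child MSEs to zero, so the reduction equals the parent's weighted MSE.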
To stop splitting the nodes of the decision tree, two rules can be followed: (1) a node is pure, i.e., the MSE for the observed response in this node is less than the MSE for the observed response in the entire data multiplied by the tolerance on quadratic error per node; and (2) the decision tree reaches the preset depth limit, for example, the maximum number of splitting nodes, max_splits.
The simplicity and the performance of a decision tree should be balanced at the same time. A deep tree usually achieves high accuracy on the training data; however, it may not achieve comparably high accuracy on test data. This means that a deep tree tends to overfit, i.e., its test accuracy is much lower than its training accuracy. On the contrary, a shallow tree does not achieve high training accuracy but can be more robust, i.e., its training accuracy is close to its accuracy on a test set. Moreover, a shallow tree is easy to interpret and saves time in prediction. In addition, tree accuracy can be estimated by cross validation when there are not enough data for training and testing.
There are two ways to improve the performance of decision trees by minimizing cross-validated loss. One is to select the optimal parameter value to control the depth of the decision trees; the other is postpruning after creating the trees. In this paper, we use the parameter max_splits to control the depth of the resulting decision trees. Setting a large value for max_splits leads to growing a deep tree, while setting a small value for max_splits yields a shallow tree with larger leaves. To select the appropriate value for max_splits, the following steps are performed: (1) set a spaced set of values from 1 to the total sample size for max_splits per tree; (2) create cross-validated regression trees for the data using these values of max_splits and calculate the cross-validated errors; and (3) choose the value of max_splits that minimizes the cross-validated error.
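The selection procedure above can be sketched with scikit-learn, assuming its availability; note that scikit-learn exposes `max_leaf_nodes` rather than a `max_splits` parameter, but a binary tree with k leaves has k − 1 splits, so it plays the same role here. The synthetic data are purely illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 1))
y = np.sin(4.0 * X[:, 0]) + rng.normal(0.0, 0.1, 200)

# Candidate tree sizes, spaced up toward the sample size.
candidates = [2, 4, 8, 16, 32, 64]
cv_mse = {
    k: -cross_val_score(
        DecisionTreeRegressor(max_leaf_nodes=k, random_state=0),
        X, y, cv=5, scoring="neg_mean_squared_error",
    ).mean()
    for k in candidates
}
best_k = min(cv_mse, key=cv_mse.get)  # value minimizing the cross-validated error
```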
(2) Prediction. In the prediction process, responses for new data can easily be predicted after creating a regression tree R_l. Suppose f_new is the new data (i.e., a feature vector extracted from a new patch P_new in the test cephalogram). According to the rules of the regression tree, the nodes select the specific attributes from the new observation f_new, and a leaf, which stores the mean displacement d_new, is reached step by step. In this way, we predict the displacement from the center of patch P_new to the landmark l from the SIFT feature vector using the regressor R_l.
2.1.3. MDTRV Using SIFT-Based Patch Features
(1) Decision Tree Regression Voting (DTRV). As illustrated in Figure 4, the algorithm of automatic cephalometric landmark detection using decision tree regression voting (DTRV) in a single scale consists of training and testing processes, as in any supervised learning algorithm. The training process begins with feature extraction for patches sampled from the training images. Then a regression tree is constructed for each landmark with the feature vectors and displacements (f, d) as input. Here f represents the observations and d represents the targets of this regression problem. The testing process begins with the same feature extraction step. Then the resulting f_new is used to predict the displacement d_new by the regressor R_l. In the end, the optimal landmark position is obtained via voting. Voting styles include single unit voting and single weighted voting. As mentioned in [35], these two styles perform equally well when voting for optimal landmark positions in medical images. In this paper, we use single unit voting.
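The single unit voting step can be sketched as follows; `vote_landmark` is a hypothetical helper name, and each testing patch casts one vote at its center plus the predicted displacement, the accumulator maximum being taken as the landmark position:

```python
import numpy as np

def vote_landmark(centers, displacements, image_shape):
    """Single unit voting: each testing patch casts one vote at
    (patch center + predicted displacement); the landmark position is
    the accumulator cell receiving the most votes."""
    acc = np.zeros(image_shape, dtype=int)
    for (cx, cy), (dx, dy) in zip(centers, displacements):
        x, y = int(round(cx + dx)), int(round(cy + dy))
        if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
            acc[y, x] += 1
    return np.unravel_index(acc.argmax(), acc.shape)  # (row, col)
```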
(2) MDTRV Using SIFT-Based Patch Features. Finally, a new framework of MDTRV using SIFT-based patch features is proposed by using a simple and efficient strategy for accurate landmark detection in lateral cephalograms. There are four stages in the proposed MDTRV algorithm, which are iteratively performed at the scales of 0.125, 0.25, 0.5, and 1. When the scale is 0.125, K training patches with K displacements are randomly sampled over a whole cephalogram for a specific landmark l (l = 1, ..., L). The decision tree regressor R_l^1 is created by using these training samples. For prediction, SIFT features are extracted for K′ testing patches sampled over the whole cephalogram, and K′ displacements are predicted by the regressor R_l^1 using the extracted features. The optimal position of the specific landmark l is obtained via voting over all predicted displacements. When the scale is 0.25, 0.5, or 1, only the patch sampling rule differs from the procedure at scale 0.125. That is, the training and testing patches are randomly sampled in the (2S + 1) × (2S + 1) neighborhood of the true and initial landmark positions, respectively. The estimated landmark position is used as the initial landmark position at the next scale of the testing process. The optimal position of the specific landmark l at scale 1 is refined by using a Hough forest [36], which is regarded as the
6 Journal of Healthcare Engineering
final resulting location. This multiresolution coarse-to-fine strategy greatly improves the accuracy of cephalometric landmark detection.
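The four-stage coarse-to-fine loop can be sketched as below. Here `predict_disp` stands in for the per-scale SIFT feature extraction and regression-tree prediction, the coordinate rescaling between scales and the Hough-forest refinement are omitted for brevity, and all names are illustrative rather than the paper's implementation:

```python
import numpy as np

def detect_landmark(image, predict_disp, scales=(0.125, 0.25, 0.5, 1.0),
                    S=40, n_patches=400):
    """Coarse-to-fine search: at the first scale, patch centers are sampled
    over the whole image; at later scales, only within a (2S+1)x(2S+1)
    window around the previous estimate, which serves as the initial
    landmark position for that scale."""
    rng = np.random.default_rng(0)
    h, w = image.shape
    est = None
    for scale in scales:
        if est is None:
            centers = rng.uniform(0, [w, h], size=(n_patches, 2))
        else:
            centers = est + rng.uniform(-S, S, size=(n_patches, 2))
        # Each patch votes at (center + predicted displacement).
        votes = np.round([c + predict_disp(scale, c) for c in centers])
        vals, counts = np.unique(votes, axis=0, return_counts=True)
        est = vals[counts.argmax()]
    return est
```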
2.2. Parameter Measurement. Measurements are either angular or linear parameters calculated by using cephalometric landmarks and planes (refer to Tables 1 and 2). According to their geometrical structure, the measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected to a plane. All measurements can be calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among three points A, B, and C; then the angle of these three points is calculated by
\angle ABC = \arccos\!\left(\frac{a^2 + c^2 - b^2}{2ac}\right) \times \frac{180}{\pi}, \quad (7)

and

a = \|A - B\|, \quad b = \|A - C\|, \quad c = \|B - C\|, \quad (8)
where \|A - B\| = \sqrt{(x_A - x_B)^2 + (y_A - y_B)^2} represents the Euclidean distance between A and B.
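Equations (7) and (8) translate directly into code, e.g.:

```python
import math

def angle_of_three_points(A, B, C):
    """Angle at vertex B in degrees, via the law of cosines with
    a = |AB|, b = |AC|, c = |BC| as in Eqs. (7) and (8)."""
    a = math.dist(A, B)
    b = math.dist(A, C)
    c = math.dist(B, C)
    return math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))
```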
2.2.2. The Angle between Two Planes. As the cephalometric radiographs in this study are 2D images, the planes are projected as straight lines. Thus, each plane is determined by two points. The angle between two planes AB and CD is calculated by
\angle AB = \arctan\!\left(\frac{y_A - y_B}{x_A - x_B}\right) \times \frac{180}{\pi}, \qquad \angle CD = \arctan\!\left(\frac{y_C - y_D}{x_C - x_D}\right) \times \frac{180}{\pi}, \quad (9)
\angle(AB, CD) = |\angle AB - \angle CD|. \quad (10)
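A sketch of Equations (9) and (10); following the equations it uses `arctan`, so vertical planes (x_A = x_B) are not handled:

```python
import math

def angle_between_planes(A, B, C, D):
    """Absolute angle in degrees between lines AB and CD, Eqs. (9)-(10)."""
    ang_ab = math.degrees(math.atan((A[1] - B[1]) / (A[0] - B[0])))
    ang_cd = math.degrees(math.atan((C[1] - D[1]) / (C[0] - D[0])))
    return abs(ang_ab - ang_cd)
```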
2.2.3. The Distance between Two Points. The distance between two points is calculated using the following equation:
|AB| = \|A - B\| \times \mathrm{ratio}, \quad (11)
where ratio is a scale indicator that can be calculated from the calibrated gauge in the lateral cephalograms; its unit is mm/pixel.
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as
y + k_1 x + k_0 = 0. \quad (12)
Thus, the distance of point C to the plane AB is calculated by
|C \cdot AB| = \left|\frac{y_C + k_1 x_C + k_0}{\sqrt{1 + k_1^2}}\right| \times \mathrm{ratio}. \quad (13)
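Equation (13) in code:

```python
import math

def point_to_plane_distance(C, k1, k0, ratio=1.0):
    """Distance from point C to the line y + k1*x + k0 = 0, Eq. (13),
    scaled to millimeters by ratio (mm/pixel)."""
    return abs(C[1] + k1 * C[0] + k0) / math.sqrt(1.0 + k1 * k1) * ratio
```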
2.2.5. The Distance between Two Points Projected to a Plane. The calculation of the distance between two points projected to a plane is illustrated in Figure 5. First, we use Equation (12) to determine the plane AB. Then we calculate the distances from the two points C and D to the plane AB as c_1 and c_2 by Equation (13). Third, the distance c_0 between the two points C and D is calculated by Equation (11). Finally, the distance |C \cdot AB \cdot D| between the two points C and D projected to the plane AB is represented as c and is calculated by
c = \sqrt{c_0^2 - (c_1 - c_2)^2} \times \mathrm{ratio}. \quad (14)
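Equation (14) in code, given the intermediate distances c_0, c_1, and c_2:

```python
import math

def projected_distance(c0, c1, c2, ratio=1.0):
    """Distance between C and D projected onto plane AB, Eq. (14):
    c0 = |CD|, and c1, c2 are the point-to-plane distances of C and D."""
    return math.sqrt(c0 * c0 - (c1 - c2) ** 2) * ratio
```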
3. Experimental Evaluation
3.1. Data Description
3.1.1. Database of 2015 ISBI Challenge. The benchmark database (database1) included 300 cephalometric radiographs (150 in TrainingData and 150 in Test1Data), as described in Wang et al. [28]. All of the cephalograms were collected from 300 patients aged from 6 to 60 years.
Figure 4: Algorithm of decision tree regression voting. Training process: features f and displacements d are extracted from training patches using the ground truth of landmarks, and a regression tree regressor R_l is constructed. Testing process: features f_new are extracted from testing patches, the regressor predicts d_new, and voting yields the landmark positions P(x, y).
The image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustrations in Figure 6); the ground truth was the average of the annotations by both doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. The database2 included 165 cephalometric radiographs, as illustrated in Table 3 (IRB approval number PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical-orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions differed, ranging from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for the cephalometric analysis.
3.1.3. Experimental Settings. At the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. At the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions are different, preprocessing is required to rescale all the images in database2 to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross validation.
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with the associated standard deviation. The radial error E, i.e., the distance between the predicted position and the true position of each landmark, is defined as
E_i = \|A_i - B_i\| \times \mathrm{ratio}, \quad A_i \in A,\ B_i \in B,\ 1 \le i \le M, \quad (15)
where A and B represent the sets of estimated and true positions of each landmark for all cephalograms in the dataset, A_i and B_i are the corresponding positions in sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as
\mathrm{MRE} = \mathrm{mean}(E) = \frac{\sum_{i=1}^{M} E_i}{M}, \qquad \mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{M} (E_i - \mathrm{MRE})^2}{M - 1}}. \quad (16)
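Equations (15) and (16) can be computed as, e.g.:

```python
import numpy as np

def mre_sd(pred, true, ratio=1.0):
    """Mean radial error and its standard deviation, Eqs. (15)-(16).
    pred and true are sequences of (x, y) positions."""
    E = np.linalg.norm(np.asarray(pred) - np.asarray(true), axis=1) * ratio
    return E.mean(), E.std(ddof=1)  # SD uses M - 1 in the denominator
```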
The second evaluation criterion is the success detection rate with respect to the 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm precision ranges. If E_i is less than a precision range, the detection of the landmark is considered a successful detection in that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) p_z with precision less than z is defined as
p_z = \frac{\left|\{\, i : E_i < z \,\}\right|}{M} \times 100\%, \quad 1 \le i \le M, \quad (17)
where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
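Equation (17) in code:

```python
import numpy as np

def success_detection_rate(E, z):
    """SDR p_z, Eq. (17): percentage of radial errors below precision z."""
    E = np.asarray(E)
    return (E < z).mean() * 100.0
```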
3.2.2. Experimental Results
(1) Results of database1. Experimental results of cephalometric landmark detection using database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm. The other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD of the 19 landmarks are 1.69 mm and 1.43 mm, respectively. This shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); between 70% and 80% for two landmarks (L6 and L18); and less than 70% for the other seven landmarks, among which the SDR of landmark L10 is the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.
A comparison of our method with three state-of-the-art methods using Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the image resolution.
Furthermore, we compare our method with the top two methods in the 2015 ISBI Challenge of cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, but the SD of our method is less than that of RFRV-CLM. The results show that
Figure 5: The distance calculation between two points projected to a plane (points C and D, plane AB, point-to-plane distances c_1 and c_2, point-to-point distance c_0, and projected distance c).
our method is accurate for landmark detection in lateral cephalograms and is more robust.
(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground truth locations, which shows the success of our proposed algorithm for landmark detection.
For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method for detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1. It indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are usually used. The descriptions of these eight measurements and the methods for classification are given in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
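A sketch of the SCR computation, assuming the confusion matrix is row-normalized before averaging its diagonal (rows being ground-truth types, as stated above):

```python
import numpy as np

def success_classification_rate(conf):
    """SCR in percent: averaged diagonal of the row-normalized confusion
    matrix (rows = ground-truth types, columns = estimated types)."""
    conf = np.asarray(conf, dtype=float)
    per_type = np.diag(conf) / conf.sum(axis=1)  # per-type accuracy
    return per_type.mean() * 100.0
```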
To evaluate the performance of the proposed system for further measurement analysis in database2, another criterion, the mean absolute error (MAE), is calculated by
\mathrm{MAE} = \frac{\sum_{i=1}^{M} |\hat{V}_i - V_i|}{M}, \quad (18)
where \hat{V}_i is the value of the angular or linear measurement estimated by our system and V_i is the ground truth measurement obtained using the human-annotated landmarks.
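Equation (18) in code:

```python
import numpy as np

def mean_absolute_error(v_hat, v):
    """MAE, Eq. (18): average |estimated - ground truth| over M cephalograms."""
    return np.abs(np.asarray(v_hat) - np.asarray(v)).mean()
```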
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1: S; L2: N; L3: Or; L4: P; L5: A; L6: B; L7: Pg; L8: Me; L9: Gn; L10: Go; L11: LI; L12: UI; L13: UL; L14: LL; L15: Sn; L16: Pos; L17: PNS; L18: ANS; L19: Ar.
Table 3: The description of database2.

                              Female  Male
Class I                         29     26
Class III (pretreatment)        32     23
Class III (posttreatment)       32     23
Total number of cephalograms: 165
Total number of patients: 110
3.3.2. Experimental Results
(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method with the best two methods is given in Table 10, where we achieved the best classification results for the two measurements APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than that of the method in [26] and comparable to that of RFRV-CLM [24].
(2) Results of database2. Based on the detected 45 landmarks, 27 measurements, including 17 angular and 10 linear measurements, are automatically calculated by our system using database2, as illustrated in Table 11.
The MAE and MAE* of the 27 measurements are illustrated in Table 12. Here the MAE represents the performance of measurement analysis of our automatic system, while MAE* represents the interobserver variability calculated between the ground truth values obtained by the two experts. For angular measurements, the difference between MAE and MAE* is within 0.5° for 9 of the 17 measurements, the difference between
Figure 7: Cephalogram annotation example showing landmarks in database2. (a) Cephalogram annotation example. (b) Description of the 45 cephalometric landmarks: L1: N; L2: Ns; L3: Prn; L4: Cm; L5: ANS; L6: A; L7: Sn; L8: Ss; L9: UL; L10: Stoms; L11: Stomi; L12: LL; L13: Si; L14: Pos; L15: Gs; L16: Mes; L17: Go; L18: Me; L19: Gn; L20: Pg; L21: LIA; L22: B; L23: Id; L24: LI; L25: UI; L26: FA; L27: SPr; L28: UIA; L29: Or; L30: Se; L31: L6A; L32: UL5; L33: L6E; L34: UL6; L35: U6E; L36: U6A; L37: PNS; L38: Ptm; L39: S; L40: Co; L41: Ar; L42: Ba; L43: Bolton; L44: P; L45: ULI.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark  MRE (mm)  SD (mm)  SDR (%) at 2.0 mm / 2.5 mm / 3.0 mm / 4.0 mm
L1        1.31      1.26     86.67 / 92.67 / 94.67 / 97.33
L2        1.92      2.05     68.00 / 72.00 / 79.33 / 88.67
L3        1.81      1.74     66.67 / 81.33 / 88.00 / 95.33
L4        3.11      2.59     50.00 / 54.00 / 59.33 / 67.33
L5        2.24      1.56     56.00 / 66.00 / 75.33 / 86.67
L6        1.59      1.39     72.00 / 78.67 / 84.00 / 94.00
L7        1.23      0.87     80.67 / 91.33 / 96.67 / 99.33
L8        1.08      0.88     88.67 / 95.33 / 96.67 / 98.67
L9        0.87      0.75     91.33 / 93.33 / 98.00 / 99.33
L10       3.98      2.33     23.33 / 29.33 / 38.00 / 53.33
L11       0.97      0.89     87.33 / 92.00 / 96.00 / 98.67
L12       0.90      1.70     94.67 / 94.67 / 96.00 / 96.67
L13       1.02      0.71     92.00 / 95.33 / 98.00 / 100.00
L14       0.89      0.61     93.33 / 98.67 / 100.00 / 100.00
L15       1.18      1.16     85.33 / 92.00 / 93.33 / 98.00
L16       2.14      1.48     50.00 / 60.67 / 72.67 / 90.67
L17       1.16      0.85     86.00 / 92.67 / 94.67 / 98.67
L18       1.77      1.51     72.00 / 79.33 / 86.00 / 92.67
L19       2.97      2.77     50.00 / 54.00 / 58.00 / 67.33
Average   1.69      1.43     73.37 / 79.65 / 84.46 / 90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method        [24]    [26]    [29]    Ours
MRE (pixels)  16.74   18.46   17.79   16.92
Table 6: Comparison of our method with two methods in terms of MRE, SD, and SDR using Test1Data.

Method  MRE (mm)  SD (mm)  SDR (%) at 2.0 mm / 2.5 mm / 3.0 mm / 4.0 mm
[24]    1.67      1.65     73.68 / 80.21 / 85.19 / 91.47
[26]    1.84      1.76     71.72 / 77.40 / 81.93 / 88.04
Ours    1.69      1.43     73.37 / 79.65 / 84.46 / 90.67
MAE and MAE* is within 1° for 12 of the 17 measurements, and the difference between MAE and MAE* is within 2° for 16 of the 17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9 of the 10 measurements; the remaining measurement has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed. Table 13 shows that the MRE and SD between the annotations by the two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between the annotations of the two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis are presented in the experiments, based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, and there are three main reasons: (i) some landmarks are located on overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On one hand, beyond the algorithm itself, the data-related factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8: Examples of automatic cephalometric landmark detection (true locations vs. predicted locations of the 45 landmarks). (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

Fold     MRE (mm)  SD (mm)  SDR (%) at 2.0 mm / 2.5 mm / 3.0 mm / 4.0 mm
1        1.72      1.35     72.49 / 81.32 / 87.04 / 93.37
2        1.72      1.38     71.38 / 79.94 / 85.63 / 92.73
3        1.66      1.32     74.31 / 82.06 / 87.44 / 93.37
4        1.77      1.36     69.67 / 78.96 / 85.32 / 92.93
5        1.68      1.54     72.53 / 80.88 / 86.87 / 92.96
Average  1.71      1.39     72.08 / 80.63 / 86.46 / 93.07
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we design a new framework of landmark detection in lateral cephalograms with low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information in different resolutions. In lower resolutions, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. In higher resolutions, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average 72% successful detection rate within the precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, which is more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.  Measurement  Description in mathematics; description in words
1    ANB          ∠L5L2L6; the angle between landmarks 5, 2, and 6.
2    SNB          ∠L1L2L6; the angle between landmarks 1, 2, and 6.
3    SNA          ∠L1L2L5; the angle between landmarks 1, 2, and 5.
4    ODI          ∠(L5L6, L8L10) + ∠(L17L18, L4L3); the arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3).
5    APDI         ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18); the arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18).
6    FHI          |L1L10| / |L2L8|; the ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8).
7    FHA          ∠(L1L2, L10L9); the angle between the SN plane (L1L2) and the mandibular plane (L10L9).
8    MW           |L12L11| if x(L12) < x(L11), −|L12L11| otherwise; when the x ordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|.
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.  Measurement  Type 1 / Type 2 / Type 3 / Type 4
1    ANB          3.2°–5.7°: class I (normal); >5.7°: class II; <3.2°: class III.
2    SNB          74.6°–78.7°: normal mandible; <74.6°: retrognathic mandible; >78.7°: prognathic mandible.
3    SNA          79.4°–83.2°: normal maxilla; >83.2°: prognathic maxilla; <79.4°: retrognathic maxilla.
4    ODI          Normal: 74.5° ± 6.07°; >80.5°: deep bite tendency; <68.4°: open bite tendency.
5    APDI         Normal: 81.4° ± 3.8°; <77.6°: class II tendency; >85.2°: class III tendency.
6    FHI          Normal: 0.65–0.75; >0.75: short face tendency; <0.65: long face tendency.
7    FHA          Normal: 26.8°–31.4°; >31.4°: mandible high angle tendency; <26.8°: mandible low angle tendency.
8    MW           Normal: 2 mm–4.5 mm; MW = 0 mm: edge to edge; MW < 0 mm: anterior cross bite; MW > 4.5 mm: large overjet.
Table 10: Comparison of our method with two methods in terms of SCR using Test1Data.

Method  Success classification rates, SCR (%): ANB / SNB / SNA / ODI / APDI / FHI / FHA / MW / Average
[24]    64.99 / 84.52 / 68.45 / 84.64 / 82.14 / 67.92 / 75.54 / 82.19 / 76.30
[26]    59.42 / 71.09 / 59.00 / 78.04 / 80.16 / 58.97 / 77.03 / 83.94 / 70.96
Ours    58.61 / 78.85 / 59.86 / 76.59 / 83.49 / 82.44 / 77.18 / 83.20 / 75.03
Table 11: Description of cephalometric measurements in database2.

Name; description in mathematics; description in words.

The angles of three points (unit: °):
SNA; ∠L39L1L6; the angle of landmarks L39, L1, L6.
SNB; ∠L39L1L22; the angle of landmarks L39, L1, L22.
ANB; ∠L6L1L22; the angle of landmarks L6, L1, L22.
SNPg; ∠L39L20L1; the angle of landmarks L39, L20, L1.
NAPg; ∠L1L6L20; the angle of landmarks L1, L6, L20.
NSGn; ∠L1L39L19; the angle of landmarks L1, L39, L19.
Ns-Prn-Pos; ∠L2L3L14; the angle of landmarks L2, L3, L14.
Cm-Sn-UL; ∠L4L7L9; the angle of landmarks L4, L7, L9.
LL-B′-Pos; ∠L12L13L14; the angle of landmarks L12, L13, L14.
Angle of the jaw; ∠L41L17L18; the angle of landmarks L41, L17, L18.
Angle of convexity; ∠L2L7L14; the angle of landmarks L2, L7, L14.

The angles between two planes (unit: °):
FH-SN; ∠(L44L29, L39L1); the angle between the FH plane (L44L29) and the SN plane (L39L1).
UI-SN; ∠(L28L25, L39L1); the angle between the upper incisor axis (L28L25) and the SN plane (L39L1).
LI-FH; ∠(L24L21, L44L29); the angle between the lower incisor axis (L24L21) and the FH plane (L44L29).
UI-LI; ∠(L28L25, L24L21); the angle between the upper and lower incisor axes (L28L25, L24L21).
H angle; ∠(L9L14, L8L14); the angle between the H plane (L9L14) and the NB plane (L8L14).
Z angle; ∠(L44L29, L9L14); the rear lower angle between the FH plane (L44L29) and the H plane (L9L14).

The distances between two points (unit: mm):
N-Me; |L1L18|; the forward height, the distance between landmarks L1 and L18.
N-ANS; |L1L5|; the up-forward height, the distance between landmarks L1 and L5.
ANS-Me; |L5L18|; the down-forward height, the distance between landmarks L5 and L18.
Stoms-UI; |L10L25|; the vertical distance between landmarks L10 and L25.

The distances from a point to a plane in the horizontal direction (unit: mm):
UI-AP; |L25 · L6L20|; the distance from landmark L25 to the AP plane (L6L20).
LI-AP; |L24 · L6L20|; the distance from landmark L24 to the AP plane (L6L20).
UL-EP; |L9 · L3L14|; the distance from landmark L9 to the EP plane (L3L14).
Table 12: Results of our method for the 27 measurements in terms of MAE and MAE* using database2 (MAE*: interobserver variability).

Name                 MAE    MAE*   MAE − MAE*
The angles of three points (unit: °):
SNA                  1.95   1.70   0.24
SNB                  1.57   1.17   0.39
ANB                  1.29   1.08   0.21
SNPg                 1.59   1.20   0.39
NAPg                 2.83   2.44   0.39
NSGn                 1.42   0.96   0.47
Ns-Prn-Pos           1.68   1.10   0.58
Cm-Sn-UL             5.52   3.80   1.72
LL-B′-Pos            3.95   3.44   0.51
Angle of the jaw     3.20   1.66   1.54
Angle of convexity   2.67   1.86   0.81
The angles between two planes (unit: °):
FH-SN                2.00   1.96   0.04
UI-SN                4.71   2.81   1.90
LI-FH                3.34   2.27   1.07
UI-LI                6.90   4.66   2.24
H angle              0.94   0.80   0.14
Z angle              1.69   1.86   −0.17
The distances between two points (unit: mm):
N-Me                 1.37   1.10   0.27
N-ANS                1.57   1.34   0.24
ANS-Me               1.06   0.96   0.10
Stoms-UI             0.75   0.42   0.33
The distances from a point to a plane in the horizontal direction (unit: mm):
UI-AP                0.96   0.71   0.25
LI-AP                0.96   0.76   0.20
UL-EP                0.50   0.36   0.14
LL-EP                0.45   0.39   0.05
MaxE                 2.06   1.79   0.27
The distances between two points projected to a plane (unit: mm):
Wits                 4.70   3.53   1.16
Table 11: Continued.

LL-EP; |L12 · L3L14|; the distance from landmark L12 to the EP plane (L3L14).
MaxE; |L6 · L5L37|; the maxillary length, the distance from landmark L6 to the palatal plane (L5L37).

The distances between two points projected to a plane (unit: mm):
Wits; |L6 · L32L34 · L22|; the distance between the two landmarks L6 and L22 projected to the functional jaw plane (L32L34).
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            Interobserver variability: MRE (mm)  SD (mm)
database1   1.38   1.55
database2   1.26   1.27
algorithm is confirmed using this clinical dataset. In addition, automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis of lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for the supply of image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung des Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcaniz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of the IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcaniz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), R. Larsen, Ed., Lecture Notes in Computer Science, pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of the IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of the 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of the 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
14 Journal of Healthcare Engineering
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multiresolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
final resulting location. This multiresolution coarse-to-fine strategy has greatly improved the accuracy of cephalometric landmark detection.
2.2. Parameter Measurement. Measurements are either angular or linear parameters calculated by using cephalometric landmarks and planes (refer to Tables 1 and 2). According to geometrical structure, measurements can be classified into five classes: the angle of three points, the angle of two planes, the distance between two points, the distance from a point to a plane, and the distance between two points projected to a plane. All measurements can be calculated automatically in our system, as described in the following.
2.2.1. The Angle of Three Points. Assume point B is the vertex among the three points A, B, and C; then the angle of these three points is calculated by

∠ABC = arccos((a² + c² − b²) / (2ac)) × (180 / π),  (7)

and

a = ‖A − B‖,  b = ‖A − C‖,  c = ‖B − C‖,  (8)

where ‖A − B‖ = √((x_A − x_B)² + (y_A − y_B)²) represents the Euclidean distance between A and B.
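As a concrete illustration of Equations (7) and (8), a minimal Python sketch (the function name is ours, not from the paper):

```python
import math

def angle_of_three_points(A, B, C):
    """Angle (degrees) at vertex B formed by points A, B, C.

    Implements Equation (7) with the side lengths of Equation (8):
    a = |A - B|, b = |A - C|, c = |B - C|.
    """
    a = math.dist(A, B)
    b = math.dist(A, C)
    c = math.dist(B, C)
    cos_angle = (a**2 + c**2 - b**2) / (2.0 * a * c)
    # Clamp against floating-point round-off before arccos.
    cos_angle = max(-1.0, min(1.0, cos_angle))
    return math.acos(cos_angle) * 180.0 / math.pi

# Right angle at the vertex B = (0, 0):
print(angle_of_three_points((1.0, 0.0), (0.0, 0.0), (0.0, 1.0)))  # ≈ 90.0
```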
2.2.2. The Angle between Two Planes. As the cephalometric radiographs are 2D images, in this study the planes are projected as straight lines; thus, a plane is determined by two points. The angle between two planes AB and CD is calculated by

∠AB = arctan((y_A − y_B) / (x_A − x_B)) × (180 / π),
∠CD = arctan((y_C − y_D) / (x_C − x_D)) × (180 / π),  (9)

∠ABCD = |∠AB − ∠CD|.  (10)
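Equations (9) and (10) can be sketched as follows (illustrative Python; the names are ours). Note that Equation (9) uses a plain arctangent, which assumes neither plane is vertical:

```python
import math

def plane_angle_deg(P, Q):
    """Inclination in degrees of the plane (line) through P and Q, Equation (9)."""
    return math.atan((P[1] - Q[1]) / (P[0] - Q[0])) * 180.0 / math.pi

def angle_between_planes(A, B, C, D):
    """Angle between planes AB and CD, Equation (10)."""
    return abs(plane_angle_deg(A, B) - plane_angle_deg(C, D))

# Angle between the line y = x and the x-axis:
print(angle_between_planes((0.0, 0.0), (1.0, 1.0), (0.0, 0.0), (1.0, 0.0)))  # ≈ 45.0
```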
2.2.3. The Distance between Two Points. The distance between two points is calculated using the following equation:

|AB| = ‖A − B‖ × ratio,  (11)

where ratio is a scale indicator that can be calculated from the calibrated gauge in the lateral cephalograms; its unit is mm/pixel.
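Equation (11) in Python (a sketch; in practice the mm/pixel ratio would come from the gauge calibration):

```python
import math

def point_distance_mm(A, B, ratio):
    """Distance |AB| in millimetres, Equation (11).

    `ratio` is the mm/pixel scale from the calibrated gauge in the radiograph.
    """
    return math.dist(A, B) * ratio

print(point_distance_mm((0.0, 0.0), (3.0, 4.0), 0.1))  # 0.5 (5 pixels at 0.1 mm/pixel)
```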
2.2.4. The Distance from a Point to a Plane in the Horizontal Direction. The plane AB is defined as

y + k₁x + k₀ = 0.  (12)

Thus, the distance of point C to the plane AB is calculated by

|C · AB| = (|y_C + k₁x_C + k₀| / √(1 + k₁²)) × ratio.  (13)
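A sketch of Equations (12) and (13) (names ours). The coefficients k₁ and k₀ are recovered from the two points defining plane AB, which assumes the plane is not vertical (the form of Equation (12) cannot represent a vertical line):

```python
import math

def line_coefficients(A, B):
    """Coefficients (k1, k0) of plane AB written as y + k1*x + k0 = 0, Equation (12)."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])  # assumes x_A != x_B
    k0 = -A[1] - k1 * A[0]
    return k1, k0

def point_to_plane_mm(C, A, B, ratio):
    """Distance from point C to plane AB in millimetres, Equation (13)."""
    k1, k0 = line_coefficients(A, B)
    return abs(C[1] + k1 * C[0] + k0) / math.sqrt(1.0 + k1**2) * ratio

# Distance from (1, 0) to the line y = x:
print(point_to_plane_mm((1.0, 0.0), (0.0, 0.0), (1.0, 1.0), 1.0))  # ≈ 0.7071
```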
2.2.5. The Distance between Two Points Projected to a Plane. The calculation of the distance between two points projected to a plane is illustrated in Figure 5. First, we use Equation (12) to determine the plane AB. Then, we calculate the distances from the two points C and D to the plane AB as c₁ and c₂ by Equation (13). Third, the distance c₀ between the two points C and D is calculated by Equation (11). Finally, the distance |C · AB · D| between the two points C and D projected to the plane AB is represented as c and is calculated by

c = √(c₀² − (c₁ − c₂)²) × ratio.  (14)
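Equation (14) combines the three auxiliary distances. In the sketch below (names ours), c₀, c₁, and c₂ are computed in pixels and the mm/pixel ratio is applied once at the end, which is our reading of how Equations (11), (13), and (14) fit together:

```python
import math

def projected_distance_mm(C, D, A, B, ratio):
    """Distance between C and D projected onto plane AB, Equation (14)."""
    k1 = -(A[1] - B[1]) / (A[0] - B[0])     # plane AB as y + k1*x + k0 = 0
    k0 = -A[1] - k1 * A[0]
    norm = math.sqrt(1.0 + k1**2)
    c1 = abs(C[1] + k1 * C[0] + k0) / norm  # distance of C to plane AB (pixels)
    c2 = abs(D[1] + k1 * D[0] + k0) / norm  # distance of D to plane AB (pixels)
    c0 = math.dist(C, D)                    # distance between C and D (pixels)
    return math.sqrt(c0**2 - (c1 - c2)**2) * ratio

# Projection of CD onto the horizontal plane through (0, 0) and (1, 0):
print(projected_distance_mm((0.0, 1.0), (2.0, 3.0), (0.0, 0.0), (1.0, 0.0), 1.0))  # ≈ 2.0
```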
3. Experimental Evaluation

3.1. Data Description

3.1.1. Database of 2015 ISBI Challenge. The benchmark database (database1) included 300 cephalometric radiographs (150 in TrainingData and 150 in Test1Data), as described in Wang et al. [28]. All of the cephalograms were collected from 300 patients aged from 6 to 60 years old.
Figure 4: Algorithm of decision tree regression voting. Training process: feature extraction (f, d) from training patches with the ground truth of landmarks, followed by regression tree construction. Testing process: feature extraction (f_new, d_new) from testing patches, followed by prediction and voting to obtain the landmark positions P(x, y).
The image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see the illustration in Figure 6); the ground truth was the average of the annotations by both doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. The database2 included 165 cephalometric radiographs, as illustrated in Table 3 (the IRB approval number is PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions were different, in the range from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for the cephalometric analysis.
3.1.3. Experimental Settings. In the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. In the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions in database2 are different, preprocessing is required to rescale all the images to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross-validation.
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with the associated standard deviation. The radial error E, i.e., the distance between the predicted position and the true position of each landmark, is defined as

E_i = ‖A_i − B_i‖ × ratio,  A_i ∈ A, B_i ∈ B, 1 ≤ i ≤ M,  (15)

where A and B represent the sets of estimated and true positions of a landmark over all cephalograms in the dataset, A_i and B_i are the corresponding positions in sets A and B, respectively, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as

MRE = mean(E) = (Σ_{i=1}^{M} E_i) / M,

SD = √(Σ_{i=1}^{M} (E_i − MRE)² / (M − 1)).  (16)
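The MRE and SD of Equation (16) can be sketched as follows (illustrative; names are ours). Here `errors` holds the per-cephalogram radial errors E_i of one landmark:

```python
import math

def mre_and_sd(errors):
    """Mean radial error and its standard deviation, Equation (16)."""
    M = len(errors)
    mre = sum(errors) / M
    sd = math.sqrt(sum((e - mre) ** 2 for e in errors) / (M - 1))
    return mre, sd

print(mre_and_sd([1.0, 2.0, 3.0]))  # (2.0, 1.0)
```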
The second evaluation criterion is the success detection rate with respect to the 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm precision ranges. If E_i is less than a precision range, the detection of the landmark is considered a successful detection in that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) p_z with precision less than z is defined as

p_z = (|{i : E_i < z}| / M) × 100%,  1 ≤ i ≤ M,  (17)

where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
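Equation (17) as a sketch (names and the toy error values are ours):

```python
def success_detection_rate(errors, z):
    """SDR p_z of Equation (17): percentage of radial errors E_i below z mm."""
    return 100.0 * sum(1 for e in errors if e < z) / len(errors)

errors = [0.8, 1.5, 2.2, 3.1, 4.5]  # hypothetical radial errors in mm
for z in (2.0, 2.5, 3.0, 4.0):
    print(z, success_detection_rate(errors, z))
```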
3.2.2. Experimental Results

(1) Results of database1. Experimental results of cephalometric landmark detection using database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm; the other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD of the 19 landmarks are 1.69 mm and 1.43 mm, respectively. This shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); between 70% and 80% for two landmarks (L6 and L18); and less than 70% for the other seven landmarks, among which the SDR of landmark L10 is the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.

Comparison of our method with three state-of-the-art methods using Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the resolution of the image.

Furthermore, we compare our method with the top two methods in the 2015 ISBI Challenge of cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, but the SD of our method is less than that of RFRV-CLM. The results show that
Figure 5: The distance calculation between two points projected to a plane (C and D are projected onto plane AB; c1 and c2 are their distances to the plane, c0 is the distance between them, and c is the projected distance).
our method is accurate for landmark detection in lateral cephalograms and is more robust.

(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground truth locations, which shows the success of our proposed algorithm for landmark detection.

For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method in detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1. It indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are usually used. The descriptions of these eight measurements and the methods for classification are given in Tables 8 and 9, respectively.

One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
To evaluate the performance of the proposed system for further measurement analysis on database2, another criterion, the mean absolute error (MAE), is calculated by

MAE = (Σ_{i=1}^{M} |V̂_i − V_i|) / M,  (18)

where V̂_i is the value of an angular or linear measurement estimated by our system and V_i is the ground truth measurement obtained using the human-annotated landmarks.
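Both criteria can be sketched as follows (illustrative; the names and toy confusion counts are ours). The paper defines SCR as the averaged diagonal of the confusion matrix; the sketch assumes the matrix holds raw counts and row-normalises before averaging:

```python
def scr_from_confusion(matrix):
    """Success classification rate: averaged diagonal of the row-normalised
    confusion matrix (rows = ground-truth types, columns = estimated types)."""
    rates = [row[i] / sum(row) for i, row in enumerate(matrix)]
    return 100.0 * sum(rates) / len(rates)

def mean_absolute_error(estimated, ground_truth):
    """MAE of Equation (18) over M measurements."""
    return sum(abs(v_hat - v) for v_hat, v in zip(estimated, ground_truth)) / len(estimated)

cm = [[8, 2], [1, 9]]  # hypothetical counts for a two-type measurement
print(round(scr_from_confusion(cm), 2))  # 85.0
print(round(mean_absolute_error([80.1, 79.2], [80.0, 79.0]), 2))  # 0.15
```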
L1: S; L2: N; L3: Or; L4: P; L5: A; L6: B; L7: Pg; L8: Me; L9: Gn; L10: Go; L11: LI; L12: UI; L13: UL; L14: LL; L15: Sn; L16: Pos; L17: PNS; L18: ANS; L19: Ar.

Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks.
Table 3: The description of database2.

                           Female  Male
Class I                      29     26
Class III (pretreatment)     32     23
Class III (posttreatment)    32     23
Total number of cephalograms: 165
Total number of patients: 110
3.3.2. Experimental Results

(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method with the best two methods is given in Table 10, where we have achieved the best classification results for the two measurements APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than that of the method in [26] and comparable to that of RFRV-CLM [24].

(2) Results of database2. According to the detected 45 landmarks, 27 measurements are automatically calculated by our system using database2, including 17 angular and 10 linear measurements, as illustrated in Table 11.
The MAE and MAE* of the 27 measurements are illustrated in Table 12. Here, the MAE represents the performance of measurement analysis of our automatic system, and MAE* represents the interobserver variability calculated between the ground truth values obtained by the two experts. For angular measurements, the difference between MAE and MAE* is within 0.5° for 9 of the 17 measurements; the difference between
L1: N; L2: Ns; L3: Prn; L4: Cm; L5: ANS; L6: A; L7: Sn; L8: Ss; L9: UL; L10: Stoms; L11: Stomi; L12: LL; L13: Si; L14: Pos; L15: Gs; L16: Mes; L17: Go; L18: Me; L19: Gn; L20: Pg; L21: LIA; L22: B; L23: Id; L24: LI; L25: UI; L26: FA; L27: SPr; L28: UIA; L29: Or; L30: Se; L31: L6A; L32: UL5; L33: L6E; L34: UL6; L35: U6E; L36: U6A; L37: PNS; L38: Ptm; L39: S; L40: Co; L41: Ar; L42: Ba; L43: Bolton; L44: P; L45: ULI.

Figure 7: Cephalogram annotation example showing landmarks in database2. (a) Cephalogram annotation example. (b) The description of the 45 cephalometric landmarks.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark  MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
L1        1.31      1.26     86.67   92.67   94.67   97.33
L2        1.92      2.05     68.00   72.00   79.33   88.67
L3        1.81      1.74     66.67   81.33   88.00   95.33
L4        3.11      2.59     50.00   54.00   59.33   67.33
L5        2.24      1.56     56.00   66.00   75.33   86.67
L6        1.59      1.39     72.00   78.67   84.00   94.00
L7        1.23      0.87     80.67   91.33   96.67   99.33
L8        1.08      0.88     88.67   95.33   96.67   98.67
L9        0.87      0.75     91.33   93.33   98.00   99.33
L10       3.98      2.33     23.33   29.33   38.00   53.33
L11       0.97      0.89     87.33   92.00   96.00   98.67
L12       0.90      1.70     94.67   94.67   96.00   96.67
L13       1.02      0.71     92.00   95.33   98.00   100.00
L14       0.89      0.61     93.33   98.67   100.00  100.00
L15       1.18      1.16     85.33   92.00   93.33   98.00
L16       2.14      1.48     50.00   60.67   72.67   90.67
L17       1.16      0.85     86.00   92.67   94.67   98.67
L18       1.77      1.51     72.00   79.33   86.00   92.67
L19       2.97      2.77     50.00   54.00   58.00   67.33
Average   1.69      1.43     73.37   79.65   84.46   90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method        [24]   [26]   [29]   Ours
MRE (pixels)  16.74  18.46  17.79  16.92
Table 6: Comparison of our method with two methods in terms of MRE, SD, and SDR using Test1Data.

Method  MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
[24]    1.67      1.65     73.68   80.21   85.19   91.47
[26]    1.84      1.76     71.72   77.40   81.93   88.04
Ours    1.69      1.43     73.37   79.65   84.46   90.67
MAE and MAE* is within 1° for 12 of the 17 measurements, and the difference between MAE and MAE* is within 2° for 16 of the 17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining angular measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9 of the 10 measurements; the remaining measurement has a difference of 1.16 mm. These results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed. Table 13 shows that the MRE and SD between the annotations by the two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between the annotations of the two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis, based on the accurate landmark detection, are presented in the experiments. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, mainly for three reasons: (i) some landmarks are located on overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On the one hand, apart from the algorithm itself, the data effects on the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8: Examples of automatic cephalometric landmark detection, showing true locations and predicted locations of the 45 landmarks. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

Fold     MRE (mm)  SD (mm)  SDR (%): 2.0 mm  2.5 mm  3.0 mm  4.0 mm
1        1.72      1.35     72.49   81.32   87.04   93.37
2        1.72      1.38     71.38   79.94   85.63   92.73
3        1.66      1.32     74.31   82.06   87.44   93.37
4        1.77      1.36     69.67   78.96   85.32   92.93
5        1.68      1.54     72.53   80.88   86.87   92.96
Average  1.71      1.39     72.08   80.63   86.46   93.07
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we have designed a new framework of landmark detection in lateral cephalograms across low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information in different resolutions. In a lower resolution, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. In a higher resolution, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average 72% successful detection rate within the precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, which is over two times the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.  Measurement  Description in mathematics  Description in words
1    ANB   ∠L5L2L6   The angle between landmarks L5, L2, and L6
2    SNB   ∠L1L2L6   The angle between landmarks L1, L2, and L6
3    SNA   ∠L1L2L5   The angle between landmarks L1, L2, and L5
4    ODI   ∠L5L6-L8L10 + ∠L17L18-L4L3   The arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3)
5    APDI  ∠L3L4-L2L7 + ∠L2L7-L5L6 + ∠L3L4-L17L18   The arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18)
6    FHI   |L1L10| / |L2L8|   The ratio of the posterior face height (PFH, the distance from L1 to L10) to the anterior face height (AFH, the distance from L2 to L8)
7    FHA   ∠L1L2-L10L9   The angle between the SN plane (L1L2) and the mandibular plane (L10L9)
8    MW    |L12L11| if x(L12) < x(L11); −|L12L11| otherwise   When the x ordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.  Measurement  Type 1                          Type 2                               Type 3                               Type 4
1    ANB   3.2°–5.7°: class I (normal)     >5.7°: class II                      <3.2°: class III                     —
2    SNB   74.6°–78.7°: normal mandible    <74.6°: retrognathic mandible        >78.7°: prognathic mandible          —
3    SNA   79.4°–83.2°: normal maxilla     >83.2°: prognathic maxilla           <79.4°: retrognathic maxilla         —
4    ODI   Normal: 74.5° ± 6.07°           >80.5°: deep bite tendency           <68.4°: open bite tendency           —
5    APDI  Normal: 81.4° ± 3.8°            <77.6°: class II tendency            >85.2°: class III tendency           —
6    FHI   Normal: 0.65–0.75               >0.75: short face tendency           <0.65: long face tendency            —
7    FHA   Normal: 26.8°–31.4°             >31.4°: mandible high angle tendency <26.8°: mandible low angle tendency  —
8    MW    Normal: 2 mm–4.5 mm             MW = 0 mm: edge to edge              MW < 0 mm: anterior cross bite       MW > 4.5 mm: large over jet
Table 10: Comparison of our method with two methods in terms of SCR using Test1Data.

Method  SCR (%): ANB  SNB    SNA    ODI    APDI   FHI    FHA    MW     Average
[24]    64.99         84.52  68.45  84.64  82.14  67.92  75.54  82.19  76.30
[26]    59.42         71.09  59.00  78.04  80.16  58.97  77.03  83.94  70.96
Ours    58.61         78.85  59.86  76.59  83.49  82.44  77.18  83.20  75.03
Table 11: Description of cephalometric measurements in database2.

Name  Description in mathematics  Description in words

The angles of three points (unit: °)
SNA                  ∠L39L1L6     The angle of landmarks L39, L1, L6
SNB                  ∠L39L1L22    The angle of landmarks L39, L1, L22
ANB                  ∠L6L1L22     The angle of landmarks L6, L1, L22
SNPg                 ∠L39L20L1    The angle of landmarks L39, L20, L1
NAPg                 ∠L1L6L20     The angle of landmarks L1, L6, L20
NSGn                 ∠L1L39L19    The angle of landmarks L1, L39, L19
Ns-Prn-Pos           ∠L2L3L14     The angle of landmarks L2, L3, L14
Cm-Sn-UL             ∠L4L7L9      The angle of landmarks L4, L7, L9
LL-B′-Pos            ∠L12L13L14   The angle of landmarks L12, L13, L14
Angle of the jaw     ∠L41L17L18   The angle of landmarks L41, L17, L18
Angle of convexity   ∠L2L7L14     The angle of landmarks L2, L7, L14

The angles between two planes (unit: °)
FH-SN    ∠L44L29-L39L1   The angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN    ∠L28L25-L39L1   The angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH    ∠L24L21-L44L29  The angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI    ∠L28L25-L24L21  The angle between the upper and lower incisors (L28L25, L24L21)
H angle  ∠L9L14-L8L14    The angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle  ∠L44L29-L9L14   The rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm)
N-Me      |L1L18|   The forward height: the distance between landmarks L1 and L18
N-ANS     |L1L5|    The up-forward height: the distance between landmarks L1 and L5
ANS-Me    |L5L18|   The down-forward height: the distance between landmarks L5 and L18
Stoms-UI  |L10L25|  The vertical distance between landmarks L10 and L25

The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP  |L25·L6L20|  The distance from landmark L25 to the AP plane (L6L20)
LI-AP  |L24·L6L20|  The distance from landmark L24 to the AP plane (L6L20)
UL-EP  |L9·L3L14|   The distance from landmark L9 to the EP plane (L3L14)
Table 12: Results of our method for the 27 measurements in terms of MAE and MAE* using database2.

Name  MAE  MAE*  MAE − MAE*

The angles of three points (unit: °)
SNA                 1.95  1.70  0.24
SNB                 1.57  1.17  0.39
ANB                 1.29  1.08  0.21
SNPg                1.59  1.20  0.39
NAPg                2.83  2.44  0.39
NSGn                1.42  0.96  0.47
Ns-Prn-Pos          1.68  1.10  0.58
Cm-Sn-UL            5.52  3.80  1.72
LL-B′-Pos           3.95  3.44  0.51
Angle of the jaw    3.20  1.66  1.54
Angle of convexity  2.67  1.86  0.81

The angles between two planes (unit: °)
FH-SN    2.00  1.96  0.04
UI-SN    4.71  2.81  1.90
LI-FH    3.34  2.27  1.07
UI-LI    6.90  4.66  2.24
H angle  0.94  0.80  0.14
Z angle  1.69  1.86  −0.17

The distances between two points (unit: mm)
N-Me      1.37  1.10  0.27
N-ANS     1.57  1.34  0.24
ANS-Me    1.06  0.96  0.10
Stoms-UI  0.75  0.42  0.33

The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP  0.96  0.71  0.25
LI-AP  0.96  0.76  0.20
UL-EP  0.50  0.36  0.14
LL-EP  0.45  0.39  0.05
MaxE   2.06  1.79  0.27

The distances between two points projected to a plane (unit: mm)
Wits   4.70  3.53  1.16

MAE*: interobserver variability.
Table 11: Continued.

Name  Description in mathematics  Description in words
LL-EP  |L12·L3L14|  The distance from landmark L12 to the EP plane (L3L14)
MaxE   |L6·L5L37|   Maxillary length: the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected to a plane (unit: mm)
Wits   |L6·L32L34·L22|  The distance between the two landmarks L6 and L22 projected to the functional jaw plane (L32L34)
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            Interobserver variability
            MRE (mm)  SD (mm)
database1   1.38      1.55
database2   1.26      1.27
algorithm is confirmed using this clinical dataset. In addition, the automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis of lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for supplying image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcaniz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[19] I El-Feghi M A Sid-Ahmed and M Ahmadi ldquoAutomaticlocalization of craniofacial landmarks for assisted cephalom-etryrdquo Pattern Recognition vol 37 no 3 pp 609ndash621 2004
[20] R Leonardi D Giordano and F Maiorana ldquoAn evaluation ofcellular neural networks for the automatic identification ofcephalometric landmarks on digital imagesrdquo Journal of Bio-medicine and Biotechnology vol 2009 Article ID 717102 12pages 2009
[21] L Favaedi M Petrou and IEEE ldquoCephalometric landmarksidentification using probabilistic relaxationrdquo in Proceedings of2010 Annual International Conference of the IEEE Engineeringin Medicine and Biology Society pp 4391ndash4394 IEEE Engi-neering in Medicine and Biology Society Conference Pro-ceedings Buenos Aires Argentina August 2010
[22] M Farshbaf and A A Pouyan ldquoLandmark detection oncephalometric radiology images through combining classi-fiersrdquo in Proceedings of 2010 17th Iranian Conference ofBiomedical Engineering (ICBME) pp 1ndash4 Isfahan IranNovember 2010
14 Journal of Healthcare Engineering
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
image resolution was 1935 × 2400 pixels. For evaluation, 19 landmarks were manually annotated by two experienced doctors in each cephalogram (see illustrations in Figure 6); the ground truth was the average of the annotations by both doctors, and eight clinical measurements were used for classification of anatomical types.
3.1.2. Database of Peking University School and Hospital of Stomatology. The database2 included 165 cephalometric radiographs, as illustrated in Table 3 (the IRB approval number is PKUSSIRB-201415063). There were 55 cephalograms collected from 55 Chinese adult subjects (29 females, 26 males) who were diagnosed as skeletal class I (0° < ANB < 4°) with minor dental crowding and a harmonious profile. The other 110 cephalograms were collected from 55 skeletal class III patients (ANB < 0°) (32 females, 23 males) who underwent combined surgical-orthodontic treatment at the Peking University School and Hospital of Stomatology from 2010 to 2013. The image resolutions were different, ranging from 758 × 925 to 2690 × 3630 pixels. For evaluation, 45 landmarks were manually annotated by two experienced orthodontists in each cephalogram, as shown in Figure 7; the ground truth was the average of the annotations by both doctors, and 27 clinical measurements were used for further cephalometric analysis.
3.1.3. Experimental Settings. In the initial scale of landmark detection, K = 50 for training and K′ = 400 for prediction. In the other scales, the additional parameters W and S are set to 48 and 40, respectively. Because the image resolutions in database2 are different, preprocessing is required to rescale all the images to a fixed width of 1960 pixels. For database2, we test our algorithm using 5-fold cross validation.
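The width-normalization step described above can be sketched as follows. This is a minimal illustration, not the published implementation: the helper name, the plain-tuple image size, and the explicit remapping of landmark coordinates by the same scale factor are our assumptions (the pixel resampling itself is omitted).

```python
def rescale_to_width(size, landmarks, target_w=1960):
    """Scale an image size (w, h) and its landmark coordinates so that the
    width becomes target_w, preserving the aspect ratio."""
    w, h = size
    s = target_w / w                       # uniform scale factor
    new_size = (target_w, round(h * s))    # height scales by the same factor
    new_landmarks = [(x * s, y * s) for (x, y) in landmarks]
    return new_size, new_landmarks, s
```

The same factor s must be kept so that distances measured in the rescaled image can later be converted back to millimeters.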
3.2. Landmark Detection
3.2.1. Evaluation Criterion. The first evaluation criterion is the mean radial error with the associated standard deviation. The radial error $E_i$, i.e., the distance between the predicted position and the true position of each landmark, is defined as

$E_i = \|A_i - B_i\| \times \mathrm{ratio}, \quad A_i \in A,\ B_i \in B,\ 1 \le i \le M, \qquad (15)$

where A and B represent the sets of estimated and true positions of each landmark for all cephalograms in the dataset, $A_i$ and $B_i$ are the corresponding positions in sets A and B, respectively, ratio converts pixel distances to millimeters, and M is the total number of cephalograms in the dataset. The mean radial error (MRE) and the associated standard deviation (SD) for each landmark are defined as
$\mathrm{MRE} = \mathrm{mean}(E) = \frac{\sum_{i=1}^{M} E_i}{M}, \qquad \mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{M} (E_i - \mathrm{MRE})^2}{M-1}}. \qquad (16)$
The second evaluation criterion is the success detection rate with respect to the 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm precision ranges. If $E_i$ is less than a precision range, the detection of the landmark is considered a successful detection in that precision range; otherwise, it is considered a failed detection. The success detection rate (SDR) $p_z$ with precision less than z is defined as

$p_z = \frac{\left|\{\, i : E_i < z \,\}\right|}{M} \times 100\%, \quad 1 \le i \le M, \qquad (17)$

where z denotes the four precision ranges used in the evaluation: 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm.
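The three criteria of Eqs. (15)–(17) can be written compactly. The sketch below uses hypothetical function names and plain lists of (x, y) tuples; only the formulas themselves come from the text.

```python
import math

def radial_errors(pred, true, ratio=1.0):
    """E_i per Eq. (15): Euclidean distance between predicted and true
    positions of one landmark over M cephalograms, scaled to mm."""
    return [math.dist(a, b) * ratio for a, b in zip(pred, true)]

def mre_sd(errors):
    """MRE and sample SD per Eq. (16)."""
    m = len(errors)
    mre = sum(errors) / m
    sd = math.sqrt(sum((e - mre) ** 2 for e in errors) / (m - 1))
    return mre, sd

def sdr(errors, z):
    """Success detection rate p_z per Eq. (17), in percent."""
    return 100.0 * sum(e < z for e in errors) / len(errors)
```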
3.2.2. Experimental Results
(1) Results of database1. Experimental results of cephalometric landmark detection using database1 are shown in Table 4. The MREs of landmarks L10, L4, L19, L5, and L16 are more than 2 mm. The other 14 landmarks are all within an MRE of 2 mm, of which 3 landmarks are within an MRE of 1 mm. The average MRE and SD of the 19 landmarks are 1.69 mm and 1.43 mm, respectively. This shows that the detection of cephalometric landmarks by our proposed method is accurate. The SDRs are 73.37%, 79.65%, 84.46%, and 90.67% within the precision ranges of 2.0, 2.5, 3.0, and 4.0 mm, respectively. In the 2 mm precision range, the SDR is more than 90% for four landmarks (L9, L12, L13, L14); the SDR is between 80% and 90% for six landmarks (L1, L7, L8, L11, L15, L17); the SDR is between 70% and 80% for two landmarks (L6 and L18); and the SDRs for the other seven landmarks are less than 70%, with the SDR of landmark L10 being the lowest. It can be seen from Table 4 that landmark L10 is the most difficult to detect accurately.
Comparison of our method with three state-of-the-art methods using Test1Data is shown in Table 5. The difference between the MRE of our proposed method and that of RFRV-CLM [24] is less than 1 pixel, which means that their performance is comparable within the resolution of the image.
Furthermore, we compare our method with the top two methods in the 2015 ISBI Challenge of cephalometric landmark detection in terms of MRE, SD, and SDR, as illustrated in Table 6. Our method achieves an SDR of 73.37% within the precision range of 2.0 mm, which is similar to the SDR of 73.68% of the best method. The MRE of our proposed method is 1.69 mm, which is comparable to that of RFRV-CLM, while the SD of our method is less than that of RFRV-CLM. The results show that our method is accurate for landmark detection in lateral cephalograms and is more robust.

Figure 5: The distance calculation between two points projected to a plane.
(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are near the ground truth locations, which shows the success of our proposed algorithm for landmark detection.
For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method in detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1. This indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are commonly used. The description of these eight measurements and the methods for classification are given in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these 8 popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
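As a sketch of the SCR computation just described (the function name is ours, and the row-normalization convention follows the text's statement that rows hold ground-truth instances):

```python
def scr(confusion):
    """Success classification rate, in percent: the average of the
    diagonal of a row-normalized confusion matrix."""
    k = len(confusion)
    diag = []
    for i, row in enumerate(confusion):
        total = sum(row)                       # ground-truth instances of type i
        diag.append(row[i] / total if total else 0.0)
    return 100.0 * sum(diag) / k
```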
To evaluate the performance of the proposed system for further measurement analysis in database2, another criterion, the mean absolute error (MAE), is calculated by

$\mathrm{MAE} = \frac{\sum_{i=1}^{M} \left| \hat{V}_i - V_i \right|}{M}, \qquad (18)$

where $\hat{V}_i$ is the value of the angular or linear measurement estimated by our system and $V_i$ is the ground truth measurement obtained using human-annotated landmarks.
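Eq. (18) in code form, as a trivial sketch (function name ours):

```python
def mae(estimated, ground_truth):
    """Mean absolute error per Eq. (18) over M cephalograms."""
    m = len(estimated)
    return sum(abs(v_hat - v) for v_hat, v in zip(estimated, ground_truth)) / m
```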
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1 S, L2 N, L3 Or, L4 P, L5 A, L6 B, L7 Pg, L8 Me, L9 Gn, L10 Go, L11 LI, L12 UI, L13 UL, L14 LL, L15 Sn, L16 Pos, L17 PNS, L18 ANS, L19 Ar.
Table 3: The description of database2.

                            Female   Male
Class I                       29      26
Class III (pretreatment)      32      23
Class III (posttreatment)     32      23

Total number of cephalograms: 165. Total number of patients: 110.
3.3.2. Experimental Results
(1) Results of database1. Using the detected 19 landmarks, 8 cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method to the best two methods is given in Table 10, where we have achieved the best classification results for the two measurements APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than the method in [26] and comparable to RFRV-CLM [24].
(2) Results of database2. Based on the detected 45 landmarks, 27 measurements are automatically calculated by our system using database2, including 17 angular and 10 linear measurements, which are illustrated in Table 11.
The MAE and MAE* of the 27 measurements are illustrated in Table 12. Here, the MAE represents the performance of measurement analysis of our automatic system. The MAE* represents the interobserver variability calculated between the ground truth values obtained by the two experts. For angular measurements, the difference between MAE and MAE* is within 0.5° for 9/17 measurements; the difference between
Figure 7: Cephalogram annotation example showing the landmarks in database2. (a) Cephalogram annotation example. (b) Description of the 45 cephalometric landmarks: L1 N, L2 Ns, L3 Prn, L4 Cm, L5 ANS, L6 A, L7 Sn, L8 Ss, L9 UL, L10 Stoms, L11 Stomi, L12 LL, L13 Si, L14 Pos, L15 Gs, L16 Mes, L17 Go, L18 Me, L19 Gn, L20 Pg, L21 LIA, L22 B, L23 Id, L24 LI, L25 UI, L26 FA, L27 SPr, L28 UIA, L29 Or, L30 Se, L31 L6A, L32 UL5, L33 L6E, L34 UL6, L35 U6E, L36 U6A, L37 PNS, L38 Ptm, L39 S, L40 Co, L41 Ar, L42 Ba, L43 Bolton, L44 P, L45 ULI.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark   MRE (mm)   SD (mm)   SDR (%)
                                2.0 mm   2.5 mm   3.0 mm   4.0 mm
L1         1.31       1.26      86.67    92.67    94.67    97.33
L2         1.92       2.05      68.00    72.00    79.33    88.67
L3         1.81       1.74      66.67    81.33    88.00    95.33
L4         3.11       2.59      50.00    54.00    59.33    67.33
L5         2.24       1.56      56.00    66.00    75.33    86.67
L6         1.59       1.39      72.00    78.67    84.00    94.00
L7         1.23       0.87      80.67    91.33    96.67    99.33
L8         1.08       0.88      88.67    95.33    96.67    98.67
L9         0.87       0.75      91.33    93.33    98.00    99.33
L10        3.98       2.33      23.33    29.33    38.00    53.33
L11        0.97       0.89      87.33    92.00    96.00    98.67
L12        0.90       1.70      94.67    94.67    96.00    96.67
L13        1.02       0.71      92.00    95.33    98.00    100.00
L14        0.89       0.61      93.33    98.67    100.00   100.00
L15        1.18       1.16      85.33    92.00    93.33    98.00
L16        2.14       1.48      50.00    60.67    72.67    90.67
L17        1.16       0.85      86.00    92.67    94.67    98.67
L18        1.77       1.51      72.00    79.33    86.00    92.67
L19        2.97       2.77      50.00    54.00    58.00    67.33
Average    1.69       1.43      73.37    79.65    84.46    90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method         [24]    [26]    [29]    Ours
MRE (pixels)   16.74   18.46   17.79   16.92
Table 6: Comparison of our method to two methods in terms of MRE, SD, and SDR using Test1Data.

Method   MRE (mm)   SD (mm)   SDR (%)
                              2.0 mm   2.5 mm   3.0 mm   4.0 mm
[24]     1.67       1.65      73.68    80.21    85.19    91.47
[26]     1.84       1.76      71.72    77.40    81.93    88.04
Ours     1.69       1.43      73.37    79.65    84.46    90.67
MAE and MAE* is within 1° for 12/17 measurements; and the difference between MAE and MAE* is within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements; the remaining measurement has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed. Table 13 shows that the MRE and SD between annotations by the two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between annotations of the two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. The satisfactory results of measurement analysis presented in the experiments are based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, and there are three main reasons: (i) some landmarks are located at overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormality, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On the one hand, apart from the algorithm, the data-related factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8: Examples of automatic cephalometric landmark detection, with true and predicted locations overlaid. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2 (5-fold cross validation).

Fold      MRE (mm)   SD (mm)   SDR (%)
                               2.0 mm   2.5 mm   3.0 mm   4.0 mm
1         1.72       1.35     72.49    81.32    87.04    93.37
2         1.72       1.38     71.38    79.94    85.63    92.73
3         1.66       1.32     74.31    82.06    87.44    93.37
4         1.77       1.36     69.67    78.96    85.32    92.93
5         1.68       1.54     72.53    80.88    86.87    92.96
Average   1.71       1.39     72.08    80.63    86.46    93.07
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we design a new framework of landmark detection in lateral cephalograms with low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information at different resolutions. At lower resolutions, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. At higher resolutions, the local structure information involves more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average 72% successful detection rate within the precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, which is over two times the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.  Measurement  Description in mathematics  Description in words
1    ANB   ∠L5L2L6   The angle between landmarks 5, 2, and 6.
2    SNB   ∠L1L2L6   The angle between landmarks 1, 2, and 6.
3    SNA   ∠L1L2L5   The angle between landmarks 1, 2, and 5.
4    ODI   ∠(L5L6, L8L10) + ∠(L17L18, L4L3)   The arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3).
5    APDI  ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18)   The arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18).
6    FHI   |L1L10| / |L2L8|   The ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8).
7    FHA   ∠(L1L2, L10L9)   The angle between the SN plane (L1L2) and the mandibular plane (L10L9).
8    MW    |L12L11| if x(L12) < x(L11); −|L12L11| otherwise   When the x ordinate of L12 is less than L11's, MW is |L12L11|; otherwise, MW is −|L12L11|.
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.  Measurement  Type 1                           Type 2                                 Type 3                                Type 4
1    ANB   3.2°–5.7°: class I (normal)      >5.7°: class II                        <3.2°: class III                      —
2    SNB   74.6°–78.7°: normal mandible     <74.6°: retrognathic mandible          >78.7°: prognathic mandible           —
3    SNA   79.4°–83.2°: normal maxilla      >83.2°: prognathic maxilla             <79.4°: retrognathic maxilla          —
4    ODI   Normal: 74.5° ± 6.07°            >80.5°: deep bite tendency             <68.4°: open bite tendency            —
5    APDI  Normal: 81.4° ± 3.8°             <77.6°: class II tendency              >85.2°: class III tendency            —
6    FHI   Normal: 0.65–0.75                >0.75: short face tendency             <0.65: long face tendency             —
7    FHA   Normal: 26.8°–31.4°              >31.4°: mandible high angle tendency   <26.8°: mandible low angle tendency   —
8    MW    Normal: 2 mm–4.5 mm              MW = 0 mm: edge to edge                MW < 0 mm: anterior cross bite        MW > 4.5 mm: large overjet
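The three-point angular measurements of Table 8 and the ANB thresholds of Table 9 can be illustrated as follows. This is a sketch under our own assumptions: the function names are ours, landmarks are plain 2D (x, y) tuples, and only the ANB classification rule is shown.

```python
import math

def angle(p1, p2, p3):
    """Angle at vertex p2 formed by points p1 and p3, in degrees
    (e.g. ANB = angle(L5, L2, L6) per Table 8)."""
    v1 = (p1[0] - p2[0], p1[1] - p2[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def classify_anb(anb):
    """Anatomical type from the ANB angle, per row 1 of Table 9."""
    if anb > 5.7:
        return "class II"
    if anb < 3.2:
        return "class III"
    return "class I (normal)"
```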
Table 10: Comparison of our method to two methods in terms of SCR using Test1Data.

Method   SCR (%)
         ANB     SNB     SNA     ODI     APDI    FHI     FHA     MW      Average
[24]     64.99   84.52   68.45   84.64   82.14   67.92   75.54   82.19   76.30
[26]     59.42   71.09   59.00   78.04   80.16   58.97   77.03   83.94   70.96
Ours     58.61   78.85   59.86   76.59   83.49   82.44   77.18   83.20   75.03
Table 11: Description of cephalometric measurements in database2.

Name  Description in mathematics  Description in words

The angles of three points (unit: °)
SNA                  ∠L39L1L6     The angle of landmarks L39, L1, L6
SNB                  ∠L39L1L22    The angle of landmarks L39, L1, L22
ANB                  ∠L6L1L22     The angle of landmarks L6, L1, L22
SNPg                 ∠L39L20L1    The angle of landmarks L39, L20, L1
NAPg                 ∠L1L6L20     The angle of landmarks L1, L6, L20
NSGn                 ∠L1L39L19    The angle of landmarks L1, L39, L19
Ns-Prn-Pos           ∠L2L3L14     The angle of landmarks L2, L3, L14
Cm-Sn-UL             ∠L4L7L9      The angle of landmarks L4, L7, L9
LL-B′-Pos            ∠L12L13L14   The angle of landmarks L12, L13, L14
Angle of the jaw     ∠L41L17L18   The angle of landmarks L41, L17, L18
Angle of convexity   ∠L2L7L14     The angle of landmarks L2, L7, L14

The angles between two planes (unit: °)
FH-SN     ∠(L44L29, L39L1)    The angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN     ∠(L28L25, L39L1)    The angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH     ∠(L24L21, L44L29)   The angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI     ∠(L28L25, L24L21)   The angle between the upper and lower incisor axes (L28L25, L24L21)
H angle   ∠(L9L14, L8L14)     The angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle   ∠(L44L29, L9L14)    The rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm)
N-Me       |L1L18|    The forward height; the distance between landmarks L1 and L18
N-ANS      |L1L5|     The up-forward height; the distance between landmarks L1 and L5
ANS-Me     |L5L18|    The down-forward height; the distance between landmarks L5 and L18
Stoms-UI   |L10L25|   The vertical distance between landmarks L10 and L25

The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP   |L25, L6L20|   The distance from landmark L25 to the AP plane (L6L20)
LI-AP   |L24, L6L20|   The distance from landmark L24 to the AP plane (L6L20)
UL-EP   |L9, L3L14|    The distance from landmark L9 to the EP plane (L3L14)
Table 12: Result of our method for the 27 measurements in terms of MAE and MAE* using database2.

Name                 MAE     MAE*    MAE − MAE*
The angles of three points (unit: °)
SNA                  1.95    1.70    0.24
SNB                  1.57    1.17    0.39
ANB                  1.29    1.08    0.21
SNPg                 1.59    1.20    0.39
NAPg                 2.83    2.44    0.39
NSGn                 1.42    0.96    0.47
Ns-Prn-Pos           1.68    1.10    0.58
Cm-Sn-UL             5.52    3.80    1.72
LL-B′-Pos            3.95    3.44    0.51
Angle of the jaw     3.20    1.66    1.54
Angle of convexity   2.67    1.86    0.81
The angles between two planes (unit: °)
FH-SN                2.00    1.96    0.04
UI-SN                4.71    2.81    1.90
LI-FH                3.34    2.27    1.07
UI-LI                6.90    4.66    2.24
H angle              0.94    0.80    0.14
Z angle              1.69    1.86    −0.17
The distances between two points (unit: mm)
N-Me                 1.37    1.10    0.27
N-ANS                1.57    1.34    0.24
ANS-Me               1.06    0.96    0.10
Stoms-UI             0.75    0.42    0.33
The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP                0.96    0.71    0.25
LI-AP                0.96    0.76    0.20
UL-EP                0.50    0.36    0.14
LL-EP                0.45    0.39    0.05
MaxE                 2.06    1.79    0.27
The distances between two points projected to a plane (unit: mm)
Wits                 4.70    3.53    1.16

MAE*: interobserver variability.
Table 11: Continued.

Name  Description in mathematics  Description in words
LL-EP   |L12, L3L14|   The distance from landmark L12 to the EP plane (L3L14)
MaxE    |L6, L5L37|    Maxillary length; the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected to a plane (unit: mm)
Wits    |L6, L22 projected onto L32L34|   The distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34)
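In a 2D lateral cephalogram, the point-to-plane and projection-based measurements of Table 11 (e.g. UI-AP and Wits) reduce to point-to-line and projection-onto-line geometry. The sketch below uses hypothetical helper names; the landmark convention (plain (x, y) tuples) is ours.

```python
import math

def point_to_line_distance(p, a, b):
    """Distance from point p to the line through a and b — the form used
    for measurements such as UI-AP and UL-EP in Table 11."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def projected_distance(p, q, a, b):
    """Distance between the projections of p and q onto the line through
    a and b — the construction behind the Wits measurement (L6 and L22
    projected onto the functional jaw plane L32L34)."""
    ax, ay = a
    ux, uy = b[0] - ax, b[1] - ay
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n                    # unit direction of the plane
    tp = (p[0] - ax) * ux + (p[1] - ay) * uy   # scalar projection of p
    tq = (q[0] - ax) * ux + (q[1] - ay) * uy   # scalar projection of q
    return abs(tp - tq)
```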
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            Interobserver variability
            MRE (mm)   SD (mm)
database1   1.38       1.55
database2   1.26       1.27
algorithm is confirmed using this clinical dataset. In addition, automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will devote more effort to improving the performance of automatic analysis in lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will also be conducted to reduce the computational complexity of the algorithm.
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
is work is based on research conducted at Beijing Instituteof Technology
Conflicts of Interest
e authors declare that there are no conflicts of interestregarding the publication of this paper
Acknowledgments
e authors would like to express their gratitude to De-partment of Orthodontics Peking University School andHospital of Stomatology China for the supply of imagedatabase2
References
[1] J Yang X Ling Y Lu et al ldquoCephalometric image analysisand measurement for orthognathic surgeryrdquo Medical amp Bi-ological Engineering amp Computing vol 39 no 3 pp 279ndash2842001
[2] B H Broadbent ldquoA new x-ray technique and its applicationto orthodontiardquo Angle Orthodontist vol 1 pp 45-46 1931
[3] H Hofrath ldquoDie Bedeutung des Rontgenfern undAbstandsaufnahme fiir de Diagnostik der KieferanomalienrdquoFortschritte der Kieferorthopadie vol 2 pp 232ndash258 1931
[4] R Leonardi D Giordano F Maiorana et al ldquoAutomaticcephalometric analysis - a systematic reviewrdquo Angle Ortho-dontist vol 78 no 1 pp 145ndash151 2008
[5] T S Douglas ldquoImage processing for craniofacial landmarkidentification and measurement a review of photogrammetryand cephalometryrdquo Computerized Medical Imaging andGraphics vol 28 no 7 pp 401ndash409 2004
[6] D S Gill and F B Naini ldquoChapter 9 cephalometric analysisrdquoin Orthodontics Principles and Practice pp 78ndash87 BlackwellScience Publishing Oxford UK 2011
[7] A D Levy-Mandel A N Venetsanopoulos and J K TsotsosldquoKnowledge-based landmarking of cephalogramsrdquo Com-puters and Biomedical Research vol 19 no 3 pp 282ndash3091986
[8] V Grau M Alcaniz M C Juan et al ldquoAutomatic localizationof cephalometric landmarksrdquo Journal of Biomedical In-formatics vol 34 no 3 pp 146ndash156 2001
14 Journal of Healthcare Engineering
our method is accurate for landmark detection in lateral cephalograms and is more robust.
(2) Results of database2. Two examples of landmark detection in database2 are shown in Figure 8. It can be observed from the figure that the predicted locations of the 45 landmarks in the cephalograms are close to the ground-truth locations, which demonstrates the success of the proposed algorithm for landmark detection.
For the quantitative assessment of the proposed algorithm, the statistical results are shown in Table 7. The MRE and SD of the proposed method for detecting the 45 landmarks are 1.71 mm and 1.39 mm, respectively. The average SDRs of the 45 landmarks within the precision ranges of 2.0 mm, 2.5 mm, 3.0 mm, and 4.0 mm are 72.08%, 80.63%, 86.46%, and 93.07%, respectively. The experimental results on database2 show performance comparable to the results on database1, which indicates that the proposed method can be successfully applied to the detection of more clinical landmarks.
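The MRE, SD, and SDR criteria used throughout these experiments can be computed directly from per-landmark radial errors. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def detection_stats(pred, truth, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """pred, truth: (n_images, n_landmarks, 2) landmark coordinates in mm."""
    # Radial (Euclidean) error of every detected landmark.
    r = np.linalg.norm(pred - truth, axis=-1).ravel()
    mre = r.mean()          # mean radial error (MRE)
    sd = r.std()            # standard deviation (SD) of the radial errors
    # Successful detection rate (SDR): share of errors inside each precision range.
    sdr = {t: 100.0 * (r < t).mean() for t in thresholds}
    return mre, sd, sdr

# Toy check: one image, two landmarks with radial errors of 0.5 mm and 5.0 mm.
pred = np.array([[[0.5, 0.0], [3.0, 4.0]]])
truth = np.zeros((1, 2, 2))
mre, sd, sdr = detection_stats(pred, truth)  # MRE = 2.75 mm, SDR(2.0 mm) = 50%
```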
3.3. Measurement Analysis
3.3.1. Evaluation Criterion. For the classification of anatomical types, eight cephalometric measurements are commonly used. These eight measurements and the corresponding classification methods are described in Tables 8 and 9, respectively.
One evaluation criterion for measurement analysis is the success classification rate (SCR) for these eight popular methods of analysis, which is calculated using a confusion matrix. In the confusion matrix, each column represents the instances of an estimated type, while each row represents the instances of the ground-truth type. The SCR is defined as the averaged diagonal of the confusion matrix.
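With each row of the confusion matrix normalized by the number of ground-truth instances of that type, the SCR is the mean of the diagonal. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def success_classification_rate(confusion):
    """SCR: averaged diagonal of the row-normalized confusion matrix.

    confusion[i][j] counts cases whose ground-truth type is i and whose
    estimated type is j (rows = ground truth, columns = estimate).
    """
    c = np.asarray(confusion, dtype=float)
    per_type_accuracy = np.diag(c) / c.sum(axis=1)
    return 100.0 * per_type_accuracy.mean()

# Two anatomical types: 8/10 and 9/10 correctly classified.
scr = success_classification_rate([[8, 2], [1, 9]])  # 85.0
```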
To evaluate the performance of the proposed system on the additional measurements in database2, another criterion, the mean absolute error (MAE), is calculated by

MAE = (1/M) Σ_{i=1}^{M} |V̂_i − V_i|,   (18)

where V̂_i is the value of the angular or linear measurement estimated by our system, V_i is the ground-truth measurement obtained using the human-annotated landmarks, and M is the number of test samples.
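Equation (18) amounts to a one-line computation over the per-cephalogram values of a measurement; a sketch (names are ours):

```python
import numpy as np

def mean_absolute_error(v_hat, v):
    """MAE of one measurement over M test cephalograms, as in Eq. (18)."""
    v_hat = np.asarray(v_hat, dtype=float)   # system-estimated values
    v = np.asarray(v, dtype=float)           # ground-truth values
    return np.abs(v_hat - v).mean()

# e.g. an angular measurement (in degrees) on three cephalograms
err = mean_absolute_error([80.0, 82.5, 79.0], [81.0, 82.0, 79.5])
```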
Figure 6: Cephalogram annotation example showing the 19 landmarks in database1. (a) Cephalogram annotation example. (b) Description of the 19 landmarks: L1 S, L2 N, L3 Or, L4 P, L5 A, L6 B, L7 Pg, L8 Me, L9 Gn, L10 Go, L11 LI, L12 UI, L13 UL, L14 LL, L15 Sn, L16 Pos, L17 PNS, L18 ANS, L19 Ar.
Table 3: The description of database2.

                            Female   Male
Class I                       29      26
Class III, pretreatment       32      23
Class III, posttreatment      32      23

Total number of cephalograms: 165. Total number of patients: 110.
3.3.2. Experimental Results
(1) Results of database1. Using the 19 detected landmarks, eight cephalometric measurements are calculated for the classification of anatomical types in database1. The comparison of our method with the two best-performing methods is given in Table 10, where we achieve the best classification results for two measurements, APDI and FHI. The average SCR obtained by our method is 75.03%, which is much better than that of the method in [26] and comparable to that of RFRV-CLM [24].
(2) Results of database2. Based on the 45 detected landmarks, 27 measurements, including 17 angular and 10 linear measurements, are automatically calculated by our system on database2; they are described in Table 11.
The MAE and MAE* of the 27 measurements are listed in Table 12. Here, the MAE represents the performance of the measurement analysis of our automatic system, while MAE* represents the interobserver variability calculated between the ground-truth values obtained by the two experts. For the angular measurements, the difference between MAE and MAE* is within 0.5° for 9 of 17 measurements, the difference between
Figure 7: Cephalogram annotation example showing landmarks in database2. (a) Cephalogram annotation example. (b) Description of the 45 cephalometric landmarks: L1 N, L2 Ns, L3 Prn, L4 Cm, L5 ANS, L6 A, L7 Sn, L8 Ss, L9 UL, L10 Stoms, L11 Stomi, L12 LL, L13 Si, L14 Pos, L15 Gs, L16 Mes, L17 Go, L18 Me, L19 Gn, L20 Pg, L21 LIA, L22 B, L23 Id, L24 LI, L25 UI, L26 FA, L27 SPr, L28 UIA, L29 Or, L30 Se, L31 L6A, L32 UL5, L33 L6E, L34 UL6, L35 U6E, L36 U6A, L37 PNS, L38 Ptm, L39 S, L40 Co, L41 Ar, L42 Ba, L43 Bolton, L44 P, L45 ULI.
Table 4: The experimental results of 19-landmark detection in Test1Data.

Landmark   MRE (mm)   SD (mm)   SDR 2.0 mm   SDR 2.5 mm   SDR 3.0 mm   SDR 4.0 mm
L1         1.31       1.26      86.67%       92.67%       94.67%       97.33%
L2         1.92       2.05      68.00%       72.00%       79.33%       88.67%
L3         1.81       1.74      66.67%       81.33%       88.00%       95.33%
L4         3.11       2.59      50.00%       54.00%       59.33%       67.33%
L5         2.24       1.56      56.00%       66.00%       75.33%       86.67%
L6         1.59       1.39      72.00%       78.67%       84.00%       94.00%
L7         1.23       0.87      80.67%       91.33%       96.67%       99.33%
L8         1.08       0.88      88.67%       95.33%       96.67%       98.67%
L9         0.87       0.75      91.33%       93.33%       98.00%       99.33%
L10        3.98       2.33      23.33%       29.33%       38.00%       53.33%
L11        0.97       0.89      87.33%       92.00%       96.00%       98.67%
L12        0.90       1.70      94.67%       94.67%       96.00%       96.67%
L13        1.02       0.71      92.00%       95.33%       98.00%       100.00%
L14        0.89       0.61      93.33%       98.67%       100.00%      100.00%
L15        1.18       1.16      85.33%       92.00%       93.33%       98.00%
L16        2.14       1.48      50.00%       60.67%       72.67%       90.67%
L17        1.16       0.85      86.00%       92.67%       94.67%       98.67%
L18        1.77       1.51      72.00%       79.33%       86.00%       92.67%
L19        2.97       2.77      50.00%       54.00%       58.00%       67.33%
Average    1.69       1.43      73.37%       79.65%       84.46%       90.67%
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method         [24]     [26]     [29]     Ours
MRE (pixels)   16.74    18.46    17.79    16.92
Table 6: Comparison of our method with two methods in terms of MRE, SD, and SDR using Test1Data.

Method   MRE (mm)   SD (mm)   SDR 2.0 mm   SDR 2.5 mm   SDR 3.0 mm   SDR 4.0 mm
[24]     1.67       1.65      73.68%       80.21%       85.19%       91.47%
[26]     1.84       1.76      71.72%       77.40%       81.93%       88.04%
Ours     1.69       1.43      73.37%       79.65%       84.46%       90.67%
MAE and MAE* is within 1° for 12 of 17 measurements, and the difference between MAE and MAE* is within 2° for 16 of 17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For the linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9 of 10 measurements; the remaining measurement has a difference of 1.16 mm. These results show that our automatic system is efficient and accurate for measurement analysis.
4. Discussion
The interobserver variability of human annotation is analyzed first. Table 13 shows that the MRE and SD between annotations by two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between annotations of two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection achieves an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. The satisfactory results of measurement analysis presented in the experiments are built on this accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, and there are three main reasons: (i) some landmarks are located on overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks show large variability in manual marking due to large anatomical variability among subjects, especially in abnormal cases, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) the structural information is not distinctive because of little intensity variability in the neighborhood of some landmarks, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the accuracy of manual marking. There are several limitations in this study. On the one hand, apart from the algorithm itself, the data-related factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
Figure 8: Examples of automatic cephalometric landmark detection, showing the true and predicted locations of the 45 landmarks. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2 (5-fold cross-validation).

Fold      MRE (mm)   SD (mm)   SDR 2.0 mm   SDR 2.5 mm   SDR 3.0 mm   SDR 4.0 mm
1         1.72       1.35      72.49%       81.32%       87.04%       93.37%
2         1.72       1.38      71.38%       79.94%       85.63%       92.73%
3         1.66       1.32      74.31%       82.06%       87.44%       93.37%
4         1.77       1.36      69.67%       78.96%       85.32%       92.93%
5         1.68       1.54      72.53%       80.88%       86.87%       92.96%
Average   1.71       1.39      72.08%       80.63%       86.46%       93.07%
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we have designed a new framework for landmark detection in lateral cephalograms with low-to-high resolutions. At each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of the image information at different resolutions. At lower resolutions, the primary local structure information, rather than local detail, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. At higher resolutions, the local structure information carries more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated by the experimental results, the proposed algorithm achieves good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm obtains comparable landmark detection accuracy in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm also obtains an average successful detection rate of 72% within the precision range of 2.0 mm. In particular, 45 landmarks are detected in our database, which is more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

No.   Measurement   Definition                                        Description
1     ANB           ∠L5L2L6                                           The angle between landmarks 5, 2, and 6
2     SNB           ∠L1L2L6                                           The angle between landmarks 1, 2, and 6
3     SNA           ∠L1L2L5                                           The angle between landmarks 1, 2, and 5
4     ODI           ∠(L5L6, L8L10) + ∠(L17L18, L4L3)                  The arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle between the palatal plane (L17L18) and the FH plane (L4L3)
5     APDI          ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18)   The arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle between the facial plane (L2L7) and the AB plane (L5L6), and the angle between the FH plane (L3L4) and the palatal plane (L17L18)
6     FHI           |L1L10| / |L2L8|                                  The ratio of the posterior face height (PFH, the distance from L1 to L10) to the anterior face height (AFH, the distance from L2 to L8)
7     FHA           ∠(L1L2, L10L9)                                    The angle between the SN plane (L1L2) and the mandibular plane (L10L9)
8     MW            |L12L11| if x(L12) < x(L11); −|L12L11| otherwise   When the x coordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|
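The three-point angle measurements in Table 8 (ANB, SNB, SNA) are evaluated at the middle landmark. As an illustrative sketch of how such an angle can be computed from detected 2-D landmark coordinates (function and variable names are ours, not from the paper):

```python
import numpy as np

def angle_at(p, vertex, q):
    """Angle in degrees at `vertex`, formed by the rays vertex->p and vertex->q."""
    u = np.asarray(p, dtype=float) - np.asarray(vertex, dtype=float)
    v = np.asarray(q, dtype=float) - np.asarray(vertex, dtype=float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# ANB would be angle_at(L5, L2, L6); here a toy right angle:
a = angle_at((1.0, 0.0), (0.0, 0.0), (0.0, 1.0))  # 90.0
```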
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

No.   Measurement   Type 1                          Type 2                                 Type 3                                Type 4
1     ANB           3.2°–5.7°: class I (normal)     >5.7°: class II                        <3.2°: class III                      —
2     SNB           74.6°–78.7°: normal mandible    <74.6°: retrognathic mandible          >78.7°: prognathic mandible           —
3     SNA           79.4°–83.2°: normal maxilla     >83.2°: prognathic maxilla             <79.4°: retrognathic maxilla          —
4     ODI           Normal: 74.5° ± 6.07°           >80.5°: deep bite tendency             <68.4°: open bite tendency            —
5     APDI          Normal: 81.4° ± 3.8°            <77.6°: class II tendency              >85.2°: class III tendency            —
6     FHI           Normal: 0.65–0.75               >0.75: short face tendency             <0.65: long face tendency             —
7     FHA           Normal: 26.8°–31.4°             >31.4°: mandible high angle tendency   <26.8°: mandible low angle tendency   —
8     MW            Normal: 2 mm–4.5 mm             MW = 0 mm: edge to edge                MW < 0 mm: anterior cross bite        MW > 4.5 mm: large overjet
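Each row of Table 9 is a simple thresholding rule on one measurement. As a sketch for the ANB row (the function name is ours):

```python
def classify_anb(anb_deg):
    """Anatomical class from the ANB angle, following the Table 9 thresholds."""
    if anb_deg > 5.7:
        return "class II"
    if anb_deg < 3.2:
        return "class III"
    return "class I (normal)"

# e.g. classify_anb(4.5) falls in the normal 3.2-5.7 degree range
label = classify_anb(4.5)
```

The SCR in Table 10 is then obtained by comparing such labels, computed from automatic and from manually annotated landmarks, over all test cephalograms.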
Table 10: Comparison of our method with two methods in terms of SCR using Test1Data.

Method   SCR (%): ANB   SNB     SNA     ODI     APDI    FHI     FHA     MW      Average
[24]     64.99          84.52   68.45   84.64   82.14   67.92   75.54   82.19   76.30
[26]     59.42          71.09   59.00   78.04   80.16   58.97   77.03   83.94   70.96
Ours     58.61          78.85   59.86   76.59   83.49   82.44   77.18   83.20   75.03
Table 11: Description of cephalometric measurements in database2.

The angles of three points (unit: °)
SNA                  ∠L39L1L6            The angle of landmarks L39, L1, L6
SNB                  ∠L39L1L22           The angle of landmarks L39, L1, L22
ANB                  ∠L6L1L22            The angle of landmarks L6, L1, L22
SNPg                 ∠L39L20L1           The angle of landmarks L39, L20, L1
NAPg                 ∠L1L6L20            The angle of landmarks L1, L6, L20
NSGn                 ∠L1L39L19           The angle of landmarks L1, L39, L19
Ns-Prn-Pos           ∠L2L3L14            The angle of landmarks L2, L3, L14
Cm-Sn-UL             ∠L4L7L9             The angle of landmarks L4, L7, L9
LL-B′-Pos            ∠L12L13L14          The angle of landmarks L12, L13, L14
Angle of the jaw     ∠L41L17L18          The angle of landmarks L41, L17, L18
Angle of convexity   ∠L2L7L14            The angle of landmarks L2, L7, L14

The angles between two planes (unit: °)
FH-SN     ∠(L44L29, L39L1)    The angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN     ∠(L28L25, L39L1)    The angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH     ∠(L24L21, L44L29)   The angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI     ∠(L28L25, L24L21)   The angle between the upper and lower incisor axes (L28L25, L24L21)
H angle   ∠(L9L14, L8L14)     The angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle   ∠(L44L29, L9L14)    The rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm)
N-Me       |L1L18|    The forward height: the distance between landmarks L1 and L18
N-ANS      |L1L5|     The up-forward height: the distance between landmarks L1 and L5
ANS-Me     |L5L18|    The down-forward height: the distance between landmarks L5 and L18
Stoms-UI   |L10L25|   The vertical distance between landmarks L10 and L25

The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP   |L25, L6L20|   The distance from landmark L25 to the AP plane (L6L20)
LI-AP   |L24, L6L20|   The distance from landmark L24 to the AP plane (L6L20)
UL-EP   |L9, L3L14|    The distance from landmark L9 to the EP plane (L3L14)
Table 12: Results of our method for the 27 measurements in terms of MAE and MAE* using database2.

Name                 MAE    MAE*   MAE − MAE*
The angles of three points (unit: °)
SNA                  1.95   1.70   0.24
SNB                  1.57   1.17   0.39
ANB                  1.29   1.08   0.21
SNPg                 1.59   1.20   0.39
NAPg                 2.83   2.44   0.39
NSGn                 1.42   0.96   0.47
Ns-Prn-Pos           1.68   1.10   0.58
Cm-Sn-UL             5.52   3.80   1.72
LL-B′-Pos            3.95   3.44   0.51
Angle of the jaw     3.20   1.66   1.54
Angle of convexity   2.67   1.86   0.81
The angles between two planes (unit: °)
FH-SN                2.00   1.96   0.04
UI-SN                4.71   2.81   1.90
LI-FH                3.34   2.27   1.07
UI-LI                6.90   4.66   2.24
H angle              0.94   0.80   0.14
Z angle              1.69   1.86   −0.17
The distances between two points (unit: mm)
N-Me                 1.37   1.10   0.27
N-ANS                1.57   1.34   0.24
ANS-Me               1.06   0.96   0.10
Stoms-UI             0.75   0.42   0.33
The distances from a point to a plane in the horizontal direction (unit: mm)
UI-AP                0.96   0.71   0.25
LI-AP                0.96   0.76   0.20
UL-EP                0.50   0.36   0.14
LL-EP                0.45   0.39   0.05
MaxE                 2.06   1.79   0.27
The distances between two points projected onto a plane (unit: mm)
Wits                 4.70   3.53   1.16

MAE*: interobserver variability.
Table 11: Continued.

LL-EP   |L12, L3L14|   The distance from landmark L12 to the EP plane (L3L14)
MaxE    |L6, L5L37|    The maxillary length: the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected onto a plane (unit: mm)
Wits    |L6L32L34L22|   The distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34)
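In a 2-D cephalogram, a "plane" such as AP or the functional jaw plane is a line through two landmarks, so the point-to-plane distances and the Wits-style projected distance in Table 11 reduce to elementary geometry. A sketch (function names are ours, not from the paper):

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from point p to the line through landmarks a and b (2-D)."""
    p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
    d = b - a
    # |cross(d, p - a)| is twice the area of triangle (a, b, p);
    # dividing by |d| gives the height, i.e. the point-to-line distance.
    cross = d[0] * (p - a)[1] - d[1] * (p - a)[0]
    return abs(cross) / np.linalg.norm(d)

def projected_separation(p, q, a, b):
    """Distance between the projections of p and q onto the line a-b."""
    p, q, a, b = (np.asarray(x, dtype=float) for x in (p, q, a, b))
    u = (b - a) / np.linalg.norm(b - a)   # unit vector along the plane
    return abs(np.dot(p - a, u) - np.dot(q - a, u))

d1 = point_to_line_distance((0, 2), (0, 0), (1, 0))        # 2.0
d2 = projected_separation((0, 5), (3, 7), (0, 0), (1, 0))  # 3.0
```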
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

             Interobserver variability
             MRE (mm)   SD (mm)
database1    1.38       1.55
database2    1.26       1.27
algorithm is confirmed on this clinical dataset. In addition, the automatic measurement of clinical parameters also achieves satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis of lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will also be conducted to reduce the computational complexity of the algorithm.
Data Availability
Database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. Database2, from the Peking University School and Hospital of Stomatology, cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for providing image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung des Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcaniz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of the IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcaniz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), R. Larsen, Ed., Lecture Notes in Computer Science, pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of the IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of the 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of the 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis: Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
332 Experimental Results
(1) Results of database1 Using the detected 19 landmarks 8cephalometric measurements are calculated for classificationof anatomical types in database1 e comparison of ourmethod to the best twomethods is given in Table 10 where wehave achieved the best classification results for two mea-surements of APDI and FHI e average SCR obtained byour method is 7503 which is much better than the methodin [26] and is comparable to the method of RFRV-CLM [24]
(2) Results of database2 According to the detected 45landmarks 27 measurements are automatically calculated byour system using database2 including 17 angular and 10linear measurements which are illustrated in Table 11
e MAE and MAElowast of 27 measurements are illustratedin Table 12 Here the MAE represents the performance ofmeasurement analysis of our automatic system e MAElowastrepresents interobserver variability calculated between theground truth values obtained by the two experts For angularmeasurements the difference between MAE and MAElowast iswithin 05deg for 917 measurements the difference between
(a) [Annotated cephalogram with the 45 numbered landmarks; image not reproduced]
L1 N L16 Mes L31 L6A
L2 Ns L17 Go L32 UL5
L3 Prn L18 Me L33 L6E
L4 Cm L19 Gn L34 UL6
L5 ANS L20 Pg L35 U6E
L6 A L21 LIA L36 U6A
L7 Sn L22 B L37 PNS
L8 Ss L23 Id L38 Ptm
L9 UL L24 LI L39 S
L10 Stoms L25 UI L40 Co
L11 Stomi L26 FA L41 Ar
L12 LL L27 SPr L42 Ba
L13 Si L28 UIA L43 Bolton
L14 Pos L29 Or L44 P
L15 Gs L30 Se L45 ULI
(b)
Figure 7: Cephalogram annotation example showing landmarks in database2. (a) Cephalogram annotation example. (b) The description of the 45 cephalometric landmarks.
Table 4: The experimental results of 19 landmark detection in Test1Data.

Landmark  MRE (mm)  SD (mm)  SDR 2.0mm  SDR 2.5mm  SDR 3.0mm  SDR 4.0mm
L1        1.31      1.26     86.67      92.67      94.67      97.33
L2        1.92      2.05     68.00      72.00      79.33      88.67
L3        1.81      1.74     66.67      81.33      88.00      95.33
L4        3.11      2.59     50.00      54.00      59.33      67.33
L5        2.24      1.56     56.00      66.00      75.33      86.67
L6        1.59      1.39     72.00      78.67      84.00      94.00
L7        1.23      0.87     80.67      91.33      96.67      99.33
L8        1.08      0.88     88.67      95.33      96.67      98.67
L9        0.87      0.75     91.33      93.33      98.00      99.33
L10       3.98      2.33     23.33      29.33      38.00      53.33
L11       0.97      0.89     87.33      92.00      96.00      98.67
L12       0.90      1.70     94.67      94.67      96.00      96.67
L13       1.02      0.71     92.00      95.33      98.00      100.00
L14       0.89      0.61     93.33      98.67      100.00     100.00
L15       1.18      1.16     85.33      92.00      93.33      98.00
L16       2.14      1.48     50.00      60.67      72.67      90.67
L17       1.16      0.85     86.00      92.67      94.67      98.67
L18       1.77      1.51     72.00      79.33      86.00      92.67
L19       2.97      2.77     50.00      54.00      58.00      67.33
Average   1.69      1.43     73.37      79.65      84.46      90.67
Table 5: Comparison of our method with three methods in terms of MRE using Test1Data.

Method        [24]    [26]    [29]    Ours
MRE (pixels)  16.74   18.46   17.79   16.92
Table 6: Comparison of our method to two methods in terms of MRE, SD, and SDR using Test1Data.

Method  MRE (mm)  SD (mm)  SDR 2.0mm  SDR 2.5mm  SDR 3.0mm  SDR 4.0mm
[24]    1.67      1.65     73.68      80.21      85.19      91.47
[26]    1.84      1.76     71.72      77.40      81.93      88.04
Ours    1.69      1.43     73.37      79.65      84.46      90.67
10 Journal of Healthcare Engineering
MAE and MAE* is within 1° for 12/17 measurements, and the difference between MAE and MAE* is within 2° for 16/17 measurements. In particular, the MAE of the Z angle is less than MAE*. The remaining measurement has a difference of 2.24°. For linear measurements, the difference between MAE and MAE* is within 0.5 mm for 9/10 measurements; the remaining measurement has a difference of 1.16 mm. The results show that our automatic system is efficient and accurate for measurement analysis.
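Both columns of Table 12 are plain mean absolute errors, computed once between the automatic values and the first expert (MAE) and once between the two experts (MAE*). A small sketch with made-up values; the variable names are ours, not the authors':

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two lists of measurement values."""
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

# Hypothetical values of one angular measurement over three images (degrees)
auto    = [82.1, 79.5, 84.0]   # automatic system
expert1 = [81.0, 80.0, 83.0]   # ground truth (expert 1)
expert2 = [80.5, 80.6, 83.8]   # second expert

mae_sys   = mae(auto, expert1)      # MAE of the automatic system
mae_inter = mae(expert1, expert2)   # MAE*: interobserver variability
```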
4. Discussion
The interobserver variability of human annotation is analyzed. Table 13 shows that the MRE and SD between annotations by two orthodontists are 1.38 mm and 1.55 mm for database1, respectively [27]; for database2, the MRE and SD between annotations of two orthodontists are 1.26 mm and 1.27 mm, respectively. Experimental results show that the proposed algorithm of automatic cephalometric landmark detection can achieve an MRE of less than 2 mm and an SD almost equal to that of manual marking. Therefore, the performance of automatic landmark detection by the proposed algorithm is comparable to manual marking in terms of the interobserver variability between two clinical experts. Furthermore, the detected landmarks are used to calculate the angular and linear measurements in lateral cephalograms. Satisfactory results of measurement analysis are presented in the experiments based on the accurate landmark detection. This shows that the proposed algorithm has the potential to be applied in the clinical practice of cephalometric analysis for orthodontic diagnosis and treatment planning.
The detection accuracy of some landmarks is lower than the average value, for three main reasons: (i) some landmarks are located at overlapping anatomical structures, such as landmarks Go, UL5, L6E, UL6, and U6E; (ii) some landmarks have large variability of manual marking due to large anatomical variability among subjects, especially in abnormal cases, such as landmarks A, ANS, Pos, Ar, Ba, and Bolton; and (iii) structural information is not obvious due to little intensity variability in the neighborhood of some landmarks in the images, such as landmarks P, Co, L6A, and U6A.
The proposed system follows the clinical cephalometric analysis procedure, and the accuracy of the system can be evaluated against the manual marking accuracy. There are several limitations in this study. On one hand, apart from the algorithm itself, the data-related factors affecting the performance of the system include three aspects: (i) the quality of the training data, (ii) the size of the training dataset, and (iii) the shape and appearance variation exhibited in the training data. On the other hand,
(a) [Class I example: true vs. predicted locations of the 45 landmarks; image not reproduced]
(b) [Class III example: true vs. predicted locations of the 45 landmarks; image not reproduced]
Figure 8: Examples of automatic cephalometric landmark detection. (a) Example of detection in Class I. (b) Example of detection in Class III.
Table 7: The statistical results of automatic cephalometric landmark detection in database2.

Fold     MRE (mm)  SD (mm)  SDR 2.0mm  SDR 2.5mm  SDR 3.0mm  SDR 4.0mm
1        1.72      1.35     72.49      81.32      87.04      93.37
2        1.72      1.38     71.38      79.94      85.63      92.73
3        1.66      1.32     74.31      82.06      87.44      93.37
4        1.77      1.36     69.67      78.96      85.32      92.93
5        1.68      1.54     72.53      80.88      86.87      92.96
Average  1.71      1.39     72.08      80.63      86.46      93.07
the performance of the system depends on the consistency between the training data and the testing data, as with any supervised learning-based method.
5. Conclusion
In conclusion, we design a new framework of landmark detection in lateral cephalograms with low-to-high resolutions. In each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of image information in different resolutions. In a lower resolution, the primary local structure information, rather than local detail information, can be extracted to predict the positions of the anatomical structures involving the specific landmarks. In a higher resolution, the local structure information carries more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated in the experimental results, the proposed algorithm has achieved good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm has obtained comparable accuracy of landmark detection in terms of MRE, SD, and SDR. Tested on our own clinical database, our algorithm has also obtained an average 72% successful detection rate within a precision range of 2.0 mm. In particular, 45 landmarks have been detected in our database, which is more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8: Description of cephalometric measurements in database1 [28].

1. ANB: ∠L5-L2-L6, the angle between landmarks 5, 2, and 6.
2. SNB: ∠L1-L2-L6, the angle between landmarks 1, 2, and 6.
3. SNA: ∠L1-L2-L5, the angle between landmarks 1, 2, and 5.
4. ODI: ∠(L5L6, L8L10) + ∠(L17L18, L4L3), the arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle of the palatal plane (L17L18) to the FH plane (L4L3).
5. APDI: ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18), the arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle of the facial plane (L2L7) to the AB plane (L5L6), and the angle of the FH plane (L3L4) to the palatal plane (L17L18).
6. FHI: |L1L10| / |L2L8|, the ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8).
7. FHA: ∠(L1L2, L10L9), the angle between the SN plane (L1L2) and the mandibular plane (L10L9).
8. MW: MW = |L12L11| when the x-coordinate of L12 is less than that of L11, and MW = −|L12L11| otherwise.
Table 9: Eight standard cephalometric measurement methods for classification of anatomical types [28].

1. ANB: 3.2°–5.7° class I (normal); >5.7° class II; <3.2° class III.
2. SNB: 74.6°–78.7° normal mandible; <74.6° retrognathic mandible; >78.7° prognathic mandible.
3. SNA: 79.4°–83.2° normal maxilla; >83.2° prognathic maxilla; <79.4° retrognathic maxilla.
4. ODI: normal 74.5° ± 6.07°; >80.5° deep bite tendency; <68.4° open bite tendency.
5. APDI: normal 81.4° ± 3.8°; <77.6° class II tendency; >85.2° class III tendency.
6. FHI: normal 0.65–0.75; >0.75 short face tendency; <0.65 long face tendency.
7. FHA: normal 26.8°–31.4°; >31.4° mandible high angle tendency; <26.8° mandible low angle tendency.
8. MW: normal 2 mm–4.5 mm; MW = 0 mm edge to edge; MW < 0 mm anterior cross bite; MW > 4.5 mm large overjet.
Table 10: Comparison of our method to two methods in terms of SCR using Test1Data.

Method  Success classification rate, SCR (%)
        ANB    SNB    SNA    ODI    APDI   FHI    FHA    MW     Average
[24]    64.99  84.52  68.45  84.64  82.14  67.92  75.54  82.19  76.30
[26]    59.42  71.09  59.00  78.04  80.16  58.97  77.03  83.94  70.96
Ours    58.61  78.85  59.86  76.59  83.49  82.44  77.18  83.20  75.03
Table 11: Description of cephalometric measurements in database2.

The angles of three points (unit: °):
SNA: ∠L39-L1-L6
SNB: ∠L39-L1-L22
ANB: ∠L6-L1-L22
SNPg: ∠L39-L20-L1
NAPg: ∠L1-L6-L20
NSGn: ∠L1-L39-L19
Ns-Prn-Pos: ∠L2-L3-L14
Cm-Sn-UL: ∠L4-L7-L9
LL-B′-Pos: ∠L12-L13-L14
Angle of the jaw: ∠L41-L17-L18
Angle of convexity: ∠L2-L7-L14

The angles between two planes (unit: °):
FH-SN: ∠(L44L29, L39L1), the angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN: ∠(L28L25, L39L1), the angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH: ∠(L24L21, L44L29), the angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI: ∠(L28L25, L24L21), the angle between the upper and lower incisor axes
H angle: ∠(L9L14, L8L14), the angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle: ∠(L44L29, L9L14), the rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm):
N-Me: |L1L18|, the forward height, the distance between landmarks L1 and L18
N-ANS: |L1L5|, the up-forward height, the distance between landmarks L1 and L5
ANS-Me: |L5L18|, the down-forward height, the distance between landmarks L5 and L18
Stoms-UI: |L10L25|, the vertical distance between landmarks L10 and L25

The distances from a point to a plane in the horizontal direction (unit: mm):
UI-AP: |L25, L6L20|, the distance from landmark L25 to the AP plane (L6L20)
LI-AP: |L24, L6L20|, the distance from landmark L24 to the AP plane (L6L20)
UL-EP: |L9, L3L14|, the distance from landmark L9 to the EP plane (L3L14)
Table 12: Result of our method for the 27 measurements in terms of MAE and MAE* using database2 (MAE*: interobserver variability).

Name                 MAE    MAE*   MAE−MAE*
The angles of three points (unit: °):
SNA                  1.95   1.70   0.24
SNB                  1.57   1.17   0.39
ANB                  1.29   1.08   0.21
SNPg                 1.59   1.20   0.39
NAPg                 2.83   2.44   0.39
NSGn                 1.42   0.96   0.47
Ns-Prn-Pos           1.68   1.10   0.58
Cm-Sn-UL             5.52   3.80   1.72
LL-B′-Pos            3.95   3.44   0.51
Angle of the jaw     3.20   1.66   1.54
Angle of convexity   2.67   1.86   0.81
The angles between two planes (unit: °):
FH-SN                2.00   1.96   0.04
UI-SN                4.71   2.81   1.90
LI-FH                3.34   2.27   1.07
UI-LI                6.90   4.66   2.24
H angle              0.94   0.80   0.14
Z angle              1.69   1.86   −0.17
The distances between two points (unit: mm):
N-Me                 1.37   1.10   0.27
N-ANS                1.57   1.34   0.24
ANS-Me               1.06   0.96   0.10
Stoms-UI             0.75   0.42   0.33
The distances from a point to a plane in the horizontal direction (unit: mm):
UI-AP                0.96   0.71   0.25
LI-AP                0.96   0.76   0.20
UL-EP                0.50   0.36   0.14
LL-EP                0.45   0.39   0.05
MaxE                 2.06   1.79   0.27
The distances between two points projected to a plane (unit: mm):
Wits                 4.70   3.53   1.16
Table 11: Continued.

LL-EP: |L12, L3L14|, the distance from landmark L12 to the EP plane (L3L14)
MaxE: |L6, L5L37|, the maxillary length, the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected to a plane (unit: mm):
Wits: the distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34)
Table 13: The interobserver error of manual marking between doctor1 and doctor2.

            MRE (mm)  SD (mm)
database1   1.38      1.55
database2   1.26      1.27
algorithm is confirmed using this clinical dataset. In addition, automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis in lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will be conducted to reduce the computational complexity of the algorithm as well.
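The coarse-to-fine loop described in the conclusion can be pictured schematically as follows. This is our own illustrative sketch, not the authors' implementation: `patch_fn` stands in for the SIFT-based patch descriptor, each `regressor` for a trained decision-tree voter that predicts a displacement toward the landmark, and the sampling grid and vote fusion are simplified assumptions.

```python
import numpy as np

def detect_landmark(image_pyramid, regressors, init_xy, patch_fn):
    """Coarse-to-fine landmark search by regression voting (sketch only).

    image_pyramid: images ordered from lowest to highest resolution.
    regressors: one trained displacement regressor per pyramid level.
    """
    xy = np.asarray(init_xy, float)          # estimate in full-res coords
    n_levels = len(image_pyramid)
    for level, (img, reg) in enumerate(zip(image_pyramid, regressors)):
        scale = 2 ** (n_levels - 1 - level)  # downsampling factor of level
        votes = []
        # sample patches on a small grid around the current estimate
        for dx in (-1.0, 0.0, 1.0):
            for dy in (-1.0, 0.0, 1.0):
                sample = xy / scale + np.array([dx, dy])
                feat = patch_fn(img, sample)            # SIFT-like descriptor
                vote = sample + reg.predict([feat])[0]  # displacement vote
                votes.append(vote)
        # fuse the votes and carry the refined estimate to the next level
        xy = np.mean(votes, axis=0) * scale
    return xy
```

At the coarsest level the votes capture only primary local structure, so the estimate is rough; each finer level re-votes in a shrinking neighborhood around it, mirroring the low-to-high-resolution refinement the paper describes.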
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for supplying image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcañiz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcañiz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), pp. 159–166, Lecture Notes in Computer Science, R. Larsen, Ed., Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub image matching," in Proceedings of IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of Conference on ACM Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
International Journal of
AerospaceEngineeringHindawiwwwhindawicom Volume 2018
RoboticsJournal of
Hindawiwwwhindawicom Volume 2018
Hindawiwwwhindawicom Volume 2018
Active and Passive Electronic Components
VLSI Design
Hindawiwwwhindawicom Volume 2018
Hindawiwwwhindawicom Volume 2018
Shock and Vibration
Hindawiwwwhindawicom Volume 2018
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawiwwwhindawicom Volume 2018
Hindawiwwwhindawicom Volume 2018
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawiwwwhindawicom
Volume 2018
Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom
The Scientific World Journal
Volume 2018
Control Scienceand Engineering
Journal of
Hindawiwwwhindawicom Volume 2018
Hindawiwwwhindawicom
Journal ofEngineeringVolume 2018
SensorsJournal of
Hindawiwwwhindawicom Volume 2018
International Journal of
RotatingMachinery
Hindawiwwwhindawicom Volume 2018
Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018
Hindawiwwwhindawicom Volume 2018
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawiwwwhindawicom Volume 2018
Hindawiwwwhindawicom Volume 2018
Navigation and Observation
International Journal of
Hindawi
wwwhindawicom Volume 2018
Advances in
Multimedia
Submit your manuscripts atwwwhindawicom
MAE and MAElowast is within 1deg for 1217 measurements andthe difference betweenMAE andMAElowast is within 2deg for 1617measurements In particular theMAE of ZAngle is less thanMAElowast e other one measurement has the difference of224deg For linear measurements the difference betweenMAEand MAElowast is within 05mm for 910 measurements eother one measurement has the difference of 116mm eresults show that our automatic system is efficient and ac-curate for measurement analysis
4 Discussion
e interobserver variability of human annotation is ana-lyzed Table 13 shows that the MRE and SD between an-notations by two orthodontists are 138mm and 155mm fordatabase1 respectively [27] for database2 the MRE and SDbetween annotations of two orthodontists are 126mm and127mm respectively Experimental results show that theproposed algorithm of automatic cephalometric landmark
detection can achieve the MRE of less than 2mm and the SDalmost equal to that of manual marking erefore theperformance of automatic landmark detection by the pro-posed algorithm is comparable to manual marking in term ofthe interobserver variability between two clinical expertsFurthermore the detected landmarks are used to calculate theangular and linearmeasurements in lateral cephalogramsesatisfactory results of measurement analysis are presented inexperiments based on the accurate landmark detection Itshows that the proposed algorithm has the potential to beapplied in clinical practice of cephalometric analysis for or-thodontic diagnosis and treatment planning
e detection accuracy of some landmarks is lower thanthe average value and there are mainly three main reasons (i)some landmarks are located at the overlaying anatomicalstructures such as landmarksGo UL5 L6E UL6 andU6E (ii)some landmarks have large variability of manual marking dueto large anatomical variability among subjects especially inabnormality such as landmarks A ANS Pos Ar Ba andBolton and (iii) structural information is not obvious due tolittle intensity variability in the neighborhood of some land-marks in images such as landmarks P Co L6A and U6A
e proposed system follows clinical cephalometricanalysis procedure and the accuracy of the system can beevaluated by the manual marking accuracy ere are severallimitations in this study On one hand except for the algo-rithm the data effects to the performance of the system in-clude three aspects (i) the quality of the training data (ii) thesize of the training dataset and (iii) the shape and appearancevariation exhibited in the training data On the other hand
12
345
6 78
91011
1213
14
1516
17
181920
212223
24252627
28
29
30
31
32333435
363738
39
40
414243
44
45
True locationsPredicted locations
(a)
12
345
6 78
91011
1213
14
1516
17
181920
21222324252627
28
29
30
31
32333435
3637
38
39
4041
4243
44
45
True locationsPredicted locations
(b)
Figure 8 Examples of automatic cephalometric landmark detection (a) Example of detection in Class I (b) Example of detection in Class III
Table 7 e statistical results of automatic cephalometric land-mark detection in database2
No of 5-fold
MRE(mm)
SD(mm)
SDR ()20mm 25mm 30mm 40mm
1 172 135 7249 8132 8704 93372 172 138 7138 7994 8563 92733 166 132 7431 8206 8744 93374 177 136 6967 7896 8532 92935 168 154 7253 8088 8687 9296Average 171 139 7208 8063 8646 9307
Journal of Healthcare Engineering 11
performance of the system depends on consistency betweenthe training data and the testing data similarly as any su-pervised learning-based methods
5 Conclusion
In conclusion we design a new framework of landmarkdetection in lateral cephalograms with low-to-high resolu-tions In each image resolution decision tree regressionvoting is employed in landmark detection e proposedalgorithm takes full advantage of image information indifferent resolutions In lower resolution the primary localstructure information rather than local detail informationcan be extracted to predict the positions of anatomical
structure involving the specific landmarks In higher reso-lution the local structure information involves more detailinformation and it is useful for prediction positions oflandmarks in the neighborhood As demonstrated in ex-perimental results the proposed algorithm has achievedgood performance Compared with state-of-the-art methodsusing the benchmark database our algorithm has obtainedthe comparable accuracy of landmark detection in terms ofMRE SD and SDR Tested by our own clinical database ouralgorithm has also obtained average 72 successful de-tection rate within precision range of 20mm In particular45 landmarks have been detected in our database which isover two times of the number of landmarks in the bench-mark database erefore the extensibility of the proposed
Table 8 Description of cephalometric measurements in database1 [28]
No Measurement Description in mathematics Description in words1 ANB angL5L2L6 e angle between the landmark 5 2 and 62 SNB angL1L2L6 e angle between the landmark 1 2 and 63 SNA angL1L2L5 e angle between the landmark 1 2 and 5
4 ODI angL5L6L8L10 + angL17L18L4L3e arithmetic sum of the angle between AB plane(L5L6) to mandibular plane (L8L10) and the angle of
palatal plane (L17L18) to FH plane (L4L3)
5 APDI angL3L4L2L7 + angL2L7L5L6 + angL3L4L17L18
e arithmetic sum of the angle between FH plane(L3L4) to facial plane (L2L7) the angle of facial plane(L2L7) to AB plane (L5L6) and the angle of FH plane
(L3L4) to palatal plane (L17L18)
6 FHI |L1L10||L2L8|
e ratio of posterior face height (PFH the distancefrom L1 to L10) to anterior face height (AFH the
distance from L2 to L8)
7 FHA angL1L2L10L9 e angle between SN plane (L1L2) to mandibularplane (L10L9)
8 MW |L12L11| x(L12)ltx(L11)
minus|L12L11| otherwise1113896When the x ordinate of L12 is less than L11rsquos theMW
is |L12L11| otherwise MW is minus|L12L11|
Table 9 Eight standard cephalometric measurement methods for classification of anatomical types [28]
No Measurement Type 1 Type 2 Type 3 Type 41 ANB 32degsim57deg class I (normal) gt57deg class II lt32deg class III mdash
2 SNB 746degsim787deg normalmandible lt746deg retrognathic mandible gt787deg prognathic mandible mdash
3 SNA 794degsim832deg normal maxilla gt832deg prognathic maxilla lt794deg retrognathic maxilla mdash4 ODI Normal 745deg plusmn 607deg gt805deg deep bite tendency lt684deg open bite tendency mdash5 APDI Normal 814deg plusmn 38deg lt776deg class II tendency gt852deg class III tendency mdash6 FHI Normal 065sim075 gt075 short face tendency lt065 long face tendency mdash
7 FHA Normal 268degsim314deg gt314deg mandible high angletendency
lt268deg mandible lower angletendency mdash
8 MW Normal 2mmsim45mm MW 0mm edge to edge MW lt0mm anterior cross biteMWgt45mm
large over jet
Table 10 Comparison of our method to two methods in term of SCR using Test1Data
Methode success classification rates SCR ()
ANB SNB SNA ODI APDI FHI FHA MW Average[24] 6499 8452 6845 8464 8214 6792 7554 8219 7630[26] 5942 7109 5900 7804 8016 5897 7703 8394 7096Ours 5861 7885 5986 7659 8349 8244 7718 8320 7503
12 Journal of Healthcare Engineering
Table 11 Description of cephalometric measurements indatabase2
Name Description inmathematics Description in words
8e angles of three points (unit deg)
SNA angL39L1L6 e angle of landmarks L39 L1L6
SNB angL39L1L22 e angle of landmarks L39 L1L22
ANB angL6L1L22 e angle of landmarks L6 L1L22
SNPg angL39L20L1 e angle of landmarks L39 L20L1
NAPg angL1L6L20 e angle of landmarks L1 L6L20
NSGn angL1L39L19 e angle of landmarks L1 L39L19
Ns-Prn-Pos angL2L3L14 e angle of landmarks L2 L3L14
Cm-Sn-UL angL4L7L9 e angle of landmarks L4 L7L9
LL-Bprime-Pos angL12L13L14 e angle of landmarks L12 L13L14
Angle ofthe jaw angL41L17L18 e angle of landmarks L41 L17
L18Angle ofconvexity angL2L7L14 e angle of landmarks L2 L7
L148e angles between two planes (unit deg)
FH-SN angL44L29L39L1 e angle between FH plane(L44L29) and SN plane (L39L1)
performance of the system depends on the consistency between the training data and the testing data, as with any other supervised learning-based method.
5. Conclusion
In conclusion, we have designed a new framework for landmark detection in lateral cephalograms that works from low to high resolutions. At each image resolution, decision tree regression voting is employed for landmark detection. The proposed algorithm takes full advantage of the image information available at different resolutions. At lower resolutions, the primary local structure information, rather than local detail, can be extracted to predict the positions of the anatomical structures containing the specific landmarks. At higher resolutions, the local structure information carries more detail, which is useful for predicting the positions of landmarks in the neighborhood. As demonstrated by the experimental results, the proposed algorithm achieves good performance. Compared with state-of-the-art methods on the benchmark database, our algorithm obtains comparable landmark detection accuracy in terms of MRE, SD, and SDR. On our own clinical database, the algorithm also achieves an average successful detection rate of 72% within the 2.0 mm precision range. In particular, 45 landmarks are detected in our database, more than twice the number of landmarks in the benchmark database. Therefore, the extensibility of the proposed
Table 8. Description of cephalometric measurements in database1 [28].

No. | Measurement | Description in mathematics | Description in words
1 | ANB | ∠L5L2L6 | The angle between landmarks L5, L2, and L6
2 | SNB | ∠L1L2L6 | The angle between landmarks L1, L2, and L6
3 | SNA | ∠L1L2L5 | The angle between landmarks L1, L2, and L5
4 | ODI | ∠(L5L6, L8L10) + ∠(L17L18, L4L3) | The arithmetic sum of the angle between the AB plane (L5L6) and the mandibular plane (L8L10) and the angle between the palatal plane (L17L18) and the FH plane (L4L3)
5 | APDI | ∠(L3L4, L2L7) + ∠(L2L7, L5L6) + ∠(L3L4, L17L18) | The arithmetic sum of the angle between the FH plane (L3L4) and the facial plane (L2L7), the angle between the facial plane (L2L7) and the AB plane (L5L6), and the angle between the FH plane (L3L4) and the palatal plane (L17L18)
6 | FHI | |L1L10| / |L2L8| | The ratio of posterior face height (PFH, the distance from L1 to L10) to anterior face height (AFH, the distance from L2 to L8)
7 | FHA | ∠(L1L2, L10L9) | The angle between the SN plane (L1L2) and the mandibular plane (L10L9)
8 | MW | |L12L11| if x(L12) < x(L11); −|L12L11| otherwise | When the x coordinate of L12 is less than that of L11, MW is |L12L11|; otherwise, MW is −|L12L11|
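Each measurement in Tables 8 and 11 reduces to a few planar primitives: the angle at a vertex landmark, the angle between two lines (cephalometric "planes" in the lateral projection), the perpendicular distance from a point to a line, and the projection of a point onto a line. A minimal sketch of these primitives (function names are ours, for illustration; they are not from the paper):

```python
import math

def angle_at(a, vertex, c):
    """Angle in degrees at `vertex`, e.g. ANB = angle_at(L5, L2, L6)."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (c[0] - vertex[0], c[1] - vertex[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def angle_between_lines(a1, a2, b1, b2):
    """Acute angle in degrees between line a1-a2 and line b1-b2,
    e.g. FHA as the angle between the SN and mandibular planes."""
    ta = math.atan2(a2[1] - a1[1], a2[0] - a1[0])
    tb = math.atan2(b2[1] - b1[1], b2[0] - b1[0])
    d = abs(math.degrees(ta - tb)) % 180.0
    return min(d, 180.0 - d)

def point_line_distance(p, l1, l2):
    """Perpendicular distance from landmark p to the line through l1 and l2."""
    num = abs((l2[0] - l1[0]) * (l1[1] - p[1]) - (l1[0] - p[0]) * (l2[1] - l1[1]))
    return num / math.hypot(l2[0] - l1[0], l2[1] - l1[1])

def project_onto_line(p, l1, l2):
    """Orthogonal projection of p onto the line through l1 and l2
    (used for projected measurements such as Wits in Table 11)."""
    dx, dy = l2[0] - l1[0], l2[1] - l1[1]
    t = ((p[0] - l1[0]) * dx + (p[1] - l1[1]) * dy) / (dx * dx + dy * dy)
    return (l1[0] + t * dx, l1[1] + t * dy)
```

FHI is then simply the ratio of two point-to-point distances, and MW flips the sign of |L12L11| according to the x coordinates of L12 and L11.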
Table 9. Eight standard cephalometric measurement methods for classification of anatomical types [28].

No. | Measurement | Type 1 | Type 2 | Type 3 | Type 4
1 | ANB | 3.2°–5.7°: class I (normal) | >5.7°: class II | <3.2°: class III | —
2 | SNB | 74.6°–78.7°: normal mandible | <74.6°: retrognathic mandible | >78.7°: prognathic mandible | —
3 | SNA | 79.4°–83.2°: normal maxilla | >83.2°: prognathic maxilla | <79.4°: retrognathic maxilla | —
4 | ODI | Normal: 74.5° ± 6.07° | >80.5°: deep bite tendency | <68.4°: open bite tendency | —
5 | APDI | Normal: 81.4° ± 3.8° | <77.6°: class II tendency | >85.2°: class III tendency | —
6 | FHI | Normal: 0.65–0.75 | >0.75: short face tendency | <0.65: long face tendency | —
7 | FHA | Normal: 26.8°–31.4° | >31.4°: mandible high angle tendency | <26.8°: mandible low angle tendency | —
8 | MW | Normal: 2 mm–4.5 mm | MW = 0 mm: edge to edge | MW < 0 mm: anterior cross bite | MW > 4.5 mm: large overjet
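The Table 9 thresholds translate directly into interval checks. A sketch for two of the eight measurements (the returned label strings are ours, chosen to mirror the table):

```python
def classify_anb(anb_deg):
    """Skeletal class from the ANB angle, per the Table 9 thresholds."""
    if 3.2 <= anb_deg <= 5.7:
        return "class I (normal)"
    return "class II" if anb_deg > 5.7 else "class III"

def classify_mw(mw_mm):
    """Anatomical type from the MW value, per the Table 9 thresholds."""
    if 2.0 <= mw_mm <= 4.5:
        return "normal"
    if mw_mm == 0.0:
        return "edge to edge"
    if mw_mm < 0.0:
        return "anterior cross bite"
    if mw_mm > 4.5:
        return "large overjet"
    return "unclassified"  # 0 < MW < 2 mm is not assigned a type in Table 9
```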
Table 10. Comparison of our method with two methods in terms of SCR (%) using Test1Data.

Method | ANB | SNB | SNA | ODI | APDI | FHI | FHA | MW | Average
[24] | 64.99 | 84.52 | 68.45 | 84.64 | 82.14 | 67.92 | 75.54 | 82.19 | 76.30
[26] | 59.42 | 71.09 | 59.00 | 78.04 | 80.16 | 58.97 | 77.03 | 83.94 | 70.96
Ours | 58.61 | 78.85 | 59.86 | 76.59 | 83.49 | 82.44 | 77.18 | 83.20 | 75.03
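The SCR reported in Table 10 is, for each measurement, the percentage of test images on which the anatomical type derived from the detected landmarks matches the type derived from the manual ground truth. A minimal sketch:

```python
def success_classification_rate(auto_types, manual_types):
    """Percentage of cases where the automatically derived anatomical type
    agrees with the manually derived one."""
    assert len(auto_types) == len(manual_types)
    matches = sum(a == m for a, m in zip(auto_types, manual_types))
    return 100.0 * matches / len(auto_types)
```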
12 Journal of Healthcare Engineering
Table 11. Description of cephalometric measurements in database2.

Name | Description in mathematics | Description in words

The angles of three points (unit: deg)
SNA | ∠L39L1L6 | The angle of landmarks L39, L1, L6
SNB | ∠L39L1L22 | The angle of landmarks L39, L1, L22
ANB | ∠L6L1L22 | The angle of landmarks L6, L1, L22
SNPg | ∠L39L1L20 | The angle of landmarks L39, L1, L20
NAPg | ∠L1L6L20 | The angle of landmarks L1, L6, L20
NSGn | ∠L1L39L19 | The angle of landmarks L1, L39, L19
Ns-Prn-Pos | ∠L2L3L14 | The angle of landmarks L2, L3, L14
Cm-Sn-UL | ∠L4L7L9 | The angle of landmarks L4, L7, L9
LL-B′-Pos | ∠L12L13L14 | The angle of landmarks L12, L13, L14
Angle of the jaw | ∠L41L17L18 | The angle of landmarks L41, L17, L18
Angle of convexity | ∠L2L7L14 | The angle of landmarks L2, L7, L14

The angles between two planes (unit: deg)
FH-SN | ∠(L44L29, L39L1) | The angle between the FH plane (L44L29) and the SN plane (L39L1)
UI-SN | ∠(L28L25, L39L1) | The angle between the upper incisor axis (L28L25) and the SN plane (L39L1)
LI-FH | ∠(L24L21, L44L29) | The angle between the lower incisor axis (L24L21) and the FH plane (L44L29)
UI-LI | ∠(L28L25, L24L21) | The angle between the upper and lower incisors (L28L25, L24L21)
H angle | ∠(L9L14, L8L14) | The angle between the H plane (L9L14) and the NB plane (L8L14)
Z angle | ∠(L44L29, L9L14) | The rear lower angle between the FH plane (L44L29) and the H plane (L9L14)

The distances between two points (unit: mm)
N-Me | |L1L18| | The forward height: the distance between landmarks L1 and L18
N-ANS | |L1L5| | The up-forward height: the distance between landmarks L1 and L5
ANS-Me | |L5L18| | The down-forward height: the distance between landmarks L5 and L18
Stoms-UI | |L10L25| | The vertical distance between landmarks L10 and L25

The distances from the point to the plane in the horizontal direction (unit: mm)
UI-AP | |L25, L6L20| | The distance from landmark L25 to the AP plane (L6L20)
LI-AP | |L24, L6L20| | The distance from landmark L24 to the AP plane (L6L20)
UL-EP | |L9, L3L14| | The distance from landmark L9 to the EP plane (L3L14)
LL-EP | |L12, L3L14| | The distance from landmark L12 to the EP plane (L3L14)
MaxE | |L6, L5L37| | Maxillary length: the distance from landmark L6 to the palatal plane (L5L37)

The distances between two points projected to a plane (unit: mm)
Wits | |L6L22| projected onto L32L34 | The distance between the two landmarks L6 and L22 projected onto the functional jaw plane (L32L34)

Table 12. Results of our method for 27 measurements in terms of MAE and MAE* using database2.

Name | MAE | MAE* | MAE − MAE*

The angles of three points (unit: deg)
SNA | 1.95 | 1.70 | 0.24
SNB | 1.57 | 1.17 | 0.39
ANB | 1.29 | 1.08 | 0.21
SNPg | 1.59 | 1.20 | 0.39
NAPg | 2.83 | 2.44 | 0.39
NSGn | 1.42 | 0.96 | 0.47
Ns-Prn-Pos | 1.68 | 1.10 | 0.58
Cm-Sn-UL | 5.52 | 3.80 | 1.72
LL-B′-Pos | 3.95 | 3.44 | 0.51
Angle of the jaw | 3.20 | 1.66 | 1.54
Angle of convexity | 2.67 | 1.86 | 0.81

The angles between two planes (unit: deg)
FH-SN | 2.00 | 1.96 | 0.04
UI-SN | 4.71 | 2.81 | 1.90
LI-FH | 3.34 | 2.27 | 1.07
UI-LI | 6.90 | 4.66 | 2.24
H angle | 0.94 | 0.80 | 0.14
Z angle | 1.69 | 1.86 | −0.17

The distances between two points (unit: mm)
N-Me | 1.37 | 1.10 | 0.27
N-ANS | 1.57 | 1.34 | 0.24
ANS-Me | 1.06 | 0.96 | 0.10
Stoms-UI | 0.75 | 0.42 | 0.33

The distances from the point to the plane in the horizontal direction (unit: mm)
UI-AP | 0.96 | 0.71 | 0.25
LI-AP | 0.96 | 0.76 | 0.20
UL-EP | 0.50 | 0.36 | 0.14
LL-EP | 0.45 | 0.39 | 0.05
MaxE | 2.06 | 1.79 | 0.27

The distances between two points projected to a plane (unit: mm)
Wits | 4.70 | 3.53 | 1.16

MAE*: interobserver variability.
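The MAE column of Table 12 compares each automatically computed measurement against the manual reference over all images; MAE* applies the same formula between the two doctors' manual measurements, so an MAE − MAE* close to zero indicates the automatic error is near the level of human disagreement. A minimal sketch:

```python
def mean_absolute_error(auto_vals, ref_vals):
    """MAE of one measurement (deg or mm) over a set of cephalograms."""
    assert len(auto_vals) == len(ref_vals)
    return sum(abs(a - r) for a, r in zip(auto_vals, ref_vals)) / len(auto_vals)
```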
Table 13. The interobserver error of manual marking between doctor1 and doctor2.

Interobserver variability | MRE (mm) | SD (mm)
database1 | 1.38 | 1.55
database2 | 1.26 | 1.27
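The MRE and SD figures in Table 13 are the mean and spread of the radial (Euclidean) distances between the two doctors' landmark positions. A sketch, assuming the sample standard deviation is used:

```python
import math

def mre_and_sd(points_a, points_b):
    """Mean radial error and its standard deviation (both in mm)
    between two sets of corresponding (x, y) landmark positions."""
    radial = [math.hypot(ax - bx, ay - by)
              for (ax, ay), (bx, by) in zip(points_a, points_b)]
    mre = sum(radial) / len(radial)
    sd = math.sqrt(sum((r - mre) ** 2 for r in radial) / (len(radial) - 1))
    return mre, sd
```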
algorithm is confirmed on this clinical dataset. In addition, the automatic measurement of clinical parameters has also achieved satisfactory results. In the future, we will put more effort into improving the performance of automatic analysis of lateral cephalograms so that the automatic system can be utilized in clinical practice to obtain objective measurements. More research will also be conducted to reduce the computational complexity of the algorithm.
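The coarse-to-fine regression voting summarized in this conclusion can be sketched as follows. Here `predict_offset` stands in for the trained decision tree regressor on SIFT patch features, and all names are illustrative rather than the paper's implementation:

```python
from collections import Counter

def regression_vote(candidates, predict_offset):
    """One voting round: each scanned position predicts an offset to the
    landmark and casts a vote; the most-voted target position wins."""
    votes = Counter()
    for (x, y) in candidates:
        ox, oy = predict_offset(x, y)
        votes[(x + ox, y + oy)] += 1
    return votes.most_common(1)[0][0]

def coarse_to_fine(num_levels, predict_offset, init, radius):
    """Refine a landmark estimate across resolutions: the estimate found at
    a coarse level (scaled by 2) seeds a small search window at the next,
    finer level. A real system would use one regressor per level."""
    x, y = init
    for _ in range(num_levels):
        window = [(x + dx, y + dy)
                  for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)]
        x, y = regression_vote(window, predict_offset)
        x, y = 2 * x, 2 * y  # map the estimate to the next, finer resolution
    return x // 2, y // 2    # undo the final upscaling
```

Because every window position votes independently, a few bad offset predictions are outvoted by the consensus, which is what makes the voting robust to local appearance ambiguity.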
Data Availability
The database1 of the 2015 ISBI Challenge is available at http://www.ntust.edu.tw/~cweiwang/ISBI2015/challenge1/index.html. The database2 of Peking University School and Hospital of Stomatology cannot be made publicly available due to privacy concerns for the patients.
Disclosure
This work is based on research conducted at Beijing Institute of Technology.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The authors would like to express their gratitude to the Department of Orthodontics, Peking University School and Hospital of Stomatology, China, for supplying image database2.
References
[1] J. Yang, X. Ling, Y. Lu et al., "Cephalometric image analysis and measurement for orthognathic surgery," Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 279–284, 2001.
[2] B. H. Broadbent, "A new x-ray technique and its application to orthodontia," Angle Orthodontist, vol. 1, pp. 45–46, 1931.
[3] H. Hofrath, "Die Bedeutung der Röntgenfern- und Abstandsaufnahme für die Diagnostik der Kieferanomalien," Fortschritte der Kieferorthopädie, vol. 2, pp. 232–258, 1931.
[4] R. Leonardi, D. Giordano, F. Maiorana et al., "Automatic cephalometric analysis: a systematic review," Angle Orthodontist, vol. 78, no. 1, pp. 145–151, 2008.
[5] T. S. Douglas, "Image processing for craniofacial landmark identification and measurement: a review of photogrammetry and cephalometry," Computerized Medical Imaging and Graphics, vol. 28, no. 7, pp. 401–409, 2004.
[6] D. S. Gill and F. B. Naini, "Chapter 9: cephalometric analysis," in Orthodontics: Principles and Practice, pp. 78–87, Blackwell Science Publishing, Oxford, UK, 2011.
[7] A. D. Levy-Mandel, A. N. Venetsanopoulos, and J. K. Tsotsos, "Knowledge-based landmarking of cephalograms," Computers and Biomedical Research, vol. 19, no. 3, pp. 282–309, 1986.
[8] V. Grau, M. Alcañiz, M. C. Juan et al., "Automatic localization of cephalometric landmarks," Journal of Biomedical Informatics, vol. 34, no. 3, pp. 146–156, 2001.
[9] J. Ashish, M. Tanmoy, and H. K. Sardana, "A novel strategy for automatic localization of cephalometric landmarks," in Proceedings of the IEEE International Conference on Computer Engineering and Technology, pp. V3-284–V3-288, Tianjin, China, 2010.
[10] T. Mondal, A. Jain, and H. K. Sardana, "Automatic craniofacial structure detection on cephalometric images," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2606–2614, 2011.
[11] A. Kaur and C. Singh, "Automatic cephalometric landmark detection using Zernike moments and template matching," Signal, Image and Video Processing, vol. 9, no. 1, pp. 117–132, 2015.
[12] D. B. Forsyth and D. N. Davis, "Assessment of an automated cephalometric analysis system," European Journal of Orthodontics, vol. 18, no. 5, pp. 471–478, 1996.
[13] S. Rueda and M. Alcañiz, "An approach for the automatic cephalometric landmark detection using mathematical morphology and active appearance models," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2006), Lecture Notes in Computer Science, R. Larsen, Ed., pp. 159–166, Copenhagen, Denmark, October 2006.
[14] W. Yue, D. Yin, C. Li et al., "Automated 2-D cephalometric analysis on X-ray images by a model-based approach," IEEE Transactions on Biomedical Engineering, vol. 53, no. 8, pp. 1615–1623, 2006.
[15] R. Kafieh, A. Mehri, S. Sadri et al., "Automatic landmark detection in cephalometry using a modified Active Shape Model with sub-image matching," in Proceedings of the IEEE International Conference on Machine Vision, pp. 73–78, Islamabad, Pakistan, 2008.
[16] J. Keustermans, W. Mollemans, D. Vandermeulen, and P. Suetens, "Automated cephalometric landmark identification using shape and local appearance models," in Proceedings of the 20th International Conference on Pattern Recognition, pp. 2464–2467, Istanbul, Turkey, August 2010.
[17] P. Vucinic, Z. Trpovski, and I. Scepan, "Automatic landmarking of cephalograms using active appearance models," European Journal of Orthodontics, vol. 32, no. 3, pp. 233–241, 2010.
[18] S. Chakrabartty, M. Yagi, T. Shibata et al., "Robust cephalometric landmark identification using support vector machines," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 825–828, Vancouver, Canada, May 2003.
[19] I. El-Feghi, M. A. Sid-Ahmed, and M. Ahmadi, "Automatic localization of craniofacial landmarks for assisted cephalometry," Pattern Recognition, vol. 37, no. 3, pp. 609–621, 2004.
[20] R. Leonardi, D. Giordano, and F. Maiorana, "An evaluation of cellular neural networks for the automatic identification of cephalometric landmarks on digital images," Journal of Biomedicine and Biotechnology, vol. 2009, Article ID 717102, 12 pages, 2009.
[21] L. Favaedi and M. Petrou, "Cephalometric landmarks identification using probabilistic relaxation," in Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4391–4394, Buenos Aires, Argentina, August 2010.
[22] M. Farshbaf and A. A. Pouyan, "Landmark detection on cephalometric radiology images through combining classifiers," in Proceedings of the 2010 17th Iranian Conference of Biomedical Engineering (ICBME), pp. 1–4, Isfahan, Iran, November 2010.
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multi-resolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.
Table 11 Description of cephalometric measurements indatabase2
Name Description inmathematics Description in words
8e angles of three points (unit deg)
SNA angL39L1L6 e angle of landmarks L39 L1L6
SNB angL39L1L22 e angle of landmarks L39 L1L22
ANB angL6L1L22 e angle of landmarks L6 L1L22
SNPg angL39L20L1 e angle of landmarks L39 L20L1
NAPg angL1L6L20 e angle of landmarks L1 L6L20
NSGn angL1L39L19 e angle of landmarks L1 L39L19
Ns-Prn-Pos angL2L3L14 e angle of landmarks L2 L3L14
Cm-Sn-UL angL4L7L9 e angle of landmarks L4 L7L9
LL-Bprime-Pos angL12L13L14 e angle of landmarks L12 L13L14
Angle ofthe jaw angL41L17L18 e angle of landmarks L41 L17
L18Angle ofconvexity angL2L7L14 e angle of landmarks L2 L7
L148e angles between two planes (unit deg)
FH-SN angL44L29L39L1 e angle between FH plane(L44L29) and SN plane (L39L1)
UI-SN angL28L25L39L1 e angle between upper incisor(L28L25) and SN plane (L39L1)
LI-FH angL24L21L44L29e angle between lower incisoraxis (L24L21) and FH plane
(L44L29)
UI-LI angL28L25L24L21 e angle between upper andlower incisors (L28L25 L24L21)
H angle angL9L14L8L14 e angle between H plane(L9L14)and NB plane (L8L14)
Z angle angL44L29L9L14e rear lower angle between FHplane (L44L29) and H plane
(L9L14)8e distances between two points (unit mm)
N-Me |L1L18|e forward height the distancebetween landmarks L1 L18
N-ANS |L1L5|
e up-forward height thedistance between landmarks L1
L5
ANS-Me |L5L18|
e down-forward height thedistance between landmarks L5
L18
Stoms-UI |L10L25|e vertical distance between
landmarks L10 L258e distances from the point to the plane in the horizontal direction(unit mm)
UI-AP |L25L6L20|e distance between landmark
L25 to AP plane (L6L20)
LI-AP |L24L6L20|e distance from landmark L24
to AP plane (L6L20)
UL-EP |L9L3L14|e distance from landmark L9
to EP plane (L3L14)
Table 12 Result of our method for 27 measurements in terms ofMAE and MAElowast using database2
Name MAE MAElowast MAE-MAElowast
8e angles of three points (unit deg)SNA 195 170 024SNB 157 117 039ANB 129 108 021SNPg 159 120 039NAPg 283 244 039NSGn 142 096 047Ns-Prn-Pos 168 110 058Cm-Sn-UL 552 380 172LL-Bprime-Pos 395 344 051Angle of the jaw 320 166 154Angle of convexity 267 186 0818e angles between two planes (unit deg)FH-SN 200 196 004UI-SN 471 281 190LI-FH 334 227 107UI-LI 690 466 224H angle 094 080 014Z angle 169 186 -0178e distances between two points (unit mm)N-Me 137 110 027N-ANS 157 134 024ANS-Me 106 096 010Stoms-UI 075 042 0338e distances from the point to the plane in the horizontal direction(unit mm)UI-AP 096 071 025LI-AP 096 076 020UL-EP 050 036 014LL-EP 045 039 005MaxE 206 179 0278e distances between two points projected to a plane (unit mm)Wits 470 353 116MAElowast interobserver variability
Table 11 Continued
Name Description inmathematics Description in words
LL-EP |L12L13L14|e distance from landmark L12
to EP plane (L3L14)
MaxE |L6L5L37|
Maxillary length the distancefrom landmark L6 to palatal
plane (L5L37)8e distances between two points projected to a plane (unit mm)
Wits |L6L32L34L22|
e distance between the twolandmarks L6 and L22 projectedto functional jaw plane (L32L34)
Table 13 e interobserver error of manual marking betweendoctor1 and doctor2
Interobserver variabilityMRE (mm) SD (mm)
database1 138 155database2 126 127
Journal of Healthcare Engineering 13
algorithm is confirmed using this clinical dataset In addi-tion automatic measurement of clinical parameters has alsoachieved satisfactory results In the future we will put moreefforts to improve the performance of automatic analysis inlateral cephalograms so that the automatic system can beutilized in clinical practice to obtain objective measurementMore research will be conducted to reduce the computa-tional complexity of the algorithm as well
Data Availability
e database1 of 2015 ISBI Challenge is available at httpwwwntustedutwsimcweiwangISBI2015challenge1indexhtml e database2 of Peking University School andHospital of Stomatology cannot be made public availabledue to privacy concerns for the patients
Disclosure
is work is based on research conducted at Beijing Instituteof Technology
Conflicts of Interest
e authors declare that there are no conflicts of interestregarding the publication of this paper
Acknowledgments
e authors would like to express their gratitude to De-partment of Orthodontics Peking University School andHospital of Stomatology China for the supply of imagedatabase2
References
[1] J Yang X Ling Y Lu et al ldquoCephalometric image analysisand measurement for orthognathic surgeryrdquo Medical amp Bi-ological Engineering amp Computing vol 39 no 3 pp 279ndash2842001
[2] B H Broadbent ldquoA new x-ray technique and its applicationto orthodontiardquo Angle Orthodontist vol 1 pp 45-46 1931
[3] H Hofrath ldquoDie Bedeutung des Rontgenfern undAbstandsaufnahme fiir de Diagnostik der KieferanomalienrdquoFortschritte der Kieferorthopadie vol 2 pp 232ndash258 1931
[4] R Leonardi D Giordano F Maiorana et al ldquoAutomaticcephalometric analysis - a systematic reviewrdquo Angle Ortho-dontist vol 78 no 1 pp 145ndash151 2008
[5] T S Douglas ldquoImage processing for craniofacial landmarkidentification and measurement a review of photogrammetryand cephalometryrdquo Computerized Medical Imaging andGraphics vol 28 no 7 pp 401ndash409 2004
[6] D S Gill and F B Naini ldquoChapter 9 cephalometric analysisrdquoin Orthodontics Principles and Practice pp 78ndash87 BlackwellScience Publishing Oxford UK 2011
[7] A D Levy-Mandel A N Venetsanopoulos and J K TsotsosldquoKnowledge-based landmarking of cephalogramsrdquo Com-puters and Biomedical Research vol 19 no 3 pp 282ndash3091986
[8] V Grau M Alcaniz M C Juan et al ldquoAutomatic localizationof cephalometric landmarksrdquo Journal of Biomedical In-formatics vol 34 no 3 pp 146ndash156 2001
[9] J Ashish M Tanmoy and H K Sardana ldquoA novel strategyfor automatic localization of cephalometric landmarksrdquo inProceedings of IEEE International Conference on ComputerEngineering and Technology pp V3-284ndashV3-288 TianjinChina 2010
[10] T Mondal A Jain and H K Sardana ldquoAutomatic craniofacialstructure detection on cephalometric imagesrdquo IEEE Trans-actions on Image Processing vol 20 no 9 pp 2606ndash2614 2011
[11] A Kaur and C Singh ldquoAutomatic cephalometric landmarkdetection using Zernike moments and template matchingrdquoSignal Image and Video Processing vol 9 no 1 pp 117ndash1322015
[12] D B Forsyth and D N Davis ldquoAssessment of an automatedcephalometric analysis systemrdquo European Journal of Ortho-dontics vol 18 no 5 pp 471ndash478 1996
[13] S Rueda and M Alcaniz ldquoAn approach for the automaticcephalometric landmark detection using mathematicalmorphology and active appearance modelsrdquo in Proceedings ofMedical Image Computing and Computer-Assisted In-tervention (MICCAI 2006) pp 159ndash166 Lecture Notes inComputer Science R Larsen Ed Copenhagen DenmarkOctober 2006
[14] W Yue D Yin C Li et al ldquoAutomated 2-D cephalometricanalysis on X-ray images by a model-based approachrdquo IEEETransactions on Biomedical Engineering vol 53 no 8pp 1615ndash1623 2006
[15] R Kafieh A mehri S sadri et al ldquoAutomatic landmarkdetection in cephalometry using a modified Active ShapeModel with sub image matchingrdquo in Proceedings of IEEEInternational Conference on Machine Vision pp 73ndash78Islamabad Pakistan 2008
[16] J Keustermans W Mollemans D Vandermeulen andP Suetens ldquoAutomated cephalometric landmark identifica-tion using shape and local appearance modelsrdquo in Proceedingsof 20th International Conference on Pattern Recognitionpp 2464ndash2467 Istanbul Turkey August 2010
[17] P Vucinic Z Trpovski and I Scepan ldquoAutomatic land-marking of cephalograms using active appearance modelsrdquoEuropean Journal of Orthodontics vol 32 no 3 pp 233ndash2412010
[18] S Chakrabartty M Yagi T Shibata et al ldquoRobust cepha-lometric landmark identification using support vector ma-chinesrdquo in Proceedings of IEEE International Conference onAcoustics Speech and Signal Processing vol 2 pp 825ndash828Vancouver Canada May 2003
[19] I El-Feghi M A Sid-Ahmed and M Ahmadi ldquoAutomaticlocalization of craniofacial landmarks for assisted cephalom-etryrdquo Pattern Recognition vol 37 no 3 pp 609ndash621 2004
[20] R Leonardi D Giordano and F Maiorana ldquoAn evaluation ofcellular neural networks for the automatic identification ofcephalometric landmarks on digital imagesrdquo Journal of Bio-medicine and Biotechnology vol 2009 Article ID 717102 12pages 2009
[21] L Favaedi M Petrou and IEEE ldquoCephalometric landmarksidentification using probabilistic relaxationrdquo in Proceedings of2010 Annual International Conference of the IEEE Engineeringin Medicine and Biology Society pp 4391ndash4394 IEEE Engi-neering in Medicine and Biology Society Conference Pro-ceedings Buenos Aires Argentina August 2010
[22] M Farshbaf and A A Pouyan ldquoLandmark detection oncephalometric radiology images through combining classi-fiersrdquo in Proceedings of 2010 17th Iranian Conference ofBiomedical Engineering (ICBME) pp 1ndash4 Isfahan IranNovember 2010
[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
[24] C. Lindner and T. F. Cootes, "Fully automatic cephalometric evaluation using random forest regression-voting," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[25] C. Lindner, C.-W. Wang, C.-T. Huang, et al., "Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms," Scientific Reports, vol. 6, no. 1, 2016.
[26] B. Ibragimov et al., "Computerized cephalometry by game theory with shape- and appearance-based landmark refinement," in Proceedings of the International Symposium on Biomedical Imaging (ISBI) Grand Challenges in Dental X-ray Image Analysis—Automated Detection and Analysis for Diagnosis in Cephalometric X-ray Image, Brooklyn, NY, USA, May 2015.
[27] C.-W. Wang, C.-T. Huang, M.-C. Hsieh, et al., "Evaluation and comparison of anatomical landmark detection methods for cephalometric X-ray images: a grand challenge," IEEE Transactions on Medical Imaging, vol. 34, no. 9, pp. 1890–1900, 2015.
[28] C. W. Wang, C. T. Huang, J. H. Lee, et al., "A benchmark for comparison of dental radiography analysis algorithms," Medical Image Analysis, vol. 31, pp. 63–76, 2016.
[29] R. Vandaele, J. Aceto, M. Muller, et al., "Landmark detection in 2D bioimages for geometric morphometrics: a multiresolution tree-based approach," Scientific Reports, vol. 8, no. 1, 2018.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] A. Vedaldi and B. Fulkerson, "VLFeat: an open and portable library of computer vision algorithms," in Proceedings of the ACM International Conference on Multimedia, pp. 1469–1472, Firenze, Italy, October 2010, http://www.vlfeat.org.
[32] C. Qiu, L. Jiang, and C. Li, "Randomly selected decision tree for test-cost sensitive learning," Applied Soft Computing, vol. 53, pp. 27–33, 2017.
[33] K. Kim and J. S. Hong, "A hybrid decision tree algorithm for mixed numeric and categorical data in regression analysis," Pattern Recognition Letters, vol. 98, pp. 39–45, 2017.
[34] L. Breiman and J. H. Friedman, Classification and Regression Trees, CRC Press, Boca Raton, FL, USA, 1984.
[35] C. Lindner, P. A. Bromiley, M. C. Ionita, et al., "Robust and accurate shape model matching using random forest regression-voting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1862–1874, 2015.
[36] J. Gall and V. S. Lempitsky, "Class-specific Hough forests for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 143–157, Miami, FL, USA, June 2009.