Automated assessment of mandibular shape asymmetry in 3-dimensions



American Journal of Orthodontics and Dentofacial Orthopedics, 2022-05-01, Volume 161, Issue 5, Pages 698-707, Copyright © 2022


Introduction

This study aimed to develop an automatic pipeline for analyzing mandibular shape asymmetry in 3-dimensions.

Methods

Forty patients with a skeletal Class I pattern and 80 patients with a skeletal Class III pattern were included. The mandible was automatically segmented from the cone-beam computed tomography images using a U-net deep learning network. A total of 17,415 uniformly sampled quasi-landmarks were automatically identified on the mandibular surface via a template mapping technique. After alignment with robust Procrustes superimposition, the pointwise surface-to-surface distance between the original and reflected mandibles was visualized in a color-coded map, indicating the location of asymmetry. The degree of overall mandibular asymmetry and the asymmetry of subskeletal units were scored using the root-mean-squared error between the left and right sides. These asymmetry parameters were compared between the skeletal Class I and skeletal Class III groups.

Results

Mandibular shape was significantly more asymmetrical in patients with a skeletal Class III pattern and positional asymmetry. The condyles were identified as the most asymmetric region in all groups, followed by the coronoid process and the ramus.

Conclusions

This automated approach to quantify mandibular shape asymmetry will facilitate high-throughput image processing for big data analysis. The spatially-dense landmarks allow for evaluating mandibular asymmetry over the entire surface, which overcomes the information loss inherent in conventional linear distance or angular measurements. Precise quantification of the asymmetry can provide important information for individualized diagnosis and treatment planning in orthodontics and orthognathic surgery.

Highlights

  • Develop an automatic pipeline for mandibular shape asymmetry assessment in 3-dimensions.

  • Automatically segment the mandible from CBCT images using a U-net deep learning network.

  • Automatically identify spatially-dense landmarks on the entire mandibular surface.

The mandible is the primary moving and functioning bone in the craniofacial skeleton and plays a central role in determining facial morphology and esthetics. Facial asymmetry can be caused by a discrepancy in the size and shape of the 2 halves of the mandible (shape asymmetry) or by a misalignment between the midface and the mandible (positional asymmetry). Shape asymmetry of the mandible is a common craniofacial deformity that occurs in a diverse set of congenital and acquired conditions such as craniofacial microsomia, trauma, fracture, arthritis, or infection of the temporomandibular joints. An asymmetrically shaped mandible can coexist with positional asymmetry of the mandible. Imbalanced occlusion and abnormal stress distribution on the articular surface could affect condylar modeling during the active growth period. Alternatively, the unpredictable nature of growth can result in progressive mandibular shape deformity with age.

Detecting and quantifying asymmetry is important to clinicians, facilitating more accurate differentiation and diagnosis of the causes of asymmetry and more effective treatment planning. Traditionally, posteroanterior cephalograms and submentovertex radiographs are taken to determine the presence and degree of mandibular asymmetry. In the classic triangulation method, the left and right sides of the mandible are simplified as the triangles between the condylar point, gonion, and menton, and the shape asymmetry is measured as the difference between the 2 sides. Others evaluate mandibular asymmetry using a reference midline, often generated by connecting median landmarks or by bisecting the lines connecting bilateral landmarks of the midface. Differences are then compared between pairs of corresponding linear distances measured perpendicular to the reference midline. In general, landmark placement is difficult in 2-dimensional (2D) planes because spatially separate structures are projected onto overlapping positions in the 2D image plane. Manual landmarking is laborious and requires a skilled operator with anatomic knowledge. Interoperator and intraoperator landmarking variability are important sources of error and inconsistency in linear distance or angle measurements. Moreover, rotation of the mandible relative to the 2D image plane adversely affects the measurement of morphologic asymmetry, making the positional and morphologic asymmetry of the mandible difficult to disentangle from a 2D radiograph.

Computed tomography, either spiral computed tomography or cone-beam computed tomography (CBCT), offers greater precision in measuring craniofacial structures in 3-dimensions (3D). However, the deformity of the mandible has often been summarized as differences in distances, angles, areas, or ratios between the left and right sides of the jaw. Arguably, these do not accurately represent the complex structure of the mandible. In addition, the reproducibility of identifying landmarks on smooth structures such as the condyle is considered a major source of error in these analyses, and different landmark choices can lead to contrasting outcomes. Furthermore, as big data initiatives become increasingly common in dentistry and surgical disciplines, there is pressure to develop fast, automatic, and standardized measurements for patient evaluation in this field.

Therefore, this study aimed to develop an automatic pipeline for mandibular shape asymmetry assessment. This pipeline comprises automatically segmenting the mandible from CBCT images, identifying spatially-dense landmarks on the mandibular surface, and comparing the original and reflected copies of the images to determine the asymmetry (Fig 1). We illustrate this method by comparing mandibular shape asymmetry between adults with a skeletal Class I pattern and those with a skeletal Class III pattern.

Fig 1. The automated mandibular shape asymmetry assessment pipeline.

Material and methods

Patient records were retrospectively collected at the Department of Orthodontics at Peking University School of Stomatology from 2015-2018. We selected 120 adult subjects (aged >18 years) whose CBCT scans were taken for clinical indications. The patients were divided into a skeletal Class I group (40 subjects; mean age, 20.32 ± 3.78 years) and a skeletal Class III group (80 subjects; mean age, 21.20 ± 4.65 years) on the basis of the ANB angle (normal value, 2.7°; standard deviation, 2.0°). Patients were further divided by the positional asymmetry of the mandible, defined by manually measuring the distance from the hard-tissue menton point to the midsagittal reference plane in the CBCT images; a distance >4 mm was taken as an indication of positional asymmetry of the mandible. Finally, 3 subgroups were constituted. Group 1: patients with a skeletal Class I pattern without positional asymmetry (n = 40); group 2: patients with a skeletal Class III pattern without positional asymmetry (n = 40); and group 3: patients with a skeletal Class III pattern with positional asymmetry (n = 40). The following criteria also had to be fulfilled: (1) Chinese ethnicity; (2) no multiple missing teeth other than third molars; and (3) no congenital diseases affecting growth and development, no previous craniofacial surgery, facial fractures, or facial surgery, no degenerative disease in the temporomandibular joint, and no craniofacial anomalies. Ethical approval was obtained from the Research Ethics Committee of the Peking University School and Hospital of Stomatology (PKUSSIRB-202057109). Written informed consent was obtained from all participants.

All CBCT scans were obtained from the same device (NewTom 9000; Quantitative Radiology, Verona, Italy). Patients were instructed to sit naturally upright, close their mouths in maximum intercuspation, and relax their lips. The field of view in the selected samples was 16 × 13 cm or 17 × 23 cm, with a scan time of 18.0-26.9 seconds. Exposure parameters for CBCT images were 120 kVp and 3-8 mA. The original isotropic voxel size was 0.5 mm³.

The mandible was segmented from each CBCT image using a 3D U-net architecture, a deep-learning-based automatic segmentation approach. The framework of the automatic segmentation is shown in Figure 2. Briefly, the segmentation network was trained using 48 segmented CBCT images. For these 48 images, segmentation was performed case by case with the ITK-SNAP open-source software (http://www.itksnap.org/pmwiki/pmwiki.php). This requires an initial segmentation using global thresholding to grossly segment the main body of the mandible, followed by the selection of seed points and a “region competition snake” algorithm to capture finer regions of interest, such as the condyles.

Fig 2. Mandible segmentation framework. Conv, convolution layer.

For the automatic segmentation approach, the original CBCT images were cropped into patches of 192 × 192 × 192 voxels in the training and inference stages because of the limitation of graphics processing unit memory. The network had an encoder-decoder structure with long skip connections. The encoder compressed the image patch into low-resolution feature maps, and the decoder estimated, at each voxel, the probability that it belongs to the mandible. In this study, the encoder had 5 ResNet-like blocks, each followed by a 2 × 2 × 2 average pooling layer. The decoder had 4 ResNet-like blocks and two 3 × 3 × 3 convolutional layers, followed by an upsampling layer. Each ResNet block consisted of two 3 × 3 × 3 convolutional layers for feature extraction and one 1 × 1 × 1 convolutional layer for the residual connection. An instance normalization layer and Leaky ReLU activation followed each convolutional layer. The feature volumes in the decoder stage were composed of volumes from the preceding layers in the decoder and those from the encoder at the same resolution. The output of the decoder was the mandible segmentation of each patch. We used a cross-entropy loss function to train the segmentation network. In the inference stage, the mandible segmentations of the patches from each CBCT image were merged to obtain the final mandible segmentation. An overlapping sliding-window method was used to crop patches with a stride of 60 × 60 × 60 voxels. For the overlapping areas, we averaged the per-voxel probability of belonging to the mandible to obtain the final mandible prediction. The segmentation network was implemented using the open-source PyTorch framework.
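As a rough illustration only (not the authors' released code), the ResNet-like building block described above, with two 3 × 3 × 3 convolutions, a 1 × 1 × 1 residual projection, and instance normalization plus Leaky ReLU after each convolution, could be sketched in PyTorch as follows. Channel counts and the demo patch size are assumptions made for the example.

```python
# Minimal sketch of a ResNet-like 3D block as described in the text:
# two 3x3x3 convolutions plus a 1x1x1 residual projection, each convolution
# followed by instance normalization and Leaky ReLU. Channel counts and the
# demo patch size are illustrative assumptions, not the authors' configuration.
import torch
import torch.nn as nn


class ResBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )
        # The 1x1x1 convolution carries the residual connection when the
        # number of channels changes between input and output.
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x) + self.skip(x)


if __name__ == "__main__":
    # A single-channel patch; 64^3 is used here only to keep the demo light,
    # whereas the paper crops 192 x 192 x 192 patches.
    patch = torch.randn(1, 1, 64, 64, 64)
    print(ResBlock3D(1, 16)(patch).shape)  # torch.Size([1, 16, 64, 64, 64])
```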

Both the ITK-SNAP software and the proposed automatic method, which is implemented with an open-source convolutional neural network framework, are freely available. The time efficiency and accuracy of the automatic mandible segmentation approach were then compared against ITK-SNAP. Twenty new CBCT images were used for the validation test, with the ITK-SNAP segmentation considered the ground truth. The ITK-SNAP and automatically segmented mandibles were compared using the Dice similarity coefficient, which assesses to what degree the same voxels are selected by each segmentation; this index ranges from 0 (no overlap) to 1 (complete overlap). The Average Hausdorff Distance was used to evaluate the discrepancy between the outer surfaces of the mandibles segmented by each approach.
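For concreteness, a minimal sketch of these two validation metrics on binary segmentation masks is given below. It is not the authors' implementation: the surface step approximates each mandibular surface by the boundary voxels of the mask, and the function names and 0.5-mm spacing argument are assumptions.

```python
# Sketch of the two validation metrics described above, computed on binary
# segmentation masks. The boundary-voxel surface is an approximation, not
# the authors' exact surface-distance implementation.
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree


def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """2|A ∩ B| / (|A| + |B|) for two boolean volumes."""
    a, b = a.astype(bool), b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))


def surface_points(mask: np.ndarray, spacing_mm: float = 0.5) -> np.ndarray:
    """Physical coordinates (mm) of the boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    boundary = mask & ~ndimage.binary_erosion(mask)
    return np.argwhere(boundary) * spacing_mm


def average_hausdorff(a: np.ndarray, b: np.ndarray, spacing_mm: float = 0.5) -> float:
    """Symmetric mean of directed nearest-surface distances, in mm."""
    pa, pb = surface_points(a, spacing_mm), surface_points(b, spacing_mm)
    d_ab = cKDTree(pb).query(pa)[0].mean()   # automatic -> ground truth
    d_ba = cKDTree(pa).query(pb)[0].mean()   # ground truth -> automatic
    return float((d_ab + d_ba) / 2.0)
```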

The outer surface of the mandible was tessellated with the standard marching cubes technique in MATLAB software (https://www.mathworks.cn/help/matlab/ref/isosurface.html). Each mandible was then represented by a surface mesh composed of a dense cloud of points linked to define the mandibular surface.
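The study used MATLAB's isosurface for this step; as a roughly equivalent sketch in Python, the marching cubes implementation in scikit-image produces the same kind of triangulated surface. The input file name is hypothetical.

```python
# Illustrative stand-in for the MATLAB isosurface call described above:
# extract a triangulated mandibular surface from a binary segmentation with
# scikit-image's marching cubes, using the 0.5-mm isotropic voxels from the text.
import numpy as np
from skimage import measure

segmentation = np.load("mandible_mask.npy")        # binary volume (hypothetical file)
verts, faces, normals, values = measure.marching_cubes(
    segmentation.astype(np.float32),
    level=0.5,                                     # iso-level between background (0) and bone (1)
    spacing=(0.5, 0.5, 0.5),                       # isotropic 0.5-mm voxel size
)
# verts: (n_points, 3) vertex coordinates in mm; faces: (n_triangles, 3) vertex indices.
```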

A previously developed open-source template mapping technique was used to automatically identify spatially-dense quasi-landmarks on the mandibular surface. Essentially, a generic mandibular template was represented by 17,415 quasi-landmarks, each defined by x-, y-, and z-coordinates. The template was translated, rotated, and scaled (rigid registration) to roughly align with each target mandible. The template was then deformed into the shape of each mandible via a nonrigid registration. This procedure ensured that a large number of quasi-landmarks covered the entire surface of the bone, including discrete areas such as the ramus, the condyles, and the chin, in which traditional anatomic landmarks are poorly defined by local geometry. After template mapping, each quasi-landmark was a single measurement at a specific anatomic location of the mandible and was in spatial correspondence across all patients (Fig 3). The reflected mandible was generated by reversing the sign of the x-coordinate of each vertex of the original mandible, producing a mirror image of the entire mandible; this reflected mandible was registered by the same template mapping procedure. The accuracy and reproducibility of the template mapping have recently been validated by Verhelst et al. The average Euclidean distance between manual and corresponding automatic landmarks was 1.40 mm for unaltered and 1.76 mm for operated mandibles, respectively. The variation among repeated mappings was 0.0067 mm and 0.0077 mm for pre- and postoperative samples, respectively.
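The reflection step itself is simple to express in code. The following sketch assumes the quasi-landmark coordinates are available as a NumPy array (the file name is hypothetical); as described above, the mirrored copy is then passed through the same template mapping so that landmark correspondence with the original is re-established.

```python
# Minimal sketch of generating the reflected mandible: the sign of the
# x-coordinate of every quasi-landmark is reversed, producing a mirror image
# of the original configuration across the x = 0 plane.
import numpy as np

original = np.load("mandible_quasi_landmarks.npy")   # (17415, 3) array, hypothetical file
reflected = original.copy()
reflected[:, 0] *= -1.0                              # mirror across the x = 0 plane
# The reflected copy is subsequently re-mapped with the same template so that
# its quasi-landmarks correspond to those of the original mandible.
```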

Fig 3. The mandible is represented by spatially-dense quasi-landmarks on the mandibular surface. After template mapping, each quasi-landmark occupies the same position on a given mandible as on all other mandibles.

Shape asymmetry of the mandible was assessed by superimposing it onto its reflected version using a robust Procrustes alignment. The discrepancy between corresponding quasi-landmarks of the 2 configurations indicated where the asymmetry occurred. The difference at each quasi-landmark was projected onto the original configuration and graphically visualized by a color map in millimeters. This indicated the location and magnitude of the mandibular shape asymmetry for each patient.
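As a sketch of this superimposition and per-landmark distance computation, the following uses an ordinary least-squares (Kabsch) Procrustes fit rather than the robust Procrustes variant cited above, and assumes the original and reflected quasi-landmark arrays from the mapping step (file names are hypothetical).

```python
# Sketch of superimposing the reflected configuration onto the original and
# computing the per-landmark asymmetry in mm. Ordinary Procrustes (Kabsch) is
# used here; the study applied a robust Procrustes alignment.
import numpy as np


def procrustes_align(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Rigidly align `moving` to `fixed` (both (n, 3) arrays) by translation and rotation."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    a, b = moving - mu_m, fixed - mu_f
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))        # guard against an improper (reflecting) rotation
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    return a @ rot + mu_f


original = np.load("mandible_quasi_landmarks.npy")    # (17415, 3), hypothetical file
reflected = np.load("reflected_quasi_landmarks.npy")  # (17415, 3), hypothetical file

aligned_reflected = procrustes_align(reflected, original)
per_landmark_mm = np.linalg.norm(original - aligned_reflected, axis=1)
# per_landmark_mm can be rendered on the original surface as the color-coded asymmetry map.
```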

An overall asymmetry index was obtained by calculating the root-mean-squared error between superimposed landmarks of the original and reflected configurations. Teeth were excluded on the basis of the landmark correspondence so that only the nondental component of the mandibular asymmetry was evaluated. Different mandibular regions, including the chin, mandibular body, ramus, condyle, and coronoid process, were further defined on the template by a modified method of Duran et al (Fig 4). The asymmetry indexes of these regions were calculated to assess regional asymmetry. All the analyses were implemented using custom-written code in the Python programming language. The overall asymmetry index and the asymmetry indexes of the mandibular regions were compared among the 3 subgroups using the Kruskal-Wallis H-test in IBM SPSS statistical software (version 23.0; IBM, Armonk, NY). Differences were considered significant at P <0.05.
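Reading the asymmetry index as the root-mean-squared distance between corresponding quasi-landmarks, a minimal sketch of the index and the group comparison might look like the following. The region mask, variable names, and placeholder data are assumptions; the study computed the test in SPSS, whereas scipy.stats.kruskal is used here for illustration.

```python
# Sketch of the RMSE asymmetry index over all landmarks or a labelled region,
# followed by a Kruskal-Wallis H-test across the 3 subgroups.
import numpy as np
from scipy.stats import kruskal


def asymmetry_index(original, aligned_reflected, region_mask=None):
    """Root-mean-squared distance (mm) between corresponding quasi-landmarks."""
    diff = original - aligned_reflected
    if region_mask is not None:              # boolean mask selecting one region's landmarks
        diff = diff[region_mask]
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))


# Placeholder per-patient overall indices for the 3 subgroups (random values
# only to make the snippet runnable; not study data).
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(loc=2.0, scale=0.5, size=40) for _ in range(3))
h_stat, p_value = kruskal(g1, g2, g3)
```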

Fig 4. The 5 mandibular regions defined on the template.

Results

Automatic segmentation of each mandible took 12-30 seconds on an NVIDIA GTX TITAN XP GPU (Nvidia Corporation, Santa Clara, Calif). The compared ITK-SNAP method typically required 15-20 minutes.

Mandibles segmented with the proposed automated method were compared against the ITK-SNAP segmentations. The Dice similarity coefficient was 0.969 ± 0.005, and the Average Hausdorff Distance was 0.035 ± 0.040 mm, indicating almost complete overlap between the automatically segmented mandibles and the ITK-SNAP segmented mandibles. Boundary deviations were predominantly <1 mm over the mandibular surfaces (Fig 5).

Fig 5. The discrepancy between the proposed automatic method and the ITK-SNAP method in 20 test patients. Dark blue indicates no discrepancy between the 2 methods.

The overall mandibular asymmetry index and the regional asymmetry indexes were significantly higher in the skeletal Class III group with positional asymmetry than in the other 2 groups. The condyles were identified as the most asymmetric region in all groups, followed by the coronoid process and the ramus (Table).

Table. Mandibular asymmetry comparison between patients with skeletal Class I and skeletal Class III patterns

Variable | G1 Median (IQR) | G1 Min-Max | G2 Median (IQR) | G2 Min-Max | G3 Median (IQR) | G3 Min-Max | P (G1-G2-G3) | P (G1-G2) | P (G2-G3) | P (G1-G3)
Overall AI | 1.51 (1.28-1.90) | 0.86-3.74 | 1.62 (1.36-2.05) | 0.91-4.96 | 3.24 (2.56-4.05) | 1.13-6.31 | <0.001∗∗ | 0.871 | <0.001∗∗ | <0.001∗∗
Condylar AI | 2.14 (1.86-3.03) | 0.82-6.85 | 2.27 (1.70-3.41) | 1.05-12.95 | 6.77 (4.72-8.88) | 1.66-13.13 | <0.001∗∗ | >0.999 | <0.001∗∗ | <0.001∗∗
Coronoid process AI | 1.78 (1.24-2.38) | 0.80-3.23 | 1.95 (1.27-2.80) | 0.85-3.72 | 2.97 (2.26-3.94) | 0.81-7.95 | <0.001∗∗ | 0.677 | <0.001∗∗ | <0.001∗∗
Ramus AI | 1.53 (1.26-2.07) | 0.83-3.13 | 1.57 (1.37-3.21) | 1.68-2.78 | 2.62 (2.19-3.27) | 1.55-5.64 | <0.001∗∗ | >0.999 | <0.001∗∗ | <0.001∗∗
Mandibular body AI | 1.53 (1.26-2.07) | 1.26-3.13 | 1.42 (1.04-1.68) | 0.81-2.96 | 1.93 (1.43-2.40) | 1.03-3.88 | <0.001∗∗ | 0.183 | <0.001∗∗ | <0.001∗∗
Chin AI | 1.01 (0.76-1.38) | 0.68-3.76 | 1.29 (0.90-1.56) | 0.61-3.31 | 2.43 (1.79-3.18) | 1.02-6.08 | <0.001∗∗ | 0.197 | <0.001∗∗ | <0.001∗∗

Note. G1, patients with skeletal Class I pattern without positional asymmetry; G2, patients with skeletal Class III pattern without positional asymmetry; G3, patients with skeletal Class III pattern with positional asymmetry. Group differences were determined using the Kruskal-Wallis H-test.
IQR, interquartile range (25th, 75th percentile); Min, minimum; Max, maximum; AI, asymmetry index.

∗∗ P <0.001.

Quantification and visualization results are shown for 6 selected patients, indicating the region and severity of mandibular shape asymmetry for patient-based analysis (Fig 6).

Fig 6. Shape asymmetry of the mandible varies from patient to patient. Red areas indicate regions in which the asymmetry was >4 mm, and dark blue indicates no asymmetry.
