Three-Dimensional Soft Tissue Simulation in Orthognathic Surgery

Paul M. Thomas DMD, MS, FDS[RCSEd]

Atlas of the Oral and Maxillofacial Surgery Clinics of North America, 2020-09-01, Volume 28, Issue 2, Pages 73-82, Copyright © 2020 Elsevier Inc.



Key points

  • Image and volume fusion precision affects soft tissue simulation accuracy.

  • Soft tissue response is not linear and requires regression algorithms.

  • Soft tissue response algorithms vary with facial type.

  • Simulation accuracy varies with the anatomic region of concern, being most accurate in the midline and most variable laterally.

  • Although somewhat flawed, 3-dimensional soft tissue simulation remains the best option for patient communication.


Introduction: nature of the problem

Treatment success in orthognathic surgery depends on a stable, functional occlusal correction leading to an outcome that is esthetically pleasing and acceptable to the patient. Although the two criteria generally go hand in hand, it is entirely possible to correct the malocclusion and still fail to meet the patient’s expectations for facial appearance. Legacy planning techniques and careful plan execution have produced, and continue to produce, acceptable accuracy when the skeletal/occlusal outcome is analyzed. The overlying soft tissue response, however, has been difficult to forecast and has been limited to profile representations. Early efforts involved cephalometric tracings from a standardized lateral radiograph. The planned soft tissue profile changes were drawn freehand using response ratios derived from published research data. These ratios were often flawed by data averaging and by the pooling of various facial types to produce a larger sample ( Fig. 1 ). Another option for portraying appearance changes involved cutting and reassembling life-size photographic transparencies to approximate the desired profile changes.
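
To make the averaging problem concrete, consider a deliberately simplified numeric sketch; the ratios below are invented for illustration and are not taken from any of the cited studies.

# Hypothetical soft:hard tissue response ratios at the lower lip (mm per mm of
# mandibular advancement). Values are illustrative only, not legacy study data.
competent_lips   = [0.92, 0.88, 0.90]   # subgroup with lips in repose
incompetent_lips = [0.42, 0.38, 0.40]   # subgroup with everted, incompetent lips

pooled = competent_lips + incompetent_lips
pooled_ratio = sum(pooled) / len(pooled)            # about 0.65
print(f"pooled 'average' ratio: {pooled_ratio:.2f}")

advancement = 6.0  # mm of planned mandibular advancement
for label, ratios in (("competent", competent_lips), ("incompetent", incompetent_lips)):
    true_ratio = sum(ratios) / len(ratios)
    error_mm = (pooled_ratio - true_ratio) * advancement
    print(f"{label} lips: pooled ratio mispredicts lip position by {error_mm:+.1f} mm")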

Fig. 1
Soft tissue response taken from legacy research is flawed by data averaging in patients having similar surgery but different lip posture. The lip response to mandibular advancement would be very different in each. This explains the wide perioral variation when legacy data are used as the basis for 3D simulation algorithms. (A-C) All tracings are of patients with mandibular deficiency, but the varying soft tissue posture would require customized soft tissue response ratios for an accurate simulation.

The advent of the personal computer brought efficiency to data gathering and manipulation. Software programs allowed the previously hand-drawn data to be digitized and printed or plotted. Skeletal segments could be moved on-screen, and the soft tissue profile changes were generated using the legacy response data. Program evolution allowed coupling of cephalometric data with a profile digital image that could be morphed, providing a simulation that was easier for both clinician and patient to understand. It became obvious, however, that software programs varied in their ability to produce a soft tissue simulation that approximated the actual outcome. Smith and colleagues completed a perceptual study in which orthodontists, surgeons, and members of the lay public scored the likeness of 2-dimensional (2D) computer-based simulations to the actual treatment outcome. Previous research on accuracy had been landmark specific but had not addressed visual assessment. The following observations resulted from the study:

  • Some programs have default soft tissue response ratios that are “hard coded”

  • Default response ratios were least effective in producing simulations in patients having vertical facial excess or deficiency

  • Programs allowing creation of ratios specific to facial type produce the best simulations

  • Most reported “prediction” ratios are linear

  • Although software response ratios are linear, actual soft tissue response is not

The limitation of 2D computer-aided planning remains the inability to produce other facial views. Patients rarely observe their own profile except in a photograph or with multiple mirrors. Although the frontal view can be altered freehand using a combination of computer morphing and cut and paste, changes in the oblique, submentovertex, and coronal views cannot be simulated ( Fig. 2 ).

Fig. 2
Freehand treatment simulation is a quick option to demonstrate frontal view changes in asymmetry correction. The image on the right shows soft tissue alteration from correcting the maxillary cant and moving the chin to the midline. Although insufficient for actual planning, the images serve as a basis for further discussion.


Three-dimensional virtual planning in orthognathic surgery

Three-dimensional (3D) reformation of computed tomography (CT) data has been possible since the late 1970s, but its use was limited by image quality and processing time. Development of multidetector row CT in the early 1990s created a viable option. The initial programs for craniofacial virtual planning could import these data, but scanner access, radiation concerns, and the associated cost limited routine usage. In the late 1990s, the development of flat-panel CT, commonly known as cone beam CT (CBCT), opened the door for widespread use of the technique in multiple disciplines, including oral and maxillofacial surgery. Cost largely ceased to be an issue because of much lower hardware expense, machine availability, and short acquisition times. Currently, good resolution with isotropic voxels, limited noise, and much lower radiation exposure have made CBCT the imaging choice for 3D planning in the craniofacial region.


Freehand three-dimensional facial morphing

Contemporary 3D virtual planning for orthognathic surgery requires, at a minimum, a DICOM (Digital Imaging and Communications in Medicine) volume of the area of interest and high-resolution dental models. 3D digital facial images are not necessary but are ideal for patient education. The alternative of freehand 3D facial morphing offers an option for communication during preliminary discussions, when the patient has not yet made a firm commitment to proceeding with treatment. FaceGen was originally developed as a virtual sketch pad for law enforcement and a tool for “high-end” video gamers to create their own avatars ( https://facegen.com/ ). Frontal and profile digital images are quickly linked by mapping 20 landmarks on the image pair. Algorithms create a wire-mesh model using these landmarks and overlay photorealistic shadowing and pigmentation ( Fig. 3 ). Processing time on a contemporary laptop or workstation is 90 seconds. The resulting image can be rotated to any orientation, and morphing can be done by click and drag or with a series of interactive scroll bars that modify different facial regions ( Fig. 4 ). The result can be saved in a variety of graphics formats and added to the diagnostic record. The photorealistic image compares favorably with the actual treatment outcome ( Figs. 5 and 6 ).

Fig. 3
Three-dimensional freehand morphing is a more sophisticated option for use in preliminary patient discussion. Record acquisition is limited to profile and frontal photographs that are linked by digitizing anatomic soft tissue landmarks ( https://facegen.com/ ).

Fig. 4
The software quickly creates a wire-mesh face (A) that is combined with photorealistic pigmentation and shadowing (B) to produce a 3D likeness (C) that can be altered. Basic hair styles can be added to create a more realistic image (D) ( https://facegen.com/ ).

Fig. 5
Changes with morphing are based on viewer perception. Options include “click and drag” on the image itself or the use of a series of interconnected sliders that control specific facial regions ( https://facegen.com/ ).

Fig. 6
These images demonstrate the simulation possible with freehand 3D morphing versus the actual treatment outcome. Such software programs are useful tools in helping patients decide whether to proceed with more involved diagnostic records and treatment planning ( https://facegen.com/ ).


Data-driven three-dimensional virtual planning

Comprehensive 3D planning requires at a minimum a DICOM craniofacial volume and high-resolution virtual dental models. Although a 3D photo is desirable, in the absence of an expensive 3D camera system, a 2D photo-wrap is more than adequate for patient communication ( Figs. 7 and 8 ). Once the data have been imported, segmented, and oriented, the planning process involves a sequence of steps leading to the virtual surgery:

  • Fusion of high-resolution virtual models to the DICOM volume

  • 2D photo-wrap or fusion of a 3D photo to the DICOM volume soft tissue (optional)

  • Identification of volumetric segments to be mobilized

  • Conversion of volumetric segments to surface data (see the sketch following this list)

  • Noise cleanup

  • Osteotomy design

  • Landmark identification: hard tissue mandatory, soft tissue optional but desirable

  • 3D treatment planning

  • Surgical guide design and export for 3D printing
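
Two of the listed steps, conversion of a volumetric segment to surface data and basic noise cleanup, can be sketched with open-source tools. This is a minimal illustration assuming the segment of interest has already been isolated as a binary voxel mask saved to a hypothetical file; it does not reproduce how any commercial planning package implements these steps.

import numpy as np
from scipy import ndimage
from skimage import measure
import trimesh

# 'mask' is assumed to be a binary array (1 = voxels of the segment to be
# mobilized) already segmented from the CBCT volume; the file name is hypothetical.
mask = np.load("mandible_mask.npy")

# Light smoothing reduces voxel stair-stepping before surfacing.
smoothed = ndimage.gaussian_filter(mask.astype(float), sigma=1.0)

# Marching cubes converts the volumetric segment to surface data (vertices, faces).
verts, faces, normals, values = measure.marching_cubes(smoothed, level=0.5)

# Simple noise cleanup: keep the largest connected piece, then export as STL.
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh = max(mesh.split(only_watertight=False), key=lambda m: m.area)
mesh.export("mandible_segment.stl")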

Fig. 7
A 2D photo-wrap is a software alternative to a 3D photo produced by separate hardware that is expensive and requires dedicated clinic space. Anatomic regions from a frontal digital image are mapped onto the soft tissue from the CBCT volume. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Fig. 8
The software produces a wire mesh with photorealistic pigmentation and shadowing on the volume soft tissue. This eliminates the image fusion process required when merging a separate 3D photo with the DICOM volume ( https://www.dolphinimaging.com/ ).


Landmark identification and image fusion

Soft tissue simulation can be no better than the weakest step in the process. Image fusion, for both the 3D photo and the digital models, involves fitting a smooth wire-frame skin (stereolithography [STL] file) to a volume composed of voxel peaks and valleys. Computer scientists would question whether the STL is fused to the valleys, the peaks, or an interpolated space somewhere in the middle. Although this may concern mathematicians, the variability is of little, if any, clinical consequence when using a high-resolution volume. Fusion of 3D photos is further affected by the volume and the photo being acquired at different time points ( Fig. 9 ). Soft tissue drape in the volume can be altered by patient position (supine vs upright), head-stabilizing devices, and movement that may occur during scan acquisition. Two-dimensional photo wrapping negates the time point issue but is still subject to the variables associated with soft tissue drape. Both methods produce a usable image pair, but at a considerable difference in expense and time ( Fig. 10 ).
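
The sub-voxel ambiguity described above can be illustrated with a toy one-dimensional example; the intensity values are invented, and the linear interpolation stands in for whatever surface-extraction rule a given program actually applies.

import numpy as np

# Toy intensity profile across the skin boundary, one value per voxel.
voxel_size_mm = 0.4
profile = np.array([20.0, 35.0, 60.0, 140.0, 410.0, 480.0])   # air -> soft tissue

def crossing_mm(profile, threshold, voxel_size_mm):
    """Interpolated position (mm) where the profile first crosses the threshold."""
    i = int(np.argmax(profile >= threshold))       # first voxel at or above threshold
    frac = (threshold - profile[i - 1]) / (profile[i] - profile[i - 1])
    return (i - 1 + frac) * voxel_size_mm

# Two plausible skin thresholds place the fitted surface at slightly different depths.
for thr in (100.0, 250.0):
    print(f"threshold {thr:5.0f}: surface at {crossing_mm(profile, thr, voxel_size_mm):.2f} mm")
# The two positions differ by only a fraction of a millimeter, which is why the
# peak-versus-valley question has little clinical consequence at high resolution.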

Fig. 9
This colormap displays the discrepancies in fusing a 3D photo from an upright patient with a supine-position CT volume. Although there are areas of very good juxtaposition, there is less accuracy in the perioral region due to soft tissue drape and head posture differences between the 2 studies. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Fig. 10
Three-dimensional photos are a noninvasive data collection method. The data are precise and can be used for analysis of actual postoperative change versus soft tissue simulation. In the actual planning process and for patient education, the photo-wrap (B) is certainly more economical in terms of time and expense than the 3D photo (A). Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Landmark identification on 3D surfaces is an additional concern. The landmarks used are derived from a mix of cephalometric and anthropometric methods. The flaws in using cephalometrics to define facial shape were illustrated by Moyers and Bookstein. Even setting aside the theoretic flaws, the variability in cephalometric landmark identification is significant, as demonstrated by Baumrind and Frantz. Gwilliam and colleagues found similar variability among observers in identifying soft tissue landmarks on 3D facial images. Standard deviations greater than 1 mm were found for the vast majority of landmarks measured. Like Baumrind and Frantz, they observed that an ellipsoid of error was associated with most landmarks. Because the soft tissue movement in 3D virtual planning software depends on accurate identification of hard tissue landmarks and the associated soft tissue landmarks lying on the surface, the end user must be mindful of these limitations.
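
The ellipsoid-of-error concept can be made concrete with a short sketch; the repeated landmark coordinates below are fabricated to mimic several observers identifying the same point and are not data from the cited studies.

import numpy as np

# Fabricated repeated identifications (x, y, z in mm) of one soft tissue landmark.
picks = np.array([
    [101.2, 54.8, 23.1],
    [101.9, 55.1, 23.4],
    [100.8, 54.5, 22.7],
    [101.5, 55.6, 23.0],
    [102.1, 54.9, 23.6],
    [101.0, 55.3, 22.9],
])

cov = np.cov(picks, rowvar=False)            # 3 x 3 covariance of the scatter
eigvals, eigvecs = np.linalg.eigh(cov)       # principal axes of the error ellipsoid

axis_sd_mm = np.sqrt(eigvals)                # SD along each principal axis
print("SD along ellipsoid axes (mm):", np.round(axis_sd_mm, 2))
print("overall 3D scatter (mm):", round(float(np.sqrt(eigvals.sum())), 2))
# Unequal axis SDs mean the identification uncertainty is an elongated ellipsoid
# rather than a sphere, the pattern reported for most landmarks.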


Algorithms: complexity or simplicity?

Recognizing the variability in fusion and landmark identification, the question remains: what algorithms are used to produce the virtual soft tissue response to hard tissue movement? Schendel and Lane have described a “mass-spring” biomechanical model that involves a myriad of nonlinear connector points between the wire frame of the 3D photo and the underlying skeletal structures. This is proposed to address the problem of variable response due to the thickness and stiffness of the overlying soft tissues. Their results indicated that the average difference between simulation and actual outcome was less than a millimeter, both by root mean square calculation and at discrete landmark points. Knoops and colleagues advocate a probabilistic finite element method that allows latitude for manual input of patient-specific differences but requires user sophistication with the software. Regardless of the mathematical model, the greatest variability is seen in the lips and perioral region, as is the case in most 2D and 3D simulation accuracy studies.
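
As a deliberately simplified, generic sketch of the mass-spring idea (not the model actually published by Schendel and Lane): each skin node is pulled toward the displaced bone beneath it and toward its neighbors, and the mesh is relaxed iteratively until the forces balance. All constants are invented.

import numpy as np

# Toy strip of 9 skin nodes over a bone segment that advances 5 mm.
n = 9
skin = np.zeros((n, 1))                    # current advancement of each skin node (mm)
bone_shift = np.zeros((n, 1))
bone_shift[3:6] = 5.0                      # only the middle nodes overlie moved bone

k_bone = np.where(bone_shift.ravel() != 0, 0.6, 0.1)[:, None]  # skin-to-bone stiffness
k_neigh = 1.0                                                  # neighbor spring stiffness

for _ in range(2000):                      # simple relaxation toward equilibrium
    force = k_bone * (bone_shift - skin)
    force[1:]  += k_neigh * (skin[:-1] - skin[1:])    # pull from left neighbor
    force[:-1] += k_neigh * (skin[1:] - skin[:-1])    # pull from right neighbor
    skin += 0.05 * force

print(np.round(skin.ravel(), 2))
# Nodes over the advanced bone move most, their neighbors progressively less;
# the response is graded rather than a single linear ratio of the bone movement.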

An alternative approach is to apply the considerable volume of 2D response data to the 3D simulation process. Because all 2D data involve changes in the midsagittal plane, the obvious unknowns are the tissues that lie laterally. A secondary concern is that the vast majority of the data are linear. Although the percentage of soft tissue response to hard tissue movement can be varied in the software, it remains linear throughout the entire range of simulated movement. The software used in the illustrations (Dolphin Imaging version 12 build 20) allows user customization of midsagittal response ratios. The parasagittal tissues are modeled with a combination of linear and nonlinear 3D real-time morphing techniques based on a set of discrete landmarks (Swanwa Liao, PhD, 3D developer and software engineer, Dolphin Imaging and Management Solutions, Chatsworth, CA, personal communication, 2019). As Smith and colleagues showed, different ratio sets are required for different facial types. The 3D soft tissue ratio editor gives the end user the option to modify these ratios based on clinical experience and retrospective outcome data ( Fig. 11 ). In addition, user-friendly tools can be used to adjust lip posture, which remains the “art” portion of simulation because of the highly variable response of this region to surgical movement ( Fig. 12 ).
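
A minimal sketch of the ratio-editing idea follows; the ratio values and the Gaussian lateral falloff are invented placeholders, not the rules built into Dolphin or any other package.

import numpy as np

# Hypothetical midsagittal lower lip response ratios (soft:hard tissue) by facial type.
RATIOS = {"normal": 0.80, "vertical_excess": 0.60, "vertical_deficiency": 0.95}

def simulated_lip_advance(hard_advance_mm, facial_type, lateral_mm):
    """Soft tissue advancement at points spaced laterally from the midline.

    A Gaussian falloff stands in for the software's parasagittal morphing."""
    falloff = np.exp(-(np.asarray(lateral_mm, dtype=float) / 25.0) ** 2)
    return hard_advance_mm * RATIOS[facial_type] * falloff

lateral = np.array([0, 10, 20, 30, 40])    # mm from the midsagittal plane
for ftype in RATIOS:
    print(ftype, np.round(simulated_lip_advance(6.0, ftype, lateral), 2))
# The same 6 mm advancement yields a different midline lip response for each
# facial type and progressively less simulated movement laterally.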

Fig. 11
Legacy 2D research has shown that hard-coded algorithms will not produce an accurate soft tissue treatment response for all facial types. To achieve the best simulation possible, the end user needs the flexibility to create and apply custom algorithms based on facial pattern and clinical experience. The interface shown provides the tools needed to input the data specific to that task. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Fig. 12
Research has shown the perioral region, specifically the lower lip, to be the most variable area in terms of treatment response. Creating pleasing lip posture is a mixture of science and art that is facilitated by software tools designed for that purpose, as seen in the lower images. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).


Virtual reality versus reality

Assessment of the postoperative outcome versus the predicted soft tissue response involves metrics or a mixture of metrics and perceptual techniques. Side-by-side comparison is certainly the simplest method but yields no useful data unless graded by observers on a Likert or visual analogue scale ( Figs. 13 and 14 ). The actual assessment of differences between simulation and outcome requires superimposition of the preoperative and postoperative data sets. Superimposition methods include the following:

  • Landmark-based superimposition that is subject to the errors of manual landmark identification.

  • Surface-based registration that can be affected by volume segmentation and interpolation method.

  • Volume-based superimposition that is considered to have the lowest variability ( Figs. 15 and 16 ).

Fig. 15
Volume-on-volume superimposition of the preoperative and postoperative DICOM data can be viewed from all dimensions and analyzed with a mix of metric and colorimetric methods. Accuracy in achieving the plan can be assessed by superimposition of STL (surface data) files from the skeletal simulation over the actual skeletal result. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Fig. 16
The best accuracy for volume comparison is achieved by voxel-to-voxel matching of the unchanged anatomy on the preoperative and postoperative images. An area of the cranial base is defined, and auto-superimposition algorithms quickly provide exact alignment. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Fig. 13
The use of customized soft tissue rules and minor lip adjustments has produced a soft tissue simulation (A) that compares favorably with the actual outcome profile (B). Provided the planned skeletal movements have been achieved, the actual outcome is invariably more natural and pleasing in appearance. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Fig. 14
The soft tissue simulation (A) can be viewed from all dimensions during the planning process and closely matches the outcome (B). In addition to lip response, the eyes and nostrils are a challenge for both 2D photo wrapping and 3D photos but still provide a close likeness. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).

Once superimposed, the simulated versus actual outcome can be assessed by statistical methods, colorimetric methods, or a blend of the two. For example, the root mean square error (or deviation) quantifies how far the 3D mesh points of the simulation lie from the corresponding points of the actual outcome. Colorimetric methods ( Fig. 17 ) involve a mix of visual assessment and measurement of the differences between the actual and simulated outcomes.
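
As a minimal generic sketch (the point coordinates are fabricated, and this is not the registration code of any particular program): matched simulated and actual point sets are rigidly aligned with a least-squares (Kabsch) fit, the root mean square error is computed, and per-point distances are binned the way a color map would display them.

import numpy as np

def rigid_align(moving, fixed):
    """Least-squares (Kabsch) rigid superimposition of matched 3D point sets."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (moving - mc) @ R.T + fc

# Fabricated matched points: simulated soft tissue surface versus actual outcome.
rng = np.random.default_rng(0)
actual = rng.uniform(0, 80, size=(200, 3))
simulated = actual + rng.normal(0, 0.8, size=actual.shape) + np.array([2.0, -1.0, 0.5])

aligned = rigid_align(simulated, actual)
dists = np.linalg.norm(aligned - actual, axis=1)
print(f"RMSE after superimposition: {np.sqrt((dists ** 2).mean()):.2f} mm")

# Crude stand-in for a colormap: count points falling in each distance band.
bands = [0.0, 0.5, 1.0, 2.0, np.inf]
counts, _ = np.histogram(dists, bins=bands)
for lo, hi, c in zip(bands[:-1], bands[1:], counts):
    print(f"{lo} - {hi} mm: {c} points")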

Fig. 17
Analogous techniques are used to compare the simulated soft tissue response with the actual soft tissue outcome. The surface image from the treatment plan is superimposed on the volumetric soft tissue using manual “best fit” methods or automated algorithms. Variability is typically seen in the perioral region and predominantly in the lower lip. Images rendered with Dolphin 3D Surgery software ( www.dolphinimaging.com/ ).


Summary

Three-dimensional virtual orthognathic planning is becoming increasingly commonplace due to the availability of user-friendly software in addition to service laboratories that can accomplish much of the work before finalizing a plan. Contemporary CBCT low-radiation protocols have all but eliminated the concerns with obtaining volumetric data on postoperative patients. The accuracy of 3D soft tissue simulation varies with the software used, but the variation is relatively small and acceptable for clinical use. Visual learning has always been a key means of communicating projected treatment outcomes to patients. Phillips and colleagues found imaging to be the best information source when compared with other physical records. Simulated soft tissue response in treatment planning will continue to improve as postoperative data are used in software development and refinement.


Acknowledgments

The author would like to acknowledge and express gratitude to Lindsay Winchester, BDS, LDS DOrth, MSc, MOrth, FDS, and Ed Lin, DDS, MS, for facilitating the procurement of various patient images used in this article.


Disclosure

The author is a technical advisor for Dolphin Imaging and Management Solutions.


References

1. Smith J.D., Thomas P.M., Proffit W.R.: A comparison of current prediction imaging programs. Am J Orthod Dentofacial Orthop 2004; 125: 527-536.

2. Moyers R.E., Bookstein F.L.: The inappropriateness of conventional cephalometrics. Am J Orthod 1979; 75: 599-617.

3. Baumrind S., Frantz R.C.: The reliability of head film measurements: landmark identification. Am J Orthod 1971; 60: 111-127.

4. Gwilliam J.R., Cunningham S.J., Hutton T.: Reproducibility of soft tissue landmarks on three-dimensional facial scans. Eur J Orthod 2006; 28: 408-415.

5. Schendel S., Lane C.: 3D orthognathic surgery simulation using image fusion. Semin Orthod 2009; 15: 48-56.

6. Schendel S., Jacobson R., Khalessi S.: 3-Dimensional facial simulation in orthognathic surgery: is it accurate? J Oral Maxillofac Surg 2013; 71: 1406-1414.

7. Knoops P.G.M., Borghi A., Breakey R.W.F., et al.: Three-dimensional soft tissue prediction in orthognathic surgery: a clinical comparison of Dolphin, ProPlan CMF, and probabilistic finite element modelling. Int J Oral Maxillofac Surg 2019; 48: 511-518.

8. Borba A.M., da Silva E.J., da Silva A.L.F., et al.: Accuracy of orthognathic surgical outcomes using 2- and 3-dimensional landmarks—the case for apples and oranges? J Oral Maxillofac Surg 2018; 76: 1746-1752.

9. Almukhtar A., Ju X., Khambay B., et al.: Comparison of the accuracy of voxel based registration and surface based registration for 3D assessment of surgical change following orthognathic surgery. PLoS One 2014; 9: e93402.

10. Ghoneima A., Cho H., Farouk K., et al.: Accuracy and reliability of landmark-based, surface-based and voxel-based 3D cone-beam computed tomography superimposition methods. Orthod Craniofac Res 2017.

11. Phillips C., Hill B.J., Cannac C.: The influence of video imaging on patient's perceptions and expectations. Angle Orthod 1995; 65: 263-270.
