Student Conference Medical Engineering Science 2013

Proceedings

by T. M. Buzug et al. (Author)

Anthology 2013 202 Pages

Medicine - Biomedical Engineering

Excerpt

Contents

Segmentation and Registration I
Segmentation of Anatomical Structures using Statistical Shape Models based on Level Sets
M. Schaar, J. Ehrhardt
Development and evaluation of a tool for the generation of probabilistic expert segmentations
C. Steinberg, J. H. Moltz, B. Geisler, H. Handels
ASM-based segmentation of the middle phalanx in digital radiographic images of the hand
G. Kabelitz, R. Dendere, T. S. Douglas
Multiple Sclerosis Lesion Segmentation Using Dictionary Learning and Sparse Coding
N. Weiss, A. Rao, D. Rueckert

Segmentation and Registration II
Lung Fissure Detection Using a Line Enhancing Filter
H. Wendland, T. Klinder, J. Erhardt, R. Wiemker
Joint Non-Linear Registration and Level Set Segmentation of the Left Ventricle in 4D MR Image Data
T. Kepp, J. Ehrhardt, H. Handels
Accelerated Diffeomorphic Non-Rigid 2D Image Registration Using Stokes Regularization
F. Tramnitzke, G. Biros
Optimization of the Detection Performance of Förstner-Rohr-type Operators
in Abdominal and Thoracic Tomographic Image Data
C. Duscha, R. Werner, H. Handels
Medical Image Registration using the Locally Normalized Cross-Correlation:
Application to Multiple Sclerosis Brain MRI
A. Cordes, M. Modat, S. Ourselin

Biomedical Optics I
Comparison of two setups to improve the performance of an open-path gas detector
regarding optical misalignment
G. Fischer, A. Troellsch
Characterization of an Electronic Speckle Pattern Detection System
A. van Oepen, J. Horstmann, R. Brinkmann
Analysis of a diode-pumped Pr3+:LiYF4 crystal in a linear resonator
P. von Brunn
Mid-infrared laser spectroscopy using a tunable gain-switched Cr2+:ZnSe laser
M. Evers, D. Welford, D. Manstein, R. Birngruber
Design and development of a miniaturized scanning probe
L. M. Wurster, W. C. Warger, M. J. Gora, R. Carruth, G. J. Tearney, R. Birngruber

Biomedical Optics II
Stray light rejection by structured illumination
P. K. Fink, D. Hillmann, G. L. Franke, D. Ramm, P. Koch, G. Hüttmann
Temperature induced tissue deformation monitored by dynamic speckle interferometry
K. Bliedtner, E. Seifert, R. Brinkmann
Speckle variance optical coherence tomography for imaging microcirculation
D. Klawitter, D. Hillmann, M. Pieper, P. Steven, J. Wenzel, G. Hüttmann
Polarization-sensitive optical coherence tomography on different tissue samples for tumor discrimination
F. Fleischhauer, H. Schulz-Hildebrandt, T. Bonin, G. Hüttmann
An approach to increase the speed of Optical Coherence Tomography using a Virtually Imaged Phased Array
H. Sudkamp, H. Y. Lee, G. Hüttmann, A. K. Ellerbee
Broadband spectral domain optical coherence tomography
J. Pavlita, C. Winter

Micro- and Nanotechnology
Development and evaluation of a manufacturing technology for an efficient production
of polymer foils for active implants
L. Borchert
Fluorescence Photometric Detection of Europium-doped Citrate Coated Iron Oxide
Magnetic Nanoparticles (Eu-C-MNP)
U. Kalapis, Y. Kobayashi, R. Hauptmann, S. Wagner, J. Schnorr
Investigation of different tissue samples with Micro-CT and MPS for determination
of iron oxide concentration in tracers for MPI
L. Aulmann, K. Lüdtke-Buzug
Characterization of absorption properties of upconversion nanoparticles in the high excitation intensity regime
F. Nguyen, A. Zvyagin, R. Brinkmann

Imaging and Image Computing
Feature analysis for Parkinson’s disease diagnostics in 3D transcranial sonography volumes
N. Traulsen, A. Mertins
Stationary and semi-stationary acquisition in small animal multi-pinhole SPECT:
impact on spatial resolution, noise propagation and image quality
C. Lange, I. Apostolova, M. Lukas, B. Gregor-Mamoudou, W. Brenner, R. Buchert
VAMIR - Visual Analytics for Medical Image Retrieval: Preliminary Study on PET-CT data
F. Nette, A. Kumar, K. Klein, J. Kim, H. Handels
Object recognition in clinical use
D. Münster, A. Schläfer
Applying Texture Features to Cell Classification
C. Kluck

Biomedical Engineering I
Usability testing of the EUROLabLiquidHandler
L. Rudolph
Analysis of an Oral Fluid Drug Testing Device
A. Meffert
Risk management of new liposuction cannulas
C. Berg
Modulation of cortical oscillations in vitro through DC electric field stimulation
J. F. Weinert, M. Perez-Zabalza
Development of a six-axis force/torque sensor with a non-linear calibration as a medical safety device
I. Kuhlemann, R. Bruder, A. Schweikard

Biomedical Engineering II
Acquisition of index fingertip time-resolved optical transmission spectra
Z. Janicijevic, B. Weber, B. Nestler
Graphical tool for target selection and live target tracking for ultrasound
F. Bien, P. Jauer, R. Bruder
Design of a Graphical User Interface used to assess the visual attention behaviour
of upper limb prosthesis users
D. Ewert, F. A. Popa, P. J. Kyberd
Concept for a multi-product steering board for life cycle engineering
R. F. Knobloch, J.-G. Eilers
Synchronization Likelihood as Measure of Brain Connectivity During Visual Perception
F. Guth, H. Chan, P. Kuo, Y. Chen

Conference Chair

Thorsten M. Buzug (Chair), Institute of Medical Engineering, University of Lübeck

Stephan Klein (Co-Chair), Center for Biomedical Technology, University of Applied Sciences Lübeck

Local Coordination

Kanina Botterweck, Medisert, BioMedTec Science Campus

Eugenie Ewert, Medisert, BioMedTec Science Campus

Christian Kaethner, Institute of Medical Engineering, University of Lübeck

Gisela Thaler, Institute of Medical Engineering, University of Lübeck

Scientific Program Committee

Erhardt Barth, Institute of Neuro- and Bioinformatics, University of Lübeck

Reginald Birngruber, Institute of Biomedical Optics, University of Lübeck

Henrik Botterweck, Center for Biomedical Technology, University of Applied Sciences Lübeck

Ralf Brinkmann, Institute of Biomedical Optics, University of Lübeck

Thorsten M. Buzug, Institute of Medical Engineering, University of Lübeck

Jens Christian Claussen, Institute of Neuro- and Bioinformatics, University of Lübeck

Jan Ehrhardt, Institute of Medical Informatics, University of Lübeck

Hartmut Gehring, Clinic of Anesthesiology, University Medical Center Schleswig-Holstein, Campus Lübeck

Heinz Handels, Institute of Medical Informatics, University of Lübeck

Christian Hübner, Institute of Physics, University of Lübeck

Gereon Hüttmann, Institute of Biomedical Optics, University of Lübeck

Josef Ingenerf, Institute of Medical Informatics, University of Lübeck

Stephan Klein, Center for Biomedical Technology, University of Applied Sciences Lübeck

Martin Koch, Institute of Medical Engineering, University of Lübeck

Kerstin Lüdtke-Buzug, Institute of Medical Engineering, University of Lübeck

Alfred Mertins, Institute for Signal Processing, University of Lübeck

Jan Modersitzki, Institute of Mathematics and Image Computing, University of Lübeck

Bodo Nestler, Center for Biomedical Technology, University of Applied Sciences Lübeck

Alexander Schläfer, Institute for Robotics and Cognitive Systems, University of Lübeck

Alfred Vogel, Institute of Biomedical Optics, University of Lübeck

Preface and Acknowledgements

The First Student Conference on Medical Engineering Science was organized on March 29/30, 2012 by the BioMedTec Science Campus Lübeck in cooperation with Norgenta, the North German Life Science Agency, and the technology transfer platform Medisert. Master and diploma students presented their recent research results to a broad public from academia and industry.

Students from the Life Sciences programs at the BioMedTec Science Campus presented their results from projects carried out at the laboratories and institutes of Lübeck's universities, in international research facilities, or in research-oriented industrial companies. The conference focus was placed on topics from medical engineering. Biomedical engineering has been established at the University of Applied Sciences Lübeck for decades, and Medical Engineering Science (MIW) is an important bachelor and master program at the University of Lübeck as well. Both universities jointly offer the international master degree course Biomedical Engineering (BME). This is complemented by further life-science-oriented programs of the University (Computer Science, Medical Computer Science, Mathematics in Medicine and Life Sciences, Molecular Life Science, Medicine), which contribute to the success of the interdisciplinary Medical Engineering Science and Biomedical Engineering programs.

These competencies, which the conference program impressively reflects, also meet the requirements of the biomedical industry. It is well known that Germany lacks graduates of the MINT programs needed to compete on the global market; MINT stands for Mathematics, Informatics, Natural Science and Technology. In Lübeck, one may optionally replace the "M" with Medicine. As the conference program also shows, the fields of imaging and image processing are further foci in Lübeck. Excellent research achievements of institutes, laboratories and clinics at the BioMedTec Science Campus are closely linked with the curricula, so that many students can demonstrate their competencies in project work at renowned national and international research facilities during their master degree program.

Finally, I want to thank all the people who worked with enthusiasm and dedication to make the conference a successful event. Without the financial support of Norgenta, the North German Life Science Agency, and the commitment of Angela Wäsche, this conference would not have been possible. Moreover, my thanks go to the technology transfer platform Medisert of the BioMedTec Science Campus. The professional management of Kanina Botterweck and her Medisert team has contributed substantially to the success of this conference. In the context of the project "Encounter with Research" of the Lübeck Engineering Laboratory (LILa), interested pupils of the upper secondary schools in Lübeck and its surroundings are invited to participate as guests at the Student Conference. I would also like to express my thanks to the coordinators of LILa, Julia Hamer and Tina Anne Schütz, for the organization of this part of the program. Thanks too to the company participants who, in workshops on the first day of the Student Conference, give insights into what companies expect from graduates: Philips Medical GmbH; Birte Loffhagen, Dräger Medical; Pia Jedamzik, Stryker Osteosynthesis; Dr. Ulrich Hoffmeister, Lübeck Chamber of Industry and Commerce; Dr. Frank Schnieders, Provecs Medical GmbH. Especially, and therefore as the final point, I would like to thank Bärbel Kratz personally and on behalf of all colleagues of the BioMedTec Science Campus. Bärbel Kratz from the Institute of Medical Engineering has been the first contact point for all questions of students and the program committee. Her excellent overview of all details of this event was the key to the success of the first Student Conference at the BioMedTec Science Campus.

Lübeck, March 29/30, 2012

Prof. Dr. Thorsten M. Buzug

Vice President of the University of Lübeck

Chair of the Student Conference on Medical Engineering Science 2012

Segmentation and Registration I

Segmentation of Anatomical Structures using Statistical Shape Models based on Level Sets

M. Schaar and J. Ehrhardt

Abstract—We propose to include information about the shape and pose of typical anatomical structures in the segmentation of medical images. Inspired by the work of Tsai et al. [1], we describe how to construct a statistical shape model with an implicit level set representation from a set of training samples. With the aid of the inherent modes of variation of the data, we guide the segmentation process towards meaningful shapes and avoid leaking structures. This approach does not need point distribution models and can handle multidimensional data as well as low-contrast images. Finally, we demonstrate this technique by segmenting human kidneys in computed tomography images and compare the results of the segmentation process with and without shape prior information.

I. INTRODUCTION

The prominence of imaging systems like magnetic resonance (MR) or computed tomography (CT) has grown significantly in recent years, as these techniques offer great possibilities to visualize the human body without painful interventions. In medical image interpretation, certain structures like organs or vessels are often the center of focus and need to be evaluated more precisely. Here, automatic or semi-automatic algorithms are used to group the data according to the grey values of the dataset. Occasionally these approaches fail to segment images which are blurred by low contrast or superimposed by noise.

In [2], Kass et al. proposed to use snakes for segmentation. This approach involves the deformation of an initial contour towards the object boundaries by minimizing an energy functional. Since local image features are used to evolve the contour, it may leak into regions with unsharp borders or low contrast. Furthermore, this approach is unable to handle topology changes like the bifurcation of vessels.

Simultaneously, Osher and Sethian introduced in [3] the representation of object contours as curves embedded in higher-dimensional surfaces. Here, an initial curve evolves towards the object boundaries using an implicit representation scheme based on partial differential equations and energy minimization. One main advantage of this description is that changes in the topology of the object can be tracked during the segmentation process and assigned to the same structure.

In [4], Cootes et al. developed a parametric point distribution model to represent the shape and appearance of certain objects. For this, the mean shape of a set of samples is computed and the natural variation among the training images is calculated. Using linear combinations of the eigenvectors that reflect the deviations, new shapes can be generated. This information can be used to extend the segmentation of flexible objects to prefer meaningful shapes that are present in the training data.

M. Schaar, Medizinische Ingenieurwissenschaft, University of Lübeck; the work has been carried out at the Institute of Medical Informatics, University of Lübeck, Lübeck, Germany (e-mail: moritz.schaar@gmail.com)

J. Ehrhardt is with the Institute of Medical Informatics, University of Lübeck, Lübeck, Germany (e-mail: ehrhardt@imi.uni-luebeck.de)

Recently, Tsai et al. proposed in [1] a shape-based approach for medical image segmentation using a parametric model. This combines the shape and appearance information described by Cootes et al. with the level set representation introduced by Osher and Sethian. It has been shown that this technique can handle multidimensional data, track topology changes, and is robust to noise.

In this paper we describe how shape prior information can be used to guide the segmentation process. To this end, a short introduction to level set functions and their usage is given. Furthermore, the calculation of a mean shape from a set of training images and the derivation of the inherent modes of variation are illustrated. Using these directions of main variance, a space of allowed configurations is constructed and applied to constrain the segmentation to meaningful shapes. Finally, the theoretical aspects are verified on the segmentation of human kidneys in CT data.

II. MATERIAL AND METHODS

We propose to include prior knowledge about shape and pose in the segmentation process. For this, an implicit shape model is calculated based on several training images. This mean shape is used as an initial level set curve that iteratively adapts to the real object boundaries. The changes in shape and pose are constrained to reasonable configurations covered by the set of training templates.

A. Level Set-based Segmentation

According to [5], the level set of a higher-dimensional function Φ(x, t) at time t is defined by:

Φ(x, t) = d(x, K(t))    (1)

where d(x, K(t)) is the signed distance from a point x ∈ ℝⁿ to the contour K(t). The region surrounded by K(t) is labelled R(t), and every point within this area is assigned a negative distance while points outside are labelled with positive distance values. The contour K(t) is determined by the zero level set:

K(t) = {x ∈ ℝⁿ : Φ(x, t) = 0}    (2)
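This signed distance representation (negative inside R(t), positive outside) can be sketched, for instance, with SciPy's Euclidean distance transform; the function below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map of a binary mask: negative inside the
    region R(t), positive outside, approximately zero on K(t)."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to the background
    outside = distance_transform_edt(~mask)  # distance to the region
    return outside - inside
```

The same code works unchanged for 2D slices and 3D volumes.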

This representation of the curve is used to expand or shrink towards the real borders of the object solving a partial differ­ential equation at certain time steps:

∂Φ/∂t = V(κ) |∇Φ|    (3)

In (3), V(κ) is a function of the image gradient (for example 1/(1 + |∇I|²)) and is called the speed function; it influences the velocity of the moving point. The parameter κ denotes the curvature of the level set function. During the adaptation step the curve evolves perpendicular to the level set and thus in the direction of the normal vector.
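An edge-stopping speed function of the form mentioned above can be sketched as follows (a minimal example; the exact form used in a given implementation may differ):

```python
import numpy as np

def speed_function(image):
    """Edge-stopping speed V = 1 / (1 + |grad I|^2): close to 1 in
    homogeneous regions, close to 0 near strong intensity edges."""
    grads = np.gradient(image.astype(float))
    grad_mag2 = sum(g ** 2 for g in grads)
    return 1.0 / (1.0 + grad_mag2)
```

Near object borders the speed drops towards zero, which slows the front down exactly where the contour should stop.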

The basic level set approach shown in (3) has been focus of many further developments. Caselles, Kimmel and Sapiro proposed in [6] to add shape prior information to lead the evolution of the level set function in a useful direction in relation towards the desired object:

∂Φ/∂t = V(c + κ) |∇Φ| + ∇Φ · ∇V    (4)

where ∇Φ · ∇V can be seen as an advection term which controls the boundary attraction of the evolving contour. The parameter c is a propagation term, i.e. a constant velocity comparable to an image-dependent force.

B. Implicit Shape Model

Given a set of binary images I1, ..., IN, N ∈ ℕ>0, the shape modelling process starts with the alignment of the binary surfaces to maximize their overlap and thus increase the correspondence between the datasets. In the absence of corresponding points or landmarks, solutions like the Iterative Closest Point algorithm proposed by Besl and McKay in [7] are used to perform the alignment.

Afterwards, each aligned image is transferred to the corresponding level set function Φ1, ..., ΦN. This step converts every input voxel value into a distance value. Subsequently, the implicit mean shape Φ̄ is calculated:

Φ̄ = (1/N) Σᵢ₌₁ᴺ Φᵢ    (5)

Variations of the shape are then allowed in terms of the eigenmodes, which yields the following expression for the estimated model shape:

Φ̃ = Φ̄ + Σₖ₌₁ᴸ wₖ vₖ    (6)

The mean shape Φ̄ is extended by a weighted sum of shape parameters wₖ and the first L modes vₖ. Usually only a small number of principal components is needed to cover the majority of the variance, and using all available eigenmodes during segmentation might worsen the result by adding noise.
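The construction of (5) and (6) can be sketched as a principal component analysis over the stacked, flattened distance maps; the function names and array shapes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def build_shape_model(phis, L):
    """phis: (N, V) array, each row one flattened signed distance map.
    Returns the mean shape (5), the first L eigenmodes v_k, eigenvalues."""
    mean = phis.mean(axis=0)
    centered = phis - mean
    # PCA via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:L], (s ** 2) / phis.shape[0]

def estimate_shape(mean, modes, w):
    """Model shape (6): mean plus weighted sum of the first L modes."""
    return mean + w @ modes
```

With N training shapes, at most N − 1 modes carry variance, so choosing L ≤ N − 1 loses nothing in exact arithmetic.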

C. Segmentation using Shape Prior Information

Now we want to combine level set segmentation with shape prior information. For this, the shape parameters wᵢ and the pose parameters pᵢ of an additionally applied rigid transformation, which are calculated individually for every image i, need to be adapted iteratively during the contour evolution. Leventon, Grimson and Faugeras extended the approach of Caselles in [8] and suggested a maximum a posteriori estimation to identify appropriate values:

(ŵ, p̂) = argmax_{w,p} P(w, p | Φ, ∇I)    (7)

where ∇I is the gradient of the image to be segmented.

Using this estimator we are now able to calculate the maximum a posteriori shape Φ* and estimate the new surface Φ at time t + 1:

Φ(t+1) = Φ(t) + α₁ ∂Φ(t)/∂t + α₂ (Φ* − Φ(t))    (8)

where α₁ defines the update step size and α₂ ∈ [0, 1] is a linear coefficient that determines how much to trust the maximum a posteriori estimate. The parameters α₁ and α₂ adjust the relative influence of the shape model and the gradient-curvature model, and thereby weight the quality of the shape model.
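A single update of this kind reduces to a weighted combination of the gradient-curvature flow and the MAP shape estimate; the sketch below uses illustrative variable names (phi_map standing in for Φ*):

```python
import numpy as np

def update_surface(phi, dphi_dt, phi_map, alpha1, alpha2):
    """One update step: Phi(t+1) = Phi(t) + a1 * dPhi/dt + a2 * (Phi* - Phi(t))."""
    return phi + alpha1 * dphi_dt + alpha2 * (phi_map - phi)
```

Setting alpha2 = 0 recovers the plain gradient-curvature evolution without shape guidance.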

To compare the outcomes, both approaches have been evaluated: the basic one by Caselles et al. and the extended one by Leventon et al. using shape prior information.

III. Results and Discussion

All of the challenging tasks mentioned in the previous section were performed using the Insight Segmentation and Registration Toolkit (ITK) provided by Kitware. This toolkit offers great open-source libraries to modify and evaluate medical image data.

The mean shape and the modes of variation have been constructed using a set of seven binary kidney segmentations which have been extracted from the corresponding CT data of the abdomen. All images have been resampled to share the same size and spacing: 172×172×171 voxels at 0.98×0.98×0.98 mm. Six kidneys have been aligned to the remaining seventh using a 3D affine transformation to ensure the same relation between the image features.

illustration not visible in this excerpt

Fig. 1. (a) Mean shape generated from a dataset of seven segmented kidneys. (b) - (e) Deformed mean shape with respect to the first and second mode of variation.

A. Mean Shape and Eigenmodes of the Shape Model

The mean shape of the set of seven binary images has been calculated using (5). Fig. 1 shows the mean shape and the variations along the first two principal components. Unfortunately, these variations do not correspond directly to simple parameters like size or curvature, or a combination of such attributes, and are thus difficult to interpret. Nevertheless, it is clear that different weightings of the first eigenvector change the size of the lower and upper part of the kidney.

TABLE I Eigenvalues of the dataset

illustration not visible in this excerpt

During the generation of the mean shape, the eigenvalues and the corresponding eigenvectors are obtained from the decomposition. These eigenvalues can be seen in Table I. In this case, the first four components sum up to 88% of the total variance of the training data. This implies that using these major components is enough to compose new shapes while saving computation time due to omitting the last three eigenvectors.
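The choice of how many components to keep can be sketched as a cumulative explained-variance computation; the eigenvalues used in the test are made-up placeholders, not the values from Table I:

```python
import numpy as np

def cumulative_variance(eigvals):
    """Fraction of the total variance covered by the first k components."""
    ev = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    return np.cumsum(ev) / ev.sum()

def components_for(eigvals, fraction):
    """Smallest number of leading components reaching the given fraction."""
    return int(np.searchsorted(cumulative_variance(eigvals), fraction) + 1)
```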

B. Classical Level Set Segmentation

In a first evaluation we calculated the segmentation without shape prior information. For this, we used the contour of the mean shape as the initial level set curve. The segmentation process starts by locating the zero level set near the object to be segmented, with at least a small overlap. Afterwards this contour expands iteratively towards the object boundaries.

Using the normal vector of the border to determine the direction, the contour grows or shrinks iteratively until the change in terms of the root mean square falls below a certain threshold or a maximum number of iterations has been reached. Here we used 800 iterations before stopping the evolution process. The propagation value c was set to 0, and the curvature weight κ and the step size α₁ were set to 1.0.
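The stopping rule described above (RMS change below a threshold, or an iteration budget) can be sketched as follows; step_fn is a placeholder for one evolution step of the level set equation, not the authors' code:

```python
import numpy as np

def evolve(phi, step_fn, rms_threshold=1e-3, max_iterations=800):
    """Iterate one evolution step until the RMS change of the level set
    falls below the threshold or the iteration budget is exhausted."""
    for iteration in range(1, max_iterations + 1):
        phi_next = step_fn(phi)
        rms = np.sqrt(np.mean((phi_next - phi) ** 2))
        phi = phi_next
        if rms < rms_threshold:
            break
    return phi, iteration
```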

The results without shape guidance can be reviewed exemplarily in Fig. 2, which compares the resulting contour with the binary segmentation used for validation (Fig. 2(a)). Furthermore, the zero level set of the mean model, which has been used as the start of the contour expansion, is shown in Fig. 2(b). Table II illustrates the results in terms of quality measures such as the Dice coefficient, the Hausdorff distance and the mean surface distance [5]. This shows that the current setup is able to segment most of the images fairly well. But especially Image 4, which is contrasted in Fig. 2 and Fig. 3, is not segmented in a satisfying way: the generated contour leaks into the region of the liver, since both tissues share similar voxel values in the CT image.

illustration not visible in this excerpt

Fig. 2. One example of the segmentation result (red) without shape guidance in contrast to (a) the binary ground truth segmentation (blue) and (b) the zero level set of the mean model (green) used as start contour.

illustration not visible in this excerpt

Fig. 3. Results (red) of the shape guided segmentation of the same patient shown in Fig. 2. Again, (a) shows the binary ground truth segmentation (blue) and (b) indicates the zero level set of the mean model (green).
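Of the quality measures reported in Table II, the Dice coefficient is straightforward to sketch for two binary masks (a minimal, self-contained example):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks:
    1.0 for perfect overlap, 0.0 for disjoint masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```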

C. Level Set Segmentation with Shape Guidance

In order to prevent leaking structures, we extend the segmentation process with shape prior information. This guides the contour variations calculated during the expansion of the zero level set towards meaningful shapes found in the set of training samples. The space of 'legal' configurations is defined by the weightings of the principal components as well as rotation and translation. Again, the mean model has been used as the initial level set, and we stopped the segmentation process after 800 iterations. We set c to 0, κ and α₁ to 1.0, and used a shape prior scaling α₂ of 0.03.

To illustrate the advantage of the shape guided segmentation process, Fig. 3 shows the same image data referred to in the previous section. Here the contour did not leak into the region of the liver, as it is constrained to stay within the range of allowed variations of the mean shape. Besides the prevention of leaking structures, the overall outcomes improved as well. In detail, the distance values of Images 4 and 7 have been reduced clearly, while the Dice coefficient increased, indicating a larger overlap between the segmentation result and the ground truth data.

IV. Conclusions

The methods presented show that including shape prior knowledge in the segmentation process has great potential. Choosing the parameters to be equal for all of the images helped us understand how they interact with the image and the model data. Although these settings are not equally suitable for every image individually, the overall result looks promising. However, adapting custom settings may improve the speed of the computation and the quality of the results.

Another aspect we need to take care of is the initial placement of the starting level set function. If there is no overlap with the object we would like to segment, the results are not satisfying. This means robust methods for initialization are needed.

To prove the quality of the constructed model, we will perform a leave-one-out validation in the near future. For this, only six binary kidney segmentations are used to calculate the mean shape and the modes of variation. Subsequently, the remaining image is segmented to test whether the model can be used to derive the new shape.

REFERENCES

[1] A. Tsai et al., A shape-based approach to the segmentation of medical imagery using level sets, IEEE Transactions on Medical Imaging, Volume 22, Pages 137-154, 2003.
[2] M. Kass, A. Witkin and D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision, Volume 1, Pages 321-331, 1988.
[3] S. Osher and J. A. Sethian, Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulation, Journal of Computational Physics, Volume 79, Pages 12-49, 1988.
[4] T. F. Cootes, A. Hill, C. J. Taylor and J. Haslam, The use of active shape models for locating structures in medical images, Image and Vision Computing, Volume 12, Pages 355-365, 1994.
[5] H. Handels, Medizinische Bildverarbeitung: Bildanalyse, Mustererkennung und Visualisierung für die computergestützte ärztliche Diagnostik und Therapie, Vieweg+Teubner Verlag / GWV Fachverlage GmbH, Volume 2, 2009.
[6] V. Caselles, R. Kimmel and G. Sapiro, Geodesic active contours, Inter­national Journal of Computer Vision, Volume 22, Pages 61-79, 1997.
[7] P. J. Besl and N. D. McKay, A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 14, Pages 239-256, 1992.
[8] M. E. Leventon, W. E. L. Grimson and O. D. Faugeras, Statistical shape influence in geodesic active contours, Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Pages 1316-1323, 2000.
[9] R. H. Davies, Learning shape: Optimal models of natural variability, PhD Thesis, University of Manchester, UK, 2002.

Development and evaluation of a tool for the generation of probabilistic expert segmentations

C. Steinberg, J. H. Moltz, B. Geisler and H. Handels

Abstract—Automatic and semi-automatic segmentation algorithms are often validated by comparing their results to a single reference, which is created manually by an expert. But as manual delineations, even by experts, always show some degree of variability, more than one reference mask should be used for a meaningful validation. Unfortunately, usually not enough experts are available to generate the typical variability of manual segmentations. Therefore, we developed a manual segmentation tool which offers the opportunity to define uncertainties of the unknown true boundary, resulting in a probability mask, with the goal of reducing the number of experts without losing variability. The evaluation results show that the probability masks of three experts almost cover the variability of ten conventional expert delineations.

I. Introduction

Segmentation of anatomical or pathological structures is a major task of medical image processing; tumor segmentation in particular is an important part. Amongst others, it is necessary for tumor volumetry, which in turn is used to classify the response to a tumor treatment. As manual segmentation is time-consuming and error-prone as well as subjective, several automatic and semi-automatic segmentation algorithms have been developed during recent years. But the validation of these methods is often neglected. Commonly, algorithms are evaluated by comparing their results to only a single reference segmentation, which is created manually by an expert. But as shown in Fig. 1, manual delineations, even by experts, always show some degree of variability, which should be taken into account during the validation process. At Fraunhofer MEVIS this variability has been analyzed with a focus on tumor volume [1]. As expected, the more references are used, the better the typical variability is covered, allowing for a meaningful validation. But one problem is that usually at most three experts are available for manual tumor delineation, which is not enough to generate the typical variability. A solution could be a segmentation tool which offers the user the opportunity to express uncertainties of the boundary, with the goal of reducing the number of experts without losing variability. Christophe Restif already proposed a framework called Confidence Maps Estimating True Segmentations (Comets) [2], which generates confidence maps. It works as follows: an expert selects so-called border pixels by drawing a continuous line at the expected border, and a certain number of inner limit pixels as well as outer limit pixels. Then a confidence factor is calculated for each border pixel using the limit pixels. Given the border pixels and their confidence factors, the confidence of any pixel can finally be computed. But as it was developed for 2D images, it is not suitable for 3D data sets, because the selection of the limit pixels in all directions is too time-consuming. So the basic idea behind this work is to develop a similar tool for 3D images. Moreover, the main condition was to keep the additional work compared to conventional delineation as low as possible, since manual segmentation already is quite time-consuming.

C. Steinberg, Medizinische Ingenieurwissenschaft, University of Lübeck; the work has been carried out at Fraunhofer MEVIS - Institute for Medical Image Computing, Bremen, Germany (e-mail: christiane.steinberg@miw.uni-luebeck.de)

J. H. Moltz is with Fraunhofer MEVIS - Institute for Medical Image Computing, Bremen, Germany (e-mail: jan.moltz@mevis.fraunhofer.de)

B. Geisler is with Fraunhofer MEVIS - Institute for Medical Image Computing, Bremen, Germany (e-mail: benjamin.geisler@mevis.fraunhofer.de)

H. Handels is Director of the Institute of Medical Informatics, University of Lübeck, Lübeck, Germany (e-mail: handels@imi.uni-luebeck.de)

illustration not visible in this excerpt

Fig. 2: Image examples of manual delineations by experts showing the two kinds of uncertainty which cause the variability.

II. Material and Methods

A. Implementation

Basically, we considered two reasons for the variability of manual delineations. On the one hand, the edges of the desired

illustration not visible in this excerpt

Fig. 3: Left: example of a tumor segmentation with an uncertainty width of 6 pixels (diameter of the circle) and an image size of 54 × 45 pixels; yellow: drawn contour, green: calculated minimal contour, red: calculated maximal contour. Right: the resulting probability segmentation mask with linear values from 1 (white) to 0 (black).

illustration not visible in this excerpt

Fig. 4: Left: example of a tumor segmentation with four defined confidence regions. Right: the resulting probability segmentation mask. The number inside the particular region is the chosen confidence value.

object are not sharp, and depending on the window settings one expert may draw the boundary further inwards and another further outwards. We called this kind of variability statistical probability (Fig. 2a). On the other hand, the region of interest (ROI) can contain areas that are not clearly part of the object or the background, so one expert may include these parts in the segmentation while another leaves them out. We named these areas confidence regions (Fig. 2b). Owing to these two kinds of variability, we developed two individual tools that can be used separately, and afterwards combined them into a third tool. The implementation was done in MeVisLab, a modular framework for the development of image processing algorithms, visualization and interaction methods, especially for medical images [3]. An overview is given, for example, in Medical Image Analysis by Ritter et al. [4].

1) GenerateSegmentationMaskWithStatisticalProbabilities: As the name implies, the first algorithm is used to draw a segmentation mask that includes statistical probabilities. For this purpose a so-called uncertainty width, visualized by the diameter of a circle, must be selected. This width is set in pixel units and can be chosen individually for each contour, but is constant within one contour. The uncertainty width represents the degree of uncertainty about the true boundary position, which is always unknown and depends on image contrast, noise etc. The center of the circle represents the mean contour, which has to be drawn manually. Afterwards the minimal and maximal contours are calculated automatically: the minimal contour lies inside the mean contour at a distance of half the uncertainty width, and the maximal contour the same distance outside, so the distance between minimal and maximal contour is exactly the chosen uncertainty width. In this context, the minimal contour means that everything inside clearly belongs to the object; similarly, everything outside the maximal contour is clearly background. The probability segmentation mask is then calculated as follows: every voxel inside the minimal contour gets the value 1, every voxel outside the maximal contour is set to 0, and the values of the remaining voxels are distributed linearly from 1 (inwards) to 0 (outwards) using the Euclidean distance. Since the calculated minimal and maximal contours can be corrected manually, the computation of the segmentation mask is triggered at the press of a button. Fig. 3 shows an example with an uncertainty width of 6 pixels.
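The described linear ramp can be sketched with NumPy/SciPy via a signed distance to the drawn mean contour (the actual tool is implemented in MeVisLab; this 2D sketch, its function name and the toy circle mask are illustrative, not the paper's code):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def probability_mask(mean_mask, uncertainty_width):
    """Linear probability ramp of the given width, centered on the
    boundary of the (manually drawn) mean contour mask."""
    inside = distance_transform_edt(mean_mask)       # distance to boundary for inside voxels
    outside = distance_transform_edt(1 - mean_mask)  # distance to boundary for outside voxels
    signed = outside - inside                        # < 0 inside, > 0 outside the mean contour
    # 1 inside the minimal contour, 0 outside the maximal contour,
    # linear in between (the ramp spans exactly one uncertainty width)
    return np.clip(0.5 - signed / uncertainty_width, 0.0, 1.0)

# toy example: a filled circle stands in for the drawn mean contour
yy, xx = np.mgrid[:64, :64]
mean = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2).astype(np.uint8)
prob = probability_mask(mean, uncertainty_width=6)
```

The signed distance is zero on the mean contour itself, so the mask is 0.5 there, reaches 1 at the minimal contour and 0 at the maximal contour.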
2) GenerateSegmentationMaskWithConfidenceRegions: Analogously to the first algorithm, this one is used to draw a segmentation mask that includes confidence regions. To do so, the user delineates the areas that are clearly tumor tissue; again this is called the minimal contour, for the same reason as above. Then the edit mode is changed and the confidence regions can be defined. The user selects a confidence value ranging from 0.1 to 0.9 (each represented by a different color) and draws an additional boundary around an area that is part of the tumor with a probability equal to the chosen confidence value. This step can be repeated as often as necessary until a probability has been assigned to all possible tumor parts. Fig. 4 displays an example of a tumor segmentation with confidence regions and the resulting probability mask. The yellow contour is the minimal contour, which has to be drawn first; the other contours are the boundaries of the confidence regions. Every additional boundary can either be a closed new contour (not shown in the image) or adjoin the minimal contour (e.g. the pink contour). It cannot adjoin the boundary of another confidence region; in case the user actually wants to do that, the solution is also shown in the example (blue and green contours). If the image contains overlapping contours, the maximum confidence value is written into the probability mask.
3) GenerateProbabilitySegmentationMask: For our application we simply connected the two separate algorithms outlined above into one manual segmentation tool. First the desired object is segmented using an uncertainty width and the resulting probability segmentation mask is computed. Then only the mean contour is displayed and confidence regions can be added; again a probability mask is computed. Finally the two masks are combined into the final segmentation mask by taking the voxel-wise maximum.
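The voxel-wise maximum combination amounts to a single NumPy call; the two small masks below are hypothetical examples, not data from the paper:

```python
import numpy as np

# hypothetical masks from the two steps (values in [0, 1])
statistical_mask = np.array([[1.0, 0.5, 0.0],
                             [1.0, 0.5, 0.0]])
confidence_mask  = np.array([[0.0, 0.7, 0.3],
                             [0.0, 0.0, 0.0]])

# final mask: voxel-wise maximum of the two probability masks
final_mask = np.maximum(statistical_mask, confidence_mask)
# → [[1.0, 0.7, 0.3], [1.0, 0.5, 0.0]]
```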

illustration not visible in this excerpt

Fig. 5: Tumors that were used for evaluation including average volume and corresponding coefficient of variation. Here the centers of the slices are shown.

B. Data

For evaluation we used the same data set as in the analysis of variability [1]. It consists of 13 CT images from 13 different patients, acquired in seven hospitals using scanners from four vendors; the slice thickness ranges from 0.8 to 1.5 mm. In each image a ROI containing a liver tumor was cut out. The tumors vary in size, location, contrast to the parenchyma, resolution, noise etc., but mostly small tumors were chosen to keep the segmentation effort low (see Fig. 5). For each tumor, 10 manual delineations by different experts (radiologists and experienced radiology

TABLE I: Evaluation results of the minimal, mean and maximal masks of the average probability mask and the average reference mask.

illustration not visible in this excerpt

technicians) were available and used for evaluation. We used the resulting average mask as reference; in the following this mask is called the average reference mask.

III. Results and Discussion

We asked one radiologist and two radiology technicians experienced in manual segmentation to segment the 13 tumors with the implemented tool, resulting in 39 segmentations; this took approximately two hours. All three experts chose an uncertainty width for each case, whereas the confidence regions were used by only two of them.

For evaluation we compared the average mask of the three probability masks with the average mask of the ten manual delineations. To do so, we built the minimal, mean and maximal masks by applying thresholds of 1, 0.5 and 0.01, respectively, since most validation methods expect binary masks. We analyzed several similarity measures, but as many of them evaluate similar information, we focused on the Dice coefficient and the maximum symmetric absolute surface distance [5]. The results are shown in Table I. The Dice coefficients of the mean average masks are the highest, reaching up to 0.96 with a median of 0.90; even for the minimal average masks the median is still 0.79. The maximum surface distances are quite low, with median values of 2.14 mm (minimal masks), 2.44 mm (mean masks) and 3.91 mm (maximal masks). As expected, the maximum surface distance rises with increasing tumor volume. Besides this numerical evaluation, Fig. 6 displays three slices of both the average probability mask and the average reference mask of the second tumor for visual analysis. Overall, the evaluation shows that the results have an adequate accuracy.
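The thresholding and Dice computation used in this evaluation might be sketched as follows (thresholds as in the text; the mask values and function names are made up for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def binarize(prob_mask, threshold):
    """Minimal (t = 1), mean (t = 0.5) or maximal (t = 0.01) mask."""
    return prob_mask >= threshold

# hypothetical probability mask and binary reference mask
p = np.array([[1.0, 0.6, 0.2],
              [1.0, 0.4, 0.0]])
r = np.array([[1, 1, 0],
              [1, 0, 0]])
print(dice(binarize(p, 0.5), r))  # → 1.0
```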

We also analyzed the chosen uncertainty widths. For comparison purposes we converted the unit from pixels to mm by multiplying the values by the pixel size of the particular image. We then divided the results by the diameter of the mean average reference mask to relate the uncertainty width to the tumor size. Afterwards we

illustration not visible in this excerpt

Fig. 7: The graphic shows the average relative uncertainty widths plotted against the coefficients of variation for each tumor. In addition, the best-fit line is displayed.

calculated the average uncertainty width of the three experts for each tumor. The results range from 0.03 to 0.27 and correlate (correlation coefficient 0.73) with the coefficient of variation (COV); the correlation is shown in Fig. 7. This means that the more the ten experts disagreed about the tumor delineation, the wider the average chosen uncertainty width was.
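The correlation described above can be reproduced with a few lines of NumPy; the per-tumor numbers below are hypothetical stand-ins, not the study's data (which yielded 0.73):

```python
import numpy as np

# hypothetical per-tumor values: average relative uncertainty width
# and coefficient of variation of the ten reference segmentations
rel_width = np.array([0.03, 0.08, 0.12, 0.15, 0.21, 0.27])
cov       = np.array([0.05, 0.09, 0.15, 0.14, 0.22, 0.30])

# Pearson correlation coefficient between width and COV
r = np.corrcoef(rel_width, cov)[0, 1]
```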

In 10 of 39 segmentations, one or more confidence regions were drawn: one expert drew confidence regions in 7 of 13 cases and the second in 3 cases. Both chose rather small confidence values, but did not agree on the regions. As already mentioned, the third expert did not use the confidence regions at all. Fig. 8 shows an example of how the confidence regions were used: the probability segmentation mask of one expert (Fig. 8a) and the average reference mask of the ten expert segmentation masks (Fig. 8b) look very similar. The white parts (confidence value 1.0) mostly coincide, and the expert has drawn a confidence region with confidence value 0.5 (gray part) where approximately five of the ten experts also drew the tumor boundary. Finally, we asked the experts for short feedback. All three used the tool for the first time, and it was unusual for them not to draw a strict tumor boundary but instead to have the opportunity to define confidence regions. In general they liked the basic idea, yet they need to get used to it before they can tap the full potential of the tool. The results could therefore probably be improved by repeating the evaluation after an adaptation phase, and the number of experts could potentially be reduced further.

IV. Conclusions

We developed a manual segmentation tool for the generation of probabilistic expert segmentations, with the goal of covering the typical variability of manual delineations with fewer experts than before. We distinguish two kinds of uncertainty that cause this variability and therefore implemented two individual segmentation tools, which were subsequently connected; hence the segmentation is done in two steps. Whereas the idea of the uncertainty width (step one) was accepted right away, the confidence regions (step two) were used rather rarely. For evaluation we compared the probability masks of three experts with ten conventional expert segmentation masks of 13 tumors. The results have shown that it is indeed possible to reduce the number of experts without losing variability.

Acknowledgment

We thank Christiane Engel and Gulsen Yanc for drawing the probability segmentation masks. This work was partly funded by the European Regional Development Fund.

References

[1] J. H. Moltz, S. Braunewell, J. Rühaak, F. Heckel, S. Barbieri, L. Tautz, H. K. Hahn and H.-O. Peitgen, "Analysis of variability in manual liver tumor delineation in CT scans". IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1974-1977, 2011.
[2] C. Restif, "Revisiting the evaluation of segmentation results: introducing confidence maps". MICCAI 2007, pp. 588-595, 2007.
[3] MeVisLab Homepage, 2013. http://www.mevislab.de.
[4] F. Ritter, T. Boskamp, A. Homeyer, H. Laue, M. Schwier, F. Link and H.-O. Peitgen, "Medical Image Analysis". IEEE Pulse, vol. 2, no. 6, pp. 60-70, Nov.-Dec. 2011.
[5] H. Handels, Medizinische Bildverarbeitung. Vieweg + Teubner Verlag, Wiesbaden, pp. 153-154, 2009.

ASM-based segmentation of the middle phalanx in digital radiographic images of the hand

G. Kabelitz, R. Dendere, T. S. Douglas

Abstract - We present a technique for segmenting the phalanx media of the middle finger in digital radiographic images recorded with a Lodox Statscan. The result of segmentation using active shape modeling (ASM) may be used for estimation of bone mineral density, which could serve as an indicator for osteoporosis. Furthermore, the algorithm detects the region of interest automatically, thus minimizing the dependence on an operator. For evaluation, we compared the contours produced by the ASM to contours obtained by manual segmentation and by deformable models, using the Hausdorff distance. The ASM technique offers a slightly more accurate segmentation.

I. Introduction

Osteoporosis is a skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fracture [1]. It is diagnosed by measuring bone mineral density (BMD) and comparing it with the mean of a sex-matched, young and healthy group. Dual energy X-ray absorptiometry (DEXA) is the most common technique, and is considered the gold standard, for the measurement of BMD due to its low radiation dose and proven ability to predict fracture risk [2]-[3]. The World Health Organisation (WHO) definition of osteoporosis is based on spine, hip or forearm DEXA measurements of BMD and suggests that BMD measurements should be taken at these skeletal sites, since they are the most common sites of osteoporotic fractures. However, osteoporosis affects the entire skeleton [4], and any skeletal site can be used to evaluate the initial fracture risk for the common fracture sites [5]-[6]. The hand, unlike the spine and hip, is far from organs with higher susceptibility to the effects of ionizing radiation, and therefore BMD measurement taken in the hand would result in a reduced effective dose. The phalanges are a particularly useful site because the bone there is surrounded by little soft tissue, which may result in more accurate BMD measurements. Assessment of phalangeal BMD by DEXA or radiographic absorptiometry (RA) may have long-term value in predicting the risk of both hip and spine fracture [7]. DEXA has been shown to be a useful and accurate method for measuring BMD in hand bones [7]-[9], and phalangeal DEXA is potentially useful for clinical diagnosis of osteoporosis [8].

G. Kabelitz, Medizinische Ingenieurwissenschaft, University of Lübeck; the work has been carried out at the Medical Imaging Research Unit, University of Cape Town, South Africa (gordiankabelitz@hotmail.de)

Ronald Dendere is with Medical Imaging Research Unit, University of Cape Town, South Africa (rdendere@gmail.com)

Tania S. Douglas is with the MRC/UCT Medical Imaging Research Unit, University of Cape Town, Observatory 7925, South Africa. (tania@ieee.org).

Precise segmentation of the region of interest (ROI) is crucial for correct and accurate BMD measurement using DEXA [10]. Region- [11], classification- [12]-[13] and threshold-based [14] segmentation techniques have been used to segment various bones in radiographic images. We apply active shape models (ASM) [15] and deformable models [16] to the segmentation of the middle phalanx of the middle finger in digital radiographic hand images; both methods have been employed successfully for this task [8], [17]. We introduce a technique that fully automates the segmentation for the deformable model and requires minimal user interaction for the ASM. The algorithm provides a fast and efficient way to obtain a BMD measurement for osteoporosis diagnosis. We use the Hausdorff distance to assess the accuracy of the two methods compared with manual segmentation.

II. Material and Methods

A. Materials and X-Ray scanning

The Phalanx Media III bones we scanned were provided by the anatomical museum in the Department of Human Biology at the University of Cape Town. Overall we scanned 96 phalanges using the Lodox Statscan with tube settings of 100 kVp and 50 mA.

The Lodox Statscan is a digital radiography machine, developed in South Africa, that uses slot scanning and time delay integration to reduce the X-ray dose.

For comparison we scanned 15 subjects, each of whom signed informed consent before scanning. We scanned the area between the fingertips and the junction of the fingers and the palm. The images were captured and saved in DICOM format with 14-bit depth and a size of 2100 × 1990 pixels.

B. Preprocessing and automatic location of phalanx media

The first part of the algorithm prepares the image and simplifies the initialization for the ASM. The first step in the search for the ROI was contrast enhancement using the built-in histogram equalization of MatLab. Afterwards the automatic location starts by thresholding the image using Otsu's method [18] and detecting all fingers. Assuming that the middle finger is the largest finger, the largest object in the binary image is declared the middle finger. The image is re-oriented so that the middle finger is approximately parallel to the edge of the image. The second step begins with a summation of all pixel values y_ij in each column i, as shown in Fig. 2:

s_i = Σ_j y_ij        (1)

The position of the highest sum gives an approximation of the midline through the middle finger, because the bone pixel density is highest there. The way the images were taken allows the assumption that the middle finger covers more than half the length of the image. We use some pixels located at the halfway mark of the previously identified column as seeds for a region growing algorithm to identify all bone pixels. The result of the region growing is multiplied with the image in Fig. 1. In the new image we perform a second summation along the rows, analogous to (1), adding all non-zero pixel intensities. The result is a profile with two distinct sharp transitions, as shown in Fig. 3. Starting from the midpoint of all non-zero values along the x-axis, a search for these sharp changes in the summed values is conducted in the positive and negative x-directions to find the approximate borders of the bone in the selected row. The halfway point between these borders is close enough to the mid-bone for the ASM. The width of the middle phalanx at this y-position in Fig. 1 is extracted from the image that contains only bone pixels; a good estimate for the x-coordinate is the middle of the bone width.
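The localization steps (Otsu threshold, largest component, column sums as in (1)) could be sketched as follows; the paper uses MatLab, so this NumPy/SciPy version with a hand-rolled Otsu is only an approximation of the described pipeline, and the function names are illustrative:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Minimal Otsu threshold (the paper uses MatLab's built-in)."""
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # class-0 probability per candidate threshold
    w1 = 1.0 - w0
    m0 = np.cumsum(hist * centers)       # class-0 cumulative mean (unnormalized)
    mt = m0[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m0) ** 2 / (w0 * w1)  # between-class variance
    return centers[np.nanargmax(var_b)]

def locate_middle_finger(img):
    """Largest foreground component, then the column with the
    highest intensity sum approximates the finger midline, Eq. (1)."""
    binary = img > otsu_threshold(img)
    labels, n = ndimage.label(binary)
    if n == 0:
        return None
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    finger = labels == (np.argmax(sizes) + 1)
    column_sums = (img * finger).sum(axis=0)
    return int(np.argmax(column_sums))
```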

C. Active Shape Model-based Segmentation

ASMs use statistical models derived from example shapes; an ASM is therefore bound to one particular object class. A training set for that shape is provided by manually selecting distinct landmarks describing the boundaries of the target object. A mean shape is computed through alignment, i.e. translation, scaling and rotation, of the training shapes. For segmentation, the mean shape is applied to the test images and manipulated.

1) Training of the mean shape

To train the ASM we used the bones mentioned above, in order to avoid needless radiation exposure to subjects. The bones were identified using [19]. 80 bones were used for the training set and the remaining 16 bones formed the test set. Images of the bones were acquired as described in Section II. Landmarks were placed at prominent locations such as edges, covering the features of the bone; on parts without easily identifiable points we added additional points for a more

illustration not visible in this excerpt

Fig. 2. Summation of image column (y-axis) against column position (x-axis), the vertical line shows the biggest summand.

illustration not visible in this excerpt

Fig. 3. Summation of the image rows (y-axis) against the row position (x-axis), the vertical lines mark the edge of the bone; the x- coordinate corresponds to the y-axis in the original image.

precise shape. Overall we placed 46 points to fully describe the bone shape, as shown in Fig. 4. The ith shape can thus be described by a vector x_i containing the 46 points:

x_i = (x_i1, y_i1, x_i2, y_i2, ..., x_i46, y_i46)^T

For alignment, each subsequent shape x_j is rotated by Θ, scaled by s and translated by t to minimize the weighted sum

E_j = (x_i − M(s, Θ)[x_j] − t)^T W (x_i − M(s, Θ)[x_j] − t)

using M(s, Θ) as the rotation and scaling matrix; W is a weighting matrix that depends on the variance of the corresponding points. More stable points are given greater significance and therefore a higher weight.

illustration not visible in this excerpt

Fig. 4. The ASM landmarks are placed on prominent spots. Additional landmarks were placed between these points.

After obtaining the aligned coordinates for each landmark, a principal component analysis is carried out. For each shape x_i we can now calculate the deviation dx_i from the mean shape using

dx_i = x_i − x_m

The covariance matrix of these deviations is computed, and its eigenvectors can be used to describe any allowed shape by taking the mean shape and adding a linear combination of the eigenvectors [15]. We thus obtain a statistical description of the shape coordinates from the training set:

x = x_m + P b

where x_m represents the mean shape, P is the matrix of the most significant eigenvectors of the covariance matrix, and b is a vector of weights, one for each eigenvector.
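The shape-model construction (PCA on the aligned landmark vectors) can be sketched as follows; function names are illustrative, and the landmark vectors are assumed to have been aligned already:

```python
import numpy as np

def build_shape_model(shapes, n_modes=5):
    """shapes: (N, 2k) array of aligned landmark vectors.
    Returns the mean shape x_m and the matrix P of the n_modes
    most significant eigenvectors of the covariance matrix."""
    mean = shapes.mean(axis=0)
    dx = shapes - mean                    # deviations from the mean shape
    cov = np.cov(dx, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]      # sort modes by decreasing variance
    P = eigvec[:, order[:n_modes]]
    return mean, P

def synthesize(mean, P, b):
    """Any allowed shape: x = x_m + P b."""
    return mean + P @ b
```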

2) Object search in the image

The initialization coordinates for the ASM are provided by the localization of the ROI; the success of the final segmentation depends highly on the quality of the initialization. A good initialization is obtained by aligning the centroid of the mean shape with the centroid of the ROI, for which we use the algorithm discussed in Section II-B.

The orientation of the initial shape is computed using a rectangular template whose centroid is located at the coordinates obtained in the ROI search. The orientation of the bone is identified by summing the gray values covered by the template: the higher the sum, the closer the orientation of the template is to that of the bone, as shown in Fig. 5. A pre-scaling factor is determined automatically from the length of the found bone to reduce the computing time. Fig. 6 shows an example of a good ASM initialization. A multi-resolution search is carried out [22] using the pyramid technique and the Mahalanobis distance to measure the similarity between the test shape and the mean shape; the ASM algorithm searches for matching profiles in the image [15]. Once the iterative search is finished, the user can visually verify whether the found shape corresponds to the shape of the bone. If the computed shape is too big or too small, the user can re-run the ASM algorithm with an adjusted scaling factor until the iteratively modified mean shape resembles the test shape.

D. Validation

For validation, the segmentation is compared to a manual segmentation as the gold standard and to a segmentation carried out with deformable models. Deformable models are a widely used method for image segmentation: a curve is moved by internal forces, which keep the curve smooth, and external forces, which are computed from the image data. The method is robust to image noise and boundary gaps [20].

illustration not visible in this excerpt

Fig. 5. Gray level covered by orientation template, the star shows the final re-orientation angle of the initial ASM shape.

illustration not visible in this excerpt

Fig. 6. ASM initialization at which the approximated center of the bone is marked with a cross (cropped image).

Two graduate students were asked to perform a manual segmentation of the test images under the guidance of a qualified radiographer, and the average of the two contours was used as the basis of comparison. The Hausdorff distance [21] was used to quantify the similarity between the ground truth and the extracted shape. The Hausdorff distance (HD) is defined as:

HD(A, B) = max(d(A, B), d(B, A)),  d(A, B) = max_{a∈A} min_{b∈B} ‖a − b‖

where A and B are the point sets from the images under comparison and d(A, B) is the directed Hausdorff distance from A to B.
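A direct NumPy implementation of this definition, suitable for the small contour point sets involved (SciPy's `scipy.spatial.distance.directed_hausdorff` offers an equivalent), might be:

```python
import numpy as np

def directed_hausdorff(A, B):
    """max over a in A of the distance to the nearest b in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```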

III. Results and Discussion

As output of the segmentation algorithm applied to the test images we received a set of points marking the border of the bone.

illustration not visible in this excerpt

Fig. 7. Example for an ASM-based segmentation: a) test image (cropped), b) contour detected by ASM algorithm, c) comparison between ASM-based contour (red) and manual segmentation (blue) - Hausdorff distance: 4.53 pixels.

The ASM algorithm was tested on 16 test images and the output was compared with the manual segmentation. This comparison showed close similarity and hence an acceptably accurate segmentation, with an average Hausdorff distance of 5.7 pixels; the distances for the individual test images range between 3.61 and 8.49 pixels, so the segmentation works acceptably for all test images. For testing the ASM algorithm on in-situ bones, 15 participants were recruited and images of their left hands were taken, since BMD measurements are normally taken in the weaker hand. Table I summarizes the differences of the contours detected by the ASM and by the comparison algorithm (deformable model) from the manual segmentation. Fig. 7 gives an impression of the similarity between ASM-based and manual segmentation. The ASM tends to delineate the borders of the bone with high accuracy, although it produces errors due to the presence of the other bones. To minimize the errors caused by a bad initialization, the introduced algorithm supports a correct initialization. While user interaction is mandatory for the ASM algorithm, in most cases only one mouse click (indicating that the initialization is good) is necessary due to the degree of automation: of the 15 subject images, only two required correction of the initialization, and no correction was required for the bone test images. The average Hausdorff distances obtained on the excised bones (5.70 pixels) and on human subjects (5.58 pixels) are almost equal; the bone-trained ASM is therefore suitable for segmenting actual hand images.

TABLE I HAUSDORFF DISTANCE FOR HUMAN SUBJECT

illustration not visible in this excerpt

Sotoca et al. [17] have used an ASM to segment the middle phalanx from hand radiographic images. However, they do not validate the accuracy of the segmentation itself but claim success based on BMD measurements derived from the ASM segmentation. Their algorithm also relied on user-defined input for initialization of the ASM. The result of ASM segmentation depends highly on the initialization, and a user-defined starting point introduces subjectivity into the results; this drawback has been addressed in this study.

IV. Conclusions

The validation showed that the ASM algorithm provides high accuracy and is less time-consuming than manual segmentation and than segmentation with deformable models. The output of this segmentation can be used for estimating BMD, which in turn can be used to determine fracture risk or to diagnose osteoporosis. We thus provide an easy and fast algorithm requiring a minimum of operator input.

Acknowledgements

The research was supported by Mr. Steiner, our contact person at Lodox Systems.

References

[1] WHO, "Prevention and management of osteoporosis". World Health Organization Technical Report Series, 921, 1-164, 2003.
[2] G. M. Blake, I. Fogelman, "An update on dual-energy X-ray absorptiometry". Seminars in Nuclear Medicine, 40(1), 62-73, 2010.
[3] A. El Maghraoui, C. Roux, "DXA scanning in clinical practice". QJM, 101(8), 605-617, 2008.
[4] S. H. Ralston, "Bone densitometry and bone biopsy". Best Pract. Res. Clin. Rheumatol., 19(3), 487-501, 2005.
[5] P. Ross, "Radiographic absorptiometry for measuring bone mass". Osteoporosis Int., 7(3), 103-107, 1997.
[6] R. D. Wasnich, "Perspective on fracture risk and phalangeal bone mineral density". J. Clin. Densitom., 1(3), 259-268, 1998.
[7] A. A. Deodhar, J. Brabyn, P. W. Jones, M. J. Davis, A. D. Woolf, "Measurement of hand bone mineral content by dual energy x-ray absorptiometry: Development of the method, and its application in normal volunteers and in patients with rheumatoid arthritis". Ann. Rheum. Dis., 53(10), 685-690, 1994.
[8] M. Gulam, M. M. Thornton, A. B. Hodsman, D. W. Holdsworth, "Bone mineral measurement of phalanges: Comparison of radiographic absorptiometry and area dual X-ray absorptiometry". Radiology, 216(2), 586-591, 2000.
[9] C. L. Hill, C. G. Schultz, R. Wu, B. E. Chatterton, L. G. Cleland, "Measurement of hand bone mineral density in early rheumatoid arthritis using dual energy X-ray absorptiometry". Int. J. Rheum. Dis., 13(3), 230-234, 2010.
[10] J. W. Kwon, S. I. Cho, Y. B. Ahn, Y. M. Ro, "Noise reduction in DEXA image based on system noise modeling". International Conference on Biomedical and Pharmaceutical Engineering (ICBPE '09), pp. 1-6, 2009.
[11] E. Pietka, "Computer-assisted bone age assessment based on features automatically extracted from a hand radiograph". Comput. Med. Imaging Graphics, 19(3), 251-259, 1995.
[12] T. S. Levitt, M. W. Hedgcock Jr., J. W. Dye, S. E. Johnston, V. M. Shadle, D. Vosky, "Bayesian inference for model-based segmentation of computed radiographs of the hand". Artif. Intell. Med., 5(4), 365-387, 1993.
[13] M. Rucci, G. Coppini, I. Nicoletti, D. Cheli, G. Valli, "Automatic analysis of hand radiographs for the assessment of skeletal age: A subsymbolic approach". Comput. Biomed. Res., 28(3), 239-256, 1995.
[14] S. N. Cheng, H. P. Chan, L. T. Niklason, R. S. Adler, "Automated segmentation of regions of interest on hand radiographs". Med. Phys., 21(8), 1293-1300, 1994.
[15] T. F. Cootes, C. J. Taylor, D. H. Cooper, J. Graham, "Active shape models - their training and application". Comput. Vision Image Understanding, 61(1), 38-59, 1995.
[16] M. Kass, A. Witkin, D. Terzopoulos, "Snakes: Active contour models". International Journal of Computer Vision, (4), 321-331, 1988.
[17] J. M. Sotoca, J. M. Iñesta, M. A. Belmonte, "Hand bone segmentation in radioabsorptiometry images for computerised bone mass assessment". Comput. Med. Imaging Graphics, 27(6), 459-467, 2003.
[18] N. Otsu, "A threshold selection method from gray-level histograms". IEEE Trans. Sys., Man, Cyber., 9, 62-66, 1979.
[19] N. Navsa, Skeletal morphology of the human hand as applied in forensic anthropology. Diss., University of Pretoria, South Africa, 2010.
[20] C. Xu, J. L. Prince, "Snakes, shapes, and gradient vector flow". IEEE Transactions on Image Processing, 7(3), 359-369, 1998.
[21] D. P. Huttenlocher, G. A. Klanderman, W. J. Rucklidge, "Comparing images using the Hausdorff distance". IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9), 850-863, 1993.
[22] T. F. Cootes, C. J. Taylor, A. Lanitis, "Active Shape Models: Evaluation of a multi-resolution method for improving image search". Proceedings of the British Machine Vision Conference, 327-336, 1994.

Multiple Sclerosis Lesion Segmentation Using Dictionary Learning and Sparse Coding

N. Weiss, A. Rao and D. Rueckert

Abstract—The segmentation of lesions in the brain during the development of Multiple Sclerosis is part of the diagnostic assessment for this disease and gives information on its current severity. This laborious process is still carried out manually or semi-automatically by physicians, because published automatic approaches have not been universal enough to be widely employed in clinical practice. Thus Multiple Sclerosis lesion segmentation remains an open problem. In this paper we present a new approach to this problem based on dictionary learning and sparse coding. We show its general applicability to lesion segmentation by evaluating our approach on synthetic image data and comparing it to state-of-the-art methods. Furthermore, the potential of dictionary learning and sparse coding for such segmentation tasks is investigated and various possibilities for further experiments are discussed.

I. INTRODUCTION

Multiple Sclerosis (MS) is an autoimmune demyelinating disease of the central nervous system (CNS). It is chronic, inflammatory and currently incurable [9]. The underlying cause of the spontaneous degeneration of the myelin and subsequently of the axons is still unknown, although different environmental, genetic and infectious factors seem to have an impact [3]. As lesions can appear at various locations within the brain, the symptoms vary across patients: for instance, numbness, weakness, visual impairment or loss of balance can emerge. Initial symptoms normally appear in young adulthood, and twice as many women as men are affected [1]. Even though MS does not shorten life expectancy significantly, patients suffer greatly from this unpredictable disease. Approved medications and therapies offer symptomatic treatment to decrease the severity, occurrence and duration of certain symptoms [9].

These treatments are evaluated by monitoring the progress of the disease during clinical trials. Magnetic resonance imaging (MRI) makes a major contribution to this evaluation, as it is very sensitive to most of the lesions appearing in the white matter (WM) of the brain [8]. Lesions are visible as hyperintense areas in T2-weighted (T2w) images and often as hypointense areas in T1-weighted (T1w) images. Counting these white matter lesions (WML) and determining their total lesion load (TLL) are key parts of describing the progression of MS and of the current diagnostic criteria for MS (McDonald criteria) [13]. In clinical practice the detection and segmentation of WML

N. Weiss, Medizinische Ingenieurwissenschaft, University of Luebeck; the work has been carried out at the Biomedical Image Analysis Group, Imperial College London (e-mail: nick.weiss@miw.uni-luebeck.de).

A. Rao is with Department of Computing, Imperial College London (e-mail: a.rao1@imperial.ac.uk).

D. Rueckert is with Department of Computing, Imperial College London (e-mail: d.rueckert@imperial.ac.uk).

is still done in a manual or semiautomatic fashion by most physicians. This is a time-consuming task that suffers from large intra- and inter-expert variability [10]. Thus an automatic approach is highly desirable to overcome these variabilities and to allow the analysis of a large number of images without the need for human interaction.

Over the last two decades several automatic methods have been proposed. Lladó et al. [11] and García-Lorenzo et al. [6] have recently reviewed these methods and concluded that MS lesion detection and segmentation remains an open problem, although some automatic methods showed promising results within small groups of patients. No automatic method has been widely employed in clinical practice, as they are too specific to deal with the heterogeneity of the texture and location of lesions and with differences in MRI image acquisition while retaining their performance.

The approaches can be classified into supervised and unsupervised methods. While supervised methods rely on atlases with previously segmented lesions, unsupervised methods do not require labeled data [6]. The latter often try to model the intensity distribution of the healthy brain tissues, namely WM, grey matter (GM) and cerebrospinal fluid (CSF). Voxels that cannot be explained by this model are called outliers and labeled as lesions. Van Leemput et al. [18] developed a well-known method based on such a model.

We present an approach that also segments lesions as outliers to the healthy brain tissue and can be classified as an unsupervised method. It introduces dictionary learning and sparse coding to the segmentation of MS lesions, which, to our knowledge, is new. Many areas of image processing have already benefited from this methodology [17], [4]. The principal idea is to learn a dictionary primarily from healthy brain tissue and then to sparsely reconstruct image patches using this dictionary. Image patches containing lesions have a higher reconstruction error, and thresholding this reconstruction error for every voxel then provides a lesion segmentation. In this paper we describe our new approach and evaluate it using synthetic brain images with a ground truth segmentation. Finally we compare the results with other methods and outline the potential of this new approach.

II. MATERIAL AND METHODS

As shown in Fig. 1, our method can be divided into three parts. The first part is preprocessing and includes intensity normalization and brain extraction for both T1w and T2w images. The second part focuses on learning a dictionary from patches extracted from the brain and on reconstructing these image patches using sparse coding. The patches are represented as lexicographically ordered vectors x1, ..., xm ∈ R^k with k = 27 for a three-dimensional image patch of size 3 × 3 × 3.
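The patch extraction and lexicographic vectorization described here can be sketched in a few lines of numpy (an illustrative stand-in; the authors' implementation uses MATLAB):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def extract_patches(volume, size=3):
    """Extract all overlapping size^3 patches and flatten each one to a
    lexicographically ordered vector, giving an (m, k) matrix with k = size**3."""
    windows = sliding_window_view(volume, (size, size, size))
    return windows.reshape(-1, size ** 3)

vol = np.arange(5 * 5 * 5, dtype=float).reshape(5, 5, 5)
X = extract_patches(vol)
print(X.shape)  # (27, 27): 27 patch positions, each a vector with k = 27
```

In practice only the patches lying inside the brain mask would be kept before dictionary learning.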

A dictionary D ∈ C = {D̃ ∈ R^(k×l) s.t. ∀j : ||dj||2 ≤ 1} with l atoms dj, the column vectors of D, is learned from these image patches by solving the optimization problem

min_{D ∈ C, αi} (1/m) Σ_{i=1}^{m} [ (1/2) ||xi − D αi||2² + λ1 ||αi||1 ]    (2)

Dictionary learning searches for a basis D that satisfies xi ≈ Dαi for most image patches. Although the image patches with lesions are included in this learning process, they do not impair the dictionary's ability to represent mainly the healthy brain image patches well. The reason for this is that less than one percent of the patches contain lesions.

The L1-norm constraint in (2) introduces sparsity into its solution, so that only a few atoms of the dictionary are used to represent an image patch [12]. It has been shown that learning a dictionary in this way leads to a very precise image reconstruction [4]. We then reconstruct the image patches in a second optimization step

αi* = argmin_{αi} (1/2) ||xi − D αi||2² + λ2 ||αi||1    (3)

Fig. 1. A brief overview of the presented method. (A) Preprocessing including intensity normalization and brain extraction. (B) Dictionary learning and sparse coding reconstruction of the image patches. (C) Mapping of the reconstruction error and finding a threshold for the final segmentation.

The last part shows how we obtain relative reconstruction error maps for T1w and T2w, combine them and derive the threshold for the final WML segmentation. In the remaining subsections we describe the synthetic image data, the evaluation and the implementation.

A. Preprocessing

Given an image I : Ω → Y with Ω ⊂ R³ and Y ⊂ Z, the intensity can be normalized by the transformation n : Y → Yn with Yn = {0, 1, ..., maxn}, resulting in a normalized image

In(p) = round( maxn · (I(p) − min) / (max − min) )    (1)

with p ∈ Ω, min = min{I(x) ∈ Y : x ∈ Ω} and max = max{I(x) ∈ Y : x ∈ Ω}. maxn is set to 4095 for the proposed approach. This preprocessing step is necessary as it simplifies finding suitable dictionary learning and sparse coding parameters for different image data.
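A minimal numpy sketch of this normalization step (illustrative, with maxn = 4095 as in the text):

```python
import numpy as np

def normalize(img, max_n=4095):
    """Map intensities linearly onto {0, 1, ..., max_n}."""
    lo, hi = img.min(), img.max()
    return np.round((img - lo) / (hi - lo) * max_n).astype(np.int64)

img = np.array([[10.0, 20.0], [30.0, 50.0]])
out = normalize(img)
print(out.min(), out.max())  # 0 4095
```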

Since we only want to segment brain tissue, a brain mask needs to be created that excludes the skull and all non-brain tissue. For the synthetic image data used in this paper a brain mask is provided along with the images. If this is not the case, a simple brain mask can be created using well-established methods such as the brain extraction tool (BET) [14].

B. Dictionary Learning and Sparse Coding

The m image patches lying within the brain mask (m ≈ 1.5 × 10⁶ for a voxel size of 1 mm³) are extracted, and we obtain a reconstruction error for each image patch depending on the sparsity constraint. With an appropriately chosen parameter λ2, the healthy, most common brain tissue can easily be reconstructed with a few atoms, while image patches containing lesions produce a higher error using the same number of atoms.
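The effect can be illustrated with a small numerical sketch. The ISTA loop below is a generic stand-in for the SPAMS lasso solver used by the authors, and the dictionary, patches and λ value are purely illustrative: a patch spanned by a few atoms is reconstructed with low relative error, while an arbitrary patch is not.

```python
import numpy as np

def sparse_code(x, D, lam, n_iter=200):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 with ISTA."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/L with L the Lipschitz constant
    for _ in range(n_iter):
        a = a - step * (D.T @ (D @ a - x))      # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft thresholding
    return a

def relative_error(x, D, lam):
    a = sparse_code(x, D, lam)
    return np.linalg.norm(x - D @ a) ** 2 / np.linalg.norm(x) ** 2

rng = np.random.default_rng(0)
D = rng.standard_normal((27, 5))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms, as in the set C
healthy = D @ np.array([2.0, 0.0, 0.0, -1.0, 0.0])   # lies in the span of two atoms
lesion = rng.standard_normal(27)                     # generic patch outside the span
print(relative_error(healthy, D, 0.01) < relative_error(lesion, D, 0.01))  # True
```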

C. Reconstruction Error and Thresholding

The relative reconstruction error of each image patch (4) can now be obtained and mapped at the position of the voxel centered within the patch. The result is an error map throughout the whole brain. We run the last steps for both the T1w and T2w images with different parameters λ2 and combine the error maps by multiplying them. Although the T2w error map is more sensitive to lesions, using the T1w error map was necessary because the T2w error map produced many false positive voxels in the area of the CSF. This was not the case in the T1w error map, so by combining them we could reduce the misclassified CSF (see Fig. 2).

Finally a threshold is applied to obtain the segmentation of the WML from the combined error map. It is found by determining a smooth density distribution of this error map and finding a characteristic point of this distribution that separates lesions from the healthy tissue. This characteristic point is defined once for the whole synthetic image data.
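A sketch of the map combination; the paper derives its cutoff from a characteristic point of a smoothed density of the combined map, for which the fixed percentile below is only a simple illustrative stand-in:

```python
import numpy as np

def combine_and_threshold(err_t1, err_t2, percentile=99.0):
    """Combine the two error maps by voxelwise multiplication and threshold."""
    combined = err_t1 * err_t2
    cutoff = np.percentile(combined, percentile)
    return combined > cutoff, combined

rng = np.random.default_rng(1)
err_t1, err_t2 = rng.random(1000), rng.random(1000)
mask, combined = combine_and_threshold(err_t1, err_t2)
print(mask.sum())  # 10: the top one percent of voxels is flagged
```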

D. Synthetic image data

As suggested by García-Lorenzo et al. [6], the freely available BrainWeb1 image data is a good first evaluation step to show a proof of concept and to test the method's robustness towards noise and intensity inhomogeneity [2]. Furthermore, it allows us to compare our results to others, as many authors evaluate their methods with this data.

1 http://www.bic.mni.mcgill.ca/brainweb/

It is possible to create T1w and T2w images with different levels of noise (1%, 3%, 5%, 7%, 9%) and intensity inhomogeneity (0%, 40%) in an online MRI simulator. Besides a healthy brain, three different lesion loads (mild, moderate, severe) can be simulated. The ground truth segmentation is available for all these WML, as is the brain segmentation. The images are simulated with an isotropic resolution of 1 mm³ and have a size of 181 × 217 × 181 voxels.

E. Evaluation

The most common validation measure in the context of segmentation is the Dice Similarity Coefficient (DSC)

DSC = 2·TP / (2·TP + FP + FN)

with the numbers of true positive (TP), false positive (FP) and false negative (FN) voxels. TP is the number of voxels on which our segmentation overlaps with the ground truth segmentation. FP is the number of voxels labeled as lesions in our segmentation without correspondence in the ground truth, and vice versa for FN. The DSC rewards a method both for detecting lesions and for rejecting healthy tissue.

F. Implementation

This concept has been implemented using MATLAB and the SPArse Modeling Software2 (SPAMS) [12]. The latter was used to solve the optimization problems (2) and (3). Different parameters have been tested for this method. A good result is obtained with an image patch size of 3 × 3 × 3 and a dictionary D with l = 100 atoms. The following sparsity constraints were also determined empirically: λ1 = 1000 (for T1w and T2w), λ2 = 2000 (for T1w) and λ2 = 4000 (for T2w). The whole segmentation took approximately 3 minutes on an Intel Core 2 Duo processor at 2.4 GHz with 4 GB of RAM.

2 http://spams-devel.gforge.inria.fr

III. Results and Discussion

As can be seen in Fig. 3, we tested our method on the synthetic BrainWeb image data with different levels of noise and intensity inhomogeneity. We added the results of four other unsupervised methods whose authors provided their results and which are considered state-of-the-art methods for segmenting MS lesions [18], [7], [5], [16].

Compared to the other approaches, our method achieved results with an overall competitive DSC. Bigger differences

will appear if the image data contains only a mild lesion load. At the lower noise levels (up to 5%) and independent of the intensity non-uniformity (0%, 40%), the results are significantly lower for the mild lesion load, by up to 35 percentage points (pps) compared to the best method by García-Lorenzo [7]. For the higher noise levels (7%, 9%) and the mild lesion load, our method produces better results than the other methods, which appear more sensitive to noise. Altogether, all methods show the intuitive dependency that fewer and smaller lesions are more difficult to detect and segment. Also, a constant number of falsely detected voxels has a higher impact on the DSC if the TLL is lower.

For the moderate case, our method again provided the best results for the higher noise levels and is always within reach of the best method at lower noise levels. The same holds for the severe lesion load case, although the differences at the low noise levels are slightly larger, up to 11 pps compared to the best method by Forbes [5].

Noticeable is the robustness of our approach towards noise throughout all experiments, even though there is no explicit preprocessing step dealing with it. The dictionary evidently adapts, as it is learned from noisy image patches. This is a clear advantage of our method compared to the others.
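The DSC used throughout this comparison can be computed directly from binary masks; an illustrative numpy version:

```python
import numpy as np

def dice(seg, gt):
    """Dice Similarity Coefficient, DSC = 2*TP / (2*TP + FP + FN)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()
    fp = np.logical_and(seg, ~gt).sum()
    fn = np.logical_and(~seg, gt).sum()
    return 2.0 * tp / (2.0 * tp + fp + fn)

seg = np.array([1, 1, 0, 0, 1])
gt  = np.array([1, 0, 0, 1, 1])
print(dice(seg, gt))  # 2*2 / (2*2 + 1 + 1) = 0.6666666666666666
```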

The evaluation with synthetic image data demonstrates a proof of concept and the method's robustness towards noise. It is a good first step for a new approach, but it is limited: we only have one phantom, and images with simulated lesions are much easier to segment [6]. Furthermore, we did not consider sequences like fluid attenuated inversion recovery (FLAIR), which is known to produce images well suited for lesion segmentation. Therefore, a good next step for further evaluation will be to test the method on clinical image data. For this purpose a good validation framework is provided by the MS lesion segmentation challenge, introduced as a workshop at MICCAI 2008 and still available [15]. To adjust our method to real clinical data we first need to figure out how to integrate additional MRI sequences and find the right parameters to achieve good results. Additionally, a good brain extraction is needed.

One idea for handling real clinical data is to learn separate dictionaries for WM, GM and CSF, learning each dictionary with all available intensity values from the different MRI sequences. That way we try to classify the whole brain by looking at the error after the reconstruction process: if a patch cannot be well represented by any dictionary, it is classified as an outlier or lesion. We could also combine all dictionaries into one and inspect the coefficients α. Furthermore, we could extend our method by learning another dictionary specifically for lesions. This way the approach becomes a supervised one, and we need to learn dictionaries from various atlases which provide a segmentation for WM, GM, CSF and lesions. To adapt the learned dictionaries to other image data, further preprocessing such as a more sophisticated intensity normalization will be needed to match the image properties to those of the atlases.

IV. Conclusions

Overall, we were able to show the general applicability of our method to the problem of MS lesion segmentation by evaluating our approach on synthetic image data and comparing it to state-of-the-art methods. The results were competitive and displayed the robustness of our method towards noise. Although MS lesion segmentation still remains an open problem, the potential of using dictionary learning and sparse coding for such segmentation tasks has been demonstrated, and various possibilities for further experiments were discussed.

References

[1] P. A. Calabresi. Diagnosis and management of multiple sclerosis. American Family Physician, 70(10):1935-1944, Nov. 2004.
[2] C. Cocosco, V. Kollokian, K. Kwan, and G. B. Pike. BrainWeb: Online Interface to a 3D MRI Simulated Brain Database. 1997.
[3] A. Compston and A. Coles. Multiple sclerosis. The Lancet, 372(9648):1502-1517, 2008.
[4] M. Elad. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, Jan. 2010.
[5] F. Forbes, S. Doyle, D. García-Lorenzo, C. Barillot, and M. Dojat. A Weighted Multi-Sequence Markov Model For Brain Lesion Segmentation. 9:225-232, 2010.
[6] D. García-Lorenzo, S. Francis, S. Narayanan, D. L. Arnold, and D. L. Collins. Review of automatic segmentation methods of multiple sclerosis white matter lesions on conventional magnetic resonance imaging. Medical Image Analysis, 17(1):1-18, Jan. 2013.
[7] D. García-Lorenzo, J. Lecoeur, D. L. Arnold, D. L. Collins, and C. Barillot. Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts. Medical Image Computing and Computer-Assisted Intervention, 12(Pt 2):584-591, 2009.
[8] Y. Ge. Multiple sclerosis: the role of MR imaging. AJNR. American Journal of Neuroradiology, 27(6):1165-1176, June 2006.
[9] M. Goldenberg. Multiple Sclerosis Review. Pharmacy and Therapeutics, 37(3):175, Mar. 2012.
[10] J. Grimaud, M. Lai, J. Thorpe, P. Adeleine, L. Wang, G. J. Barker, D. L. Plummer, P. S. Tofts, W. I. McDonald, and D. H. Miller. Quantification of MRI lesion load in multiple sclerosis: a comparison of three computer-assisted techniques. Magnetic Resonance Imaging, 14(5):495-505, 1996.
[11] X. Lladó, A. Oliver, M. Cabezas, J. Freixenet, J. C. Vilanova, A. Quiles, L. Valls, L. Ramió-Torrentà, and A. Rovira. Segmentation of multiple sclerosis lesions in brain MRI: A review of automated approaches. Information Sciences, 186(1):164-185, Mar. 2012.
[12] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. Pages 689-696, 2009.
[13] C. H. Polman, S. C. Reingold, B. Banwell, M. Clanet, J. A. Cohen, M. Filippi, K. Fujihara, E. Havrdova, M. Hutchinson, L. Kappos, F. D. Lublin, X. Montalban, P. O'Connor, M. Sandberg-Wollheim, A. J. Thompson, E. Waubant, B. Weinshenker, and J. S. Wolinsky. Diagnostic criteria for multiple sclerosis: 2010 revisions to the McDonald criteria. Annals of Neurology, 69(2):292-302, Feb. 2011.
[14] S. M. Smith. Fast robust automated brain extraction. Human Brain Mapping, 17(3):143-155, Nov. 2002.
[15] M. Styner, J. Lee, B. Chin, M. Chin, O. Commowick, H. Tran, S. Markovic-Plese, V. Jewells, and S. Warfield. 3D segmentation in the clinic: A grand challenge II: MS lesion segmentation. MIDAS Journal, pages 1-5, 2008.
[16] X. Tomas-Fernandez and S. K. Warfield. Population intensity outliers or a new model for brain WM abnormalities. Pages 1543-1546, 2012.
[17] I. Tosic and P. Frossard. Dictionary learning. IEEE Signal Processing Magazine, 28(2):27-38, 2011.
[18] K. Van Leemput, F. Maes, D. Vandermeulen, A. Colchester, and P. Suetens. Automated segmentation of multiple sclerosis lesions by model outlier detection. IEEE Transactions on Medical Imaging, 20(8):677-688, 2001.

Segmentation and Registration II

Lung Fissure Detection Using a Line Enhancing Filter

H. Wendland, T. Klinder, J. Ehrhardt and R. Wiemker

Abstract—Automatic segmentation of the lung lobes is an essential task for assessing the functionality of the lung and for accurate interventional planning. Correct detection of the interlobar fissures is a crucial first step for lobe segmentation. Numerous approaches have been proposed for this task, many based on an analysis of the Hessian matrix. While these filters are able to detect the fissures in most cases, they also tend to highlight other structures in the lung. In this paper a novel line enhancing filter is presented. This filter uses a two-dimensional model of a bright line on a dark intensity background to represent the fissure. For each voxel of the image volume, an asymmetric neighborhood is correlated with the predefined model for multiple angles. A comparison of this new approach to the widely used Hessian filter showed superior performance, with an AUC of 0.72 compared to 0.67 for the Hessian filter.

I. INTRODUCTION

The human lungs are anatomically and functionally divided into five subunits called lobes: the left lung consists of two lobes and the right lung of three. The lobes are physically separated by fissures, which are double layers of infolded visceral pleura.

Computed tomography (CT) is the modality of choice to obtain detailed three-dimensional images of the lung. The exact localization of the lobe borders in these image volumes is essential for accurate diagnosis, for evaluation of the functionality of the lung and for the planning of interventional procedures. It helps to determine the spread or progression of certain lung diseases and is thus mandatory for therapeutic monitoring.

In CT images, the fissures appear as thin bright lines of heightened attenuation. Nevertheless, even with modern CT scanners their width is only one or two voxels. In addition, different acquisition protocols, image noise and partial volume effects alter the appearance of the fissures. Furthermore, the fissures are highly variable in their form and often influenced by pathologies. All this makes the correct detection of the lobes a challenging and time-consuming task even for experienced radiologists. Hence, an efficient and reliable way to detect the physical boundaries of the lobes, the fissures, is needed.

Most existing fissure filters make use of the eigenvectors of the Hessian matrix, since the fissure is expected to be a three dimensional plane-like structure [1]. Other approaches

H. Wendland, Medizinische Ingenieurwissenschaft, University of Luebeck; the work has been carried out at the Institute of Medical Informatics, Univer­sity of Luebeck, Luebeck, Germany (e-mail: wendland@miw.uni-luebeck.de).

T. Klinder is with Philips Research Europe, Hamburg, Germany (e-mail: tobias.klinder@philips.com).

J. Ehrhardt is with the Institute of Medical Informatics, University of Luebeck, Luebeck, Germany (e-mail: ehrhardt@imi.uni-luebeck.de).

R. Wiemker is with Philips Research Europe, Hamburg, Germany (e-mail: rafael.wiemker@philips.com).

make use of the position of the fissures relative to the airways or the vascular system to calculate the most likely location of the fissure [2] [3] [4]. Supervised approaches have also been presented for fissure detection [5]. But despite all efforts, several limitations remain due to the stated challenges. Hessian-based filters tend to also highlight small structures like vessels. Indirect detection via the airway and vascular systems depends on the segmentation of the respective structures and is especially prone to fail for pathologies and variations where those structures are located near the fissure. Supervised approaches require a large set of ground truth annotations for learning.

In this paper a line enhancing filter (LE-filter) is presented, which searches for small line segments in two dimensional cross sectional cuts. To cope with the high variations of the fissures, different line orientations are tested, ultimately yielding a feature image in which the fissures are highlighted and other structures are suppressed.

The remainder of this paper is organized as follows. In Section II the Hessian filter is reviewed as a reference method. The proposed LE-filter is presented in Section III. A comparison and evaluation of both filters is given in Section IV. The conclusion is provided in Section V.

II. Reference Method

The Hessian filter, proposed by Wiemker et al. [1], searches for three dimensional planar structures using the second order derivatives:

        ( g_xx  g_xy  g_xz )
H(g) =  ( g_yx  g_yy  g_yz )
        ( g_zx  g_zy  g_zz )

where g_ab denotes the second-order partial derivative with respect to the respective directions a, b ∈ {x, y, z}.

For fissure detection, an eigenanalysis of the Hessian is performed. Since a fissure is expected to be a plane of high intensity on a low intensity background, the most significant eigenvalue λ0 is required to be negative. The other two eigenvalues should be close to zero for fissure voxels. Accordingly, the planeness P of an image voxel can be computed as follows:

illustration not visible in this excerpt

The overall fissureness of a voxel is calculated as the planeness multiplied by the probability of the voxel's gray value lying in the expected range of Hounsfield values for fissures:

illustration not visible in this excerpt


While the Hessian filter highlights fissures, it also responds to other structures of high curvature, resulting in a large number of spurious responses. Note that the derivatives operate on a defined neighborhood, and the Hessian is usually calculated on a smoothed image; thus the Hessian filter can be evaluated at different scales. However, the thin nature of the fissure limits the filter to fine scales, which are prone to image noise.
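To make the eigenanalysis concrete, the sketch below computes per-voxel Hessian eigenvalues with finite differences and a simple plate measure; note that this planeness formula is only an illustrative choice and not the exact expression from [1], which is not reproduced in this excerpt:

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel Hessian eigenvalues from second-order finite differences,
    sorted by magnitude (largest-magnitude eigenvalue first)."""
    grads = np.gradient(vol)
    H = np.empty(vol.shape + (3, 3))
    for i, g in enumerate(grads):
        second = np.gradient(g)
        for j in range(3):
            H[..., i, j] = second[j]
    ev = np.linalg.eigvalsh(H)                    # ascending eigenvalues
    order = np.argsort(-np.abs(ev), axis=-1)
    return np.take_along_axis(ev, order, axis=-1)

def planeness(vol):
    """Illustrative plate measure: large-magnitude negative lambda0 with a
    small lambda1 gives a response near 1; bright planes score highly."""
    l0, l1, _ = np.moveaxis(hessian_eigenvalues(vol), -1, 0)
    p = (np.abs(l0) - np.abs(l1)) / (np.abs(l0) + np.abs(l1) + 1e-12)
    return np.where(l0 < 0, p, 0.0)

vol = np.zeros((9, 9, 9))
vol[4] = 1.0                                      # a bright plane in a dark volume
p = planeness(vol)
print(p[4, 4, 4] > p[1, 1, 1])  # True: strong response on the plane only
```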

III. Line enhancing filter

In cross-sectional views, fissures appear as line-like structures. Radiologists, too, identify the fissures by searching for bright line segments in a given cut plane (sagittal, axial or coronal). Motivated by this, the LE-filter searches for small line segments in different orientations on a single image slice. For each voxel, the image data is correlated with a previously constructed model of the fissure. The fissure model is defined by two functions, one representing the appearance of the fissure and one responding to the dark background around the fissure, which arises from the lack of vascular structures along the fissure. In contrast to the Hessian filter, which searches for planes in three-dimensional space, the LE-filter is restricted to two dimensions due to computational cost: a pixel-wise line search has only one degree of freedom, rotation, whereas searching for a plane patch in three dimensions would be strikingly more expensive. The slice-wise line search, however, can be applied in all three cut planes, since in every plane the fissure is present as a line with similar characteristics.

The profile p(xW) of the fissure model is determined by its length l along the direction of the axis eL and its width w along the direction of the axis eW. An example of a profile is displayed in Fig. 1. Mathematically, the line profile is defined as follows:

illustration not visible in this excerpt

It is composed of two Gaussian functions with standard deviations σF for the fissure width and σB for the background gap, where A and B define the amplitudes of the respective Gaussians.

To correlate the fissure line model with the image volume, the linear dependence between each voxel's neighborhood and the defined model has to be computed:

rIM = σIM / (σII · σMM)

The correlation coefficient rIM is defined as the covariance σIM of the image and model divided by their respective standard deviations σII and σMM. To account for variable fissure orientations, different line directions up to 180° are tested. Fig. 2 illustrates the basic principle of the multiple testing. The model profile can be seen in the horizontal box. For the red center pixel the correlation of the line model with itself is calculated. The arrow points to the result of the correlation, the magnitude of the current hypothesis. For the next hypothesis the model profile is rotated, testing a neighborhood of different orientation. This means the asymmetric profile is tested against a rotated version of itself, leading to a lower correlation. The lowest response is reached when the profile patch is orthogonal to the model in the image. Of all calculated hypotheses, the maximum response is chosen, representing the magnitude of the correlation, or simply the fissureness of the current voxel. This multi-hypothesis testing ensures that even strongly curved fissures are detected. In addition to the maximum correlation between the model and the image patch, orientation information is also obtained.

illustration not visible in this excerpt

Fig. 2: Concept of the multiple hypothesis testing. The correlation of the line model with the neighborhood of the red center pixel is calculated for different angles.

The LE-filter can be applied for different neighborhoods, defined by the length and width of the model. The main advantage of the line enhancing filter over the Hessian filter is the asymmetric nature of the model. It allows a preference direction to be determined, taking the smeared nature of the small fissures into account. Thus, more signal along the fissure orientation can be collected while structures alongside the fissures are ignored, resulting in high sensitivity even when considering large neighborhoods. In contrast, the Hessian filter operates on a symmetric neighborhood, meaning that if it is to detect fissures at a larger scale, it will in most cases incorporate more background than fissure pixels, ultimately leading to inferior results.
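The rotated-model correlation can be sketched in 2D as follows; the Gaussian widths and amplitudes are illustrative values only, not the parameters used in the paper:

```python
import numpy as np

def line_model(size, theta, sigma_f=0.7, sigma_b=2.0, A=1.0, B=0.4):
    """2D model of a bright line on a dark background gap, rotated by theta.
    sigma_f/sigma_b and the amplitudes A, B are illustrative values only."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    d = -x * np.sin(theta) + y * np.cos(theta)   # signed distance to the line
    return A * np.exp(-d**2 / (2 * sigma_f**2)) - B * np.exp(-d**2 / (2 * sigma_b**2))

def fissureness(patch, n_angles=18):
    """Maximum Pearson correlation between the patch and the rotated line model,
    i.e. the best of the multiple orientation hypotheses."""
    best = -1.0
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        m = line_model(patch.shape[0], theta)
        r = np.corrcoef(patch.ravel(), m.ravel())[0, 1]
        best = max(best, r)
    return best

patch = line_model(9, 0.0)                       # a synthetic bright-line patch
print(fissureness(patch) > 0.99)  # True: the line patch matches the model
```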

[...]

Details

Pages
202
Year
2013
ISBN (eBook)
9783656381921
ISBN (Book)
9783656381938
File size
66.1 MB
Language
English
Catalog Number
v210732
Institution / College
University Lübeck – Medisert
Tags
Segmentation and Registration Biomedical Optics Micro- and Nanotechnology Imaging and Image Computing Biomedical Engineering

Author

  • T. M. Buzug et al. (Author)
