
Medical image analysis, diagnosis and surgery planning

Deep learning has the ability to learn complex structures from training data. However, deep learning is often too slow for interactive processing. We develop, analyze, and evaluate interactive deep learning segmentation methods for quantification and treatment response analysis in neuroimaging. Interactive speed is obtained by dividing the segmentation procedure into an offline pre-segmentation step and an online interactive loop in which the user adds constraints until a satisfactory result is obtained. The overarching aim is to allow detailed, correct diagnosis, as well as accurate and precise analysis of treatment response in neuroimaging, in particular in quantification of intracranial aneurysm remnants and growth of brain tumors (gliomas, WHO grades III and IV). See Figure 40.

Figure 40: Interactive deep learning segmentation for decision support in neuroradiology

41. Interactive Segmentation and Analysis of Medical Images

Filip Malmberg, Robin Strand, Ingela Nyström

Partner: Joel Kullberg, Håkan Ahlström, Dept. of Surgical Sciences, UU
Funding: TN-Faculty, UU

Period: 20110601–20170501

Abstract: Three-dimensional (3D) imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) are now routinely used in medicine. This has led to an ever-increasing flow of high-resolution, high-dimensional image data that needs to be qualitatively and quantitatively analyzed.

Typically, this analysis requires accurate segmentation of the image. At CBA, we have been developing powerful new methods for interactive image segmentation. In this project, we seek to employ these methods for segmentation of medical images, in collaboration with the Dept. of Surgical Sciences at the UU Hospital. A publicly available software tool for interactive segmentation, SmartPaint, can be downloaded from http://www.cb.uu.se/~filip/SmartPaint/. To date, this software has been downloaded more than 1100 times. During 2017, this software was adapted to fit within a workflow for analysis of very large cohorts.

See Figure 41.

42. Comparison of Articular Osteochondrosis in Domestic Pigs and Wild Boars by Image Processing

Robin Strand

Partner: Pernille Etterlin, Stina Ekman, Dept. of Biomedical Sciences and Veterinary Public Health, SLU;

Kristin Olstad, Dept. of Companion Animal Clinical Sciences, Norwegian University of Life Sciences;

Charles Ley, Dept. of Clinical Sciences, SLU

Funding: Gerhard Forsells stipendiestiftelse; TN-Faculty, UU
Period: 20150101–

Abstract: Articular osteochondrosis (OC) often develops in typical locations within joints, and the characterization of OC distribution in the pig tarsus is incomplete. Prevalence of OC is high in domestic pigs but is presumed to be low in wild boars. In this project, we develop methods based on image registration for 3D analysis of OC distribution. In 2017, a paper was published in the journal Veterinary Pathology. See Figure 42.

Figure 41: Interactive Segmentation and Analysis of Medical Images

Figure 42: Comparison of Articular Osteochondrosis in Domestic Pigs and Wild Boars by Image Processing

43. Methods for Combined MR and Radiation Therapy Equipment

Robin Strand

Partner: Anders Ahnesjö, David Tilly, Dept. of Immunology, Genetics and Pathology, UU; Samuel Fransson, Håkan Ahlström, Dept. of Surgical Sciences, Radiology, UU

Funding: Vinnova; Barncancerfonden; TN-Faculty, UU
Period: 20160601–

Abstract: UU and the UU Hospital are currently investing in image-guided radiotherapy. An important component in the strategy is a combined MR scanner and treatment unit, enabling MR imaging right before and during treatment, making it possible to adjust for internal motion. In this project, we develop methods for fast detection and quantification of motion for real-time adjustment of the radiation therapy in the combined MR scanner and treatment unit. See Figure 43.

44. Computerized Image Analysis for Ophthalmologic Applications

Filip Malmberg

Partner: Camilla Sandberg-Melin and Per Soderberg, Dept. of Neuroscience, UU
Funding: -
Period: 20150101–20171001

Abstract: Ophthalmology is the study of the anatomy, physiology, and diseases of the eye. Optical coherence tomography (OCT) is a non-invasive technique for generating 3D images of the retina of the eye, allowing ophthalmologists to visualize the different structures of the retina. To complement visual inspection, this project aims to develop image analysis methods for accurately measuring geometrical properties of the retina. These measurements help with early detection, diagnosis, and treatment guidance for a wide range of retinal diseases and conditions. During 2017, our work within this project was presented at the EVER congress (European Association for Vision and Eye Research).

Figure 43: Methods for Combined MR and Radiation Therapy Equipment

45. Image Processing for Virtual Design of Surgical Guides and Plates

Fredrik Nysjö, Pontus Olsson, Filip Malmberg, Ingrid Carlbom, Ingela Nyström

Partner: Andreas Thor, Dept. of Surgical Sciences, Oral and Maxillofacial Surgery, UU Hospital; Andres Rodriguez Lorenzo, Dept. of Surgical Sciences, Plastic Surgery, UU Hospital; Daniel Buchbinder, Icahn School of Medicine at Mount Sinai, New York, NY, USA

Funding: -
Period: 20150317–

Abstract: An important part of virtual planning for reconstructive surgery, such as cranio-maxillofacial (CMF) surgery, is the design of customized surgical tools and implants. In this project, we are looking into how distance transforms and constructive solid geometry can be used to generate 3D printable models of surgical guides and plates from segmented computed tomography (CT) images of a patient, and how the accuracy and precision of the modelling can be improved by using grayscale image information in combination with anti-aliased distance transforms. Another part of the project is to develop simple and interactive tools that allow a surgeon to create the models. We have implemented a set of design tools in our existing surgery planning system, HASP, and are currently testing them with surgeons. See Figure 44.
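The combination of distance transforms and constructive solid geometry mentioned above can be illustrated on signed distance fields, where union, intersection, and difference reduce to pointwise min/max operations. This is a toy sketch of the general technique, not the HASP implementation; the shapes and sizes are invented for illustration:

```python
import numpy as np

def sdf_sphere(grid, center, radius):
    """Signed distance to a sphere on a voxel grid (negative inside)."""
    return np.linalg.norm(grid - np.asarray(center, dtype=float), axis=-1) - radius

def csg_union(a, b):
    return np.minimum(a, b)

def csg_intersection(a, b):
    return np.maximum(a, b)

def csg_difference(a, b):
    """Shape a with shape b carved out."""
    return np.maximum(a, -b)

# Small 3D grid of voxel coordinates
coords = np.stack(np.meshgrid(*[np.arange(32)] * 3, indexing="ij"), axis=-1)
plate = sdf_sphere(coords, (16, 16, 16), 10)
hole = sdf_sphere(coords, (16, 16, 16), 4)

# A "plate with a screw hole": difference of the two distance fields
guide = csg_difference(plate, hole)
mask = guide < 0  # binary voxel model, e.g. for surface extraction and 3D printing
```

Because the operations act on grayscale (distance) values rather than binary masks, anti-aliased distance transforms plug directly into the same min/max machinery.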

Figure 44: Image Processing for Virtual Design of Surgical Guides and Plates

46. Skeleton-Based Vascular Segmentation at Interactive Speed

Kristína Lidayová, Ewert Bengtsson

Partner: Hans Frimmel, Division of Scientific Computing, UU; Örjan Smedby and Chunliang Wang, School of Technology and Health, KTH

Funding: VR grant to Örjan Smedby
Period: 201207–20170630

Abstract: Precise segmentation of vascular structures is crucial for studying the effect of stenosis on arterial blood flow. The goal of this project is to develop and evaluate vascular segmentation methods that are fast enough to permit interactive clinical use. The first part is the extraction of the centerline tree (skeleton) from the gray-scale CT image. This skeleton is then used as a seed region for a segmentation algorithm.

The algorithm consists of four levels, of which the first two levels detect healthy arteries of varying sizes, and the remaining two levels specialize in different types of vascular pathology: severe calcification and occlusion. An outline of the proposed algorithm is presented in Figure 45. The algorithm was published in Journal of Medical Imaging 4(2), 2017. During this year, focus has been on replacing the knowledge-based detection of vascular nodes by a convolutional neural network in the centerline tree detection algorithm.

The classifier itself yields a precision of 0.81 and a recall of 0.83 for medium-sized vessels, and, qualitatively, an enhanced representation of the vascular skeleton could be achieved. The algorithm was presented at the Medical Image Understanding and Analysis (MIUA) 2017 conference.
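The skeleton-as-seed idea can be sketched with a minimal intensity-thresholded region growing. This is a simplified illustration of the general principle, not the published four-level algorithm; the thresholds and the toy volume are invented:

```python
import numpy as np
from collections import deque

def grow_from_skeleton(image, skeleton, low, high):
    """Flood-fill all voxels 6-connected to a skeleton seed whose
    intensity lies in [low, high]."""
    segmented = np.zeros(image.shape, dtype=bool)
    queue = deque(zip(*np.nonzero(skeleton)))
    for seed in queue:
        segmented[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < image.shape[i] for i in range(3)):
                if not segmented[n] and low <= image[n] <= high:
                    segmented[n] = True
                    queue.append(n)
    return segmented

# Toy volume: a bright "vessel" along the z-axis in a dark background
vol = np.zeros((8, 8, 8))
vol[:, 4, 4] = 100
skel = np.zeros(vol.shape, dtype=bool)
skel[4, 4, 4] = True  # single centerline seed voxel
seg = grow_from_skeleton(vol, skel, 50, 200)
```

Seeding from the centerline ensures the grown region stays connected to the detected vessel tree, which is what makes the skeleton detection step the performance-critical part.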

Figure 45: Skeleton-Based Vascular Segmentation at Interactive Speed

47. Airway Tree Segmentation in Subjects with Acute Respiratory Distress Syndrome

Kristína Lidayová, Ewert Bengtsson

Partner: Hans Frimmel, Dept. of Information Technology, UU; Örjan Smedby, School of Technology and Health, KTH; Marcela Hernández Hoyos, Universidad de Los Andes, Bogotá, Colombia; Maciej Orkisz, University of Lyon, CREATIS, Lyon, France

Funding: VR grant to Örjan Smedby
Period: 201512–20170630

Abstract: Acute Respiratory Distress Syndrome (ARDS) is associated with a high mortality rate in intensive care units. Fast and accurate analysis of lung aeration on CT images may reduce the mortality rate in ARDS.

However, the accuracy of lung aeration analysis is hampered by two factors: the difficulty in delineating the outer boundary of the lungs (due to local lack of contrast), and the inclusion of internal structures not belonging to the parenchyma. To cope with both problems, an airway segmentation can be useful.

Our current method detects an approximate airway centerline tree and then applies the obtained intensity and distance information to restrict the region-growing segmentation and prevent it from leaking into the parenchyma. During 2017, the method was evaluated qualitatively on 70 thoracic CT images of subjects with ARDS, acquired at significantly different mechanical ventilation conditions. An indirect quantitative evaluation showed that the resulting segmentation contained important landmarks. These landmarks improve a registration-based segmentation of the lungs in difficult ARDS cases. The algorithm was presented at the Scandinavian Conference on Image Analysis (SCIA) 2017. See Figure 46.
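The leak-prevention idea, restricting region growing by both intensity and distance to the detected centerline, can be sketched as follows. This is a simplified illustration rather than the evaluated method; the threshold and radius values are hypothetical:

```python
import numpy as np
from scipy import ndimage

def restricted_airway_segmentation(image, centerline, air_threshold, max_radius):
    """Threshold the dark airway lumen, but keep only voxels that are
    both within max_radius of the centerline and connected to it, so the
    segmentation cannot leak into the (also dark) parenchyma."""
    # Distance of every voxel to the nearest centerline voxel
    dist = ndimage.distance_transform_edt(~centerline)
    candidate = (image <= air_threshold) & (dist <= max_radius)
    # Keep only connected components that touch the centerline
    labels, _ = ndimage.label(candidate)
    keep = np.unique(labels[centerline & candidate])
    keep = keep[keep > 0]
    return np.isin(labels, keep)

# Toy volume: a dark tube (airway) plus a disconnected dark blob elsewhere
vol = np.full((16, 16, 16), 200.0)
vol[:, 8, 8] = -900.0        # airway lumen along the z-axis
vol[2:5, 2:5, 2:5] = -900.0  # dark region far from the centerline
cl = np.zeros(vol.shape, dtype=bool)
cl[:, 8, 8] = True           # detected centerline
seg = restricted_airway_segmentation(vol, cl, -500.0, 2.0)
```

The distance constraint is what distinguishes this from plain thresholding: the dark blob is excluded even though its intensity qualifies.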

Figure 46: Airway Tree Segmentation in Subjects with Acute Respiratory Distress Syndrome

48. Coverage Model and its Application to High Precision Medical Image Processing

Nataša Sladoje, Joakim Lindblad

Partner: Buda Bajić, Slobodan Dražić, Faculty of Technical Sciences, University of Novi Sad, Serbia
Funding: Swedish Governmental Agency for Innovation Systems (VINNOVA); TN-Faculty, UU; Swedish Research Council

Period: 201409–

Abstract: The coverage model, which we have been developing for several years now, provides a framework for representing objects in digital images as spatial fuzzy sets. Membership values indicate to what extent image elements are covered by the imaged components. The model is useful for improving information extraction from digital images and reducing problems originating from limited spatial resolution. We have by now developed methods for estimation of a number of features of the coverage representation of shapes and demonstrated their increased precision and accuracy, compared to crisp representations. Our focus is also on the development of segmentation methods which result in coverage segmentation. During 2017, we prepared and submitted a journal publication on a coverage segmentation method based on energy minimization, which improves and generalizes our previously published results. The method is applicable to blurred and noisy images, and provides coverage segmentation at increased spatial resolution, while preserving thin fuzzy object boundaries. We have suggested a suitable global optimization scheme to address a challenging non-convex optimization problem. We have evaluated the method on several synthetic and real images, confirming its very good performance. See Figure 47.
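The core of the coverage representation, membership values as area fractions, can be illustrated with a toy example that estimates per-pixel coverage of a known disk by supersampling. This only demonstrates the representation itself; the project's methods estimate coverage from image data, not from known geometry:

```python
import numpy as np

def coverage_of_disk(size, center, radius, samples=16):
    """Per-pixel coverage of a disk on a size x size grid, estimated with
    samples x samples subpixel points per pixel. Values in (0, 1) occur
    only at the disk boundary."""
    cov = np.zeros((size, size))
    offs = (np.arange(samples) + 0.5) / samples  # subpixel offsets in (0, 1)
    for y in range(size):
        for x in range(size):
            sy = y + offs[:, None]
            sx = x + offs[None, :]
            inside = (sy - center[0]) ** 2 + (sx - center[1]) ** 2 <= radius ** 2
            cov[y, x] = inside.mean()  # fraction of the pixel covered
    return cov

cov = coverage_of_disk(16, (8.0, 8.0), 5.0)
# Summing coverage values estimates the true area pi * r^2 ~ 78.54,
# far more precisely than counting pixels of a crisp (binary) mask.
area_estimate = cov.sum()
```

The improved area estimate is the same effect that gives coverage-based feature estimates their increased precision and accuracy over crisp representations.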

Figure 47: Coverage Model and its Application to High Precision Medical Image Processing

49. HASP: Haptics-Assisted Surgery Planning

Ingrid Carlbom, Pontus Olsson, Fredrik Nysjö, Johan Nysjö, Ingela Nyström

Partner: Daniel Buchbinder, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Andreas Thor, Johanna Nilsson, Dept. of Surgical Sciences, Oral and Maxillofacial Surgery, UU Hospital; Andres Rodriguez Lorenzo, Dept. of Surgical Sciences, Plastic Surgery, UU Hospital

Funding: BIO-X 1.5M SEK; Thuréus Stiftelsen: 150 kSEK; TN-Faculty, Faculty of Medicine, UU Hospital

Period: 20150101–

Abstract: The goal of HASP, our haptics-assisted surgery planning system, is to put the planning process for complex head and neck surgery into the hands of the surgeon. During 2017, we continued the evaluation of HASP and the BoneSplit segmentation software both at the UU Hospital and at Mount Sinai Beth Israel in NYC. At the UU Hospital, a trauma study using HASP is ongoing and will be completed during 2018. We will evaluate the haptic model in HASP on CT data from a scanned plastic skull and ten retrospective cases. For the plastic skull, we are comparing accuracy and precision between users, whereas for the retrospective cases we are comparing precision only. At Mount Sinai Beth Israel, the focus has been validation of the accuracy of HASP with ten retrospective cases. Prospective cases are also being planned. For each case, we produce a neomandible from 3D-printed resin models of the mandible, cutting guides, fibula, and case-specific plates, which are cut and glued together. We will then compare a CT model of the reconstructed resin neomandible with the HASP neomandible, and verify their correspondence. We expect to complete the study during 2018. See Figure 48.

Figure 48: HASP: Haptics-Assisted Surgery Planning

50. Virtual Surgical Planning for Soft Tissue Resection and Reconstruction

Ludovic Blache, Filip Malmberg, Fredrik Nysjö, Ingela Nyström, Ingrid Carlbom

Partner: Andres Rodriguez Lorenzo, Dept. of Surgical Sciences, Plastic Surgery, UU Hospital; Andreas Thor, Dept. of Surgical Sciences, Oral and Maxillofacial Surgery, UU Hospital

Funding: TN-Faculty
Period: 20161010–

Abstract: With the increased use of 3D models and CAD technologies in the medical domain, virtual surgical planning is now frequently used. Most current solutions focus on bone surgical operations. However, for head and neck oncologic resection, soft tissue ablation and reconstruction are common operations. When the surgeon removes a tumor, the result is a defect in the face consisting of different tissue layers. Reconstructing this defect usually requires transplanting vascularized tissue from other parts of the body. In collaboration with the Dept. of Surgical Sciences at the UU Hospital, we aim at providing a virtual planning solution for such surgical operations. We have developed and implemented a modelling method to estimate the shape and dimensions of soft tissue resections. Our approach takes advantage of a simple sketch-based interface, which allows the user to paint the contour of the resection on a patient-specific 3D model reconstructed from a CT scan. The volume is then virtually cut and carved following this pattern to provide a 3D model of the resected volume. We then seek to develop a numerical model, based on the finite element method, to simulate the non-rigid behavior of the soft tissue flap during the reconstruction process. See Figure 49.

51. Statistical Considerations in Whole-Body MR Analyses

Eva Breznik, Robin Strand, Filip Malmberg

Partner: Joel Kullberg, Håkan Ahlström, Dept. of Surgical Sciences, UU
Funding: Centre for Interdisciplinary Mathematics, CIM, UU; TN-Faculty, UU
Period: 201609–

Abstract: In this project, the focus is on testing and developing methods for Imiomics, to facilitate utilization of whole-body MR images for medical purposes. For inference about activated areas present in the image, statistical tests are done on series of images at every voxel. This introduces accuracy and reliability problems when drawing conclusions regarding the images or multi-voxel areas as a whole, due to the large number of tests that are considered at the same time. The solution to this problem is a proper multiple testing correction method. Therefore, we need to test existing correction methods on our specific datasets and explore possibilities for new ones, specifically tailored to our problem. Results have been in part presented at SSBA 2017 in Linköping. See Figure 50.
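As a generic example of such a correction (not the method under evaluation in the project), the widely used Benjamini-Hochberg procedure controls the false discovery rate over a set of voxelwise p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses, controlling the
    false discovery rate at level alpha (Benjamini-Hochberg step-up)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k / m) * alpha ...
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        # ... and reject all hypotheses up to and including rank k
        rejected[order[: k + 1]] = True
    return rejected

# Toy voxelwise p-values: three strong signals among noise
pvals = np.array([0.001, 0.002, 0.003, 0.2, 0.4, 0.6, 0.8, 0.9])
sig = benjamini_hochberg(pvals, alpha=0.05)
```

Unlike a per-voxel threshold of 0.05, which would inflate false positives across hundreds of thousands of voxels, the step-up rule adapts the threshold to the number of tests.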

Figure 49: Virtual Surgical Planning for Soft Tissue Resection and Reconstruction

Figure 50: Statistical Considerations in Whole-Body MR Analyses

52. Abdominal Organ Segmentation

Eva Breznik, Robin Strand, Filip Malmberg

Partner: Joel Kullberg, Håkan Ahlström, Division of Radiology, Dept. of Surgical Sciences, UU
Funding: Centre for Interdisciplinary Mathematics, CIM, UU; TN-Faculty, UU

Period: 201706–

Abstract: We focus on improving the existing registration method for whole-body scans by including segmentation results as prior knowledge. Segmentation of the organs in the abdomen is a daunting task, as the organs vary greatly in their properties and size. A robust method to segment a number of them would not only be useful in a clinical setting, but could also help guide the registration method in those areas that are most challenging to register. In search of such a method, we apply convolutional neural networks and investigate various architectures, better sampling strategies, and possibilities of including prior knowledge in the process. Preliminary results on improvements achieved by integrating anatomical knowledge with a fully convolutional network (deepMedic) were presented at WiML in Long Beach. See Figure 51.

Figure 51: Abdominal Organ Segmentation

53. Calving Detection

Robin Strand

Partner: Dorothée Vallot, Rickard Pettersson, Dept. of Earth Sciences, UU; Sigit Adinugroho, MSc student, CBA; Penelope How, Institute of Geography, School of GeoSciences, University of Edinburgh, UK
Funding: TN-Faculty

Period: 20150101–

Abstract: Calving processes are an important unknown in glacier systems terminating in the ocean. In this project, we develop automatic image analysis methods for the analysis of calving fronts of glaciers monitored by time-lapse cameras. The methods are based on detecting changes in segmented calving fronts of glaciers. The area of the calving event is then computed based on the relative camera position.
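The change-detection step can be sketched as follows (a minimal illustration; in practice the per-pixel ground area is derived from the relative camera position, and the scale value used here is hypothetical):

```python
import numpy as np

def calving_area(front_before, front_after, m2_per_pixel):
    """Area (m^2) of ice present in the earlier segmented calving front
    but missing in the later one, i.e. the calved region."""
    lost = front_before & ~front_after
    return lost.sum() * m2_per_pixel

# Toy segmented calving fronts from two consecutive time-lapse frames
before = np.zeros((10, 10), dtype=bool)
before[:, :6] = True        # ice covers the left six columns
after = before.copy()
after[4:8, 4:6] = False     # a block of ice has calved off (8 pixels)

area = calving_area(before, after, m2_per_pixel=4.0)
```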