Annual Report 2019 Centre for Image Analysis

Centrum för bildanalys


Cover:

Illustrations from the five PhD theses presented at Centre for Image Analysis (CBA) during 2019.

Further information in Section 4.2.

Cover design:

Leslie Solorzano

Edited by:

Gunilla Borgefors, Filip Malmberg, Nadezhda Koriakina, Ingela Nyström, Ida-Maria Sintorn, Leslie Solorzano

Centre for Image Analysis, Uppsala, Sweden


Contents

1 Introduction 5
  1.1 General background 5
  1.2 CBA research 6
  1.3 How to contact CBA 7
2 Organisation 8
  2.1 Finances 9
  2.2 People 10
3 Undergraduate education 12
  3.1 Undergraduate courses 12
  3.2 Bachelor thesis 13
  3.3 Master theses 14
4 Graduate education 19
  4.1 Graduate courses 19
  4.2 Dissertations 21
5 Research 25
  5.1 Medical image analysis, diagnosis and surgery planning 25
  5.2 Microscopy: cell biology 31
  5.3 Microscopy: model organisms and tissues 37
  5.4 Mathematical and geometrical theory 42
  5.5 Humanities 47
  5.6 Cooperation partners 51
6 Publications 55
  6.1 Edited conference proceedings 56
  6.2 Journal articles 56
  6.3 Refereed conference proceedings 71
  6.4 Other 75
7 Activities 77
  7.1 Conference organisation 77
  7.2 Seminars held outside CBA 78
  7.3 Seminars at CBA 81
  7.4 Conference participation 85
  7.5 Non-refereed conference presentations 87
  7.6 Attended conferences 90
  7.7 Visiting scientists 94
  7.8 Visits to other research groups 94
  7.9 Committees 96


1 Introduction

The Centre for Image Analysis (CBA) conducts research and graduate education in computerised image analysis and perceptualisation. Our role is to develop theory in image processing as such, but also to develop better methods, algorithms and systems for various applications. We have found applications primarily in digital humanities, life sciences, and medicine. In addition to our own research, CBA contributes to image technology promotion and application in other research units and society, nationally as well as internationally.

1.1 General background

CBA was founded in 1988 and was until 2014 a collaboration between Uppsala University (UU) and the Swedish University of Agricultural Sciences (SLU). From an organisational point of view, CBA was an independent entity within our host universities until 2010. Under the auspices of the Disciplinary Domain of Science and Technology at Uppsala University, CBA is today hosted by the Department of Information Technology (IT), where we belong to one of the five divisions, namely the Division of Visual Information and Interaction (Vi2). The organisational matters are further outlined in Section 2.

A total of 38 persons within Vi2 were active in CBA research during 2019: 17 PhD students and 21 seniors (of which 3 are Professor Emeriti). Many of us have duties in addition to research, for example teaching, appointments within the Faculty, and leave for work outside academia, so the number 38 does not correspond to full-time equivalents. A complement to the CBA researchers are the 13 students who completed their Bachelor or Master thesis work with examination and/or supervision from CBA during 2019.

The number of staff in the corridor fluctuates over the year because we have world-class scientists visiting CBA and CBA researchers visiting their groups, for longer or shorter periods, as an important ingredient of our activities. A highlight during 2019 was that we hosted Guest Professor Fred Hamprecht from the Interdisciplinary Center for Scientific Computing, Heidelberg University, during his seven-month sabbatical. Professor Hamprecht organised a much appreciated PhD course on combinatorial problems in computer vision based on exact and approximate solvers, and their relevance in the age of deep learning. He also contributed to the TissUUmaps project with novel ideas on our image-based approaches for spatial transcriptomics, a collaboration continuing after his return to Germany.

A fruitful example is our collaboration with the Department of Surgical Sciences; Radiology, where two of our staff members work part time at the Uppsala University Hospital in order to be close to radiology researchers; they also have funding from there.

The activity level continues to be high: for example, there were as many as five PhD defenses during 2019, see illustrations from them on the cover page. Our PhD graduates can be regarded as our main product over the years. These young, well-educated persons will contribute to society and to academia for many years to come. Another way to measure our results is to acknowledge our 33 journal papers and 14 fully reviewed conference papers. These publications stem from a total of 42 ongoing research projects during 2019. Our projects involve as many as 70 international labs and companies and more than 20 national collaboration partners, not to forget all the local collaborations we have in Uppsala.

An outward-looking activity that is particularly noteworthy is the traditional annual national symposium organised by the Swedish Society for Automated Image Analysis (SSBA) since the late 1970s, which rotates between the active universities, with Chalmers hosting in March 2019. The symposium gathered about 100 participants from academia and also several companies; CBA – today Sweden's largest academic image analysis group – had 20 participants.

We are very active in international and national societies and are pleased that our leaders are recognised in these societies. Ingela Nyström continues in new positions of trust within the International Association of Pattern Recognition (IAPR) after having been a member of the IAPR Executive Committee during 2008–2018 (President 2014–2016). We are also closely involved in the Network of EUropean BioImage Analysis (NEUBIAS), where Nataša Sladoje and Carolina Wählby serve as members of the management committee.

Nationally, CBA has two board members in the Swedish Society for Automated Image Analysis (SSBA), Ida-Maria Sintorn as Chair and Robin Strand as Vice-Chair. Other examples of national committee appointments are that Carolina Wählby serves on the Steering Group of the National Microscopy Infrastructure (NMI) and Ingela Nyström continued as Vice-Chair of the Council for Research Infrastructure (RFI) within the Swedish Research Council.

During the last few years, we have been active at both national and local levels to establish biomedical image analysis and biomedical engineering as more well-supported strategic research areas. The UU Faculties of Science and Technology, Medicine, and Pharmacy have formed the new centre Medtech Science and Innovation together with the UU Hospital. We are looking forward to the increased funding and collaboration opportunities we expect to result from this new structure. For example, in December we arranged a joint workshop between Medtech and CBA to foster collaborations and grant applications. Our image analysis support for researchers within life science continues to develop within the national SciLifeLab facility for BioImage Informatics, with Carolina Wählby as director and Petter Ranefall as head. CBA has several elected members of learned societies. Ewert Bengtsson, Gunilla Borgefors, Christer Kiselman, and Carolina Wählby are elected members of the Royal Society of Sciences in Uppsala. Christer Kiselman is an elected member, and Ingela Nyström an elected member as well as a board member, of the Royal Society of Arts and Sciences of Uppsala. In addition, Ewert Bengtsson, Gunilla Borgefors, and Carolina Wählby are elected members of the Royal Swedish Academy of Engineering Sciences (IVA), and Christer Kiselman is an elected member of the Royal Swedish Academy of Sciences.

Researchers at CBA also serve on several journal editorial boards, scientific organisation boards, conference committees, and PhD dissertation committees. In addition, we take an active part in reviewing grant applications and scientific papers submitted to conferences and journals.

This annual report is available in printed form as well as on the CBA webpage, see http://www.cb.uu.se/annual_report/AR2019.pdf.

1.2 CBA research

The objective of CBA is to carry out research in computerised image analysis and perceptualisation.

We are pursuing this objective through a large number of research projects, ranging from fundamental mathematical methods development to application-tailored development and testing in, for example, biomedicine. We also have interdisciplinary collaboration with the humanities, mainly through our projects on handwritten text recognition. In addition, we develop methods for perceptualisation, combining computer graphics, haptics, and image processing. Some of our projects lead to entrepreneurial efforts, which we interpret as a strength of our research.

Our research is organised in many projects of varying size, ranging in effort from a few person months to several person years. There is a lot of interaction between different researchers; generally, a person is involved in several different projects in different constellations with internal and external partners. See Section 5 for details on and illustrations of all our research projects on the diverse topics.


1.3 How to contact CBA

CBA maintains a home page (http://www.cb.uu.se/). There you can find the CBA annual report series, in existence since 1993 (approximately 100 pages each), lists of all publications since CBA was founded in 1988, and other material. Note that our seminar series is open to anyone interested. Please join us on Mondays at 14:15. Staff members have their own homepages, which are found within the UU structure. On these, you can find some detailed course and project information, etc.

The Centre for Image Analysis (Centrum för bildanalys, CBA) can also be reached by visiting us in the corridor at Campus Polacksbacken or by sending mail.

Visiting address:

Lägerhyddsvägen 2
ITC, building 2, floor 1
Uppsala

Postal address:

Box 337

SE-751 05 Uppsala

SWEDEN


2 Organisation

In the early years, CBA was an independent entity belonging equally to Uppsala University (UU) and the Swedish University of Agricultural Sciences (SLU). However, multiple re-organisations at both universities eventually led to the current situation, where SLU is no longer involved. Since 2016, CBA is hosted by the Department of Information Technology in the Division of Visual Information and Interaction (Vi2). CBA remains Sweden's largest single academic group for image analysis, with a strong position nationally and internationally. This successful operation shows that centre formations in special cases are worth investing in and preserving long-term. Professor Ingela Nyström has been the Director of CBA since 2012.

The Board of the Disciplinary Domain of Science and Technology (TekNat) has established an instruction for CBA with a description of its objectives, mission, organisation, board, and the role of the director.

In 2019, the board consisted of the following distinguished members (in alphabetical order):

• Teo Asplund, Dept. of Information Technology (PhD student representative until 2019-06-30)

• Anders Brun, Dept. of Information Technology

• Joel Kullberg, Dept. of Surgical Sciences; Radiology

• Nikolai Piskunov, Dept. of Physics and Astronomy (Vice-chair)

• Robin Strand, Dept. of Information Technology (adjunct in his role as Head of Division)

• Håkan Wieslander, Dept. of Information Technology (PhD student representative from 2019-07-01)

• Carolina Wählby, Dept. of Information Technology (Chair)

• Maria Ågren, Dept. of History

The general research subject of CBA and its PhD subject is Computerised Image Processing, including both theory and applications. More specifically, our areas of particular strength are:

• Image analysis theory based on discrete mathematics

• Method development based on, e.g., machine learning in AI

• Digital humanities

• Quantitative microscopy

• Biomedical image analysis

• Visualization and haptics

As image analysis currently is finding widespread application in research in many fields as well as in society in general, we believe there is a need for a centre with a multi-disciplinary organisation. CBA offers a strong application profile based on equally strong roots in fundamental image analysis research, now reaching into the AI era. After 30 years, CBA has long experience and is more than ever at the research front.


2.1 Finances

After the re-organisation, where CBA became part of the Department of Information Technology, the CBA economy is no longer separate, but integrated in the activities as well as the organisation of the division.

Therefore, we do not report finances per se. However, from the Faculty of Science and Technology, we have a grant to CBA of 600 KSEK to be used for joint CBA initiatives. Examples are travel and accommodation for guest researchers, work with and printing of the annual report, maintaining the website, and a percentage to the Director of CBA.

Within UU, we have financial support from SciLifeLab, the Centre for Interdisciplinary Mathematics (CIM), and eSSENCE (a strategic research programme in e-Science), as well as strategic funds from the IT department as a supplement to the faculty funds that came to the research program Image analysis and human-computer interaction (so-called FFF). We note that the share of external funding is increasing year by year. The funding agencies are, for example, the Swedish Research Council (VR), the Swedish Foundation for Strategic Research (SSF), Sweden's Innovation Agency (VINNOVA), the European Research Council (ERC), and Riksbankens Jubileumsfond (RJ).

Even though CBA as a centre does not organise undergraduate and Master education, the Division Vi2 offers programmes and several courses on Image Analysis, Computer Graphics, and Scientific Visualization as well as Human-Computer Interaction themes. Most of us teach in these courses up to 20%, and some Associate Professors in fact teach more.


2.2 People

The following people were affiliated with CBA and employed by the Department of Information Technology during 2019. In addition, there are numerous collaborators at other departments and universities who are affiliated with CBA. Information about CBA alumni is available on request from the Director of CBA.

Amin Allalou, PhD, Researcher
Teo Asplund, Graduate Student
Ewert Bengtsson, Professor Emeritus
Karl Bengtsson Bernander, Graduate Student
Gunilla Borgefors, Professor Emerita
Eva Breznik, Graduate Student
Anders Brun, PhD, Researcher
Sukalpa Chanda, PhD, PostDoc
Ashis Kumar Dhara, PhD, PostDoc
Anindya Gupta, PhD, PostDoc
Ankit Gupta, Graduate Student
Anders Hast, Professor and Excellent Teacher
Raphaela Heil, Graduate Student
Christer O. Kiselman, Professor Emeritus
Anna Klemm, PhD, Bioinformatician
Nadezhda Koriakina, Graduate Student
Joakim Lindblad, PhD, Researcher
Filip Malmberg, Docent, Researcher
Damian Matuszewski, Graduate Student
Fredrik Nysjö, Graduate Student
Ingela Nyström, Professor, Director
Gabriele Partel, Graduate Student
Nicolas Pielawski, Graduate Student
Kalyan Ram Ayyalasomayajula, Graduate Student
Petter Ranefall, Docent, Bioinformatician
Stefan Seipel, Professor, UU and University of Gävle
Ida-Maria Sintorn, Docent, Associate Professor
Nataša Sladoje, Docent, Associate Professor
Leslie Solorzano, Graduate Student
Robin Strand, Professor, Head of Division
Amit Suveer, Graduate Student
Ekta Vats, PhD, PostDoc
Elisabeth Wetzer, Graduate Student
Håkan Wieslander, Graduate Student
Tomas Wilkinson, Graduate Student
Carolina Wählby, Professor
Hangqin Zhang, PhD, PostDoc
Johan Öfverstedt, Graduate Student

The e-mail address of the staff is Firstname.Lastname@it.uu.se


Docent Degrees from CBA

1. Lennart Thurfjell, 1999, UU
2. Ingela Nyström, 2002, UU
3. Lucia Ballerini, 2006, UU
4. Stina Svensson, 2007, SLU
5. Tomas Brandtberg, 2008, UU
6. Hans Frimmel, 2008, UU
7. Carolina Wählby, 2009, UU
8. Anders Hast, 2010, UU
9. Pasha Razifar, 2010, UU
10. Cris Luengo, 2011, SLU
11. Robin Strand, 2012, UU
12. Ida-Maria Sintorn, 2012, UU
13. Nataša Sladoje, 2015, UU
14. Petter Ranefall, 2016, UU
15. Filip Malmberg, 2017, UU

CBA people appointed Excellent Teachers

• Anders Hast, 2014, UU


3 Undergraduate education

CBA either supervises or reviews many Master and some Bachelor theses each year, as our subjects are useful in many different industries and for other academic research groups. They are also popular with the students. This year, we were reviewers for 14 theses, and for four of them we were also supervisors. Of the rest, seven were in co-operation with industry and two together with other research groups, both needing help with image analysis applications. CBA people are also responsible for or participate in many undergraduate courses, where subjects range from Image Analysis, Computer Graphics, Machine Learning, and Medical Informatics to Programming (course examiners in bold).


Figure 1: The number of Master theses from CBA 2001–2019.

3.1 Undergraduate courses

1. Computer Assisted Image Analysis II, 10p
Nataša Sladoje, Teo Asplund, Carolina Wählby, Robin Strand, Filip Malmberg, Joakim Lindblad, Anna Klemm, Fred Hamprecht, Elisabeth Wetzer, Johan Öfverstedt
Period: 20190101–20190331

2. User Interface Programming I, 5p
Raphaela Heil
Period: 20190121–20190324

3. Natural Computation Methods for Machine Learning, 10p
Teo Asplund
Period: 20190121–20190528

4. Computer Graphics, 10 hp
Teo Asplund, Anders Hast, Filip Malmberg, Fredrik Nysjö
Period: 20190325–20190609

5. Computer-Assisted Image Analysis I, 5 hp
Filip Malmberg, Joakim Lindblad, Nataša Sladoje, Robin Strand, Axel Andersson, Eva Breznik, Ankit Gupta, Nadezhda Koriakina, Nicolas Pielawski, Elisabeth Wetzer
Period: 20190902–20191220

6. Computer Programming I, 5p
Johan Öfverstedt
Period: 20190902–20191103

7. Maintenance Programming, 5p
Raphaela Heil
Period: 20190903–20191021

8. Medical Informatics, 5p
Ingela Nyström, Robin Strand
Period: 20190906–20191025

9. Project in Computational Science, 15p
Joakim Lindblad, Filip Malmberg, Nataša Sladoje
Period: 20191001–20200109

10. Advanced Software Design, 5p
Raphaela Heil
Period: 20191104–20200119

11. Bioinformatics for Masters Students: Getting to grips with gene expression, data mining and image analysis, 1p
Ida-Maria Sintorn
Period: 20191114

3.2 Bachelor thesis

1. Date: 20191114
Visual Lab Assistant: Using Augmented Reality To Aid Users In Laboratory Environments
Student: Ricardo Danza Madera
Supervisor: Simon Gollbo, Precisit AB, Uppsala
Reviewer: Ingela Nyström

Abstract: This thesis was inspired by the desire to make working in cluttered spaces easier. Laboratories are packed full of instruments and tools that scientists use to carry out experiments; this inevitably leads to an increased risk for human error as well as an often uncomfortable experience for the user. Protocols are used to carry out experiments and other processes where missteps will most likely spoil the entire experiment.

How could one improve the overall experience and effectiveness in such environments? That is when the idea of using Augmented Reality (AR) came to mind. The challenge was to be able to follow a protocol using AR. The application would require objects to be tracked in space while working and would need to recognize which state of the process the user was in. Using OpenCV for the Computer Vision aspect of the application and writing the software in C++, it was possible to create a successful proof-of-concept. The final result was an AR application that could track all the objects being used for the example protocol and successfully detect, and warn, when the user had made a mistake while creating a series of bacteria cultures. There is no doubt, therefore, that with more time and development, a more polished product is possible. The question that is left to answer, nonetheless, is whether such an application can pass a UX evaluation to determine its usability value for users in a professional environment.


3.3 Master theses

1. Date: 20190417

DICOM Second Generation RT: An Analysis of New Radiation Concepts by way of First-Second Generation Conversion

Student: David Holst

Supervisor: Stefan Páll Boman, RaySearch Laboratories AB
Reviewer: Robin Strand

Abstract: The current DICOM communication standard for radiotherapy is outdated and has serious design issues. A new version of the standard, known as DICOM 2nd generation for Radiotherapy, has been introduced and this thesis examines new concepts relating to radiation delivery. Firstly, some background into the practice of radiotherapy is given, as well as a description of the DICOM standard. Secondly, the thesis describes the design issues of the current standard and how the 2nd generation aims to solve these.

Thirdly, the thesis explores the conversion of a first generation C-Arm Photon/Electron treatment plan to the second generation RT Radiation and RT Radiation Set IODs. A converter is implemented based on a model proposed in a previous work. With some simplifications, the conversion of Basic Static and Arc treatment plans is shown to be successful. Conversion of further dynamic plan types is judged to be fairly simple to implement following the same methodology. The conversion model's efficacy and testability are discussed, and while the model is flexible and facilitates extension to further modalities, some areas of improvement are suggested. Lastly, a GUI for the converter is demonstrated and the possibilities of user interaction during conversion are discussed.

2. Date: 20190502

Improving a stereo-based visual odometry prototype with global optimization
Student: Felix Verpers

Supervisor: Jimmy Jonsson, Saab Dynamics
Reviewer: Nataša Sladoje

Abstract: In this degree project, global optimization methods for a previously developed software prototype of a stereo odometry system were studied. The existing software estimates the motion between stereo frames and builds up a map of selected stereo frames which accumulates increasing error over time. The aim of the project was to study methods to mitigate the error accumulated over time in the step-wise motion estimation.

One approach based on relative pose estimates and another approach based on reprojection optimization were implemented and evaluated for the existing platform. The results indicate that optimization based on relative keyframe estimates is promising for real-time usage. The second strategy based on reprojection of stereo triangulated points proved useful as a refinement step, but the relatively small error reduction comes at an increased computational cost. Therefore, this approach requires further improvements to become applicable in situations where corrections are needed in real time, and it is hard to justify the increased computations for the relatively small error reduction. The results also show that the global optimization primarily improves the absolute trajectory error.

3. Date: 20190702

A reliable method of tractography analysis: of DTI-data from anatomically and clinically difficult groups

Student: Johanna Blomstedt

Supervisor: Johanna Mårtensson, Dept. of Surgical Sciences, UU
Reviewer: Filip Malmberg

Abstract: MRI is used to produce images of tissue in the body. DTI, specifically, makes it possible to track the effects of nerves where they are in the brain. This project includes a shell script and a guide for using the FMRIB Software Library, followed by StarTrack and then Trackvis in order to track difficult areas in the brain. The focus is on the trigeminal nerve (CN V). The method can be used to compare nerves in the same patient, or as a comparison to a healthy brain.

4. Date: 20190702

General image classifier for fluorescence microscopy using transfer learning
Student: Håkan Öhrn

Supervisor: Håkan Wieslander
Reviewer: Carolina Wählby

Abstract: Modern microscopy and automation technologies enable experiments which can produce millions of images each day. The valuable information is often sparse, and requires clever methods to find useful data. In this thesis a general image classification tool for fluorescence microscopy images was developed using features extracted from a general Convolutional Neural Network (CNN) trained on natural images.

The user selects interesting regions in a microscopy image and then, through an iterative process using active learning, continually builds a training data set to train a classifier that finds similar regions in other images. The classifier uses conformal prediction to find samples that, if labeled, would most improve the learned model, as well as to specify the frequency of errors the classifier commits. The results show that with the appropriate choice of significance one can reach a high confidence in true positives. The active learning approach increased the precision, with the downside of finding fewer examples.
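The transfer-learning idea summarised above, using a CNN pretrained on natural images as a fixed feature extractor and training a lightweight classifier on top of the extracted features, can be sketched as follows. This is a minimal, generic illustration and not the thesis code; the ResNet backbone, the scikit-learn classifier, and the commented-out placeholder variables for labelled regions are assumptions made for the example.

import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# Pretrained CNN used as a fixed feature extractor (transfer learning).
backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep 512-D features
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(images):
    # images: list of HxWx3 uint8 arrays, e.g. cropped microscopy regions.
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        return backbone(batch).numpy()

# X_labeled / y_labeled would come from the user's interactively selected regions:
# clf = LogisticRegression(max_iter=1000).fit(extract_features(X_labeled), y_labeled)
# scores = clf.predict_proba(extract_features(X_unlabeled))  # rank regions in new images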

5. Date: 20190826

Indoor navigation using vision-based localization and augmented reality
Student: Tim Kulich

Supervisor: Arno Schoonbee, Knowit
Reviewer: Stefan Seipel

Abstract: Implementing an indoor navigation system requires alternative techniques to the GPS. One solution is vision-based localization, which takes advantage of visual landmarks and a camera to read the environment and determine positioning. Three computer vision algorithms used for pose estimation are tested and evaluated in this project in order to determine their viability in an indoor navigation system.

Two algorithms, SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), take advantage of the natural features in an image, whereas the third algorithm, ArUco, uses a manufactured marker. The evaluation displayed certain advantages for all solutions; however, with the goal of using it for a navigation system, ArUco was the superior solution as it performed well for key criteria, mainly computational performance and range of detection. An indoor navigation system for Android devices was developed using ArUco marker tracking for positioning and augmented reality for projecting the route. The application was able to successfully fulfill its goal of providing route guidance to a specific target location.
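For readers unfamiliar with the marker-based approach mentioned above, the classic OpenCV ArUco API (shipped with the opencv-contrib package) detects printed markers and estimates their pose relative to a calibrated camera. The snippet below is a generic sketch rather than the thesis implementation; the camera intrinsics, marker size, and the synthetically generated frame are placeholders.

import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from a prior camera calibration.
camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# Generate a synthetic view containing one marker (in practice: a live camera frame).
marker = cv2.aruco.drawMarker(dictionary, 23, 200)
frame = cv2.cvtColor(
    cv2.copyMakeBorder(marker, 50, 50, 50, 50, cv2.BORDER_CONSTANT, value=255),
    cv2.COLOR_GRAY2BGR)

corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    # Pose of each detected marker relative to the camera (marker side = 5 cm).
    rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.05, camera_matrix, dist_coeffs)[:2]
    for rvec, tvec in zip(rvecs, tvecs):
        print("marker position (m):", tvec.ravel())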

6. Date: 20190829

Image Based Flow Path Recognition for Chromatography Equipment
Student: Ankur Shukla

Supervisor: Petra Bängtsson, GE Healthcare
Reviewer: Robin Strand

Abstract: The advancement in the computer vision field with the help of deep learning methods is significant. The increase in computational resources has led researchers to develop solutions that could help them in achieving high accuracy in image segmentation tasks. We performed segmentation of different types of objects in the chromatography instruments used at GE Healthcare, Uppsala. In this thesis project, we investigated methods in computer vision and deep learning to segment out the different types of objects in instrument images. For a machine to automatically learn the features directly from instrument images, a deep convolutional neural network was implemented based on a recently developed existing architecture. The dataset was collected and preprocessed before using it with the neural network model. The model was trained with two different architectures, U-Net and SegNet, developed for image segmentation. Both architectures are efficient and suitable for semantic segmentation tasks. Among the different components to segment out in the instrument, there was a thin pipe. U-Net was able to achieve good results when segmenting thin pipes even with less data. Results show that U-Net can act as a suitable architecture for segmenting different objects in an instrument even with only 100 images. Further work can be done to improve the performance of the model by generating a better mask and finding a way to collect more data for training the model.
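The abstract above refers to U-Net, an encoder-decoder architecture whose skip connections pass high-resolution features from the encoder to the decoder. The deliberately tiny, two-level variant below illustrates that structure in PyTorch; it is a toy sketch under assumed channel sizes, not the network trained in the thesis.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # Toy two-level U-Net-style encoder-decoder with a single skip connection.
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # full-resolution features
        e2 = self.enc2(self.pool(e1))                         # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.head(d1)                                  # per-pixel class logits

model = TinyUNet()
logits = model(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])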

7. Date: 20190919

Semantic and Instance Segmentation of Room Features in Floor Plans using Mask R-CNN
Student: Fredrik Sandelin

Supervisor: Klas Sjöberg, Pythagoras AB
Reviewer: Filip Malmberg

Abstract: Machine learning techniques within Computer Vision are rapidly improving computers’ high-level understanding of images, thus revealing new opportunities to accomplish tasks that previously required manual intervention from humans. This paper aims to study where the current machine learning state-of-the-art is when it comes to parsing and segmenting bitmap images of floor plans. To assess the above, this paper evaluates one of the state-of-the-art models within instance segmentation, Mask R-CNN, on a size-limited and challenging floor plan dataset. The model handles detecting both objects and generating a high-quality segmentation map for each object, allowing for complete image segmentation using only a single network.

Additionally, in order to extend the dataset, synthetic data generation was explored, and results indicate that it aids the network with floor plan generalization. The network is evaluated on both semantic and instance segmentation metrics, and results show that the network yields good, almost completely segmented floor plans on smaller blueprints with little noise, while yielding decent but not completely segmented floor plans on large blueprints with a large amount of noise. Based on the results, and them being achieved from a limited dataset, Mask R-CNN shows that it has potential in both accuracy and robustness for floor plan segmentation, either as a standalone network or alternatively as part of a pipeline of several methods and techniques.
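As background to the abstract above, Mask R-CNN predicts a bounding box, class label, confidence score, and per-instance mask for every detected object. A minimal inference sketch with the off-the-shelf torchvision implementation (pretrained on COCO, not on floor plans) shows the type of output involved; the random input tensor stands in for a floor plan bitmap.

import torch
import torchvision

# Off-the-shelf Mask R-CNN pretrained on COCO; for illustration only,
# the thesis fine-tunes on an annotated floor plan dataset.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = torch.rand(3, 512, 512)          # placeholder for a floor plan bitmap
with torch.no_grad():
    output = model([image])[0]           # one dict per input image

# Each detected instance has a box, a label, a confidence score, and a soft mask.
keep = output["scores"] > 0.5
boxes = output["boxes"][keep]            # (N, 4) in pixel coordinates
masks = output["masks"][keep] > 0.5      # (N, 1, H, W) binary instance masks
print(f"{len(boxes)} instances above threshold")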

8. Date: 20190920

Automatic landmark identification in digital images of Drosophila wings for improved morphometric analysis

Student: Sebastian Bas Kanå
Supervisor: Joakim Lindblad
Reviewer: Nataša Sladoje

Abstract: A considerable number of morphometric studies which are performed nowadays with fly wing images require manual annotation of landmarks or key-points. This work is tedious and time consuming for researchers. This is why there is interest in automating the process and therefore several approaches for this purpose have been developed. The problem with these methods is that they are difficult to use and are usually specific to a particular imaging format and species. This project’s objective is to develop two methods, one based on classic image analysis techniques, and another one based on machine learning algorithms. A comparison is made to understand the strengths of each approach and find a solution that is general and easy to use. The first method (classic) uses domain knowledge to extract features and match template structures to determine landmark locations. Every parameter is fine-tuned manually and requires a long time to develop. Nevertheless, the results achieve human-level precision. The second method uses deep learning algorithms to train 30 neural networks which divide the image into regions and extract the coordinates of the landmarks directly. The results obtained for the machine learning approach are similar (approximately 10-pixel precision for 2448 x 2048 size images), with the advantage that it does not require any domain knowledge and can be reused for any kind of format and species. A solution that combines the strengths of both methods seems to be the best path to find a fully automatic algorithm.

9. Date: 20190925

Reduction of Metal Artefacts in CT Images Using the Generative Adversarial Network CycleGAN
Student: Elin Smevige

Supervisor: Sebastian Andersson, RaySearch Laboratories AB
Reviewer: Nataša Sladoje

Abstract: Techniques for metal artefact reduction (MAR) in CT images have been researched and developed for the last 40 years, with recent successes in the domain being attributed to the incorporation of deep learning methods. Previous MAR studies have relied on annotated data, but due to a lack of annotated data in the medical imaging domain, exploration of unsupervised learning methods, which train on unannotated data, motivated this work. The CycleGAN model, a member of the generative adversarial network family, is an unsupervised learning method which performs image-to-image translation from a source to a target domain.

In this work, the CycleGAN model is trained on data where CT images without metal artefacts compose the target domain and CT images with metal artefacts compose the source domain. The datasets consist of clinical CT images, synthesised metal artefact images, and a combination of the two datasets. The CycleGAN model is evaluated against an existing MAR technique, CNN-MAR, using the metrics Kernel Inception Distance, Structural Similarity Index (SSIM), and a four-component, gradient-based structural similarity index (4-G-SSIM). Additionally, the clinical acceptability of the models is assessed by identifying percentage changes in the dose calculation algorithm using the RayStation software. The CNN-MAR model outperforms the CycleGAN model for all metrics, on all datasets. Though the results of the CycleGAN performances are discouraging, further exploration of unsupervised learning methods in the MAR domain is highly encouraged.
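One of the metrics used in the evaluation above, the Structural Similarity Index (SSIM), can be reproduced with standard tooling. The snippet below is a generic sketch, not the thesis evaluation code; the synthetic arrays merely stand in for a reference CT slice and an artefact-reduced slice.

import numpy as np
from skimage.metrics import structural_similarity

# Synthetic stand-ins for a reference CT slice and a MAR-corrected slice.
rng = np.random.default_rng(0)
reference = rng.normal(size=(256, 256))
corrected = reference + 0.1 * rng.normal(size=(256, 256))   # mildly degraded copy

# SSIM over the dynamic range of the reference; higher is better (maximum 1.0).
score, ssim_map = structural_similarity(
    reference, corrected,
    data_range=reference.max() - reference.min(),
    full=True)
print(f"SSIM = {score:.3f}")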

10. Date: 20191010


OCR on Certificates
Student: Anton Sjöberg

Supervisor: Albin Lundberg, Nova Cura
Reviewer: Petter Ranefall

Abstract: The goal of this master thesis was to create a solution which puts relevant data from scanned images of certificates into the correct database, using Optical Character Recognition (OCR). A proof of concept was made, in order to see how well this process can be automated.

The solution is located in the system Nova Cura Flow by the company Nova Cura. The solution takes certificates that have been manually scanned, retrieves the relevant data and puts it in the correct database. All the recognition is performed in a custom connector, written in C# as a .NET Framework solution. First, the custom connector performs some image processing on the image, using the open source software ImageMagick among some other solutions. Some of these image processes are blurring and sharpening the image in an attempt at reducing noise. The processed image is then sent to the OCR part, which is performed by the open source software Tesseract. Here, all words in the document are retrieved, and the relevant words and the corresponding values are then extracted. This information will be the output of the custom connector. This output is then parsed through, and if the recognition for the certificate was successful, the values are put into the correct database.

The solution works as a proof of concept of the fact that this process can be automated. Image quality, as well as how good the actual recognition is, are key factors in creating a good solution for this endeavor.
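The pipeline described above, light image clean-up followed by Tesseract OCR and keyword extraction, is implemented in C# in the thesis. A rough Python sketch with Pillow and pytesseract conveys the same idea; the preprocessing choices, keywords, and file name below are placeholders, not the thesis implementation.

from PIL import Image, ImageFilter
import pytesseract

# Load a scanned certificate and apply simple noise-reduction preprocessing,
# roughly mirroring the blur/sharpen steps described in the abstract.
scan = Image.open("certificate.png").convert("L")       # greyscale
scan = scan.filter(ImageFilter.MedianFilter(size=3))    # suppress salt-and-pepper noise
scan = scan.filter(ImageFilter.SHARPEN)

# Run OCR and pull out lines containing fields of interest (placeholder keywords).
text = pytesseract.image_to_string(scan)
for line in text.splitlines():
    if any(key in line.lower() for key in ("name", "date", "certificate no")):
        print(line)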

11. Date: 20191029

Structural representation models for multi-modal image registration in biomedical applications
Student: Jo Gay

Supervisor: Joakim Lindblad, Johan Öfverstedt
Reviewer: Nataša Sladoje

Abstract: In clinical applications it is often beneficial to use multiple imaging technologies to obtain information about different biomedical aspects of the subject under investigation, and to make best use of such sets of images they need to first be registered or aligned. Registration of multi-modal images is a challenging task and is currently the topic of much research, with new methods being published frequently.

Structural representation models extract underlying features such as edges from images, distilling them into a common format that can be easily compared across different image modalities. This study compares the performance of two recent structural representation models on the task of aligning multi-modal biomedical images, specifically Second Harmonic Generation and Two Photon Excitation Fluorescence Microscopy images collected from skin samples. Performance is also evaluated on Bright field Microscopy images.

The two models evaluated here are PCANet-based Structural Representations (PSR, Zhu et al. (2018)) and Discriminative Local Derivative Patterns (dLDP, Jiang et al. (2017)). Mutual Information is used to provide a baseline for comparison. Although dLDP in particular gave promising results, worthy of further investigation, neither method outperformed the classic Mutual Information approach, as demonstrated in a series of experiments to register these particularly diverse modalities.
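Mutual Information, the baseline similarity measure in the comparison above, can be estimated from a joint grey-level histogram of the two images. The following is a small, generic sketch (not the thesis code) for two pre-aligned single-channel images of equal size; the bin count and test images are arbitrary.

import numpy as np

def mutual_information(img_a, img_b, bins=32):
    # Mutual information (in nats) between two equally sized single-channel images.
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()          # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image B
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Higher MI indicates a better match; an image compared with itself scores highest.
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(mutual_information(a, a), mutual_information(a, b))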

12. Date: 20191113

Simplifying stereo camera calibration using M-arrays
Student: Sebastian Grans

Supervisor: Søren K. S. Gregersen, Technical University of Denmark
Reviewer: Stefan Seipel

Abstract: Digitization of objects in three dimensions, also known as 3D scanning, is becoming increasingly prevalent in all types of fields, ranging from manufacturing and medicine to cultural heritage preservation. Many 3D scanning methods rely on cameras to recover depth information, and the accuracy of the resulting 3D scan is therefore dependent on their calibration. The calibration process is, for the end-user, relatively cumbersome due to how the popular computer vision libraries have chosen to implement calibration target detection. In this thesis, we have therefore focused on developing and implementing a new type of calibration target to simplify the calibration process for the end-user. The calibration board that was designed is based on colored circular calibration points which form an M-array, where each local neighborhood uniquely encodes the coordinates. This allows the board to be decoded despite being occluded or partially out of frame. This contrasts with the calibration board implemented in most software libraries and toolboxes, which consists of a standard black and white checkered calibration board that does not allow partial views. Our board was assessed by calibrating single cameras and high-FOV cameras and comparing it to regular calibration. A structured light 3D scanning stereo setup was also calibrated, which was used to scan and reconstruct calibrated artifacts. The reconstructions could then be compared to the real artifacts. In all experiments, we have been able to provide similar results to the checkerboard, while also being subjectively easier to use due to the support for partial observation. We have also discussed potential methods to further improve our target in terms of accuracy and ease of use.
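For context, the standard checkerboard calibration that the M-array board is compared against typically follows the usual OpenCV recipe: detect the board's inner corners in several views, then fit the camera model. The sketch below is generic (not the thesis code); the board size and file pattern are placeholders, and note that, unlike the M-array target, the whole checkerboard must be visible in each view.

import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coordinates

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_*.png"):              # several views of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:                                      # the whole board must be detected
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Fit intrinsics and distortion coefficients from all detected views.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)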

13. Date: 20191113

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI
Student: Isabelle Enlund Åström

Supervisor: Ashis Kumar Dhara
Reviewer: Robin Strand

Abstract: Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCN) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN to focus on relevant features to improve segmentation results. Channel and spatial attention combines both the spatial context as well as the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules and was named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.


4 Graduate education

We offer several PhD courses each year, both for our own students and for others needing image analysis tools. We also participate as guest lecturers in other PhD courses. Five PhD students successfully defended their theses this year: two in digital humanities, two in quantitative microscopy, and one in mathematical morphology theory. Two of the opponents came from Finland, one from France, one from Spain, and one from the USA.


Figure 2: The number of new PhDs (orange) and Docents (brown) at CBA. CBA was founded in 1988, but did not obtain PhD examination rights in Image Analysis/Processing until 1993. The diagram shows all PhD and Docent degrees awarded by CBA so far: a total of 69 PhDs and 15 Docents.

4.1 Graduate courses

1. Classical & Modern Papers in Image Analysis, (up to) 10 hp
Examiner: Nataša Sladoje (2015–)
Period: 20110916–

Description: Presentations and discussions of classical or modern papers in image processing. The course is given continuously and organised at CBA. Participants are PhD students at CBA.

2. Deep Learning, 7.5 hp
Examiner: Joakim Lindblad
Period: 20180928–20190630

Description: Seminar style course on the fundamentals of Deep Learning. Participants are presenting chapters of the book, selected texts, or project works.

The course is given continuously and organised at CBA. Participants are mainly PhD students at CBA.

Main course literature: “Deep Learning”, Goodfellow, Bengio, Courville, 2016.


3. Elements of Deep Learning, 7.5 hp
Examiner: Joakim Lindblad

Period: 20190901–

Description: Seminar and project style course on advanced concepts of Deep Learning. Participants are presenting selected topics and performing project work in groups.

The course is given continuously and organised at CBA. Participants are mainly PhD students at CBA. The course continues in 2020.

4. Advanced Electron Microscopy, 5 hp
Guest Lecturer: Ida-Maria Sintorn
Period: 20190212–0309

Description: The course provided a general introduction to scanning and transmission electron microscopy.

Lectures and labs were dedicated to special electron microscopy and focused ion beam techniques. Lecturers from the Information Technology Center, the Ångström laboratory, the Biomedical Center, the Geocentrum, the Swedish University of Agricultural Sciences, and Stockholm University contributed to this course. The course was interdisciplinary, open to participants from all fields where electron microscopy is used.

5. Research Methodology for Information Technology (ITFM), 5 hp
Lecturers: Gunilla Borgefors, Ingela Nyström
Guest Lecturers: Jonas Petersson, Ulrika Haak, Librarians at the Uppsala University Library
Examiner: Ingela Nyström
Period: 20190322–0516

Description: The goal is to give general and useful knowledge about how to become a good and published researcher in information technology and/or various applications thereof. The first part consists of five traditional lectures on being a PhD student, practical ethics, using library resources, scientific writing, where to publish, and presenting your work orally and by poster. The second part consists of work by the participants, some in co-operation with their supervisors.

Comment: 12 CBA PhD students completed the course.

6. Combinatorial Problems in Computer Vision, 5 hp
Lecturer: Guest Professor Fred Hamprecht, University of Heidelberg, Germany
Examiner: Carolina Wählby

Period: 20190425–0620

Description: Perennial computer vision problems including instance segmentation, tracking and image partitioning entail combinatorial optimization problems. This course formulates important computer vision problems in terms of combinatorial optimization, analyses algorithms that give exact and approximate solutions, establishes their relevance in the age of deep learning, and surveys recent literature in the field.

Comment: A dozen CBA PhD students followed the course of which nine completed it.


4.2 Dissertations

The following five PhD students successfully defended their theses in the PhD subject Computerised Image Processing during 2019.

1. Date: 20190508

Learning based Segmentation and Generation Methods for Handwritten Document Images
Student: Kalyan Ram Ayyalasomayajula

Supervisor: Anders Brun

Assistant Supervisors: Ewert Bengtsson, Filip Malmberg

Opponent: Professor Nicholas Howe, Dept. of Computer Science, Smith College, MA, USA
Committee:

(1) Dr Gunnar Farnebäck, ContextVision AB, Linköping
(2) Docent Ola Friman, SICK IVP AB, Linköping

(3) Professor Anders Heyden, Centre for Mathematical Sciences, Lund University
(4) Docent Carl Olsson, Centre for Mathematical Sciences, Lund University

(5) Docent Josephine Sullivan, School of Electrical Engineering and Computer Science, KTH, Stockholm
Chair: Ingela Nyström

Publisher: Acta Universitatis Upsaliensis, ISBN: 978-91-513-0599-8

DiVA: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1297042&dswid=8218
Comment: This defense was held at Uppsala University Library Carolina Rediviva.

Abstract:

Computerized analysis of handwritten documents is an active research area in image analysis and computer vision. The goal is to create tools that can be available for use at university libraries and for researchers in the humanities. Working with large collections of handwritten documents is very time consuming, and many old books and letters remain unread for centuries. Efficient computerized methods could help researchers in history, philology and computational linguistics to cost-effectively conduct a whole new type of research based on large collections of documents. The thesis makes a contribution to this area through the development of methods based on machine learning. The passage of time degrades historical documents. Humidity, stains, heat, mold and natural aging of the materials for hundreds of years make the documents increasingly difficult to interpret. The first half of the dissertation is therefore focused on cleaning the visual information in these documents by image segmentation methods based on energy minimization and machine learning.

However, machine learning algorithms learn by imitating what is expected of them. One prerequisite for these methods to work is that ground truth is available. This causes a problem for historical documents because there is a shortage of experts who can help to interpret them. The second part of the thesis is therefore about automatically creating synthetic documents that are similar to handwritten historical documents. Because they are generated from a known text, their ground truth is given. The visual content of the generated historical documents includes variation in the writing style and also imitates degradation factors to make the images realistic. When machine learning methods are trained on synthetic images of handwritten text with a known ground truth, they can in many cases give an even better result for real historical documents.

2. Date: 20190604

Learning-based Word Search and Visualization for Historical Manuscript Images
Student: Tomas Wilkinson

Supervisor: Anders Brun

Assistant Supervisors: Ewert Bengtsson, Anders Hast

Opponent: Professor Josep Lladós, Computer Vision Center, Universitat Autònoma de Barcelona, Spain
Committee:

(1) Professor Jörg Tiedemann, Dept. of Digital Humanities, University of Helsinki
(2) Docent Jörgen Ahlberg, Dept. of Electrical Engineering, Linköping University

(3) Associate Professor Fredrik Heintz, Dept. of Computer and Information Science, Linköping University
(4) Dr Gunnar Farnebäck, ContextVision AB, Stockholm (substitute member)

Chair: Robin Strand

Publisher: Acta Universitatis Upsaliensis, ISBN: 978-91-513-0633-9

DiVA: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1303103&dswid=-8732
Comment: This defense was held at Uppsala University Library Carolina Rediviva.


Abstract:

Today, work with historical manuscripts is nearly exclusively done manually, by researchers in the humanities as well as laypeople mapping out their personal genealogy. This is a highly time consuming endeavour, as it is not uncommon to spend months with the same volume of a few hundred pages. The last few decades have seen an ongoing effort to digitise manuscripts, both for preservation purposes and to increase accessibility. This has the added effect of enabling the use of methods and algorithms from Image Analysis and Machine Learning that have great potential in both making existing work more efficient and creating new methodologies for manuscript-based research.

The first part of this thesis focuses on Word Spotting, the task of searching for a given text query in a manuscript collection. This can be broken down into two tasks, detecting where the words are located on the page, and then ranking the words according to their similarity to a search query. We propose Deep Learning models to do both, separately and then simultaneously, and successfully search through a large manuscript collection consisting of over a hundred thousand pages.

A limiting factor in applying learning-based methods to historical manuscript images is the cost, and therefore lack, of annotated data needed to train machine learning models. We propose several ways to mitigate this problem, including generating synthetic data, augmenting existing data to get better value from it, and learning from pre-existing, partially annotated data that was previously unusable.

In the second part, a method for visualising manuscript collections called the Image-based Word Cloud is proposed. Much like its text-based counterpart, it arranges the most representative words in a collection into a cloud, where the size of a word is proportional to its frequency of occurrence. This grants a user a single-image overview of a manuscript collection, regardless of its size. We further propose a way to estimate a manuscript's production date. This can grant historians context that is crucial for correctly interpreting the contents of a manuscript.

3. Date: 20190605

Methods for Processing and Analysis of Biomedical TEM Images
Student: Amit Suveer

Supervisor: Ida-Maria Sintorn

Assistant Supervisors: Nataša Sladoje, Carolina Wählby

Opponent: Professor Jari Hyttinen, BioMediTech, Tampere University of Technology, Finland
Committee:

(1) Assoc. Professor Vedrana Andersen Dahl, Dept. of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
(2) Professor Fred A. Hamprecht, Dept. of Physics and Astronomy, Heidelberg University, Germany
(3) Associate Professor Sabine Leh, Dept. of Clinical Medicine, University of Bergen, Norway
(4) Professor Klaus Leifer, Dept. of Engineering Sciences, UU
(5) Assoc. Professor Kevin Smith, School of Electrical Engineering and Computer Science, KTH, Stockholm

Chair: Ingela Nyström

Publisher: Acta Universitatis Upsaliensis, ISBN: 978-91-513-0653-7

DiVA: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1305491&dswid=-9024
Abstract:
Transmission Electron Microscopy (TEM) has high resolving capability and high clinical significance; however, the current manual diagnostic procedure using TEM is complicated and time-consuming, requiring rarely available expertise for analyzing TEM images of the biological specimen. This thesis addresses the bottlenecks of TEM-based analysis by proposing image analysis methods to automate and improve critical time-consuming steps of currently manual diagnostic procedures. The automation is demonstrated on the computer-assisted diagnosis of Primary Ciliary Dyskinesia (PCD), a genetic condition for which TEM analysis is considered the gold standard.

The methods proposed for the automated workflow mimic the manual procedure performed by the pathologists to detect objects of interest – diagnostically relevant cilia instances – followed by a computational step to combine information from multiple detected objects to enhance the important structural details. The workflow includes an approach for efficient search through a sample to identify objects and locate areas with a high density of objects of interest in low-resolution images, to perform high-resolution imaging of the identified areas. Subsequently, high-quality objects in high-resolution images are detected, processed, and the extracted information is combined to enhance structural details.

This thesis also addresses the challenges typical for TEM imaging, such as sample drift and deformation, or damage due to high electron dose for long exposure times. Two alternative paths are investigated: (i) different strategies combining short exposure imaging with suitable denoising techniques, including conventional approaches and a proposed deep learning based method, are explored; (ii) conventional interpolation approaches and a proposed deep learning based method are analyzed for super-resolution reconstruction using a single image. For both explored directions, in the best case scenario, the processing is nearly 20 times faster compared to the acquisition time for a single long exposure, high illumination image. Moreover, the reconstruction approach (ii) requires nearly 16 times less data (storage space) and overcomes the need for high-resolution image acquisition.

Finally, the thesis addresses critical needs to enable objective and reliable evaluation of TEM image denoising approaches. A method for synthesizing realistic noise-free TEM reference images is proposed, and a denoising benchmark dataset is generated and made publicly available. The proposed dataset consists of noise-free references along with masks encompassing the critical diagnostic structures. This enables performance evaluation based on the capability of denoising methods to preserve structural details, instead of merely grading them based on the signal-to-noise ratio improvement and preservation of gross structures.

4. Date: 20191206

Precise Image-based Measurements through Irregular Sampling
Student: Teo Asplund

Supervisor: Robin Strand

Assistant Supervisors: Gunilla Borgefors, Cris L. Luengo Hendriks, Matthew J. Thurley
Opponent: Professor Hugues Talbot, CentraleSupélec, Université Paris-Saclay, France
Committee:

(1) Enseignant-Chercheur Jean Cousty, ESIEE Paris, France

(2) Professor Anders Heyden, Centre for Mathematical Sciences, Lund University

(3) Professor Heikki Kälviäinen, School of Engineering Science, Lappeenranta-Lahti University of Technology, Finland

(4) Professor Maya Neytcheva, Dept. of Information Technology, UU (substitute member)
Chair: Ingela Nyström

Publisher: Acta Universitatis Upsaliensis, ISBN: 978-91-513-0783-1

DiVA: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1361810&dswid=-8486
Abstract:

Mathematical morphology is a theory that is applicable broadly in signal processing, but in this thesis we focus mainly on image data. Fundamental concepts of morphology include the structuring element and the four operators: dilation, erosion, closing, and opening. One way of thinking about the role of the structuring element is as a probe, which traverses the signal (e.g. the image) systematically and inspects how well it "fits" in a certain sense that depends on the operator.

Although morphology is defined in the discrete as well as in the continuous domain, often only the discrete case is considered in practice. However, commonly digital images are a representation of continuous reality and thus it is of interest to maintain a correspondence between mathematical morphology operating in the discrete and in the continuous domain. Therefore, much of this thesis investigates how to better approximate continuous morphology in the discrete domain. We present a number of issues relating to this goal when applying morphology in the regular, discrete case, and show that allowing for irregularly sampled signals can improve this approximation, since moving to irregularly sampled signals frees us from constraints (namely those imposed by the sampling lattice) that harm the correspondence in the regular case. The thesis develops a framework for applying morphology in the irregular case, using a wide range of structuring elements, including non-flat structuring elements (or structuring functions) and adaptive morphology. This proposed framework is then shown to better approximate continuous morphology than its regular, discrete counterpart.

Additionally, the thesis contains work dealing with regularly sampled images using regular, discrete morphology and weighting to improve results. However, these cases can be interpreted as specific instances of irregularly sampled signals, thus naturally connecting them to the overarching theme of irregular sampling, precise measurements, and mathematical morphology.
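As a concrete illustration of the four operators named above, the following sketch applies them to a regularly sampled grey-scale image with a flat (box-shaped) structuring element using scipy.ndimage. It is a generic example of classical discrete morphology, not the irregular-sampling framework developed in the thesis; the image and probe size are placeholders.

import numpy as np
from scipy import ndimage

image = np.random.rand(64, 64)          # placeholder grey-scale image
size = (5, 5)                           # flat (box-shaped) structuring element

dilation = ndimage.grey_dilation(image, size=size)   # local maximum under the probe
erosion = ndimage.grey_erosion(image, size=size)     # local minimum under the probe
opening = ndimage.grey_opening(image, size=size)     # erosion followed by dilation
closing = ndimage.grey_closing(image, size=size)     # dilation followed by erosion

# Basic ordering properties of flat morphology hold pointwise:
assert np.all(erosion <= image) and np.all(image <= dilation)
assert np.all(opening <= image) and np.all(image <= closing)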


5. Date: 20191214

Image and Data Analysis for Biomedical Quantitative Microscopy

Student: Damian Matuszewski

Supervisor: Ida-Maria Sintorn

Assistant Supervisor: Carolina Wählby

Opponent: Professor Peter Horvath, Institute for Molecular Medicine Finland (FIMM), Helsinki, Finland

Committee:

(1) Professor Irene Gu, Dept. of Electrical Engineering, Chalmers University of Technology, Göteborg

(2) Professor Mats Gustafsson, Dept. of Medical Sciences, UU

(3) Associate Professor Carl Nettelblad, Dept. of Information Technology, UU

(4) Dr Simon Flyvbjerg Noerrelykke, Dept. of Information Technology and Electrical Engineering, ETH Zürich, Switzerland

(5) Professor Päivi Östling, Dept. of Oncology and Pathology, Karolinska Institutet, Solna

Chair: Ingela Nyström

Publisher: Acta Universitatis Upsaliensis, ISBN: 978-91-513-0771-8

DiVA: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1359483&dswid=9676

Abstract:

This thesis presents automatic image and data analysis methods to facilitate and improve microscopy-based research and diagnosis. New technologies and computational tools are necessary for handling the ever-growing amounts of data produced in life science. The thesis presents methods developed in three projects with different biomedical applications.

In the first project, we analyzed a large high-content screen aimed at enabling personalized medicine for glioblastoma patients. We focused on capturing drug-induced cell-cycle disruption in fluorescence microscopy images of cancer cell cultures. Our main objectives were to identify drugs affecting the cell-cycle and to increase the understanding of different drugs' mechanisms of action. Here, we present tools for automatic cell-cycle analysis and identification of drugs of interest and their effective doses.
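As a simple illustration of cell-cycle analysis from images, the hedged sketch below assigns a coarse phase to each segmented cell from its integrated DNA-stain intensity relative to the G1 peak. The thresholds and data are hypothetical and do not reflect the project's actual pipeline.

```python
import numpy as np

def cell_cycle_phase(dna_content, g1_peak):
    """Assign a coarse cell-cycle phase from integrated DNA-stain intensity.
    g1_peak is the modal intensity of the G1 population (2N DNA content)."""
    ratio = dna_content / g1_peak
    if ratio < 0.75:
        return "sub-G1"      # fragmented / apoptotic
    if ratio < 1.25:
        return "G1"
    if ratio < 1.75:
        return "S"
    if ratio < 2.5:
        return "G2/M"
    return ">4N"

# Hypothetical per-cell integrated intensities from a segmented image.
intensities = np.array([0.9e6, 1.1e6, 1.6e6, 2.1e6, 0.5e6])
print([cell_cycle_phase(v, g1_peak=1.0e6) for v in intensities])
```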

In the second project, we developed a feature descriptor for image matching. Image matching is a central pre-processing step in many applications, for example when two or more images must be matched and registered to create a larger field of view or to analyze differences and changes over time. Our descriptor is rotation-, scale-, and illumination-invariant, and its short feature vector makes it computationally attractive. The flexibility to combine it with any feature detector and the possibility to customize it make it a very versatile tool.
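A generic sketch of the matching step that such a descriptor plugs into, shown here with scikit-image's ORB detector and descriptor purely as a stand-in (the thesis descriptor itself is not reproduced here):

```python
from skimage import data
from skimage.feature import ORB, match_descriptors
from skimage.transform import rotate

# Two views of the same scene (here: a test image and a rotated copy).
img1 = data.camera()
img2 = rotate(data.camera(), angle=15)

# Detect keypoints and extract descriptors in both images.
orb = ORB(n_keypoints=500)
orb.detect_and_extract(img1)
kp1, des1 = orb.keypoints, orb.descriptors
orb.detect_and_extract(img2)
kp2, des2 = orb.keypoints, orb.descriptors

# Match descriptors with cross-checking; the matched keypoint pairs can then
# feed a robust transform estimate (e.g. RANSAC) for registration or stitching.
matches = match_descriptors(des1, des2, cross_check=True)
print(f"{len(matches)} putative correspondences")
```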

In the third project, we addressed two general problems in bridging the gap between deep learning method development and their use in practical scenarios. We developed a method for convolutional neural network training using minimally annotated images. In many biomedical applications, the objects of interest cannot be accurately delineated due to their fuzzy shape, ambiguous morphology, or poor image quality, or because of the expert knowledge and time that accurate delineation requires. The minimal annotations, in this case, consist of center-points or centerlines of target objects of approximately known size. We demonstrated our training method in a challenging application: multi-class semantic segmentation of viruses in transmission electron microscopy images.

We also systematically explored the influence of network architecture hyper-parameters on network size and performance, and showed that it is possible to substantially reduce the size of a network without compromising its performance.
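A minimal sketch of how center-point annotations of objects of approximately known size can be turned into approximate training masks, as a simplified illustration of the minimal-annotation idea described above (not the thesis's exact training scheme):

```python
import numpy as np
from skimage.draw import disk

def masks_from_points(shape, points, radius):
    """Build an approximate label mask from center-point annotations:
    each annotated object becomes a filled disk of the roughly known radius."""
    mask = np.zeros(shape, dtype=np.uint8)
    for r, c in points:
        rr, cc = disk((r, c), radius, shape=shape)
        mask[rr, cc] = 1
    return mask

# Hypothetical center-point annotations of particles with ~10 px radius.
points = [(40, 60), (100, 120), (200, 30)]
mask = masks_from_points((256, 256), points, radius=10)
print("annotated pixels:", int(mask.sum()))
```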

All methods in this thesis were designed to work with little or no input from biomedical experts but, of course, require fine-tuning for new applications. The usefulness of the tools has been demonstrated by collaborators and other researchers and has inspired further development of related algorithms.


5 Research

In this section, we briefly present our research, loosely grouped into 41 projects. These projects differ greatly in size and longevity, and many partly merge with each other. The majority have total or partial external funding.

In Section 5.1, we list eight ongoing projects researching medical applications that concern the whole body or whole organs, together with surgical planning. This work uses many different imaging modalities, and the tools employed are 3D image analysis, haptics, and visualization. In Section 5.2, we list nine projects that analyse cells, viruses, or proteins using a microscope, often in a time-series. Section 5.3 also lists projects using microscopic images, seven in total, but here the emphasis is on whole tissues or whole organisms; the most used model organism is the zebrafish. In Section 5.4, we list eleven theoretical projects that are not aimed at a particular application, but improve image analysis methods in general. Our research is to a high extent driven by advanced applications, which call for theoretically well-founded, generally applicable image analysis methods that are always in very high demand. Two of these projects are new, while one dates back almost to the start of CBA. Finally, in Section 5.5, we list six projects in digital humanities. One is new, and five investigate the possibilities of analysing handwritten texts using image processing and pattern recognition.

In Section 5.6, we have collected all our research partners — internationally, nationally, and locally in Uppsala — with whom we had active cooperation during 2019.

5.1 Medical image analysis, diagnosis and surgery planning

1. Imiomics — Large-Scale Analysis of Medical Volume Images

Robin Strand, Filip Malmberg, Eva Breznik

Partner: Joel Kullberg, Håkan Ahlström, Dept. of Surgical Sciences, Radiology, UU

Funding: Faculty of Medicine, UU; Swedish Research Council grant 2016-01040; AstraZeneca

Period: 20120801–

Abstract: Magnetic resonance (MR) images are very useful in clinical practice and in medical research, e.g., for analyzing the composition of the human body. At the Division of Radiology, UU, a huge amount of MR data, including whole-body MR images, is acquired for research on the connection between the composition of the human body and disease. To compare volume images voxel by voxel, we develop a large-scale analysis method, enabled by image registration methods. These methods utilize, for example, segmented tissue and anatomical landmarks. Based on this idea, we have developed Imiomics (imaging omics) – an image analysis concept that allows statistical and holistic analysis of whole-body image data. The Imiomics concept is holistic in three respects: (i) the whole body is analyzed, (ii) all collected image data is used in the analysis, and (iii) it allows integration of other collected non-imaging patient information in the analysis. During 2019, we continued the work on improving the registration method. We also published articles where Imiomics was used to find associations between image data and (i) the metabolic syndrome, (ii) vasodilation, and (iii) adipose and lean tissue distribution. In addition, we evaluated Imiomics for anomaly detection in malignant disease. See Figure 3.
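As a simplified illustration of the voxel-wise statistics that registration to a common coordinate system enables, the sketch below correlates a non-imaging variable with the registered image value at every voxel. The data shapes and values are hypothetical placeholders, not the Imiomics implementation.

```python
import numpy as np

def voxelwise_correlation(volumes, target):
    """Pearson correlation, at every voxel, between the registered image value
    across subjects and a per-subject scalar (e.g. a clinical measurement).
    volumes: shape (n_subjects, z, y, x); target: shape (n_subjects,)."""
    v = volumes - volumes.mean(axis=0)
    t = target - target.mean()
    num = np.tensordot(t, v, axes=(0, 0))
    den = np.sqrt((v ** 2).sum(axis=0) * (t ** 2).sum())
    return num / np.maximum(den, 1e-12)

# Hypothetical data: 20 subjects, small registered volumes, one clinical variable.
rng = np.random.default_rng(0)
volumes = rng.normal(size=(20, 8, 16, 16))
target = rng.normal(size=20)
rho = voxelwise_correlation(volumes, target)
print(rho.shape, float(rho.min()), float(rho.max()))
```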

2. HASP: Haptics-Assisted Surgery Planning

Ingrid Carlbom, Pontus Olsson, Fredrik Nysjö, Johan Nysjö, Ingela Nyström

Partner: Daniel Buchbinder, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Andreas Thor, Johanna Nilsson, Dept. of Surgical Sciences, Oral & Maxillofacial Surgery, UU Hospital; Andres Rodriguez Lorenzo, Dept. of Surgical Sciences, Plastic Surgery, UU Hospital

Funding: BIO-X; Thuréus Stiftelsen, TN faculty, MF faculty, UU

Period: 20150101–

Abstract: The goal of HASP, our haptics-assisted surgery planning system, is to put the planning process for complex head and neck surgery into the hands of the surgeon. In recent years, the focus has been on evaluating HASP and the BoneSplit segmentation software both at Uppsala University Hospital and at Mount Sinai Beth Israel in NYC. At Mount Sinai Beth Israel, this resulted in a validation study of the accuracy of HASP with 12 retrospective cases and eight prospective cases. For each case, we produce a neomandible from 3D-printed resin models of the mandible, cutting guides, fibula, and case-specific plates, which are cut and glued together. CT models of the reconstructed resin neomandible were compared with the HASP neomandible to verify their correspondence. The study is currently being prepared for submission. At the UU Hospital, a trauma study evaluated the haptic model in HASP on CT data from a scanned plastic skull and ten retrospective cases. For the plastic skull, we compared accuracy and precision between users, whereas for the retrospective cases we compared precision only. The study has been submitted for publication. See Figure 4.

Figure 3: Imiomics - Large-Scale Analysis of Medical Volume Images

Figure 4: HASP: Haptics-Assisted Surgery Planning

3. Image Processing for Virtual Design of Surgical Guides and Plates

Fredrik Nysjö, Ludovic Blache, Filip Malmberg, Ingrid Carlbom, Ingela Nyström

Partner: Andreas Thor, Uppsala University Hospital; Andres Rodriguez Lorenzo, Uppsala University Hospital; Daniel Buchbinder, Mt Sinai-Beth Israel Hospital, New York; Pontus Olsson, Savantic AB, Stockholm

Period: 20150317–

Abstract: An important part of virtual planning for reconstructive surgery, such as cranio-maxillofacial (CMF) surgery, is the design of customized surgical tools and implants. In this project, we are looking into how distance transforms and constructive solid geometry can be used to generate 3D-printable models of surgical guides and plates from segmented computed tomography (CT) images of a patient, and how the accuracy and precision of the modelling can be improved using grayscale image information in combination with anti-aliased distance transforms. Another part of the project is to develop simple and interactive tools that allow a surgeon to create such models. Previously, we implemented a set of design tools for bone reconstruction in our existing surgery planning system HASP. When a tumor is removed, a soft tissue defect is also created in the face. To reconstruct this defect, vascularized tissue is usually transplanted from other parts of the body. We developed a method to estimate the shape and dimensions of soft tissue resections from CT data, and a sketch-based interface for the surgeon to paint the resection contour on the patient. We also investigated numerical finite element models to simulate the non-rigid behavior of the soft tissue flap during the reconstruction. See Figure 5.

Figure 5: Image Processing for Virtual Design of Surgical Guides and Plates
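A minimal sketch of the distance-transform idea mentioned above: offsetting a binary bone segmentation by a few voxels via the Euclidean distance transform and subtracting the original mask (a CSG-style boolean difference) yields a plate-like shell hugging the bone surface. This is a simplification for illustration, not the project's design pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shell_from_segmentation(bone, thickness_vox=3):
    """Create a plate-like shell around a binary bone mask: dilate the mask by
    thickness_vox voxels using the Euclidean distance transform, then subtract
    the original mask (a CSG-style difference)."""
    dist_outside = distance_transform_edt(~bone)   # distance to the bone surface
    dilated = dist_outside <= thickness_vox
    return dilated & ~bone

# Hypothetical binary segmentation: a solid block standing in for a bone fragment.
bone = np.zeros((40, 40, 40), dtype=bool)
bone[10:30, 10:30, 10:30] = True
plate = shell_from_segmentation(bone, thickness_vox=2)
print("shell voxels:", int(plate.sum()))
```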

4. Interactive deep learning segmentation for decision support in neuroradiology

Ashis Kumar Dhara, Robin Strand, Filip Malmberg

Partner: Johan Wikström and Elna-Marie Larsson, Dept. of Surgical Sciences, Radiology, UU

Funding: Swedish Research Council, AIDA

Period: 20150501–

Abstract: Many brain diseases can damage brain cells (nerve cells), which can lead to loss of nerve cells and, secondarily, loss of brain volume. Technical imaging advancements allow detection and quantification of very small tissue volumes in magnetic resonance (MR) neuroimaging. Due to the enormous amount of information in a typical MR brain volume scan, interactive tools for computer-aided analysis are essential for this task. Available interactive methods are often not suited to this problem. Deep learning with convolutional neural networks has the ability to learn complex structures from training data. We develop, analyze, and evaluate interactive deep learning segmentation methods for quantification and treatment response analysis in neuroimaging. Interaction speed is obtained by dividing the segmentation procedure into an offline pre-segmentation step and an online interactive loop in which the user adds constraints until a satisfactory result is obtained. The overarching aim is to allow detailed, correct diagnosis, as well as accurate and precise analysis of treatment response in neuroimaging, in particular in quantification of intracranial aneurysm remnants and brain tumor growth. In 2019, we developed attention-based learning methods. Results were presented at SSDL. See Figure 6.
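A hedged sketch of the offline/online split described above: an offline pre-segmentation (here simply a thresholded probability map) is refined online by user-placed seed constraints that override the labels in a small neighbourhood. The actual methods use deep networks and more sophisticated refinement; this is only an illustration of the interaction loop.

```python
import numpy as np

def refine_with_seeds(prob_map, seeds, radius=3):
    """Online step of an interactive loop: start from the offline
    pre-segmentation (prob_map > 0.5) and force the label around each
    user-placed seed. seeds: list of (row, col, label) with label 0 or 1."""
    seg = prob_map > 0.5
    rr, cc = np.mgrid[0:prob_map.shape[0], 0:prob_map.shape[1]]
    for r, c, label in seeds:
        near = (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
        seg[near] = bool(label)
    return seg

# Hypothetical offline pre-segmentation and two user corrections.
rng = np.random.default_rng(0)
prob_map = rng.uniform(size=(64, 64))
seg = refine_with_seeds(prob_map, seeds=[(10, 10, 1), (50, 50, 0)])
print("foreground pixels after refinement:", int(seg.sum()))
```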

5. Methods for Combined MR and Radiation Therapy Equipment

Robin Strand

Partner: Anders Ahnesjö, David Tilly, Dept. of Immunology, Genetics and Pathology, UU; Samuel Fransson, Håkan Ahlström, Dept. of Surgical Sciences, Radiology, UU

Funding: VINNOVA; Barncancerfonden; TN-faculty, UU

Period: 20160601–

Abstract: Uppsala University and the University Hospital are currently investing in image-guided radiotherapy. An important component in the strategy is a combined MR scanner and treatment unit, enabling MR imaging right before and during treatment, which makes it possible to adjust for internal motion. In this project, we develop methods for fast detection and quantification of motion for real-time adjustment of the radiation therapy in the combined MR scanner and treatment unit. A manuscript on the use of a motion model in radiation therapy was finalized and submitted. See Figure 7.


Figure 6: Interactive deep learning segmentation for decision support in neuroradiology

Figure 7: Methods for Combined MR and Radiation Therapy Equipment
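As a simple illustration of fast motion quantification between consecutive MR frames (project 5 above), the sketch below estimates a rigid translation with FFT-based phase correlation. The data are synthetic placeholders, and this is not the project's motion model.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Hypothetical "frames": a reference image and a translated copy standing in
# for two consecutive 2D MR acquisitions.
rng = np.random.default_rng(0)
reference = rng.normal(size=(128, 128))
moving = nd_shift(reference, shift=(3.0, -2.0), order=1)

# Phase correlation estimates the translation in a single FFT-based step,
# which is fast enough to run between acquisitions.
detected_shift, error, _ = phase_cross_correlation(reference, moving,
                                                   upsample_factor=10)
print("shift to register moving to reference (rows, cols):", detected_shift)
```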

6. Statistical Considerations in Whole-Body MR Analyses

Eva Breznik, Robin Strand, Filip Malmberg

Partner: Joel Kullberg, Håkan Ahlström, Division of Radiology, Dept. of Surgical Sciences, UU

Funding: Centre for Interdisciplinary Mathematics, CIM, UU; TN-Faculty, UU

Period: 201609–

Abstract: In this project, the focus is on testing and developing methods for Imiomics, to facilitate the use of whole-body MR images for medical purposes. For inference about activated areas present in the images, statistical tests are performed on series of images at every voxel. This introduces accuracy and reliability problems when drawing conclusions about the images or multi-voxel areas as a whole, due to the large number of tests that are considered at the same time. The solution to this problem is a proper multiple testing correction method. We therefore need to test existing methods on our specific datasets and explore possibilities for new ones, specifically tailored to our problem. Results have in part been presented at SSBA 2017 in Linköping. A manuscript has been submitted and is currently under review. See Figure 8.
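A textbook sketch of one standard multiple testing correction, Benjamini-Hochberg false discovery rate control, applied to a map of voxel-wise p-values. The data are synthetic, and the project's evaluations are of course not limited to this particular method.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of voxels whose null hypothesis is rejected
    while controlling the false discovery rate at level alpha."""
    p = np.asarray(p_values).ravel()
    order = np.argsort(p)
    m = p.size
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected.reshape(np.shape(p_values))

# Hypothetical voxel-wise p-values from tests over a small volume.
rng = np.random.default_rng(0)
p_map = rng.uniform(size=(4, 8, 8))
p_map[0, :2, :2] = 1e-5          # a small cluster of strong effects
print("voxels surviving FDR correction:", int(benjamini_hochberg(p_map).sum()))
```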

7. Abdominal organ segmentation

Eva Breznik, Robin Strand, Filip Malmberg

Partner: Joel Kullberg, Håkan Ahlström, Division of Radiology, Dept. of Surgical Sciences, UU

Funding: Centre for Interdisciplinary Mathematics, CIM, UU; TN-Faculty, UU

Period: 201706–

Abstract: We focus on improving the existing registration method for whole-body scans by including segmentation results as prior knowledge. Segmentation of the organs in the abdomen is a daunting task, as the organs vary a lot in their properties and size. And having a robust method to segment a number of
