IT 13 009

Master's thesis, 45 credits (Examensarbete 45 hp), January 2013

Tracking individual bees in a beehive

ZI QUAN YU

Department of Information Technology (Institutionen för informationsteknologi)


Faculty of Science and Technology (Teknisk-naturvetenskaplig fakultet), UTH unit

Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0

Postal address: Box 536, 751 21 Uppsala

Telephone: 018 – 471 30 03

Fax: 018 – 471 30 00

Website: http://www.teknat.uu.se/student

Abstract

Tracking individual bees in a beehive

ZI QUAN YU

Studying and analyzing interactions among bees requires tracking and identifying each individual among hundreds of them on a complex background. Automatic tracking and identification are challenging because of unreliable features and appearance changes.

In order to map the bees' social interactions, an algorithm with low computational cost needs to run for long periods, processing the video as it is acquired.

We present a comparison of several methods and show how we stabilize the features and reduce the appearance changes. We have substantially improved the set-up and designed a new tag. Meanwhile, we have developed a prototype of an automatic algorithm to track and identify each individual bee among hundreds of bees in a beehive over time. The recording rate is 15 frames per second at this stage; the global detector takes around 21 s to process one frame, and the local detector around 11 s. The algorithm can correctly detect 89% of around 300 tagged bees over hundreds of frames on average, but there are still around 11% misdetections.

Examiner (Examinator): Jarmo Rantakokko

Subject reviewer (Ämnesgranskare): Ida-Maria Sintorn

Supervisor (Handledare): Cris Luengo


Acknowledgements

I would like to express my deep gratitude to Dr. Cris Luengo, my research supervisor, for his patient guidance, enthusiastic encouragement and valuable discussions of this research work. I would also like to thank Vladimir Curic for his advice and assistance. My grateful thanks are also extended to my reviewer Ida-Maria Sintorn, who helped me with many valuable comments and suggestions on my work. I also want to thank everyone in Prof. Gunilla Borgefors's group.

I wish to thank my mother for her encouragement. I want to thank my friends who appreciated my work and supported my interests. I would also like to thank the colleagues at Dragon Palace for their help.

I would also like to extend my thanks to all the staff of the Centre for Image Analysis (Centrum för Bildanalys) for their help and for offering me the resources.

I also want to thank Dr. Olle Terenius, Dr. Barbara Locke-Grandér and Teatske Bakker for their help.


Contents

List of Figures

1 Introduction
1.1 Background
1.2 Previous work
1.3 Motivation
1.4 Limitation
1.5 Materials

2 Image Acquisition
2.1 Introduction
2.2 Lighting system

3 1st Attempt – Histogram Matching
3.1 Motivation
3.2 Processing
3.3 Results
3.4 Reason to abandon

4 2nd Attempt – Modified Mean-Shift Method
4.1 Motivation
4.2 Processing
4.3 Results
4.4 Reason to abandon

5 3rd Attempt – Tag Based Tracking
5.1 Motivation
5.2 Design of Tag
5.3 Processing
5.4 Results

6 Conclusion and Future Work


List of Figures

1.1 Frame from previously acquired videos
1.2 Sketch of how the observation hive looks in theory
2.1 First attempt at an improved lighting system
2.2 Second attempt at an improved lighting system
2.3 Third attempt at an improved lighting system
2.4 Acquired frame after first improvement
2.5 Acquired frame after second improvement
2.6 Acquired frame after third improvement
3.1 Comparison of histogram between bee and background
3.2 Masked images show where we measure the template information
3.3 Images show how we measure the target's information
3.4 Pre-processed image shows dark regions that we are looking for
3.5 Images show how edge-based searching method works
3.6 Correlation result after one loop
3.7 Good result of this method shows reasonable detected bees
3.8 Good result of this method shows reasonable detected bees
3.9 Bad detections without any obvious reason
4.1 Manually assigned initial position
4.2 One frame of tracked bees in video 2
4.3 One frame of tracked bees in video 1
4.4 One frame showing a lost track in video 1
5.1 Some designs for the new tags
5.2 Old tags derived from video and one in theory
5.3 The new tag
5.4 Detected tag
5.5 Measuring grid on the tag
5.6 Result cropped from video
5.7 Mis-detected reflection in the process
5.8 New measuring suggestion


Chapter 1

Introduction

1.1 Background

A beehive is a suitable model for studying disease transmission in human society. The network of interactions in a beehive is presumed to be similar to the network of interactions in human society. Meanwhile, the network allows disease transmission to proceed in different ways, and some bees have more interactions with their peers than others. Researchers at the Department of Ecology, Swedish University of Agricultural Sciences, are developing methods to quantify certain types of interaction, deriving interaction networks encompassing all bees in a hive, and identifying bees individually. There are a couple of previously recorded videos of bees in an observation hive, in which each individual bee was tagged with a unique identifier.

Before this project was started, researchers had to manually count and observe the different types of honey bee interactions. It took a very long time to manually analyze, for instance, video clips from an observation hive, and the accuracy was not very high. Thus, there is a strong need for automated analysis of these videos. This project aims to improve the acquisition method and to develop an algorithm to detect and identify each tagged bee among the hundreds of bees in a beehive.

This project pursues a similar purpose to multiple-target tracking, but the algorithm is implemented and developed in a more complicated and challenging environment: the system aims to track hundreds of bees and identify each individual at the same time, which makes this a challenging problem in this research area.

There are various limitations and challenges in the experiment that directly affect the analysis, such as the frame rate (fps) of the camera, the resolution of the camera, the distance between the camera and the observation hive, the illumination of the hive, the storage space on the computer, and the bees' active season. These show part of the reason why this project is challenging.

1.2 Previous work

The goal of previous work at SLU (Sveriges lantbruksuniversitet, the Swedish University of Agricultural Sciences) was to determine whether artificial light has an impact on honey bee (Apis mellifera) activity in an observation hive. The experiment concluded that "strong peaks of bee's activity appeared when the white light was turned on and turned off. The observed activity was noted as 'nervous' behaviour" [1].

Some relevant work has been done by other researchers, which inspired how we formed our own framework for this project. For instance, single-target tracking has been done before [2], giving us ideas about how to track a single bee with an MCMC (Markov chain Monte Carlo) method. A similar tracking method has been applied to ants [3]; that paper describes how researchers managed to track multiple ants. Another paper describes a method to track flies [4], illustrating how researchers tracked flies and analyzed their behaviour. However, there are no published methods to track and identify all the bees in a beehive at the same time.

1.3 Motivation

I would like to mention what motivated us in forming this project to improve on what other researchers have already done. Firstly, the image acquisition method and equipment did not perform as well as we expected visually; the difficulty can be seen in Figure 1.1, where it is nearly impossible to apply any type of analysis method to the dark area. Secondly, the long-term goal of this project is to build a real-time system that detects honey bees in a beehive automatically. Therefore, a method with low computational cost is desirable. But a low-cost method may generate results with low accuracy, so keeping the balance between these two important factors becomes another issue. We illustrate later in this thesis how we address these issues.

1.4 Limitation

This project is currently a prototype that we are developing and studying. Therefore the accuracy is not yet as high as we would like, and the same is true for the computational performance. The project aims to implement an algorithm that tracks each individual bee in a beehive over time. Although there are ways to improve the computational performance, we will only try to speed up the algorithm once the development is done, so we do nothing more about reducing the computing cost at the current stage; we do, however, keep this factor in mind during development. Modelling and analysing the interactions among tracked bees is not included in this thesis project.

1.5 Materials

The experiments were performed at the honey bee research facility, Bigården, at the Swedish University of Agricultural Sciences in Uppsala, Sweden. The observation hive consisted of a wooden frame (52.5 x 43.5 x 5.5 cm) with a plexiglas sheet on each side, see Figure 1.2 [5]. This project focused on only half of the whole frame: since this is research towards a prototype, half of the frame was enough. When the prototype is realized, the whole frame will be observed by two cameras at the same time.

The observation hive was previously illuminated by four aHTI-760KCS1A1 IR illuminators. Each of these illuminators has 20 12 V IR LEDs with a wavelength of 850 nm (Vivotek, Taiwan). The LED light matches the camera's bandpass filter, so the camera records only the light from these LEDs, undisturbed by other light sources. There is one white light, a PROX Light S30 Series (Smart Vision Lights, MI, USA), fitted with a 675 nm high-pass filter so that any light close to the 850 nm wavelength is cut. Therefore there is no other "noisy light" affecting the illumination condition in the video.

The camera is a Basler Scout scA1600-14gm, with a Fujinon HF16HA-1B lens and an 850 nm bandpass filter. The camera is placed in front of one plexiglas sheet, at a distance of about 80 cm. It is set up to record at 14 fps, with a resolution of 1628 x 1236 pixels per frame. The camera is controlled by Basler's Pylon driver, and video is recorded off-line with Virtual VCR, version 2.6.9.


Figure 1.1: Frame from previously acquired videos


Figure 1.2: Sketch of how the observation hive looks in theory


Chapter 2

Image Acquisition

2.1 Introduction

Image acquisition plays an important role in image analysis: if we do not have good images to start with, it is not possible to generate excellent analysis results. Although image analysis may seem powerful for images of any type and quality, this is not true. It is still necessary to have the best possible source image if we are aiming for the best results.

There are a couple of factors we can improve so as to acquire a better source image. The first factor is the distance between the camera and the observation hive, currently 80 cm, as established by Lecocq [5]: it is the minimum distance at which this scientific-grade CCD camera can observe the whole hive. It is certainly possible to buy a wider-angle lens, but a lens with a different angle of view changes nothing except the distance to the hive, and the image would look the same as with the normal lens, so we left this factor as it was. The second factor is the resolution of the image: a higher resolution would make it easier to analyse the image in different ways. The third factor is the frame rate: a higher fps means we capture more detail of the bees' movement, which makes our tracking algorithm easier to realize, since the physical constraints are weakened by a higher fps. The fourth factor is the frame on which the bees walk: if we constrain the bees' walking area, we can easily see the tags, and it becomes easier to analyse and process the acquired images. The fifth factor is the illumination condition: we want a uniformly distributed illumination over the whole hive, so that the intensities of all tags are as equal as possible, which is much more convenient for later processing.

The last factor, the illumination condition, is especially important in this case, because we are going to identify each tag based on the gray value intensity of each sub-region on the tag. If the illumination is not evenly distributed, we have to process different regions of one frame separately, which is wasteful if we stick to the low-quality source. This factor is also worth improving without spending too much money, and it plays the most important role in the project. Hence, we planned to improve it as much as possible.

2.2 Lighting system

Figure 2.1 shows what the first attempt at an improved lighting system looks like. The idea behind this new lighting system is to diffuse the light beam so as to obtain an evenly distributed illumination. The image acquired with this lighting system is shown in Figure 2.4: the illumination is distributed evenly, but the intensity is too weak, so the image looks really dark (Figure 1.1 shows the previous illumination condition for comparison). We then decided to make a closed space for the diffused light, since in an open space the reflectors might diffuse too much light in unrelated directions.

Figure 2.1: First attempt at an improved lighting system

Figure 2.2 shows the second attempt at a lighting system, seen from the side. It does not have a fancy appearance, but the result is improved. Based on the insight gained from the first attempt and its result, the second lighting box is an almost closed space around the lamps. We added three more cardboard sheets between the lighting system and the observation hive, forming a tunnel when seen from above, with the light source at one end and the target at the other. The light will in principle not escape this closed space, so the light intensity on the hive will theoretically be stronger than in the first attempt.

Figure 2.5 shows a frame acquired with the second lighting system. The intensity is stronger than in the first attempt, but there is a dark area on one side. We want to prevent the light's reflection on the cover glass from affecting the detection results. However, the result was still not acceptable to us after we ran the algorithm on this video clip.

We finally decided to buy four more lamps of the same type as the ones we already had, and now use two equivalent smaller lighting boxes instead of the reflectors we tried previously. Figure 2.3 shows how the two lighting boxes are used in the scene. With a single light source in front of the beehive, we got either uniform illumination with weaker intensity, or stronger intensity with uneven illumination. We therefore split the light source into two parts placed at an angle in front of the beehive, so that they cancel each other's shadows and compensate each other's intensity over the whole hive; it works like an operating-theatre lamp, but a simpler version. The angle of each lighting box to the hive is around 40 degrees. By putting the lighting boxes at this angle, we also keep the reflection on the cover glass away from the camera. Therefore, with this lighting system, we have stronger light intensity, uniform illumination and no reflection on the glass, and we can start to acquire video. Figure 2.6 shows the result, which was satisfactory: it meets our requirements on light intensity and evenly distributed illumination over the whole frame.


Figure 2.2: Second attempt at an improved lighting system

Figure 2.3: Third attempt at an improved lighting system


Figure 2.4: Acquired frame after first improvement


Figure 2.5: Acquired frame after second improvement


Figure 2.6: Acquired frame after third improvement


Chapter 3

1st Attempt – Histogram Matching

3.1 Motivation

The motivation for this attempt is based on a bee's distinctive appearance. We reasoned that this distinctive appearance should produce a histogram different from the hive's, i.e. the background's, histogram. There are different ways to represent an object for tracking, for instance by its center point, by multiple points, by a rectangular patch, by a skeleton, by its contour, or by its silhouette/appearance; see [6] for an overview of common tracking methods. In principle, the bee's histogram should differ from the background's histogram and should be easily distinguished, see Figure 3.1. The correlation between these two histograms is 0.025, which means they are essentially unrelated. Therefore we decided to start with this histogram matching method.

Figure 3.1: Comparison of histogram between bee and background
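To make the comparison concrete, here is a minimal MATLAB sketch of the histogram correlation idea; the file name and patch coordinates are hypothetical examples, not values from the thesis:

```matlab
% Minimal sketch: compare the gray-level histogram of a bee patch with
% that of a background patch via their correlation coefficient.
% File name and patch coordinates are hypothetical.
frame = imread('frame0001.png');       % one acquired grayscale frame

beePatch = frame(200:260, 300:340);    % region containing a bee
bgPatch  = frame(50:110, 50:90);       % region containing only comb

hBee = imhist(beePatch);               % 256-bin gray value histogram
hBg  = imhist(bgPatch);

hBee = hBee / sum(hBee);               % normalize to unit mass
hBg  = hBg  / sum(hBg);

% Correlation between the two histograms; a value near 0 (such as the
% 0.025 reported above) means the distributions are essentially unrelated.
r = corrcoef(hBee, hBg);
fprintf('histogram correlation: %.3f\n', r(1,2));
```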

3.2 Processing

Introduction

Figure 3.2: Masked images show where we measure the template information

Before we start to explain how this method works in detail, we would like to explain some concepts and problems related to this histogram matching method. The first concept is histogram matching, which means measuring the histogram of the target image and comparing it to the template's histogram: based on the correlation between the target's and the template's histograms, the better the target matches the template, the higher the correlation value. The second concept is kernel tracking, which means tracking an object by computing the motion of a kernel in consecutive frames; an object's appearance, shape or other features can be referred to as a kernel [6]. There are two further important concepts: "target" and "template". "Template" means we measure the object's information first and store it as the criterion for later comparison. "Target" means the information, taken from the search region before we run the matching method, that is compared with that criterion (the template information). With these concepts in place, several problems must be solved for this method to work. The first is how to measure and derive the target's and the template's histogram information: although all bees have a similar shape, they have different orientations, which makes measuring the target histogram difficult. The second is whether we can enhance the difference between target and background, so that they differ dramatically in the histogram matching.

Solutions

We manually choose some templates from the whole frame and put a mask on each region, as shown in Figure 3.2. We then measure the histogram and remove the black mask from it; this is how we derive the histogram information for the template. We then applied the first target-searching method, a center-based or tag-based rotating search: we try to find the bee's thorax and then rotate the mask (6 degrees per step) on that sub-image through 360 degrees. This gives different histogram information at different orientations; ideally, we can locate both the bee's position and its orientation.
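A minimal sketch of this rotation search follows, assuming a precomputed binary bee-shaped mask and a normalized template histogram; the names `beeMask`, `hTemplate` and the candidate position (cx, cy) are hypothetical, as is the sub-image size:

```matlab
% Sketch of the center-based rotating search: rotate a bee-shaped mask in
% 6-degree steps around a candidate thorax position and correlate the
% masked histogram with the template histogram.
sub = frame(cy-40:cy+40, cx-40:cx+40);        % sub-image around candidate

bestR = -Inf; bestAngle = 0;
for angle = 0:6:354
    m = imrotate(beeMask, angle, 'nearest', 'crop');  % rotated mask
    vals = sub(m);                            % pixels under the mask only
    h = histcounts(vals, 0:256)';             % histogram of masked pixels
    h = h / sum(h);
    r = corrcoef(h, hTemplate); r = r(1,2);   % similarity to the template
    if r > bestR
        bestR = r; bestAngle = angle;         % keep best-matching angle
    end
end
fprintf('best angle %d deg, correlation %.3f\n', bestAngle, bestR);
```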


Figure 3.3: Images show how we measure the target’s information

We did not obtain good results with this method, so we considered changing how we search for the target. The center-based searching method has the drawback that the region around the center does not change at all during rotation, which means the correlation is high on average. We therefore applied another rotating search, an edge-based or head/tail-based rotating search. Figure 3.4 shows how we pre-process the image and detect dark regions for later use; the reason for detecting dark regions is that they are a good feature for locating a bee's position. The red dots in Figure 3.4 mark the detected dark regions. Figure 3.5 shows how the edge-based searching method works: the rotation center is located at a detected head or tail point. The image on the right-hand side shows the masked image; the detector measures only the non-black area for the target histogram. From the correlation result in Figure 3.6, it is easy to see that the matched area has a higher correlation value than the unmatched ones.

To generate results as good as we could, we made some other improvements, like increasing the number of template models and taking the average of their histograms, and a method similar to the eigenface method [7]: we generated an "eigenbee" from hundreds of bees and measured its histogram as the template information. However, the results were not as good as we expected, because using the eigenbee's histogram as the template produced too many mis-detections.

3.3 Results

In this part, a couple of images demonstrating the results of the edge-based histogram matching method are shown. The method can detect some simple regions, for instance a single separated bee, as can be seen in Figure 3.7 and Figure 3.8. However, it runs into problems when the bees sit closely together.


Figure 3.4: Pre-processed image shows dark regions that we are looking for

Figure 3.5: Images show how edge-based searching method works


Figure 3.6: Correlation result after one loop

Figure 3.7: Good result of this method shows reasonable detected bees


Figure 3.8: Good result of this method shows reasonable detected bees

3.4 Reason to abandon

The histogram matching method is simple and easily implemented, but it takes a long time (around 178 seconds) to process one single frame. The accuracy is also relatively low, as can be seen in Figure 3.7 and Figure 3.8. Another problem is shown in Figure 3.9: it happens without any obvious reason on a random frame we picked for testing. In short, this method is unreliable, has low accuracy, and is computationally costly. We therefore decided to abandon it and try another approach.

Figure 3.9: Bad detections without any obvious reason


Chapter 4

2nd Attempt – Modified Mean-Shift Method

4.1 Motivation

The mean-shift method is a non-parametric feature-space analysis technique [8], a so-called mode-seeking algorithm, and a well-known method for tracking a moving object. It is an iterative method: it searches a certain region and calculates the "similarity" between the target and the model, stopping when it reaches a stop criterion. Our modification is that we use a fixed number of iterations and a constrained search area. There is some similarity to appearance-adaptive models (AAM), whose advantage is that they update the appearance of the tracked object so as to reduce the error; see [9] and [10] for how researchers track facial animation and moving objects at different scales. We therefore decided to try this modified mean-shift method combined with what we had already done for histogram matching: we update the template histogram every 2 frames, which is the part similar to the AAM method.

4.2 Processing

In this part, we describe how the method works step by step. It is based on the assumption that we have already found the position of each bee; for testing purposes, positions were assigned manually on several moving and non-moving bees. The reason for this assumption is that the mean-shift method needs to be initialized in its first step; it then searches iteratively within the search region until it meets the stop criterion. The newly designed tags (see the next chapter for more details) can also provide the initial location for this method.

There is a useful constraint based on a bee's physical speed limit and the recording frame rate: a bee cannot move more than a certain number of pixels from one frame to the next. This allows us to limit the search region, reduce the number of iterations, and thereby reduce the computational time.

The modified mean-shift works as follows (a sketch of this loop is given after the description of the stop criterion below):

1) Shift the mask center within the constrained area to find the potential target.

2) Start the center-based rotating method to find the orientation of the bee's head.

3) Measure the histogram of the masked image, as in Figure 3.5, and update the template histogram for the next frame.

4) Measure the masked image and compare it with the template information.

5) Check the stop criterion: if it is met, stop and search for the next bee; otherwise go back to 3) and repeat until the stop criterion is met.


Figure 4.1: Manually assigned initial position

The stop criterion squares the correlation value and checks whether the target's value is above 0.65; squaring makes the difference more pronounced. It then compares the image difference, looking for a minimum between 11 and 20 in gray value intensity after subtracting each image's own mean value.
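A minimal sketch of steps 1)–5) under this criterion is given below. The helper `bestRotationScore` (standing in for the rotation search of Chapter 3), the mask `beeMask`, and the bound `maxStepPx` with its value are hypothetical names and numbers, not taken from the thesis; the loop is a hill-climbing reading of the mode-seeking step:

```matlab
% Sketch of the modified mean-shift step for one bee in one frame.
% prevPos = [x y] is the bee's position in the previous frame; hTemplate
% is the (normalized) template histogram; maxStepPx is the largest
% movement a bee can make between frames at this fps.
maxStepPx = 15;
maxIters  = 10;                        % fixed iteration count (our modification)
pos = prevPos;

for it = 1:maxIters
    bestR = -Inf; bestStep = [0 0]; bestAngle = 0;
    % 1) shift the mask center to neighbouring positions within the
    %    constrained search area
    for dx = [-3 0 3]
        for dy = [-3 0 3]
            cand = pos + [dx dy];
            if max(abs(cand - prevPos)) > maxStepPx, continue; end
            % 2) rotate the mask to find the head orientation, and
            % 4) compare the masked histogram with the template
            [r, ang] = bestRotationScore(frame, cand, beeMask, hTemplate);
            if r > bestR, bestR = r; bestStep = [dx dy]; bestAngle = ang; end
        end
    end
    pos = pos + bestStep;              % move toward the best-matching mode
    % 5) stop criterion: squared correlation above 0.65
    if bestR^2 > 0.65, break; end
end
% 3) every 2 frames, re-measure the histogram at pos to update hTemplate
```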

In principle, we expected this method to produce more accurate results than the average model's or the eigenbee's information, by using the previously tracked object's histogram information. However, the results turned out not to be what we expected at the beginning.

4.3 Results

Figure 4.1 shows the first step, after we have assigned the initial positions. Figure 4.2 shows one of the better results obtained with this modified mean-shift method: the red circles mark moving bees and the blue ones non-moving bees. Here a moving bee was tracked for around 150 frames. Figure 4.3 shows another run of this method; it could still track moving bees for around 130 frames, but it lost the target when the moving bee walked into a crowded cluster of bees, see Figure 4.4. The green number shows the bee's orientation in radians; the yellow number shows the maximum correlation value at the best-matched position.


Figure 4.2: One frame of tracked bees in video 2

Figure 4.3: One frame of tracked bees in video 1


Figure 4.4: One frame showing a lost track in video 1

4.4 Reason to abandon

The results were not as good as we expected. Although the method could track moving bees over time from manually assigned initial positions, we need better accuracy and lower computational cost. The reason for abandoning this method is that, even though the results had improved somewhat, its accuracy is low and it easily loses track when bees walk into a complicated, crowded situation. We therefore decided to implement the tag based tracking method next.


Chapter 5

3rd Attempt – Tag Based Tracking

5.1 Motivation

We tried several different methods previously, and none of them performed well enough. We want a reliable and computationally fast way to track and identify each single bee over time. We therefore decided to replace the old tags with a newly designed tag. The new tag provides not only the ID information, but also the bee's position (i.e. the object's center point) and the bee's orientation at the same time. These three pieces of information play a vital role in tracking a moving object.

5.2 Design of Tag

Because black-and-white coding does not give enough combinations in our case, we decided to add one more level when coding the tags. We then investigated which gray value intensities to use for encoding: we visually tested how different gray value intensities looked in our camera without infrared light, and then how they looked under infrared light. Thereafter, we decided to use total black, i.e. 0 out of 255, as the digit zero; gray value 65 out of 255 as digit one; and 130 out of 255 as digit two. We tried several different designs, see Figure 5.1, and tested each of them on dead bees collected from SLU. Finally, we chose the tag depicted on the left in this figure as the final design used later in the project.

Figure 5.1: Some designs for the new tags

Figure 5.2: Old tags derived from video and one in theory

Figure 5.3: The new tag

In Figure 5.2 we can see what the old tags look like in the video. They are not in a good lighting condition, and the tags come in several different colors, so they look very different under infrared light, which makes it very hard to recognize the digits on each tag automatically. The newly designed tags overcome these drawbacks of the old tags and help us realize our goal.

Here we would like to introduce some parameters of the newly designed tag. The tag is a 3 x 3 mm square. The gray values we finally print are 26% and 51%, combined with pure white and black. These intensities are calibrated to the illumination wavelength and the printer, and should be re-calibrated when either changes. The new tag can be seen in Figure 5.3; see [11] for more details.
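As an illustration of the three-level coding, here is a minimal sketch of how a measured region intensity could be quantized to a digit. The reference levels follow the 0/65/130-out-of-255 scheme above, and the eight coding regions match the 3^8 combinations mentioned in the conclusion; the function name and decoding order are assumptions, not the thesis's exact procedure:

```matlab
% Hypothetical sketch: quantize the measured mean intensities of the
% tag's coding regions to the nearest of the three reference levels and
% read the regions as a base-3 number.
function id = decodeTag(regionMeans)
    levels = [0 65 130];                       % digit 0, 1, 2 reference levels
    digits = zeros(size(regionMeans));
    for k = 1:numel(regionMeans)
        [~, idx] = min(abs(regionMeans(k) - levels));
        digits(k) = idx - 1;                   % nearest level -> digit 0/1/2
    end
    % eight coding regions give 3^8 = 6561 unique IDs
    id = sum(digits(:)' .* 3.^(numel(digits)-1:-1:0));
end
```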

5.3 Processing

In this part, we describe how the tag reader algorithm works. Because we aim to run the algorithm in real time, we apply a Laplace filter combined with a minima filter to find local maxima in the original image, which gives the tag's position. The current detector uses a Laplace filter of size 2 and a minima filter of size 3; both filters have a rectangular shape. The red dot in Figure 5.4 is the detected tag position, which corresponds to the position of the bee's thorax. The red circle is a measuring circle used to find the direction of the thin white bar on the tag, by searching for the highest intensity along the circle. The tag is placed on the bee such that the white bar points in the direction of the bee's head.

Figure 5.4: Detected tag
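A minimal sketch of this detection step follows, using standard Image Processing Toolbox functions as one plausible reading of "Laplace filter combined with minima filter"; the exact implementation and the threshold value are not specified in the thesis and are assumptions here:

```matlab
% Sketch: find candidate tag positions from a Laplacian response combined
% with a minima (gray-scale erosion) filter.
I = im2double(frame);

% Laplacian highlights the small bright tag against its surroundings
lap = imfilter(I, fspecial('laplacian'), 'replicate');

% minima filter of size 3, as in the text (gray-scale erosion)
minf = imerode(lap, strel('rectangle', [3 3]));

% local maxima of the combined response that are strong enough;
% the 0.05 threshold is a hypothetical value
resp = lap - minf;
cand = imregionalmax(resp) & (resp > 0.05);
[rows, cols] = find(cand);            % candidate tag (thorax) positions
```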

Figure 5.5: Measuring grid on the tag

The second step is to distinguish whether the detected object is a tag or not. We have already detected the potential thin white bar pointing towards the bee's head. We then draw a line from the center to the position on the circle with the highest intensity; it can be seen as the line in Figure 5.4. The intensity along this line should be high enough, and the difference between the center point and the end point should not be too large. Under this condition, some of the shining reflections on bees' wings are recognized as false detections, and most of them can be removed at this stage. However, some large reflections on bees' wings and on the edge of the honeycomb cannot be removed.
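A minimal sketch of the measuring circle and the line check described above; the radius, sample counts and thresholds are hypothetical values, and (cx, cy) is a detected tag position from the previous step:

```matlab
% Sketch: find the direction of the thin white bar by sampling the image
% intensity along the measuring circle around the detected tag center.
radius = 8;                               % measuring circle radius in pixels
theta  = (0:359) * pi / 180;              % sample every degree
xs = cx + radius * cos(theta);
ys = cy + radius * sin(theta);
vals = interp2(I, xs, ys, 'linear');      % intensity along the circle

[peak, k] = max(vals);                    % highest intensity -> bar direction
barAngle = theta(k);                      % orientation of the bee's head

% validate: intensity along the center-to-circle line must stay high and
% the center/end difference must be small (thresholds hypothetical)
t = linspace(0, 1, 10);
lineVals = interp2(I, cx + t * (xs(k) - cx), cy + t * (ys(k) - cy), 'linear');
isTag = all(lineVals > 0.5) && abs(lineVals(1) - lineVals(end)) < 0.2;
```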

After we find the tag and accept it as a tag we expect in the scene, the next step is to identify each tag using a measurement grid, see Figure 5.5. At this stage, we calculate the average intensity along the white lines by convolving a 5x5 Gaussian kernel (sigma 0.33) with each pixel on the lines. We then apply the same method to the red lines; the reason is to compensate for and correct the result from the horizontal white lines, because the horizontal lines can inherit the error made in detecting the thin white bar. We also widen the red lines slightly towards the tail's direction: because there is a big white square in the middle, we do not want to cover that area when calculating the average intensity of the red lines. Thereafter, we collect the useful data from the detected area, which means removing some data at the edge and the center of the tag; the marginal data at the edge also contains background information, and the central data is useless at this stage for comparing the intensities among different areas. Decoding the tag is based on the information selected by the tag detector.

Based on the algorithm implemented so far, we developed a local detector for tracking purposes; for convenience, we call the earlier detector the global detector. The advantage is that the local detector saves around 50% of the computational time compared to the global detector. The local detector works alternately with the global detector: first, we run the global detector to detect all the information we need and transfer it to the local detector, which then carries on tracking the bees over time. After a certain period, for example 100 frames, the global detector is run again to correct the errors the local detector has made.

For instance, the local detector might lose track of some moving bees; after 100 frames, the global detector picks up those untracked bees, and tracking continues with the local detector.

There are some differences between the global and local detectors. The first is that the sizes of the structuring elements (SE) differ, because they process images of different sizes: the local detector enlarges the tag image 5 times, so obviously the SE should be different. Its minima filter and Laplace filter have sizes of 1 pixel and 7 pixels, respectively. Secondly, the local detector does not identify the tags, which saves much time.
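A minimal sketch of this alternation, with hypothetical function names standing in for the two detectors described above:

```matlab
% Sketch of the global/local detector alternation. globalDetect and
% localTrack are hypothetical stand-ins; the 100-frame period follows
% the text. vid is a VideoReader opened beforehand.
period = 100;                        % re-run the global detector this often
tracks = [];

f = 0;
while hasFrame(vid)
    frame = readFrame(vid);
    if mod(f, period) == 0
        % full detection + identification; corrects accumulated errors
        tracks = globalDetect(frame);
    else
        % cheap detection only, seeded by the previous positions;
        % no identification, roughly half the computational cost
        tracks = localTrack(frame, tracks);
    end
    f = f + 1;
end
```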

5.4 Results

In this part, we illustrate some results. Figure 5.6 shows the detected tags together with the identification results. The algorithm can correctly detect 89% of around 300 tags over hundreds of frames on average, but there are still around 11% mis-detections, most of which happen due to reflections on wings, see Figure 5.7 [11]. We check the error as follows: we record the detected results first, then manually check the errors over the frames and take the average. This method generally works as we expected, and it achieves the first goal of the project. But some parts can still be improved. For instance, the measuring method for identifying tags can be improved so that identifying a tag takes less time: the MATLAB built-in function "improfile" could be a good alternative to the routine I wrote myself. I suspect there are too many control points in the current method: there are 20 points in each horizontal line, so 80 control points in total, whereas we would need only 4 control points in total with "improfile". This would remove a large number of control points and make obtaining the tag's ID much faster, see Figure 5.8. The local-minima detection can also be improved by applying another method, and the threshold used in finding tag positions could be made adaptive, giving better detections.
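For reference, a minimal sketch of the suggested "improfile" alternative (improfile is in the Image Processing Toolbox; the endpoint coordinates below are hypothetical):

```matlab
% Sketch: improfile samples the intensity along a line defined by a few
% control points, so a whole measuring line needs only its endpoints.
xi = [cx - 10, cx + 10];              % endpoints of one measuring line
yi = [cy, cy];
n  = 20;                              % number of samples along the line
profileVals = improfile(I, xi, yi, n);
meanIntensity = mean(profileVals);    % average intensity along the line
```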

Figure 5.6: Result cropped from video

Figure 5.7: Mis-detected reflection in the process


Figure 5.8: New measuring suggestion


Chapter 6

Conclusion and Future Work

This project aimed to track around 1000 bees; however, we had only around 300 bees in the final result.

We actually prepared about 1000 bees, but they all died, including the queen, because of a mistake I made; I learned that bees are sensitive to isopropanol. What happened was this: we wanted to constrain the bees' walking area so that they would walk on the honeycomb rather than on the edge of the frame and the plexiglass. We were advised to spray fluon dissolved in isopropanol onto the plexiglass and the edge of the frame, but we did not let the fluon dry properly, and the bees died gradually overnight. What a pity! I then tagged around 300 bees, since we did not want to miss the bees' season; otherwise we would have missed the main interactions that bees have when they are active enough. The project needs to be continued, and the video data plays an important role in the development stage; we also recorded around 10 TB of video under different conditions over several days for development purposes.

It is not an easy task to track hundreds of bees simultaneously over time; there are many problems to solve properly. We have developed many important components in this project: a modified frame that constrains the bees' walking area; an illumination and camera system that records the bees in darkness, produces an evenly distributed illumination on the tags, and makes the later processing easier and more accurate; and a new tag, different from the ones bee researchers usually use. The new tags have many advantages over the standard tags: the surface is flat and not shiny, so there is no specular reflection; the bar code is easier for the computer to read than Arabic numerals; and the capacity is much larger, with 3^8 unique combinations in the new tag. There are also features the old tags do not have at all, the most important being the orientation information: with the customized tag, we can detect the bee's orientation together with its identity and position. This is much more convenient than the standard tags and helps the later processing too. Thanks to the customized tag, we do not need to spend time segmenting or identifying the actual bee to obtain the information we need; we process only its tiny tag.

We have also developed the algorithm to detect, identify and track these tags on the honey bees. Although some parts still need to be optimized, the algorithm works and generates useful data as we expected. There are still many unfinished parts in the project, which can be completed gradually in the future. I suppose the next step will be to develop an algorithm to detect bees' interactions and to identify different types of interaction. Once all the different components are in place, graphics processing unit (GPU) technology can be employed in some core parts [12] [13], so that a faster programme can be used for a real-time, fully automatic system. Another thing we can do in future work is to place color reference bars on the frame in the beehive, so that we can compare the colors of these bars in order to know whether the illumination is correct: if it is, the color bars should have the same color intensities in the images; otherwise, we can adjust the lighting system after moving the set-up. There is another advantage of these identical color bars: the algorithm can use them as the reference for the intensities used in encoding the tags, so that tag identification becomes adaptive and therefore more robust.


Bibliography

[1] A. Lecocq, C. L. Luengo Hendriks, B. Locke, and O. Terenius, "Increased artificial light intensity temporarily increases honey bee (Apis mellifera) activity in an observation hive," submitted for publication, 2011.

[2] S. Oh, J. Rehg, T. Balch, and F. Dellaert, "Data-driven MCMC for learning and inference in switching linear dynamic systems," in Proceedings of the National Conference on Artificial Intelligence, vol. 20, p. 944, AAAI Press / MIT Press, 2005.

[3] Z. Khan, T. Balch, and F. Dellaert, "MCMC-based particle filtering for tracking a variable number of interacting targets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1805–1819, 2005.

[4] K. Branson, A. Robie, J. Bender, P. Perona, and M. Dickinson, "High-throughput ethomics in large groups of Drosophila," Nature Methods, vol. 6, no. 6, pp. 451–457, 2009.

[5] A. Lecocq, "The development of a tool for network analysis in the honey bee (Apis mellifera L.)," Master's thesis, Swedish University of Agricultural Sciences, Faculty of Natural Resources and Agricultural Sciences, 2011.

[6] A. Yilmaz, O. Javed, and M. Shah, "Object tracking: a survey," ACM Computing Surveys (CSUR), vol. 38, no. 4, p. 13, 2006.

[7] https://en.wikipedia.org/wiki/Eigenface, 2012.

[8] https://secure.wikimedia.org/wikipedia/en/wiki/Mean-shift, 2012.

[9] F. Davoine and F. Dornaika, "Head and facial animation tracking using appearance-adaptive models and particle filters," Real-Time Vision for Human-Computer Interaction, pp. 121–140, 2005.

[10] S. Zhou, R. Chellappa, and B. Moghaddam, "Visual tracking and recognition using appearance-adaptive models in particle filters," IEEE Transactions on Image Processing, vol. 13, no. 11, pp. 1491–1506, 2004.

[11] C. L. Luengo Hendriks, Z. Q. Yu, A. Lecocq, T. Bakker, B. Locke, and O. Terenius, "Identifying all individuals in a honeybee hive: progress towards mapping all social interactions," in Proceedings of the Visual Observation and Analysis of Animal and Insect Behavior 2012 Workshop (Tsukuba, Japan, in conjunction with ICPR), November 11, 2012.

[12] https://gpgpu.org/tag/matlab, 2012.

[13] J. Kong, M. Dimitrov, Y. Yang, J. Liyanage, L. Cao, J. Staples, M. Mantor, and H. Zhou, "Accelerating MATLAB image processing toolbox functions on GPUs," in Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units, pp. 75–85, ACM, 2010.
