

5. PRACTICAL EXPERIENCE


The video analysis system in Lund has been used in two large-scale behavioural studies to detect cyclists moving in certain directions. In presenting the results in the following sections I concentrate on the system performance (other results from these studies can be found in Papers IV and V). I also describe a special study designed to test the accuracy of the speed and position estimates produced by different video processing algorithms. Finally, I discuss the factors that affect the system performance and conclude with the practical lessons learnt from the use of the system.

The supporting tasks (e.g. adjustment of settings for cameras, start of the calculations, presentation of the results) have not been automated yet. However, some general experience with the use of cameras, data management, etc., can now be reported.

5.1. Study I – Cyclists on one-way streets in Stockholm

5.1.1. Background

The City of Stockholm is considering extending the available bicycle network by allowing cycling against the traffic on some one-way streets. It was therefore necessary to collect a large sample of observations of such cyclists before the change in legislation, in order to characterise their “typical” behaviour as well as the frequency and types of unusual situations and traffic conflicts.

5.1.2. Study design

Although it is not legal, cycling against the traffic is not unusual in Stockholm; the frequency of such cyclists is, however, quite low (a few cyclists per hour). Using human observers in such conditions is very inefficient, as a very long time has to be spent on the spot to collect a sufficient number of observations. Instead, it was decided to record video and then detect the “wrong-way” cyclists in the video using the automated video analysis system.

Initially, 32 sites were selected as potentially interesting for observations. However, finding good spots for the camera installations turned out to be a problem, and in the end only 22 sites were filmed, of which 18 were further analysed. Three of the excluded sites did not have any one-way streets entering or exiting the intersection (they had been selected only to control for general changes in cyclist flow), and the fourth was excluded because the camera turned out to be too far from the intersection to allow proper analysis.

Eight cameras were moved between sites just before or after the weekend, resulting in three to four workdays of recording at each site. The video material was then processed, and the objects moving in the “wrong” direction were detected with the “advanced road user detection” algorithm. Some work was done manually to ensure the quality and validate the work of the video analysis system. This included: a) calculation of the vehicle, pedestrian and cyclist flows for short periods at each site; b) visual control and sorting of the system detections, and identification of situations that might be potential conflicts.
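The direction check at the heart of this design can be made concrete with a small sketch: a track is flagged as “wrong-way” when its net heading opposes the street’s permitted direction of travel. This is only an illustration under simplified assumptions (a straight one-way street with a known permitted heading); the function name and tolerance are hypothetical, and this is not the actual “advanced road user detection” algorithm.

```python
import math

def is_wrong_way(trajectory, allowed_heading_deg, tolerance_deg=90.0):
    """Flag a track whose net direction opposes the street's permitted
    direction of travel.

    trajectory: list of (x, y) positions in metres, in time order.
    allowed_heading_deg: heading of legal travel, in degrees
    counter-clockwise from the positive x-axis.
    """
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
    # smallest angular difference between track heading and permitted heading
    diff = abs((heading - allowed_heading_deg + 180.0) % 360.0 - 180.0)
    return diff > tolerance_deg

# A track heading west on a street where travel is only permitted east (0 degrees)
print(is_wrong_way([(0.0, 0.0), (-5.0, 0.3)], allowed_heading_deg=0.0))  # True
```

In practice such a rule would be applied to the trajectories produced by the video processing, with the tolerance tuned to the camera view of each site.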

5.1.3. Results

The recording at 18 sites resulted in 2.5 TB of video data and 900 hours of daytime video material. After detection by the video analysis system, this was reduced to approximately 27,000 short video clips with a total length of 115 hours.

Two observers looked through the video clips and sorted them into four categories: cyclists, pedestrians, cars and other (errors in video processing or odd situations). The results are presented in Table 1. The observational periods were not the same at each site; therefore, the numbers are given as an average per day.

Table 1. The results of manual classification of the automated detections at each site (average per day).

Site    Cyclists    Pedestrians    Cars    Other    False positive rate    Detections, total
2       147         894            11      12       86%                    1063
4       100         44             19      7        41%                    170
5       110         54             9       14       41%                    187
6       63          938            26      126      95%                    1153
7       8           13             4       1        69%                    26
9       42          367            4       159      93%                    572
11      35          104            29      63       85%                    230
12      13          312            5       5        96%                    334
14      31          140            16      48       87%                    235
15      35          426            12      26       93%                    497
16      55          667            7       17       93%                    745
23      208         347            35      54       68%                    645
27      52          163            61      61       85%                    337
29      13          11             4       14       69%                    42
33      55          50             9       6        54%                    120
34      28          491            11      18       95%                    548
36      30          15             20      10       61%                    74
37      12          1              3       4        42%                    19
Total   1037        5037           285     645      85%                    6997

(The pedestrian, car and “other” detections are false positives.)
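The false positive rate in Table 1 is simply the share of automated detections that turned out not to be cyclists. A minimal sketch of the computation (the function name is illustrative; note also that, because the table shows per-day averages rounded to whole numbers, row sums may differ from the printed totals by one):

```python
def false_positive_rate(cyclists, pedestrians, cars, other):
    """Share of automated detections that are not cyclists (cf. Table 1)."""
    false_pos = pedestrians + cars + other
    return false_pos / (cyclists + false_pos)

# Site 2 in Table 1: 147 cyclists, 894 pedestrians, 11 cars, 12 other per day
print(f"{false_positive_rate(147, 894, 11, 12):.0%}")  # 86%
```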

To estimate the accuracy of the automated cyclist detection, manual counts were performed at each site for one or two 0.5-hour periods and compared with what was detected automatically during the same periods. Initially, the manual counts were expected to provide the “ground truth”, but it turned out that at some sites the observers had missed a few cyclists found by the automated system. The results of the manual counts were therefore adjusted to include these cyclists, too. Table 2 presents this comparison.


Table 2. Comparison of the automatically detected “wrong-way” cyclists with the “ground truth”.

Site    Cyclists,         Automated video analysis
        “ground truth”    Cyclists    False positives    Detections, total
2       3                 3           9                  12
4       7                 6           2                  8
5       6                 6           5                  11
6       4                 2           47                 49
7       3                 0           1                  1
9       9                 5           11                 16
11      1                 1           1                  2
12      0                 0           2                  2
14      0                 0           3                  3
15      1                 1           9                  10
16      7                 1           28                 29
23      19                18          36                 54
27      1                 1           13                 14
29      0                 0           0                  0
33      12                9           4                  13
34      6                 4           34                 38
36      4                 3           3                  6
37      3                 0           0                  0
Total   86                60          208                268

Average detection rate: 60/86=70%.

Average false positive rate: 208/268=78%.

Among all the automated detections, the observers found 43 situations that looked like potential traffic conflicts involving “wrong-way” cyclists. However, none of these were classified as serious conflicts according to the definition used by the Swedish Traffic Conflicts Technique (Hydén, 1987). A small test of how well potential conflicts can be detected automatically from the video data was also performed. Site 33 was chosen for this test as it had a relatively high number of potential conflicts (6), concentrated in four 0.5-hour periods (i.e. 2 hours of video in total). The trajectories and speed profiles were extracted for all the road users in the video sequences (the “trajectory extraction I” algorithm was used). Since it was known that the position was estimated with a systematic error (due to the assumption of “flat” road users) and that there were no serious conflicts to be found, the conflict criteria were set quite loosely: first, all the detected cyclists moving in the “wrong” direction were selected and then checked for encounters with other road users with


TTC < 2 sec. or TAdv < 1 sec. Table 3 compares the detection of cyclists and potential conflicts and shows the results of this test. The entire video was also watched through by an observer to get the actual number of “wrong-way” cyclists (the “ground truth”).
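For the simplified case of two road users modelled as constant-velocity discs, time-to-collision has a closed-form solution. The sketch below is only meant to make the TTC criterion concrete; the function name, disc model and radius are assumptions for illustration, not the computation actually applied to the extracted trajectories.

```python
import math

def time_to_collision(p1, v1, p2, v2, radius=1.0):
    """Earliest time t >= 0 at which two road users, modelled as discs of
    the given radius moving at constant velocity, come within touching
    distance. Returns None if they are not on a collision course."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    a = vx * vx + vy * vy
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry - (2.0 * radius) ** 2
    if c <= 0:
        return 0.0          # already overlapping
    if a == 0:
        return None         # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None         # closest approach exceeds touching distance
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else None

# Cyclist at (0, 0) moving east at 5 m/s; car at (20, 0) moving west at 10 m/s
print(time_to_collision((0, 0), (5, 0), (20, 0), (-10, 0)))  # 1.2, i.e. below the 2 sec. threshold
```

With such a function, an encounter would be flagged as a potential conflict whenever the computed TTC falls below the chosen threshold.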

Table 3. Detection of “wrong-way” cyclists and potential traffic conflicts by two techniques.

Video       Cyclists,         “advanced road user detection”            “trajectory extraction I”
sequence    “ground truth”    Cyclists    Conflicts, detected manually  Cyclists    Conflicts
1           9                 8           2                             9           2
2           4                 4           1                             3           0
3           3                 2           2                             3           2
4           3                 3           1                             2           0
Total       19                17          6                             17          4

Only 4 of the 6 known potential conflicts were detected automatically. Analysis of the “misses” showed that in both cases the reason was that the cyclists involved in the conflicts were not detected at all. However, the overall detection rates of the two techniques are about the same (17 cyclists in both cases, though not exactly the same ones), so it might be just a coincidence that the missed cyclists were involved in conflicts.

The studied site was in the shade of a large tree for most of the day. This resulted in many false trajectories located on the border of the shade (as the leaves moved in the wind, the shadows were detected as separate objects). These tracks were, however, very easy to sort out, as they lasted abnormally long while the travel distance did not exceed 1-2 metres.
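The sorting rule described here lends itself to simple automation: flag tracks whose duration is long while the distance travelled stays within a metre or two. A sketch with illustrative thresholds (the frame rate and cut-off values are assumptions, not values from the study):

```python
def is_shadow_artifact(trajectory, fps=25.0, max_travel_m=2.0, min_duration_s=10.0):
    """Heuristic for the false tracks described above: flag tracks that last
    abnormally long while barely moving (e.g. shadows of leaves on the border
    of the shade).

    trajectory: list of (x, y) positions in metres, one per video frame.
    """
    duration = len(trajectory) / fps
    travel = sum(
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])
    )
    return duration >= min_duration_s and travel <= max_travel_m

# A track drifting less than half a metre over 20 seconds (500 frames at 25 fps)
drift = [(0.001 * i, 0.0) for i in range(500)]
print(is_shadow_artifact(drift))  # True
```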

5.2. Study II – Cyclists in roundabouts, 2 design