
Pigeons are able to learn many categories, both natural and artificial ones, from photographs (e.g. Bhatt et al., 1988), and to perform same/different judgements with pictorial stimuli (e.g. Cook et al., 2000), but the question for the present analysis is which mode of picture processing is at work. In Cook et al. (2000), for example, judgements can be based purely at the level of pattern recognition.

Another classic example of local processing of images in animals is the “Charlie Brown” study by John Cerella (1980). Pigeons learned to discriminate images of Charlie Brown, the character from the cartoon Peanuts.49 Discrimination generalised to novel pictures, and also to scrambled versions of Charlie Brown. This meant that local information, and not Charlie Brown as a complete figure, was the basis for discrimination. But it was also found that no single critical feature accounted for the

48 Men in red jackets do not stand for the blobs of pigment in the cartoon at my desk as well as those blobs of pigment stand for a man in a red jacket.

49 In Sweden better known for the dog character Snoopy.

performance. It seemed that Charlie Brown could be defined as a redundant collection of discrete features. The pigeons had learned to respond to several independent features. This is a clear example of picture processing in surface mode.

However, surface mode does not only work for isolated features, but can also take into consideration the relations between features. One can speak of a surface mode of a local and a global sort, where the global is very close to being a form of reality mode, but without any correspondence to previously experienced objects, i.e. a gestalt without recognition of a referent. But this gestalt can be recognised between exemplars and thus work in reality mode between pictures as such.

This is probably the reason why it looks like pigeons treat line-drawings in studies by e.g. Wasserman and colleagues as representations of objects. The pigeons’ discrimination is sensitive to the deletion of several features at once and to scrambling of features, but not to the deletion of single features (Kirkpatrick-Steger et al., 1998), just as if they had perceived motifs in the pictures. When single features proved critical, however, individual pigeons were found to have had idiosyncratic recognition strategies. Had the pigeons seen the drawings as categorisable objects from the real world, effects like this ought to be rare.50 Pigeons can also recognise three-dimensional rotations of line-drawn object shapes (Wasserman et al., 1996), which is difficult to account for with a theory based on independently perceived features.

Kirkpatrick-Steger et al. (2000) exposed pigeons to line-drawn object shapes, called “geons,” and found that the shapes were difficult to learn to discriminate, but also that once they had been learned they were treated as compositional wholes, perhaps as three-dimensional objects. It is thus possible that the line-drawings in the studies above were also seen as three-dimensional objects simply by virtue of being perspective drawings. (But see e.g. Cerella, 1977.) The step to reality mode when seeing three-dimensional shapes in line drawings is very small, but to make the distinction clearer I would like to reserve the term reality mode for those occasions where recognition of pictorial content is due to spontaneously taking it for a kind of real-world exemplar. When finding out with time that a stimulus behaves comparably to objects in the real world, one is constructing the relationship, or rather the confusion, from a different angle altogether. It can perhaps be described as a special case of surface mode, approximating reality mode. However, these variations likely stem from similar processes. Reid and Spetch (1998), for example, showed that pigeons could discriminate three-dimensional from two-dimensional abstract objects in pictures by using depth cues such as shading, as well as perspective transformations.51 The latter were present in Kirkpatrick-Steger et al. (2000).

In a study on object rotation, Peissig et al. (2000a; 2000b) used geons similar to the ones in Kirkpatrick-Steger et al. (2000), with the difference that they were computer-rendered in 3D, complete with shadows and reflections. They found results for rotation similar to those of Wasserman et al. (1996). With this type of stimulus it is much easier to ascribe performance to reality mode, but in order to avoid double standards, differences between the performance with rendered geons and drawn geons must be

50 Unless pigeons form radically individual recognition strategies for real-world items as well. (If this is indeed the case, there is little hope of learning anything about pictoriality from pigeons.)

51 For human children (3 years), shape perspective is a superior pictorial depth cue in relation to shading, relative sizes, linear perspective, interposition etc. (Olson et al., 1980).

found in a comparable test. Such a test was Peissig et al. (2006), where the line-drawing stimuli and the 3D shape stimuli were used in a test of generalisation to novel sizes. Generalisation was equally successful for both types of stimuli. However, generalisation performance worsened the more the size differed from the training stimuli. The rotation studies similarly found that the larger the rotation, the weaker the pigeons’ response. These two findings imply that the object status of the depictions was not completely detached from experience. Meanwhile, it was found that transfer to novel rotations in pictures improved the more different views had been shown during training sessions (Wasserman et al., 1996; Peissig et al., 2000b). This seems to suggest that the birds did indeed learn to see the shapes in the pictures as three-dimensional as a result of training.52

To really arbitrate between reality mode and a global, relational version of surface mode, a comparison between e.g. rotation in pictures and rotation of real objects is needed. Friedman et al. (2005) did exactly this and found a notable difference in performance on novel rotational views between photographs and objects. The latter was significantly easier for the pigeons.53 It was concluded that pigeons perceive objects and their pictures differently.

That discrimination of scrambled pictures is more difficult when viewed in a global fashion, or reality mode, than in a local surface mode (or pictorial mode) is shown in Watanabe (2001) for pigeons. Watanabe compared discriminations of a specific human individual among others in photographs, of pigeons among other bird species in photographs, of a specific human cartoon character among other cartoons, and of a specific pigeon cartoon character among other bird cartoons. In the case of the human and bird cartoons, the target and comparison stimuli were made by the same artist in each group, but not across groups. Human and bird cartoons were thus very different in style and composition. One notable difference was that the bird cartoons were monochromatic while the human stimuli were in colour. All stimuli were cut out and pasted on green backgrounds (fig. 9, p. 88).

When scrambled, photographs, and especially those of pigeons, stopped being recognised, while cartoons still elicited discriminative responses, but only the people cartoons.

The unsuccessful discrimination of scrambled photographs is probably due to their being analysed in reality mode, at least in the case of pigeon photographs. A reason to suspect that bird photographs were processed in another way than the other categories is that discrimination of bird cartoons, human photographs, and human cartoons took roughly the same number of sessions to reach criterion in learning trials, while bird photographs were learned in less than a third of that time (Watanabe, 2001).

52 Macaques (Macaca fuscata) in Sugihara et al. (1998) seem to have responded in a matching task with rotated stimuli as if computer-rendered 3D objects were three-dimensional. However, extensive training including 360° rotations of the stimuli preceded testing.

53 The same effect could not be found in humans, probably because they were pictorially competent.

Figure 9. The cartoon human and one of the pigeon drawings used in Watanabe (2001). Whole and scrambled. From Watanabe (2001).

Photographic displays are far from certain to evoke global recognition. Even photographs of pigeons are sometimes difficult to perceive (e.g. Ryan & Lea, 1994) and many exemplars are needed to learn to discriminate individual pigeons in photographs (Nakamura et al., 2003). This suggests that even for pigeon photographs, processes closer to surface mode than reality mode might often be at work. Similarly, Trillmich (1976, in Watanabe, 1997) failed to show transfer for discrimination of living individuals to their pictures in budgerigars.

That conspecific bird stimuli might be extra sensitive in both directions is not surprising. On the one hand, signals that single out birds from other objects might be readily reproduced in pictures and reacted upon with a predisposed heightened sensitivity. On the other hand, additional signals that are potentially relevant for identifying birds, such as vocalisation, olfactory cues and ultraviolet markings, are not reproduced in photographs. Perhaps information for discriminating individual birds is found in this latter category.

That people cartoons in Watanabe (2001) could be scrambled and still remain discriminable was probably due to the fact that the cartoons were processed with respect to local features. In addition, successful generalisation of discrimination to novel items occurred for all categories except people cartoons, which again suggests a local strategy for this stimulus type.

So why were pigeon cartoons sensitive to scrambling? Remember that the two categories contained very different types of pictures (see fig. 9). The difference between people and bird cartoons can thus be attributed to different configurations of stimulus features rather than the people-pigeon dimension as such. Gibson and Wasserman (2003) showed that pigeons are able to simultaneously learn about cues as well as their spatial relations when both sources of information are available. It is plausible that the relative weight of each strategy is influenced by the presented stimuli. Matsukawa et al. (2001) confirm that this might be the case. Revisiting the cartoon and line-drawing studies already mentioned with one of their own, they concluded that “pigeons use both global and local aspects, with different mixtures of these types of information depending on the particular perceptual context.” In Watanabe (2001), for example, the coloured elements of the people cartoons could have made recognition of local elements easier, while the black-and-white bird cartoons forced the subjects to rely more on relational properties, hence a more global processing and sensitivity to scrambling.

Watanabe’s (2001) results for the pigeon cartoons closely resemble those of Kirkpatrick-Steger et al. (1998), who also used black-and-white outline drawings,

but of watering cans, irons, sailboats and desk lamps. Another explanation for the sensitivity to scrambling in bird cartoons is possibly that there was some bird-typical silhouette in the pictures which motivated reality mode processing. But given the just-mentioned watering cans, irons, sailboats and desk lamps, this can easily be argued against.

Whether human photographs, on the other hand, were processed in reality mode, like pigeon photographs, or were a case of processing in surface mode is less clear. If the latter, it is reasonable to suspect a more relational parsing (a global strategy), since the discrimination was sensitive to scrambling.

Aust and Huber (2003) found that in a “people present” vs. “people absent” discrimination, pigeons’ performance dropped when scrambled and distorted photographs were displayed, but not as low as when “people absent” images were shown. Thus both individual people components and configurations of components were responsible for the pigeons’ discrimination. However, the test does not convincingly show that pigeons saw the people stimuli as people. Just turning the human figure upside down had the same effect as scrambling the pictures severely. One should thus be wary of assuming that just because photographs are photographs they are treated differently from abstract stimuli.

But there might be good indications of recognition in other studies. Wilkie (2000) concludes that pigeons’ responses to photographs of outdoor scenes correspond to landmark use in pigeon navigation. This means that photographic scenes are to some degree seen as natural scenes. However, transfer to novel views of the same scene is poor unless the pigeons are given many training views (Spetch et al., 2000). Correspondence might thus not be what would be expected in reality mode. Can pigeons still use this correspondence in an actual task? Cole and Honig (1994) found that pigeons could use information from photographs of a room to find food in that room,54 but they could not learn from the room to find a baited place in photographs. Similarly, Dawkins et al. (1996) could not find any transfer in pigeons from rewarded places to photographs of those places. Lechelt and Spetch (1997) found that pigeons did not transfer in either direction, although they could independently learn to use landmarks in both a real room and in digitised displays. Again, relations in the pictorial world seem to take on aspects of relations in the real world, but with no bridging between the two spaces.

Watanabe (1993) showed transfer from objects to photographs, and vice versa, for the distinction food vs. non-food. Processed in reality mode, the photographs could simply have been seen as further exemplars of foods and other objects. This analysis is made by Watanabe (1997) himself. He thus repeated the experiment and also found that pigeons could learn to discriminate between real objects and photographs.

One of the most convincing demonstrations, in their own judgement, of object-photograph equivalence, including differentiation, in pigeons is published in Spetch and Friedman (2006). In order to exclude predisposed reactions, they chose to look

54 This was in a heavily reinforced recognition task and not a case of map reading.

at learned instead of spontaneous discriminations and therefore used nonsense objects. Furthermore, the stimuli were constructed and presented so as to require global processing. The need for global processing would exclude responses based on local invariant features, such as colour, or on memorisation of specific views. The photographic stimuli used were realistic renditions, including depth cues such as shadows, on a homogeneous background. Both photographs and objects were displayed behind glass. Transfer from depicted objects to real objects, and vice versa, was found. But separate subjects were used in each transfer group. Symmetric equivalence can therefore not be said to have been proven on an individual level.

More interesting is the claim made by Spetch and Friedman (2006) that subjects perceived a difference between objects and pictures. All pigeons performed worse directly after transfer, which means that it was not an effortless transition. Furthermore, pigeons in a stable-contingency group remained above chance and reclaimed proficiency much faster than subjects in a reversed-contingency group, who had to relearn the positive stimulus altogether. This means that the birds perceived a likeness between the new stimuli and the stimuli that had preceded the transfer. However, claims such as “[…] both groups demonstrated that they perceived a difference between objects and their pictures” (p. 970) and “[…] positive transfer was unlikely to reflect an inability to tell the difference between the objects and pictures” (p. 971) give the wrong impression. What was rather shown was that there was a perceived difference in the new group of instances of the positive stimulus. The specifically pictorial part of this difference remains to be proven. The same result could have been provoked by making some other transformation to the stimuli.

Reality mode accounts for the Spetch and Friedman (2006) results, but only because reality mode can work for the stimuli used. An important factor is the objects chosen to be represented in the pictures, and here simple but realistic computer renderings were used.

Pigeons have a different visual system from humans, and photographs are constructed for human vision. Given birds’ different perception of e.g. colour in real objects and of colours in pictures (Delius et al., 2000), the pigeons might fail to see any correspondence between more visually demanding stimuli, in the very same experimental setups. That the use of photographs of people, or transfer between photographic and real space, fails in certain studies is thus not surprising. Transfer from objects to pictures, and vice versa, usually breaks down the more complex, or refined, the discriminations have to be (Delius et al., 2000). This breakdown can be seen as a failure of reality mode to kick in, and as a limitation of working in surface mode in experiments that demand recognition.