
Image statistics and their processing in insect vision

Olga Dyakova 1 and Karin Nordström 1,2

Natural scenes may appear random, but are not only constrained in space and time, but also show strong spatial and temporal correlations. Spatial constraints and correlations can be described by quantifying image statistics, which include intuitive measures such as contrast, color and luminance, but also parameters that need some type of transformation of the image. In this review we will discuss some common tools used to quantify spatial and temporal parameters of naturalistic visual input, and how these tools have been used to inform us about visual processing in insects. In particular, we will review findings that would not have been possible using conventional, experimenter defined stimuli.

Addresses

1 Department of Neuroscience, Uppsala University, Box 593, 751 24 Uppsala, Sweden

2 Centre for Neuroscience, Flinders University, GPO Box 2100, Adelaide, SA 5001, Australia

Corresponding author: Nordström, Karin (karin.nordstrom@flinders.edu.au)

Current Opinion in Insect Science 2017, 24:7–14
This review comes from a themed issue on Neuroscience
Edited by Anne von Philipsborn and Stanley Heinze
For a complete overview see the Issue and the Editorial
Available online 6th September 2017

http://dx.doi.org/10.1016/j.cois.2017.08.002

2214-5745/© 2017 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Introduction

Many of us are fascinated by the exquisite performance of flying insects, despite their seemingly simple hardware, with small brains and low-resolution compound eyes. Experimenter-defined, relatively simple visual stimuli, such as high-contrast spots, bars or gratings, have often been used to decipher the algorithms underlying fundamental aspects of visual processing, such as direction selectivity, orientation tuning, and velocity tuning [1]. However, when using the resulting algorithms to predict the responses to more naturalistic stimuli, that is, stimuli more representative of the visual world the insects live in, they may fail [2,3]. Thus, to understand how insect vision functions in the natural world, we need to know what this world looks like [4]. In this review we will describe some common tools used to quantify naturalistic visual input, and how these have been used to inform us about visual processing in insects. In particular, we will discuss the redundancy present in natural visual stimuli and how this redundancy is reflected in insect sensory processing, as it has been argued that coding and transformation of natural input at all levels of the visual system should be adapted to this redundancy in optimal ways [5]. Even though we focus on insects here, note that the circuitry and many of the underlying algorithms are similar in insects and mammals [1,6,7].

Image transformations and the amplitude spectrum

To describe the natural world, we use image statistics, which refer to parameters that can be quantified in an image, including intuitive measures such as contrast, color and luminance, but also parameters that can only be quantified after doing some type of transformation [4,5]. To illustrate a common transformation used in image statistics, we can start with an 8 bit photo of the Himalayas taken with a conventional camera (Figure 1a). By performing a 2-dimensional Fourier transform [8] of its grayscale image (Figure 1b) we can extract the amplitude spectrum (Figure 1c). The Fourier transform is a mathematical method that allows us to represent the image in the spatial frequency domain.
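As a rough, minimal sketch of this transformation (our illustration, not code from the article), the amplitude spectrum of a grayscale photo can be computed with a 2-dimensional fast Fourier transform; "himalaya.jpg" is a placeholder file name, and numpy and Pillow are assumed to be available:

```python
import numpy as np
from PIL import Image

# Load the photo and convert it to grayscale (cf. Figure 1a,b);
# "himalaya.jpg" is a placeholder for any photo.
img = np.asarray(Image.open("himalaya.jpg").convert("L"), dtype=float)

# 2D FFT; fftshift moves the zero (DC) frequency to the centre, as in Figure 1c.
spectrum = np.fft.fftshift(np.fft.fft2(img))
amplitude = np.abs(spectrum)

# The dynamic range is huge, so the spectrum is usually displayed on a log scale.
amplitude_log = np.log10(amplitude + 1)
```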

To naïve viewers the amplitude spectrum of the image (Figure 1c) may appear daunting, but it can be explained in relatively straightforward terms. For example, in the amplitude spectrum of the image (Figure 1c) different orientations radiate like spokes on a wheel from its center, which may be easier to appreciate by re-plotting the amplitude as a function of orientation (black data, Figure 1g). In particular, there is a white 'cross' in the amplitude spectrum (Figure 1c), highlighting increased amplitude at 0° and 90° (see gray dashed arrows, black data, Figure 1g). The increased amplitude at 0° and 90° corresponds to the edges of the photo (Figure 1b), which disappear if we filter the photo (Figure 1d,e; red data, Figure 1g; [9]). Sometimes it is even possible to identify individual image features in the amplitude spectrum. For example, the two stakes behind the Eristalis hoverfly (arrow, Figure 1f) appear as peaks in its amplitude spectrum (solid arrow, blue data, Figure 1g).

By averaging amplitude spectra from many natural scenes, after filtering their edges (as in Figure 1d,e), we know that there are often peaks around 90° and 180° (Figure 1g), corresponding to the vertical and horizontal contours, including the horizon, tree trunks and branches (Figure 1a,f), that tend to dominate in natural scenes [10–13]. Many flies orient toward such vertical (90°) features, which is exploited in tethered experiments in Drosophila [14,15] and blowflies [16], but hoverflies also fixate high-contrast landmarks in free flight [17]. This frontal bar fixation behavior likely relies on an independent position system [e.g. 18,19,20], potentially mediated by orientation tuned neurons in the central complex [21]. Indeed, these central complex luminance change sensitive ring neurons have a particular preference for vertically oriented features [21], and modeling shows that their coarse visual information would allow flies to orient toward large high-contrast features in natural scenes [22], such as the stakes in our photo (Figure 1f).

Above, we looked at the amplitude across orientations (Figure 1g). If we instead calculate the rotational average across all orientations, we can plot the amplitude as a function of spatial frequency (black, Figure 1h). The amplitude (A) follows a power law:

A(f) = 1/f^α

in which the amplitude (A) is inversely proportional to the spatial frequency (f) raised to the power α. Because α is the slope of the log–log plot of the amplitude spectrum, it is referred to as the slope constant [23]. A consistent body of literature shows that slope constants across natural images cluster around 1–1.2 (gray data, Figure 1i) [9,11,23,24]. The slope constant is often extracted by doing a curve fit through a limited part of the spectrum (dashed and green lines, Figure 1h), for example, to take account of the limited optical resolution of the insect eye [23]. The slope constant varies between different types of scenes [11] and is higher in close-up photos [13], as is obvious in our example (blue data, Figure 1h). However, note that in our close-up photo (Figure 1f), the background is unfocused and blurry, which will artificially increase the slope constant [25].
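A minimal sketch of this procedure (our illustration, not the authors' analysis code): rotationally average the amplitude spectrum and fit a straight line to a limited band of the log–log plot. The band limits f_lo and f_hi are illustrative placeholders.

```python
import numpy as np

def slope_constant(gray_img, f_lo=5, f_hi=100):
    """Estimate alpha in A(f) ~ 1/f^alpha from the rotationally averaged
    amplitude spectrum, using only a limited frequency band (cycles/image)."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(gray_img)))
    ny, nx = amp.shape
    y, x = np.indices(amp.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # radial frequency bins

    # Rotational average: mean amplitude at each integer radius
    radial_mean = (np.bincount(r.ravel(), weights=amp.ravel())
                   / np.maximum(np.bincount(r.ravel()), 1))

    f = np.arange(radial_mean.size)
    band = (f >= f_lo) & (f <= f_hi)
    # Straight-line fit in log-log space; alpha is the negated slope
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(radial_mean[band]), 1)
    return -slope
```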

In the hoverfly lobula plate there is a neuron that is inhibited by stationary sinusoidal gratings within a limited band of spatial frequencies, called the centrifugal stationary inhibited flicker excited (cSIFE) neuron [26]. cSIFE's inhibition to a range of natural scenes is strongest for those images that have slope constants around 1–1.2 (blue data, Figure 1i), strikingly similar to the probability distribution of slope constants across natural scenes (gray data, Figure 1i; [23]). This observation was confirmed by using manipulated images to show that the inhibition disappeared if the slope constant was shifted away from naturalistic values [23].

Figure 1. Fourier transforms and the amplitude spectrum. (a) A color photo of the Himalayas taken with a conventional 8 bit camera. (b) The photo converted to grayscale. (c) The amplitude spectrum of the photo in panel b. (d) The photo in panel b with its edges filtered with a cosine taper. (e) The amplitude spectrum of the image in panel d. (f) A photo of a male Eristalis tenax feeding from a canola plant in the Himalayas. (g) The average amplitude from panels c (black) and e (red) as a function of orientation. The blue data show the amplitude of the filtered, grayscale version of the photo in panel f. The solid arrow points to amplitudes generated by the stakes and the dashed arrows point to edge artifacts. (h) The rotationally averaged amplitude as a function of spatial frequency (natural scene α = 1.08, windowed scene α = 1.14, close-up α = 1.45). The slope constant is often extracted from a limited part of the spectrum, see dashed lines. The data have been shifted vertically for visibility. (i) The gray data show the probability distribution of slope constants in ca. 100 natural scenes [23,75], and its Gaussian curve fit (black). The blue data show the inhibition 6 natural images generate in the hoverfly cSIFE neuron, as a function of their slope constant. Panel redrawn from [23].

Correlations and the temporal structure

The temporal structure of natural visual input also follows a power law (black data, Figure 2b), strikingly similar to the spatial structure (Figure 1h). Fly photoreceptors and their first optic interneurons, the lamina monopolar cells (LMCs), work as whitening filters of naturalistic input [27–29]. As opposed to naturalistic input, white noise has equal amplitude at all frequencies (e.g. purple line, Figure 2b). The term peripheral whitening thus refers to the processes that transform the typical 1/f slope seen in natural input (black data, Figures 1h and 2b) into a more horizontal line, at least across the frequencies that are relevant to the sensory system (Figure 2d; [27,29]). Peripheral whitening is done in both space and time, and strongly depends on the background light intensity [27,30].
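As a toy illustration of the idea of whitening (not a model of photoreceptor or LMC filtering), a 1/f-shaped temporal signal can be flattened by boosting each frequency in proportion to f, up to an arbitrary cut-off; the time step dt and cut-off f_cut below are illustrative assumptions:

```python
import numpy as np

def whiten(signal, dt=1e-3, f_cut=100.0):
    """Crude whitening sketch: multiply each temporal frequency component by a
    gain proportional to f (which flattens a 1/f amplitude spectrum), holding
    the gain constant above an arbitrary cut-off frequency."""
    spectrum = np.fft.rfft(signal)
    f = np.fft.rfftfreq(len(signal), d=dt)   # frequencies in Hz
    gain = np.minimum(f, f_cut)              # ~f below the cut-off, flat above it
    gain[0] = 0.0                            # discard the DC (mean) component
    return np.fft.irfft(spectrum * gain, n=len(signal))
```

Real peripheral whitening is of course implemented by spatiotemporal receptive fields and depends on light level [27,30]; the point here is only that flattening the spectrum amounts to frequency-dependent gain.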

Notably though, it is not only the 1/f structure of the averaged amplitude spectrum (Figures 1h and 2b) that makes a stimulus natural, but also the correlations within the stimulus [4]. Indeed, neighboring points of naturalistic input are highly correlated in both space and time [4,31], with recent papers suggesting how to quantify such higher-order correlations [32]. For example, if we look at the top photo in Figure 2a it is clear that the neighbor of a bright pixel corresponding to the sky is more likely to be similarly bright than to have any of the other 255 possible intensity values. Indeed, if we scramble the pixels of the scene (Figure 2a, top), the result appears very unnatural (Figure 2a, bottom), even if the individual pixels are identical and only their spatial arrangement differs.
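Scrambling as in Figure 2a (bottom) is easy to reproduce on any grayscale image; a minimal sketch (ours, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def scramble(gray_img):
    """Randomly permute pixel positions: the intensity histogram is untouched,
    but all spatial correlations between neighboring pixels are destroyed."""
    flat = gray_img.ravel().copy()
    rng.shuffle(flat)
    return flat.reshape(gray_img.shape)
```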

Naturalistic light intensity time series thus contain structured asymmetric contrast variations, which correlate strongly, and drive photoreceptor output vigorously [33]. The importance of these naturalistic temporal correlations was shown in an elegant series of experiments by Song and Juusola [34]. They [34] recorded intracellular photoreceptor responses in both Drosophila and the killerfly Coenosia attenuata to three types of stimuli: a naturalistic time series (black, Figure 2b) and one where they scrambled the temporal structure of the naturalistic input, similar to what we did with the photo in Figure 2a, in effect creating a white signal (purple, Figure 2b). The last stimulus consisted of an artificial time series, so-called Gaussian noise, but with the slope constant altered to make the frequency distribution more naturalistic (green, Figure 2b). Song and Juusola [34] showed that the photoreceptor response to the naturalistic time series was not only larger [33], but also had higher information content than the response to either of the artificial stimuli (Figure 2c), highlighting the importance of temporal correlations (for review, see [3]). Indeed, despite having a naturalistic slope constant, the Gaussian noise that they created (green, Figure 2b) is not naturalistic, as it lacks such inherent correlations. The importance of such temporal and spatial correlations might partially explain why it can be difficult to use visual responses to white noise [35] to predict responses to more complicated stimuli (for a thorough discussion, see Ref. [36]).
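For readers who want to construct analogous control stimuli, here is a hedged sketch (our construction, not the stimuli of Ref. [34]) of a shuffled version of a naturalistic series and of Gaussian noise with a 1/f^α amplitude spectrum but random phases:

```python
import numpy as np

rng = np.random.default_rng(1)

def shuffled(series):
    """Shuffling destroys temporal correlations, giving a roughly flat ('white')
    amplitude spectrum while keeping the intensity distribution."""
    out = np.asarray(series, dtype=float).copy()
    rng.shuffle(out)
    return out

def gaussian_one_over_f(n, alpha=1.0):
    """Gaussian noise with a 1/f^alpha amplitude spectrum but random phases:
    naturalistic second-order statistics, without higher-order correlations."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = 1.0 / f[1:] ** alpha              # impose the power-law amplitude
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    x = np.fft.irfft(amp * np.exp(1j * phases), n=n)
    return (x - x.mean()) / x.std()             # zero mean, unit variance
```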

Dark pixels dominate in natural scenes

Photos of natural scenes are often dominated by darker pixels [31,37,38,39], which can be illustrated by plotting a histogram of the probability distribution of pixel intensities within a grayscale photo (Figure 3a).

Figure 2. Neighboring points in naturalistic input are strongly correlated. (a) The top panel shows a grayscale naturalistic scene. The bottom panel shows the same photo where all the pixels have been scrambled. (b) The average amplitude of a naturalistic stimulus time series (black). When the naturalistic stimulus has been scrambled, the amplitude spectrum becomes flat (purple). The green data show an artificially generated time series with a naturalistic amplitude spectrum. (c) The information rate (redrawn from [34]) in photoreceptor responses to the three different stimuli shown in panel b. (d) The response of fly photoreceptors and LMCs (redrawn from [27]) to naturalistic input (e.g. black data in panel b).

This dominance of dark pixels is often quantified using skewness or kurtosis, which are both mathematical measures of how non-Gaussian a distribution is [38,40,41]. Natural input is not only positively skewed in space (Figure 3a), but the temporal structure of a natural intensity time series is similarly positively skewed (Figure 3b; note that the x-axis is a log scale, understating the positive skew; see Refs. [31,33]). Extensive adaptive processes in fly photoreceptors and LMCs allow for encoding these large, naturalistic luminance ranges [42]. Indeed, in fly photoreceptors, pixel-wise adaptation [43] leads to a less skewed response probability (Figure 3c, note the linear x-axis), which becomes completely symmetrical in their LMCs (Figure 3d) [33]. Peripheral processing thus appears to optimally encode the naturalistic dominance of dark pixels (Figure 3a–d; [44,45]).
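Skewness and kurtosis of the pixel-intensity distribution are straightforward to compute; a minimal sketch, assuming scipy is available:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def intensity_stats(gray_img):
    """Skewness and excess kurtosis of the pixel-intensity distribution.
    Natural scenes dominated by dark pixels typically give a positive skew."""
    px = np.asarray(gray_img, dtype=float).ravel()
    return skew(px), kurtosis(px)
```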

Figure 3. Peripheral processing is optimized for positively skewed naturalistic input. (a) The luminance probability histogram of the pixels in the inset photo. (b) The probability distribution of light intensities from a naturalistic time series. (c) The probability distribution of photoreceptor (PR) responses to the naturalistic time series in panel b. (d) The probability distribution of LMC responses to the naturalistic time series. Panels b–d redrawn from [33]. (e) Simplified diagram showing the input of the 5 LMCs (L1–L5) to the ON (T4) and OFF (T5) motion vision pathways, respectively, redrawn from [47–50]. (f) The light impulse deduced receptive fields of the medulla intrinsic (Mi) and transmedulla (Tm) neurons of the ON/T4 pathway, redrawn from [47]. (g) The high pass (hp) and low pass (lp) time constants (τ) of the medulla intrinsic (Mi) and transmedulla (Tm) neurons of the ON/T4 pathway (left, [47]) and OFF/T5 pathway (right, [47,50]). The right diagram shows the time constants as mean ± SD, as given in Refs. [47,50]. (h) The receptive fields of the transmedulla (Tm) neurons of the OFF/T5 pathway, redrawn from [47].

Fly LMCs can be subdivided into 5 subtypes, labeled L1–L5 [46]. These LMCs project to transmedulla (Tm) and medulla intrinsic (Mi) interneurons in the medulla [46], which connect to T4, part of the ON motion vision pathway, coding for bright intensity changes, and to T5, part of the OFF pathway, coding for dark intensity changes (Figure 3e; [47–51]). Three of the receptive fields of the neurons in the ON pathway have ON centers, that is, their centers are excited by impulses of light (red, Figure 3f), whereas one has an OFF center (Mi9, Figure 3e; [47]). The receptive fields of the Tm neurons in the OFF pathway, however, all have a similar structure, with an OFF center and ON surround (Figure 3h; [47]).

In the medulla there are thus more neurons responding to dark contrast changes than to bright contrast changes (blue OFF centers, Figure 3f,h). Furthermore, whereas the OFF pathway is highly selective, neurons in the ON pathway give breakthrough responses to dark edges [52]. This, together with the finding that the eight medulla neurons have strikingly different kinetics (Figure 3g) [47,50], could be useful for the requisite dynamic encoding of naturalistic time series dominated by dark pixels (Figures 2b and 3a,b). Similar ON-OFF asymmetries are also found in the dragonfly L-neurons (the outputs of the ocellar photoreceptors; [36]), in dragonfly small target motion detector (STMD) neurons [53], and in locust looming sensitive descending contralateral movement detector (DCMD) neurons [54], all of which give stronger responses to dark stimuli than to bright stimuli. However, a loom-sensitive neuron in Drosophila showed no such asymmetry [55].

Naturalistic motion vision

The responses of motion vision sensitive neurons in the fly lobula plate strongly depend on the contrast of artificial stimuli, such as sinusoidal gratings (Figure 4a) [56,57]. Note that there are many definitions of contrast in the literature [24], which differ in absolute values, but usually not in relative rank [58]. When working with natural scenes we recommend using the root mean square (RMS) contrast, as it depends on the standard deviation of the luminance values [24], rather than just the minimum and maximum values, like the often-used Michelson or Weber contrast.

The contrast dependency in the response to sinusoidal gratings can be predicted from the non-linear correlation (usually a multiplication) of a 2-point correlator (Figure 4c). However, in response to naturalistic, panoramic scenes, this strong contrast dependency is gone (Figure 4b; [2,58]), which the 2-point correlator (Figure 4c) does not predict. Importantly, moving naturalistic scenes contain many more complex correlations [59] than simple sinusoidal gratings do. By modifying the simple 2-point correlator (Figure 4c) to include 3-point correlations, which compare the same point in space at two points in time, and two points in space at the same time (Figure 4d,e), responses to naturalistic image sequences become much more robust [59,60]. Indeed, these 3-point correlators (Figure 4d,e) require the asymmetric contrast distributions (Figure 3a,b) of naturalistic stimuli [31,61]. Note that there are many other non-linearities that can be added to a 2-point correlator to provide more robust responses to naturalistic motion [60,62–64], and that our current understanding of the input elements to motion vision (Figure 3e) strongly argues against a simple biological 2-point correlator [47,49,50,65]. Furthermore, the spatial pooling that takes place in the input dendrites of fly motion vision neurons, together with their axo-axonal gap junctions [66] and fast adaptation [57], additionally assists in providing more robust responses to naturalistic motion.
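To make the correlator structure concrete, here is a hedged sketch (ours; the spatial offsets, delays, and sign conventions are illustrative assumptions, not the exact forms in Ref. [59]) of an opponent 2-point correlator and of non-opponent diverging and converging 3-point correlators operating on a space-time array, together with the RMS contrast measure recommended above:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of the luminance values (here normalized
    by the mean luminance; normalization conventions vary in the literature)."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()

def two_point(stim):
    """Opponent 2-point (Hassenstein-Reichardt-like) correlator with a one-frame
    delay and one-pixel offset; stim has shape (time, space)."""
    a, b = stim[:-1, :-1], stim[:-1, 1:]          # earlier samples (left, right)
    a_now, b_now = stim[1:, :-1], stim[1:, 1:]    # later samples (left, right)
    return np.mean(a * b_now - b * a_now)         # rightward minus leftward term

def diverging_three_point(stim):
    """One earlier sample correlated with two simultaneous later samples."""
    return np.mean(stim[:-1, :-1] * stim[1:, :-1] * stim[1:, 1:])

def converging_three_point(stim):
    """Two simultaneous earlier samples correlated with one later sample."""
    return np.mean(stim[:-1, :-1] * stim[:-1, 1:] * stim[1:, 1:])

# Example: a random 1D pattern drifting rightward by one pixel per frame
pattern = np.random.default_rng(2).standard_normal(64)
stim = np.stack([np.roll(pattern, t) for t in range(50)])   # shape (time, space)
print(two_point(stim) > 0)   # positive mean output for rightward motion
```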

Figure 4. The response to naturalistic images does not depend on contrast. (a) The neural response to moving wide-field sinusoidal gratings with different contrast. Data from [56]. (b) The response to moving naturalistic panoramic scenes covering a large range of contrasts. Figure redrawn from [2]. (c) The classic 2-point correlator, which compares 2 points in space at 2 different time points. (d) A diverging 3-point correlator. (e) A converging 3-point correlator. Panels c–e are redrawn from [59].

The data in Figure 4a,b also highlight that it is hard to predict the responses to natural stimuli from responses to artificial stimuli. Why, then, are artificial stimuli so popular in insect vision? Probably because it is much easier to understand the input–output function when the input is easy to characterize and describe mathematically. The more natural a stimulus is, the more complicated it is to describe, and it can be hard for the experimenter to identify its defining characteristics. A complicating factor is also how to reproduce naturalistic stimuli in the lab, where most electrophysiological recordings and quantitative behavior still have to be performed, as visual displays limit the naturalness of stimuli. Whereas the temporal resolution of LED arenas is high enough [67], they tend to provide low spatial resolution, below the optical resolution of, for example, hoverflies [68] and dragonflies [69]. Many labs use LCD screens [70], but these are optimized for trichromatic human vision, whereas many insects have more than three opsins [71]. In motion vision this might not be an issue, as this pathway is believed to be colorblind [72] (but see [73]), but when using stationary images it certainly is. Is it then better to display the image in grayscale? Based on our own bleak perception of a natural scene displayed in grayscale versus the same brilliant scene in color (Figure 1a,b), this is probably not optimal either. Nevertheless, we strongly encourage more naturalistic stimuli to be used, especially as visual displays continue to develop [74], as we can learn a lot from the neural responses to these.

Conflict of interest statement

The authors declare no conflict of interest.

Acknowledgements

Our research is funded by the Swedish Research Council (VR, 2012-4740), the US Air Force Office of Scientific Research (AFOSR, FA9550-15-1-0188), the US Air Force Research Laboratory (AFRL, FA9550-11-1-0349), the Australian Research Council (ARC, DP170100008), and Stiftelsen Olle Engkvist Byggmästare (2016/348).

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:

• of special interest
•• of outstanding interest

1. Euler T, Baden T: Computational neuroscience: species-specific motion detectors. Nature 2016, 535:45-46.
2. Barnett PD, Nordström K, O'Carroll DC: Motion adaptation and the velocity coding of natural scenes. Curr Biol 2010, 20:994-999.
3. Juusola M, Song Z: How a fly photoreceptor samples light information in time. J Physiol 2017.
4. Simoncelli EP, Olshausen BA: Natural image statistics and neural representation. Annu Rev Neurosci 2001, 24:1193-1216.
5. Barlow H: Redundancy reduction revisited. Network 2001, 12:241-253.
This is a wonderful, thoughtful discussion by one of the giants in the field of image statistics. Barlow discusses his ideas around redundancy reduction, suggesting that it does not mean redundancy removal, and why it is important for us to understand redundancy in natural scenes.
6. Sanes JR, Zipursky SL: Design principles of insect and vertebrate visual systems. Neuron 2010, 66:15-36.
7. Clark DA, Demb JB: Parallel computations in insect and mammalian visual motion processing. Curr Biol 2016, 26:R1062-R1072.
8. Field DJ: Relations between the statistics of natural images and the response properties of cortical cells. J Opt Soc Am A 1987, 4:2379-2394.
9. Tolhurst D, Tadmor Y, Chao T: Amplitude spectra of natural images. Ophthal Physiol Opt 1992, 12:229-232.
This classic paper shows that the slope constant of the amplitude spectrum is not an artefact of how the photos are taken or processed.
10. van der Schaaf A, van Hateren JH: Modelling the power spectra of natural images: statistics and information. Vision Res 1996, 36:2759-2770.
11. Schwegmann A, Lindemann JP, Egelhaaf M: Temporal statistics of natural image sequences generated by movements with insect flight characteristics. PLOS ONE 2014, 9:e110386.
12. Girshick AR, Landy MS, Simoncelli EP: Cardinal rules: visual orientation perception reflects knowledge of environmental statistics. Nat Neurosci 2011, 14:926-932.
13. Torralba A, Oliva A: Statistics of natural image categories. Network 2003, 14:391-412.
14. Sareen P, Wolf R, Heisenberg M: Attracting the attention of a fly. Proc Natl Acad Sci USA 2011, 108:7230-7235.
15. Aptekar JW, Shoemaker PA, Frye MA: Figure tracking by flies is supported by parallel visual streams. Curr Biol 2012, 22:482-487.
16. Kimmerle B, Warzecha A-K, Egelhaaf M: Object detection in the fly during simulated translatory flight. J Comp Physiol A 1997, 181:247-255.
17. Collett TS, Land MF: Visual spatial memory in a hoverfly. J Comp Physiol 1975, 100:59-84.
18. Reichardt W, Poggio T: Visual control of orientation behaviour in the fly. Part I. A quantitative analysis. Q Rev Biophys 1976, 9:311-375.
19. Aptekar JW, Keles MF, Lu PM, Zolotova NM, Frye MA: Neurons forming optic glomeruli compute figure-ground discriminations in Drosophila. J Neurosci 2015, 35:7587-7599.
20. Bahl A, Ammer G, Schilling T, Borst A: Object tracking in motion-blind flies. Nat Neurosci 2013.
21. Seelig JD, Jayaraman V: Feature detection and orientation tuning in the Drosophila central complex. Nature 2013, 503:262-266.
This great paper shows how much we can learn about visual processing by venturing out of the optic lobes. Together with the paper below, we learn that coarse sampling may be sufficient for encoding naturalistic information.
22. Wystrach A, Dewar DMA, Graham P: Insect vision: emergence of pattern recognition from coarse encoding. Curr Biol 2014, 24:R78-R80.
See annotation to Ref. [21].
23. Dyakova O, Lee Y-J, Longden KD, Kiselev VG, Nordström K: A higher order visual neuron tuned to the spatial amplitude spectra of natural scenes. Nat Commun 2015:6.
24. Bex PJ, Makous W: Spatial frequency, phase, and the contrast of natural images. J Opt Soc Am A Opt Image Sci Vis 2002, 19:1096-1106.
25. Field DJ, Brady N: Visual sensitivity, blur and the sources of variability in the amplitude spectra of natural scenes. Vision Res 1997, 37:3367-3383.
26. De Haan R, Lee Y-J, Nordström K: Novel flicker-sensitive visual circuit neurons inhibited by stationary patterns. J Neurosci 2013, 33:8980-8989.
27. van Hateren JH: Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. J Comp Physiol A 1992, 171:157-170.
28. Atick JJ, Redlich AN: What does the retina know about natural scenes? Neural Comput 1992, 4:196-210.
29. Srinivasan MV, Laughlin SB, Dubs A: Predictive coding: a fresh view of inhibition in the retina. Proc R Soc Lond B Biol Sci 1982, 216:427-459.
30. van Hateren JH: Real and optimal neural images in early vision. Nature 1992, 360:68-70.
31. Ruderman DL, Bialek W: Statistics of natural images: scaling in the woods. Phys Rev Lett 1994, 73:814-817.
Another classic in the field of natural image statistics, which not only describes the spatial redundancy, but also the importance of correlations.
32. Hu Q, Victor JD: Two-dimensional hermite filters simplify the description of high-order statistics of natural images. Symmetry (Basel) 2016:8.
33. van Hateren JH: Processing of natural time series of intensities by the visual system of the blowfly. Vision Res 1997, 37:3407-3416.
34. Song Z, Juusola M: Refractory sampling links efficiency and costs of sensory encoding to stimulus statistics. J Neurosci 2014, 34:7216-7237.
This amazing study has performed every conceivable control experiment to convince us that visual coding of naturalistic temporal correlations, enabled by the refractory period in the photoreceptor microvilli, increases information capture.
35. Weber F, Machens CK, Borst A: Spatiotemporal response properties of optic-flow processing neurons. Neuron 2010, 67:629-642.
36. van Kleef JP, Stange G, Ibbotson MR: Applicability of white-noise techniques to analyzing motion responses. J Neurophysiol 2010, 103:2642-2651.
37. Kumar V, Gupta P: Importance of statistical measures in digital image processing. Int J Emerg Technol Adv Eng 2012, 2:56-62.
38. Pouli T, Cunningham DW, Reinhard E: A survey of image statistics relevant to computer graphics. Comput Graph Forum 2011, 30:1761-1788.
This is a great starting point for people wanting to learn more about image statistics.
39. Richards WA: Lightness scale from image intensity distributions. Appl Opt 1982, 21:2569-2582.
40. Brady N, Field DJ: Local contrast in natural images: normalisation and coding efficiency. Perception 2000, 29:1041-1055.
41. Pouli FT, Douglas C, Erik R: Image statistics and their applications in computer graphics. Eurographics State of the Art Report (STAR). 2010.
42. van Hateren JH, Snippe HP: Information theoretical evaluation of parametric models of gain control in blowfly photoreceptor cells. Vision Res 2001, 41:1851-1865.
43. Brinkworth RSA, Mah E-L, Gray JP, O'Carroll DC: Photoreceptor processing improves salience facilitating small target detection in cluttered scenes. J Vis 2008, 8:1-17.
44. Atick JJ: Could information theory provide an ecological theory of sensory processing? Network 2011, 22:4-44.
45. Laughlin S: A simple coding procedure enhances a neuron's information capacity. Z Naturforsch C 1981, 36:910-912.
46. Fischbach K, Dittrich A: The optic lobe of Drosophila melanogaster. Part I: a golgi analysis of wild-type structure. Cell Tissue Res 1989, 258:441-475.
47. Arenz A, Drews MS, Richter FG, Ammer G, Borst A: The temporal tuning of the Drosophila motion detectors is determined by the dynamics of their input elements. Curr Biol 2017, 27:929-944.
48. Shinomiya K, Karuppudurai T, Lin T-Y, Lu Z, Lee C-H, Meinertzhagen IA: Candidate neural substrates for off-edge motion detection in Drosophila. Curr Biol 2014, 24:1062-1070.
49. Takemura SY, Nern A, Chklovskii DB, Scheffer LK, Rubin GM, Meinertzhagen IA: The comprehensive connectome of a neural substrate for 'ON' motion detection in Drosophila. Elife 2017:6.
50. Serbe E, Meier M, Leonhardt A, Borst A: Comprehensive characterization of the major presynaptic elements to the Drosophila OFF motion detector. Neuron 2016, 89:829-841.
51. Behnia R, Clark DA, Carter AG, Clandinin TR, Desplan C: Processing properties of ON and OFF pathways for Drosophila motion detection. Nature 2014.
52. Clark DA, Bursztyn L, Horowitz MA, Schnitzer MJ, Clandinin TR: Defining the computational structure of the motion detector in Drosophila. Neuron 2011, 70:1165-1177.
53. O'Carroll DC, Wiederman SD: Contrast sensitivity and the detection of moving patterns and features. Philos Trans R Soc Lond B Biol Sci 2014, 369:20130043.
54. Santer RD: Motion dazzle: a locust's eye view. Biol Lett 2013, 9:20130811.
55. de Vries SE, Clandinin TR: Loom-sensitive neurons link computation to action in the Drosophila visual system. Curr Biol 2012, 22:353-362.
56. De Haan R, Lee Y-J, Nordström K: Octopaminergic modulation of contrast sensitivity. Front Integr Neurosci 2012, 6:55.
57. Nordström K, Moyer de Miguel IM, O'Carroll DC: Rapid contrast gain reduction following motion adaptation. J Exp Biol 2011, 214:4000-4009.
58. Straw AD, Rainsford T, O'Carroll DC: Contrast sensitivity of insect motion detectors to natural images. J Vis 2008, 832:31-39.
59. Clark DA, Fitzgerald JE, Ales JM, Gohl DM, Silies MA, Norcia AM, Clandinin TR: Flies and humans share a motion estimation strategy that exploits natural scene statistics. Nat Neurosci 2014, 17:296-303.
This nice paper uses the complex correlations inherent of natural scenes to investigate the limits of elementary motion vision, and as a bonus, compares flies and humans.
60. Fitzgerald JE, Clark DA: Nonlinear circuits for naturalistic visual motion estimation. Elife 2015, 4:e09123.
61. Fitzgerald JE, Katsov AY, Clandinin TR, Schnitzer MJ: Symmetries in stimulus statistics shape the form of visual motion estimators. Proc Natl Acad Sci USA 2011, 108:12909-12914.
62. Brinkworth RS, O'Carroll DC: Robust models for optic flow coding in natural scenes inspired by insect biology. PLoS Comput Biol 2009, 5:e1000555.
63. Boeddeker N, Lindemann JP, Egelhaaf M, Zeil J: Responses of blowfly motion-sensitive neurons to reconstructed optic flow along outdoor flight paths. J Comp Physiol A 2005, 191:1143-1155.
64. Dror RO, O'Carroll DC, Laughlin SB: Accuracy of velocity estimation by Reichardt correlators. J Opt Soc Am A 2001, 18:241-252.
65. Tuthill JC, Nern A, Holtz SL, Rubin GM, Reiser MB: Contributions of the 12 neuron classes in the fly lamina to motion vision. Neuron 2013, 79:128-140.
66. Cuntz H, Haag J, Forstner F, Segev I, Borst A: Robust coding of flow-field parameters by axo-axonal gap junctions between fly visual interneurons. Proc Natl Acad Sci USA 2007, 104:10229-10233.
67. Reiser MB, Dickinson MH: A modular display system for insect behavioral neuroscience. J Neurosci Methods 2008, 167:127-139.
68. Straw AD, Warrant EJ, O'Carroll DC: A 'bright zone' in male hoverfly (Eristalis tenax) eyes and associated faster motion detection and increased contrast sensitivity. J Exp Biol 2006, 209:4339-4354.
69. Horridge GA: The separation of visual axes in apposition compound eyes. Philos Trans R Soc Lond B Biol Sci 1978, 285:1-59.
70. Ghodrati M, Morris AP, Price NS: The (un)suitability of modern liquid crystal displays (LCDs) for vision research. Front Psychol 2015, 6:303.
71. Futahashi R, Kawahara-Miki R, Kinoshita M, Yoshitake K, Yajima S, Arikawa K, Fukatsu T: Extraordinary diversity of visual opsin genes in dragonflies. Proc Natl Acad Sci USA 2015, 112:E1247-E1256.
72. Yamaguchi S, Wolf R, Desplan C, Heisenberg M: Motion vision is independent of color in Drosophila. Proc Natl Acad Sci USA 2008, 105:4910-4915.
73. Wardill TJ, List O, Li X, Dongre S, McCulloch M, Ting CY, O'Kane CJ, Tang S, Lee CH, Hardie RC et al.: Multiple spectral inputs improve motion discrimination in the Drosophila visual system. Science 2012, 336:925-931.
74. Takalo J, Piironen A, Honkanen A, Lempea M, Aikio M, Tuukkanen T, Vahasoyrinki M: A fast and flexible panoramic virtual reality system for behavioural and electrophysiological experiments. Sci Rep 2012, 2:324.
This is a great paper for people wanting to learn more about the limits of visual displays.
75. Olmos A, Kingdom FA: A biologically inspired algorithm for the recovery of shading and reflectance images. Perception 2004, 33:1463-1473.
