Borrow it
- African Studies Library
- Alumni Medical Library
- Astronomy Library
- Fineman and Pappas Law Libraries
- Frederick S. Pardee Management Library
- Howard Gotlieb Archival Research Center
- Mugar Memorial Library
- Music Library
- Pickering Educational Resources Library
- School of Theology Library
- Science & Engineering Library
- Stone Science Library
Computational neuroscience of vision, Edmund T. Rolls and Gustavo Deco
Resource Information
The item Computational neuroscience of vision, Edmund T. Rolls and Gustavo Deco represents a specific, individual, material embodiment of a distinct intellectual or artistic creation found in Boston University Libraries.
This item is available to borrow from all library branches.
- Extent
- xviii, 569 pages
- Contents
- 1.2 Neurons
- 1.3 Neurons in a network
- 1.4 Synaptic modification
- 1.5 Long-Term Potentiation and Long-Term Depression
- 1.6 Distributed representations
- 1.6.2 Advantages of different types of coding
- 1.7 Neuronal network approaches versus connectionism
- 1.8 Introduction to three neuronal network architectures
- 1.9 Systems-level analysis of brain function
- 1.10 The fine structure of the cerebral neocortex
- 1.10.1 The fine structure and connectivity of the neocortex
- 1.10.2 Excitatory cells and connections
- 1.10.3 Inhibitory cells and connections
- 1.10.4 Quantitative aspects of cortical architecture
- 1.10.5 Functional pathways through the cortical layers
- 1.10.6 The scale of lateral excitatory and inhibitory effects, and the concept of modules
- 1.11 Backprojections in the cortex
- 1.11.1 Architecture (30)
- 1.11.2 Learning (31)
- 1.11.3 Recall (33)
- 1.11.4 Semantic priming (34)
- 1.11.5 Attention (34)
- 1.11.6 Autoassociative storage, and constraint satisfaction (34)
- 2 The primary visual cortex (36)
- 2.2 Retina and lateral geniculate nuclei (37)
- 2.3 Striate cortex: Area V1 (43)
- 2.3.1 Classification of V1 neurons (43)
- 2.3.2 Organization of the striate cortex (45)
- 2.3.3 Visual streams within the striate cortex (48)
- 2.4 Computational processes that give rise to V1 simple cells (49)
- 2.4.1 Linsker's method: Information maximization (50)
- 2.4.2 Olshausen and Field's method: Sparseness maximization (53)
- 2.5 The computational role of V1 for form processing (55)
- 2.6 Backprojections to the lateral geniculate nucleus (55)
- 3 Extrastriate visual areas (57)
- 3.2 Visual pathways in extrastriate cortical areas (57)
- 3.3 Colour processing (61)
- 3.3.1 Trichromacy theory (61)
- 3.3.2 Colour opponency, and colour contrast: Opponent cells (61)
- 3.4 Motion and depth processing (65)
- 3.4.1 The motion pathway (65)
- 3.4.2 Depth perception (67)
- 4 The parietal cortex (70)
- 4.2 Spatial processing in the parietal cortex (70)
- 4.2.1 Area LIP (71)
- 4.2.2 Area VIP (73)
- 4.2.3 Area MST (74)
- 4.2.4 Area 7a (74)
- 4.3 The neuropsychology of the parietal lobe (75)
- 4.3.1 Unilateral neglect (75)
- 4.3.2 Balint's syndrome (77)
- 4.3.3 Gerstmann's syndrome (79)
- 5 Inferior temporal cortical visual areas (81)
- 5.2 Neuronal responses in different areas (81)
- 5.3 The selectivity of one population of neurons for faces (83)
- 5.4 Combinations of face features (84)
- 5.5 Distributed encoding of object and face identity (84)
- 5.5.1 Distributed representations evident in the firing rate distributions (85)
- 5.5.2 The representation of information in the responses of single neurons to a set of stimuli (90)
- 5.5.3 The representation of information in the responses of a population of inferior temporal visual cortex neurons (94)
- 5.5.4 Advantages for brain processing of the distributed representation of objects and faces (98)
- 5.5.5 Should one neuron be as discriminative as the whole organism, in object encoding systems? (103)
- 5.5.6 Temporal encoding in the spike train of a single neuron (105)
- 5.5.7 Temporal synchronization of the responses of different cortical neurons (108)
- 5.5.8 Conclusions on cortical encoding (111)
- 5.6 Invariance in the neuronal representation of stimuli (112)
- 5.6.1 Size and spatial frequency invariance (112)
- 5.6.2 Translation (shift) invariance (113)
- 5.6.3 Reduced translation invariance in natural scenes (113)
- 5.6.4 A view-independent representation of objects and faces (115)
- 5.7 Face identification and face expression systems (118)
- 5.8 Learning in the inferior temporal cortex (120)
- 5.9 Cortical processing speed (122)
- 6 Visual attentional mechanisms (126)
- 6.2 The classical view (126)
- 6.2.1 The spotlight metaphor and feature integration theory (126)
- 6.2.2 Computational models of visual attention (129)
- 6.3 Biased competition -- single cell studies (132)
- 6.3.1 Neurophysiology of attention (133)
- 6.3.2 The role of competition (135)
- 6.3.3 Evidence of attentional bias (136)
- 6.3.4 Non-spatial attention (136)
- 6.3.5 High-resolution buffer hypothesis (139)
- 6.4 Biased competition -- fMRI (140)
- 6.4.1 Neuroimaging of attention (140)
- 6.4.2 Attentional effects in the absence of visual stimulation (141)
- 6.5 The computational role of top-down feedback connections (142)
- 7 Neural network models (145)
- 7.2 Pattern association memory (145)
- 7.2.1 Architecture and operation (146)
- 7.2.2 The vector interpretation (149)
- 7.2.3 Properties (150)
- 7.2.4 Prototype extraction, extraction of central tendency, and noise reduction (151)
- 7.2.5 Speed (151)
- 7.2.6 Local learning rule (152)
- 7.2.7 Implications of different types of coding for storage in pattern associators (158)
- 7.3 Autoassociation memory (159)
- 7.3.1 Architecture and operation (160)
- 7.3.2 Introduction to the analysis of the operation of autoassociation networks (161)
- 7.3.3 Properties (163)
- 7.3.4 Use of autoassociation networks in the brain (170)
- 7.4 Competitive networks, including self-organizing maps (171)
- 7.4.1 Function (171)
- 7.4.2 Architecture and algorithm (171)
- 7.4.3 Properties (173)
- 7.4.4 Utility of competitive networks in information processing by the brain (178)
- 7.4.5 Guidance of competitive learning (180)
- 7.4.6 Topographic map formation (182)
- 7.4.7 Radial Basis Function networks (187)
- 7.4.8 Further details of the algorithms used in competitive networks (188)
- 7.5 Continuous attractor networks (192)
- 7.5.2 The generic model of a continuous attractor network (195)
- 7.5.3 Learning the synaptic strengths between the neurons that implement a continuous attractor network (196)
- 7.5.4 The capacity of a continuous attractor network (198)
- 7.5.5 Continuous attractor models: moving the activity packet of neuronal activity (198)
- 7.5.6 Stabilization of the activity packet within the continuous attractor network when the agent is stationary (202)
- 7.5.7 Continuous attractor networks in two or more dimensions (203)
- 7.5.8 Mixed continuous and discrete attractor networks (203)
- 7.6 Network dynamics: the integrate-and-fire approach (204)
- 7.6.1 From discrete to continuous time (204)
- 7.6.2 Continuous dynamics with discontinuities (205)
- 7.6.3 Conductance dynamics for the input current (207)
- 7.6.4 The speed of processing of one-layer attractor networks with integrate-and-fire neurons (209)
- 7.6.5 The speed of processing of a four-layer hierarchical network with integrate-and-fire attractor dynamics in each layer (212)
- 7.6.6 Spike response model (215)
- 7.7 Network dynamics: introduction to the mean field approach (216)
- 7.8 Mean-field based neurodynamics (218)
- 7.8.1 Population activity (218)
- 7.8.2 A basic computational module based on biased competition (220)
- 7.8.3 Multimodular neurodynamical architectures (221)
- 7.9 Interacting attractor networks (224)
- 7.10 Error correction networks (228)
- 7.10.1 Architecture and general description (229)
- 7.10.2 Generic algorithm (for a one-layer network taught by error correction) (229)
- 7.10.3 Capability and limitations of single-layer error-correcting networks (230)
- 7.10.4 Properties (234)
- 7.11 Error backpropagation multilayer networks (236)
- 7.11.2 Architecture and algorithm (237)
- 7.11.3 Properties of multilayer networks trained by error backpropagation (238)
- 7.12 Biologically plausible networks (239)
- 7.13 Reinforcement learning (240)
- 7.14 Contrastive Hebbian learning: the Boltzmann machine (241)
- 8 Models of invariant object recognition (243)
- 8.2 Approaches to invariant object recognition (244)
- 8.2.1 Feature spaces (244)
- 8.2.2 Structural descriptions and syntactic pattern recognition (245)
- 8.2.3 Template matching and the alignment approach (247)
- 8.2.4 Invertible networks that can reconstruct their inputs (248)
- 8.2.5 Feature hierarchies (249)
- 8.3 Hypotheses about object recognition mechanisms (253)
- 8.4 Computational issues in feature hierarchies (257)
- 8.4.1 The architecture of VisNet (258)
- 8.4.2 Initial experiments with VisNet (266)
- 8.4.3 The optimal parameters for the temporal trace used in the learning rule (274)
- 8.4.4 Different forms of the trace learning rule, and their relation to error correction and temporal difference learning (275)
- 8.4.5 The issue of feature binding, and a solution (284)
- 8.4.6 Operation in a cluttered environment (295)
- 8.4.7 Learning 3D transforms (301)
- 8.4.8 Capacity of the architecture, and incorporation of a trace rule into a recurrent architecture with object attractors (307)
- 8.4.9 Vision in natural scenes -- effects of background versus attention (313)
- 8.5 Synchronization and syntactic binding (319)
- 8.6 Further approaches to invariant object recognition (320)
- 8.7 Processes involved in object identification (321)
- 9 The cortical neurodynamics of visual attention -- a model (323)
- 9.2 Physiological constraints (324)
- 9.2.1 The dorsal and ventral paths of the visual cortex (324)
- 9.2.2 The biased competition hypothesis (326)
- 9.2.3 Neuronal receptive fields (327)
- 9.3 Architecture of the model (328)
- 9.3.1 Overall architecture of the model (328)
- 9.3.2 Formal description of the model (331)
- 9.3.3 Performance measures (336)
- 9.4 Simulations of basic experimental findings (336)
- 9.4.1 Simulations of single-cell experiments (337)
- 9.4.2 Simulations of fMRI experiments (339)
- 9.5 Object recognition and spatial search (341)
- 9.5.1 Dynamics of spatial attention and object recognition (343)
- 9.5.2 Dynamics of object attention and visual search (345)
- Isbn
- 9780198524885
- Label
- Computational neuroscience of vision
- Title
- Computational neuroscience of vision
- Statement of responsibility
- Edmund T. Rolls and Gustavo Deco
- Subject
- Computational Biology
- Computational neuroscience
- Computer Simulation
- Cortex visuel
- Informationsverarbeitung
- Models, Neurological
- Neurociências
- Neurophysiologie
- Neurophysiology
- Neuropsychologie
- Neuropsychology
- Neuroscience informatique
- Neurosciences
- Neurowetenschappen
- Perception visuelle
- Percepção visual
- Réseau neuronal (Biologie)
- Sehen
- Vision
- Visual Perception -- physiology
- Visuele waarneming
- Visão
- Language
- eng
- Cataloging source
- NLM
- http://library.link/vocab/creatorName
- Rolls, Edmund T
- Illustrations
- illustrations
- Index
- index present
- LC call number
- QP475
- LC item number
- .R498 2002
- Literary form
- non fiction
- Nature of contents
- bibliography
- NLM call number
- 2002 A-066
- WW 105
- NLM item number
- R755c 2002
- http://library.link/vocab/relatedWorkOrContributorName
- Deco, Gustavo
- http://library.link/vocab/subjectName
- Vision
- Computational neuroscience
- Neuropsychology
- Neurophysiology
- Computational Biology
- Models, Neurological
- Visual Perception
- Computer Simulation
- Neurosciences
- Visuele waarneming
- Neurowetenschappen
- Neurociências
- Percepção visual
- Visão
- Neuroscience informatique
- Neuropsychologie
- Neurophysiologie
- Perception visuelle
- Réseau neuronal (Biologie)
- Cortex visuel
- Sehen
- Informationsverarbeitung
- Label
- Computational neuroscience of vision, Edmund T. Rolls and Gustavo Deco
- Bibliography note
- Includes bibliographical references (p. [520]-564) and index
- Carrier category
- volume
- Carrier category code
- nc
- Carrier MARC source
- rdacarrier
- Content category
- text
- Content type code
- txt
- Content type MARC source
- rdacontent
- Dimensions
- 25 cm
- Extent
- xviii, 569 pages
- Isbn
- 9780198524885
- Lccn
- 2002277312
- Media category
- unmediated
- Media MARC source
- rdamedia
- Media type code
- n
- Other physical details
- illustrations
- System control number
- (OCoLC)48065474
- (OCoLC)ocm48065474
Library Locations
- African Studies Library: 771 Commonwealth Avenue, 6th Floor, Boston, MA, 02215, US (42.350723, -71.108227)
- Astronomy Library: 725 Commonwealth Avenue, 6th Floor, Boston, MA, 02445, US (42.350259, -71.105717)
- Fineman and Pappas Law Libraries: 765 Commonwealth Avenue, Boston, MA, 02215, US (42.350979, -71.107023)
- Frederick S. Pardee Management Library: 595 Commonwealth Avenue, Boston, MA, 02215, US (42.349626, -71.099547)
- Howard Gotlieb Archival Research Center: 771 Commonwealth Avenue, 5th Floor, Boston, MA, 02215, US (42.350723, -71.108227)
- Music Library: 771 Commonwealth Avenue, 2nd Floor, Boston, MA, 02215, US (42.350723, -71.108227)
- Pickering Educational Resources Library: 2 Silber Way, Boston, MA, 02215, US (42.349804, -71.101425)
- School of Theology Library: 745 Commonwealth Avenue, 2nd Floor, Boston, MA, 02215, US (42.350494, -71.107235)
- Science & Engineering Library: 38 Cummington Mall, Boston, MA, 02215, US (42.348472, -71.102257)