All visual information that the human mind receives is processed by a part of the brain known as the visual cortex. The visual cortex is part of the outermost layer of the brain, the cortex, and is located at the posterior pole of the occipital lobe; more simply put, at the lower rear of the brain. The visual cortex receives its information via projections that extend all the way through the brain from the eyes. These projections first pass through a relay station in the middle of the brain, an almond-shaped structure known as the lateral geniculate nucleus, or LGN. From there they project onward to the visual cortex for processing.
The visual cortex is divided into five areas, labeled V1, V2, V3, V4, and MT (occasionally referred to as V5). V1, sometimes called striate cortex because of its striped appearance when stained and viewed under a microscope, is by far the largest and most important; it is also known as primary visual cortex or area 17. The other visual areas are collectively referred to as extrastriate cortex. V1 is one of the most extensively studied and best-understood areas of the human brain.
V1 is a layer of brain tissue roughly 2 mm (about 0.08 inch) thick, with approximately the surface area of an index card. Because it is folded up, it occupies a volume of only a few cubic centimeters. The neurons in V1 are organized at both the local and the global level, following horizontal and vertical organizational schemes. The relevant variables to be extracted from the raw sensory data include color, shape, size, motion, and orientation, along with other, subtler features. Because computation in the human brain is parallelized, certain cells are activated by the presence of color A, others by color B, and so on.
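As a rough illustration of this parallel scheme, the following sketch (purely hypothetical; the detector names and feature values are invented for illustration, not drawn from real V1 circuitry) treats each cell population as a function that responds only to its preferred feature, with every population inspecting the same stimulus at once:

```python
# Hypothetical sketch of parallel feature channels: each "cell population"
# responds only to its preferred feature value, and all populations examine
# the same stimulus simultaneously.

stimulus = {"color": "red", "orientation": "vertical", "motion": "leftward"}

# One detector per preferred feature value (names are illustrative only).
detectors = {
    "red_cells": lambda s: s["color"] == "red",
    "blue_cells": lambda s: s["color"] == "blue",
    "vertical_cells": lambda s: s["orientation"] == "vertical",
    "horizontal_cells": lambda s: s["orientation"] == "horizontal",
}

# Only the populations whose preferred feature is present become active.
active = [name for name, responds in detectors.items() if responds(stimulus)]
print(active)  # ['red_cells', 'vertical_cells']
```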
The most obvious organizational scheme in V1 is its horizontal layering. There are six main layers, labeled with Roman numerals I through VI. Layer I is the outermost, farthest from the eyes and the LGN, and consequently receives the fewest direct projections carrying visual data. The densest nerve bundles from the LGN terminate primarily in layer IV, while layer VI contains neurons that project back to the LGN, forming a feedback loop. Feedback between the sender of visual data (the LGN) and its processor (V1) helps to clarify the nature of ambiguous sense data.
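To give a sense of why such a loop helps, here is a toy numerical sketch (an assumption-laden illustration, not a model of actual LGN–V1 physiology): the processor's current interpretation is fed back to weight the next batch of noisy input, so an ambiguous signal settles toward a stable reading over a few passes.

```python
# Toy feedback loop: a stand-in "LGN" relays noisy samples of a signal, and a
# stand-in "V1" blends each new sample with its running interpretation.
# Purely illustrative; the gain and noise values are arbitrary.
import random

random.seed(0)
true_signal = 1.0       # the "real" feature value out in the world
estimate = 0.0          # the processor's running interpretation
feedback_gain = 0.5     # how strongly the current guess shapes the next pass

for step in range(10):
    noisy_sample = true_signal + random.gauss(0, 0.5)  # what the relay sends
    # Feedback: blend the fresh, noisy sample with the existing interpretation.
    estimate = feedback_gain * estimate + (1 - feedback_gain) * noisy_sample
    print(f"pass {step}: estimate = {estimate:.2f}")
```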
Raw sensory data arrives from the eyes as an ensemble of nerve firings organized as a retinotopic map, which preserves the spatial layout of the retina. The first series of neurons performs relatively elementary analyses of this data: a collection of neurons tuned to detect vertical lines, for instance, might activate when a critical number of visual "pixels" are configured in a vertical pattern. Higher-level processors make their "decisions" based on data preprocessed by other neurons; for example, a collection of neurons that detects the velocity of an object might depend on information from neurons that segment objects from their backgrounds.
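The threshold idea can be made concrete with a minimal sketch (the grid size, threshold, and function names are assumptions chosen for illustration): a "vertical line detector" fires when enough pixels in one column of a tiny retinotopic map are on, and a higher-level step then consumes the outputs of those simpler detectors.

```python
# Minimal sketch of a thresholded vertical-line detector over a toy
# retinotopic map. Illustrative only, not a description of real V1 wiring.

def vertical_line_detector(image, column, threshold=3):
    """Fire (return True) if at least `threshold` pixels in `column` are on."""
    active_pixels = sum(row[column] for row in image)
    return active_pixels >= threshold

# A 4x4 "retinotopic map": 1 = active pixel, 0 = inactive.
image = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]

print(vertical_line_detector(image, column=1))  # True: three pixels line up
print(vertical_line_detector(image, column=3))  # False: only one pixel is on

# A higher-level "decision" built on the simpler detectors' outputs:
# flag every column that contains a vertical edge.
edges = [c for c in range(4) if vertical_line_detector(image, c)]
print(edges)  # [1]
```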
Another organizational scheme is the vertical, or columnar, neural architecture. A column extends through all the horizontal layers and usually consists of neurons with functional similarities ("neurons that fire together, wire together") and shared biases. For example, one column might accept information exclusively from the right eye, a neighboring column from the left. Columns exist at more than one scale: larger macrocolumns are subdivided into smaller microcolumns, and a microcolumn may contain as few as a hundred individual neurons.
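One way to picture this nesting is as a data structure, sketched below under loose assumptions (the field names, the "preferred eye" bias, and the counts are illustrative, not anatomical specifics): a macrocolumn spans all six layers, groups neurons that share a bias, and is subdivided into microcolumns of roughly a hundred neurons each.

```python
# Hypothetical data-structure sketch of columnar organization.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Microcolumn:
    neuron_ids: List[int]      # ~100 neurons with near-identical tuning

@dataclass
class Macrocolumn:
    preferred_eye: str         # shared bias, e.g. "left" or "right"
    layers: List[str]          # extends through all horizontal layers
    microcolumns: List[Microcolumn] = field(default_factory=list)

right_eye_column = Macrocolumn(
    preferred_eye="right",
    layers=["I", "II", "III", "IV", "V", "VI"],
    microcolumns=[Microcolumn(neuron_ids=list(range(i, i + 100)))
                  for i in range(0, 1000, 100)],
)

print(len(right_eye_column.microcolumns))                # 10 microcolumns
print(len(right_eye_column.microcolumns[0].neuron_ids))  # ~100 neurons each
```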
Studying the details of information processing in the human brain is difficult because of the complex, ad hoc, and seemingly messy way in which primate brains evolved, and because of the complexity any brain is bound to display given the enormity of its task. Selectively injuring the visual cortex of animal subjects has historically been one of the most productive (and controversial) ways of investigating neural function, but in recent times scientists have developed tools to selectively deactivate or activate specific brain areas without harming them. The resolution of brain-scanning devices continues to improve rapidly, and analysis algorithms are growing sophisticated enough to handle the flood of data characteristic of the cognitive sciences. It is not implausible to suggest that one day we will understand the visual cortex in its entirety.