LMS Color Space

For those of you academics who are utterly befuddled by the confusion that is LMS color space, I offer the following groundwork.  This came about through my own work with synesthesia (see my other posts) and my latest task: to find which areas of the brain care the most about color using functional MRI (fMRI).  Scientific protocol dictates that if you’ve never done something before, pick a lab that has done it and copy it word-for-word.  In the world of color, Brian Wandell‘s lab is the place to be if you want to study anything about how primates process color.  Thus, I turn to his past work to model a color localizer for fMRI.

Unfortunately, my experience with color up until three days ago included RGB basics and a moderate understanding of rods/cones.  As you might remember, before Home Depot brought a spectrophotometer into the game, matching paint colors was a complete nightmare.  It was easier just to buy 10 times more than necessary than to experience the pain of someday realizing your wall will forever remain chipped.  In fact, there are many different ways we humans have devised to describe color.  Here is a nice little review about how we specify and view color.  [Sidebar – here‘s an interesting article on how children learn color words].  The problem is this – when I ‘see’ a specific shade of green, that specific shade could be made out of many different color combinations, including various degrees of achromatic (white/black) values.  So if I want to find the areas of the brain that ‘care’ about color, I want to look at brain activity when a person is seeing 100% color vs. 0% color.  Neither RGB nor CIE color space will do the trick, in short because they both describe color perception (i.e. what your brain interprets) rather than color processing (i.e. the signals, or wavelengths of light, that your brain receives from the picture or item).  Here’s why:

CIE color space was designed to accurately represent perceptual color judgments.  In other words, regardless of what’s used to make the colors, if they look the same, they’re close in CIE color space.  CIE space was designed for use by manufacturers for easy communication regarding textile colors.  Manufacturers want to know: does that textile look like the same green to me as it does to you?  It doesn’t matter which combinations of red, green, and blue are used to create it, so long as customers in 50 states agree that the color is perceived as the same color.  This system is unfortunately inconsistent with the physiological basis for color perception.  Physiologically, something has color if the light emanating from the object contains different amounts of Long, Medium, and Short wavelengths.  This is because the human retina has cells (called cones) specifically designed to receive different wavelengths of light.

Enter: LMS color space, representing long, medium, and short light wavelengths (roughly red, green, and blue respectively).  This is useful for scientific experiments when you want to control how much certain cells (cones) are excited.  Something else to consider, however, is contrast.  Wandell (2008) found that, no matter the color being shown to the person, some areas (where we expect to find color) respond differently depending on the level of contrast presented to the subject.  This means that in order to localize regions of the brain that care only about color (and not about contrast), it’s important to make sure contrast (luminance) is the same for all of our colorful stimuli.

LMS values range from -1 to 1.  So, black is (-1,-1,-1) and white is (1,1,1), and then there’s everything in between.  MacLeod and Boynton (1979), who formalized this space, defined chromaticity using two dimensions: L/(L+M) cone excitation (red-green opponency) and S/(L+M) cone excitation.  Just like in RGB, if all values are equal, you get gray.  In the literature, gray is described as such: “(L+M)- and S-cone contrast are set equal, and (L-M)-cone contrast is zero”.  The actual values of L, M, and S are all scaled by the sum L+M, a measure of luminance.  To go from gray to a color, all you need to do is change EITHER the S cone OR (L-M).  If L and M are changed by the same amount, they’ll cancel each other out in (L-M) and you get a change in luminance but no change in color.
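To make that gray-vs-color distinction concrete, here is a minimal Python sketch (my own illustration, not code from MacLeod & Boynton or Wandell’s lab) that computes the luminance and opponent signals for an LMS contrast triplet:

```python
# Minimal sketch: luminance and opponent signals from an LMS contrast triplet.
# Values are contrast values in [-1, 1], as described above.

def opponent_signals(lms):
    L, M, S = lms
    luminance = L + M   # achromatic (luminance) signal
    red_green = L - M   # (L-M) opponent signal: zero means no red-green color
    return luminance, red_green, S

# Gray: all cones equal, so (L-M) is zero.
print(opponent_signals((0.24, 0.24, 0.24)))   # (0.48, 0.0, 0.24)

# Add the same amount to L and M: luminance changes, (L-M) stays zero -> still gray.
print(opponent_signals((0.34, 0.34, 0.24)))   # (0.68, 0.0, 0.24)

# Change L alone: (L-M) becomes nonzero -> color.
print(opponent_signals((0.34, 0.24, 0.24)))   # roughly (0.58, 0.1, 0.24)
```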

That said, the latest and greatest of color localizers is described in Alex Wade and Brian Wandell’s 2008 paper in the Journal of Vision.  Here’s a description of the stimuli they used for the chromatic/achromatic blocks.

Achromatic block (12 checkerboards of 12×12 squares, updated every 2 seconds -> a block of 24 s length)

The background for all windows is (0,0,0).  Each square subtends 1 degree of visual angle, so the checkerboard subtends 12 degrees on each side.  The range of values that can be selected runs from -1 to 1, for a total range of 2.  Wade varies each square by a randomly selected offset of up to +/- 24% of the total range around the gray background (0,0,0).  So, 24% of 2 is .48, which means that each square can range anywhere from (-.48,-.48,-.48) to (.48,.48,.48).
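Here is a rough Python/NumPy sketch of how one might generate such an achromatic checkerboard, based only on the description above (the variable names and the uniform sampling are my assumptions, not Wade and Wandell’s actual code):

```python
import numpy as np

rng = np.random.default_rng()

def achromatic_checkerboard(n_squares=12, max_offset=0.48):
    """One n x n achromatic checkerboard in LMS contrast units.

    Each square gets a random luminance offset in [-0.48, 0.48]
    (i.e. +/- 24% of the total range of 2), applied equally to
    L, M, and S so every square stays gray.
    """
    offsets = rng.uniform(-max_offset, max_offset, size=(n_squares, n_squares))
    return np.stack([offsets, offsets, offsets], axis=-1)   # shape (n, n, 3)

# One 24 s block = 12 checkerboards, each shown for 2 seconds.
block = [achromatic_checkerboard() for _ in range(12)]
print(block[0].shape)                    # (12, 12, 3)
print(block[0].min(), block[0].max())    # stays within [-0.48, 0.48]
```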

For each achromatic checkerboard, a chromatic checkerboard is generated.  The color varies between +/- 6%, which means any value between -.12 and +.12 added to the value of the L cone.  Let’s say I choose 5% (+.1).  If Square 1 in the achromatic checkerboard was selected to be [.24,.24,.24] (12% LMS contrast), then in the chromatic patch it will be [.24+.1,.24,.24] (5% (L-M) contrast).  Voila!  Color.
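And a matching sketch for deriving the chromatic counterpart (again my own guess at an implementation: I add a fixed (L-M) offset to the L plane of the achromatic board, per the description above; the paper’s actual procedure may differ):

```python
import numpy as np

def chromatic_counterpart(achromatic_board, lm_offset=0.1):
    """Derive a chromatic checkerboard from an achromatic one.

    Keeps M and S unchanged and adds an (L-M) offset to the L plane only
    (here +0.1, i.e. 5% of the total range of 2), so every square
    gains a red-green color signal.
    """
    board = achromatic_board.copy()
    board[..., 0] += lm_offset   # perturb only the L plane
    return board

# An achromatic square of [.24, .24, .24] becomes [.34, .24, .24].
gray_square = np.full((1, 1, 3), 0.24)
print(chromatic_counterpart(gray_square)[0, 0])   # [0.34 0.24 0.24]
```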
