This is absolutely my favorite question, because it intersects so many fields of study! Color means something different to people doing physics, chemistry, biology, mathematics, engineering, psychology, linguistics, or art.
Physics
Color is a property of light. Different colors are different frequencies of light across the visible spectrum.
For example, light with a wavelength of 460nm is blue. Light can also be mixed, so the combination of light at 656nm, 486nm, 434nm, and 410nm (the Balmer series) produces the lovely pink glow of a fusion reactor. And all the frequencies together make white.
Chemistry
Color is a property of pigments. Different pigments absorb different amounts of each frequency of light.
There are many more colors of pigment than colors of light. A blue pigment is one that reflects mostly 460nm light, absorbing longer and shorter wavelengths. But what about black? Black pigments obviously exist, but there is no black wavelength of light, no black stripe in the rainbow. Black is a substance: some material that absorbs light of every frequency. There is also no such thing as brown light, because brown is just “dark orange”. A small amount of orange light is still orange, not brown. But brown pigments are all over the place; they absorb more light overall than an orange pigment, making them “darker”.
When you mix pigments together, their absorption spectra combine. To get a green pigment you generally mix cyan with yellow: the cyan pigment absorbs the reddish wavelengths and the yellow pigment absorbs the bluish wavelengths, leaving only the green. If you mix blue and yellow pigments instead, the blue pigment can absorb most of the green light too, often leaving you with a mucky blackish color. This is subtly different from mixing wavelengths of light. If you mix blue and yellow light, they add to give a light purple.
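The multiply-the-spectra idea can be sketched in a few lines. This is a toy model of my own (made-up numbers, not measured spectra) that stores each pigment as the fraction of light it reflects in three coarse bands:

```python
# Toy subtractive mixing: each pigment is the fraction of light it
# reflects in three coarse bands (red, green, blue). Mixing multiplies
# the reflectances, because each pigment absorbs its share of whatever
# light the other pigments let through. Band values are illustrative.

def mix(p1, p2):
    return [a * b for a, b in zip(p1, p2)]

cyan   = [0.1, 0.9, 0.9]  # absorbs the reddish wavelengths
yellow = [0.9, 0.9, 0.1]  # absorbs the bluish wavelengths
blue   = [0.1, 0.2, 0.9]  # absorbs most green as well as red

print(mix(cyan, yellow))  # mostly green survives
print(mix(blue, yellow))  # little light in any band: mucky black
```

Multiplication is why pigment mixing is called subtractive: each added pigment can only remove light, never add it.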
Biology
Color is the stimulation of cone cells. There are three different colors of cone.
[Figure: Color Sensitivity — cone cell response curves, by Francois~frwiki, licensed under CC BY-SA 4.0]
Short wavelengths activate the blue cone cells inside your eyes, so they look blue. Long wavelengths activate the red cone cells, so they look red. Green cone cells are sensitive to wavelengths in the middle (kinda).
Because you only have three types of cone cell, some combinations of light will look the same. In physics terms, pure 480nm light and mixed 530nm+450nm light are completely different colors. But looking at them with your biological eyes, they are the same shade of teal. The single teal wavelength stimulates both your blue and green cones at once; the mixture stimulates each of them separately, to the same overall effect.
We abuse the biology of vision to make screens work. A real rainbow emits all the frequencies of visible light, but a picture of a rainbow only emits blue-ish, green-ish, and red-ish frequencies. You can’t tell the difference.
Biology is also why the violet end of the spectrum looks purplish. As already mentioned, our red cones are mostly sensitive to long wavelengths around 650nm, but they respond to very short wavelengths around 400nm as well, making those wavelengths look purple. The red sensors in most cameras don’t respond the same way: when you look at the edge of a rainbow you see violet, but your cellphone only sees blue. Making the rainbow into a color wheel requires crossing the “line of purples” from spectral violet to spectral red. That the two look similar is complete coincidence.
There are all sorts of interesting things you can learn about color vision. Some of my favorites are:
In low light conditions, our rod cells contribute more to our vision, completely changing how we perceive color.
Many people only have two functioning colors of cone cell. Depending on which cells are functioning, they might experience orange and green as nearly the same color (because they activate the same functioning cones) or blue and red as nearly the same color.
A few women have an additional functioning type of cone cell, allowing them to discern combinations of wavelengths which would appear to the rest of us as exactly the same.
Animals have entirely different sets of cone cells. Many insects can see into the ultraviolet wavelengths, while some reptiles can see into the infrared. Birds often have 5-7 different cone types, allowing them to discern all sorts of combinations we can’t. Famously, mantis shrimp have 12 different types of color receptor, but their brains can’t really use all that information.
Your red cones are much more sensitive to light at 560nm than the 650nm wavelengths we think of as red. This is one reason the cones are properly called “long”, “medium”, and “short”. Your perception of green comes not from green-sensitive cones directly, but from ganglion cells behind the cones that subtract the long wavelength stimulus from the medium wavelength stimulus.
Since humans only have three types of cone cell, all the different wavelengths in a color can be summarized with three numbers: the amount that color activates our red cones, green cones (really ganglion cells), and blue cones! White is [1, 1, 1], red is [1, 0, 0], orange is [1, 0.5, 0], and black is [0, 0, 0].
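That reduction, from an arbitrarily complicated spectrum down to three numbers, can be sketched with toy cone curves. The Gaussians below use made-up peaks and widths, not real measured sensitivities:

```python
from math import exp

# Toy cone sensitivity curves as Gaussians (peak nm, width nm).
# Real cone spectra are broader and asymmetric; these numbers are
# illustrative only.
CONES = {"L": (565, 50), "M": (540, 45), "S": (445, 30)}

def cone_triple(spectrum):
    """Collapse a spectrum {wavelength_nm: intensity} to three numbers."""
    return [
        sum(i * exp(-((wl - peak) / width) ** 2) for wl, i in spectrum.items())
        for peak, width in CONES.values()
    ]

# However complicated the incoming spectrum, the eye reports only
# three values.
sunsetish = {610: 1.0, 580: 0.7, 480: 0.1}
print(cone_triple(sunsetish))
```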
A fancy thing you can do in math is rearrange numbers however you want. So instead of RGB we could use BGR where red is [0, 0, 1]. Or we could use CMY where red is [0, 1, 1]. Or we can store the hue, saturation, and lightness (HSL) so that red is [0, 1, 0.5]. All these “color spaces” are interchangeable, so long as you clarify which you are using.
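With Python’s standard colorsys module, these rearrangements are one-liners. One gotcha: the stdlib uses the HLS ordering (hue, lightness, saturation) rather than HSL:

```python
import colorsys

red_rgb = (1.0, 0.0, 0.0)

# BGR is just RGB read backwards.
red_bgr = red_rgb[::-1]                    # (0.0, 0.0, 1.0)

# CMY is the complement of RGB.
red_cmy = tuple(1.0 - c for c in red_rgb)  # (0.0, 1.0, 1.0)

# colorsys returns (hue, lightness, saturation); reorder to get HSL.
h, l, s = colorsys.rgb_to_hls(*red_rgb)
red_hsl = (h, s, l)                        # (0.0, 1.0, 0.5)

print(red_bgr, red_cmy, red_hsl)
```

Same color, four different coordinate systems.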
Depending on how you are using color, you may find some color spaces more useful than others. Computer generated imagery (CGI) tends to use RGB because it best represents the physics being simulated. But when encoding a video we usually convert to the YCbCr color space, because it helps compress the differences in color that humans won’t notice. If you need someone to pick a color, HSL is useful because it organizes everything into a color wheel. But HSL doesn’t quite match the biology of vision: put HSL’s yellow next to its blue and the yellow looks far lighter, despite having the same “lightness”. The oklab color space helps blend between colors more smoothly.
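The yellow-vs-blue mismatch is easy to quantify. Weighting the sRGB primaries by how strongly each drives perceived brightness (the standard Rec. 709 luminance coefficients) shows two colors with identical HSL “lightness” sitting at opposite ends of the brightness scale:

```python
import colorsys

# HSL says these two colors have the same lightness of 0.5.
yellow = colorsys.hls_to_rgb(60 / 360, 0.5, 1.0)   # ≈ (1, 1, 0)
blue   = colorsys.hls_to_rgb(240 / 360, 0.5, 1.0)  # ≈ (0, 0, 1)

# Relative luminance weights for the sRGB / Rec. 709 primaries:
# green dominates brightness, blue barely contributes.
def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luminance(yellow))  # ~0.93: nearly as bright as white
print(luminance(blue))    # ~0.07: nearly as dark as black
```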
Engineering
Color is a signal sent to a display or received from a sensor.
Did you know that up until recently we did not know Neptune’s color? It was generally depicted as a vivid blue, but that was just a guess. The planet was too far away from Earth to see clearly through a telescope, and the sensor we sent on the Voyager 2 mission didn’t measure the same wavelengths as the cones in your eye. Neptune is closer to white, with just a slight blue tint!
It is an oft-repeated fact that NASA’s cameras are black & white, but this isn’t quite true. The magnetic deflection vidicon used as a camera on the Voyager missions couldn’t tell frequencies apart by itself (for that it used a spectrometer). But there were ten different color filters Voyager could place in front of the vidicon to limit it to specific frequencies, including frequencies associated with methane and sodium. The Voyager spacecraft were built to do science, not take pretty pictures.
Although Voyager’s red, green, and blue color filters were not calibrated to match the sensitivity of human cone cells, the vidicon was calibrated to accurately measure light brightness. Unfortunately, the computer monitors on Earth weren’t. If the vidicon measured twice as much blue light as red, NASA’s computers would simply double the power of the electron beam illuminating the blue phosphors in each CRT monitor. That increases the actual blue light output of the monitor by roughly 430%, so the image would look far too blue. NASA isn’t stupid, but they were more interested in seeing the detail in Neptune’s clouds than in perfect color reproduction. In some cases they even amplified the effect! For a photograph to look correct on a CRT monitor, the colors need to be gamma corrected, so that “twice as bright” translates to only 137% as much power. Even now that CRTs are long gone, the sRGB color space used by this website (and every other) expects gamma corrected colors. And to this day, lazy programmers still screw it up.
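The sRGB transfer curve is a standard piecewise function, roughly gamma 2.2 with a short linear segment near black. A minimal sketch of both directions:

```python
# sRGB transfer functions: "linear" is proportional to actual light
# output, "encoded" is the value stored in a file or sent to a display.

def srgb_encode(linear):
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# Doubling the stored value does NOT double the light output:
print(srgb_decode(1.0) / srgb_decode(0.5))   # ~4.7x the light
# To actually double the light, the stored value rises only ~37%:
print(srgb_encode(0.4) / srgb_encode(0.2))   # ~1.37
```

The 12.92 linear toe near black exists to avoid the pure power curve’s infinite slope at zero.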
Of course, screens got better, and monitors today can display quite extreme greens. Alas, most software is limited to the sRGB gamut: all the colors that can be made by mixing the most-red, most-green, and most-blue values in the sRGB color space (called the primaries). The P3 and Rec2020 color spaces have much wider gamuts, but at the time I’m writing this they aren’t widely adopted. P3’s greenest green and Rec2020’s greenest green will only look greener on a few devices.
Psychology
Color is your brain guessing what pigments are in a thing.
Subconscious pigmentation solving is obvious in the dress picture above. Based on context, you correctly see the pigmentation of the apron on the left as bright blue and the pigmentation of the apron on the right as white, even though your eyes are sensing the same pale brownish-blue light in each case. Your brain is measuring the effect of the yellow rectangle on the left and the blue rectangle on the right, and factoring it out. In the original dress image the lighting was ambiguous, so different brains solved the illumination problem differently.
Evolution doesn’t care if you appreciate the beautiful blue of the noon sky; it only cares whether that bush is a delicious wild blueberry or a poisonous baneberry. Short of putting it in your mouth, how can you tell? By the pigments, of course! The baneberry has few pigments, mostly reflecting whatever light hits it (that is, it is white). The blueberry absorbs most green wavelengths, reflecting blue and a bit of red (it is a blueberry). Great, so if your eyes sense more short wavelengths than long/medium wavelengths, it must be an edible blueberry! That, or you are looking at a white baneberry reflecting the light of that wonderful blue sky. Oh, and you’ll starve at sunset, when the orange illumination means even the blueberries stop reflecting more short wavelengths than long. Oops.
So instead of taking light at face value, your brain subconsciously solves for the pigmentation of every object you see, taking into account the context you are seeing it in. Does the snow next to that blueberry also look blue? Then the light must be blue and the berry is white. Are the clouds flaming red? Then everything orangy is actually white, and blue things are very blue.
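That factoring-out step can be caricatured as a von Kries-style white balance: divide the light the eye sensed by an estimate of the illuminant, leaving an estimate of the object’s own reflectance. All the numbers below are made up for illustration:

```python
# Crude "factor out the light" sketch: channel-wise division of the
# sensed color by the estimated illuminant (a von Kries-style
# white-balance step). Values are illustrative, not measurements.

def factor_out_light(sensed_rgb, illuminant_rgb):
    return tuple(s / i for s, i in zip(sensed_rgb, illuminant_rgb))

# A white baneberry under blue sky light reaches the eye bluish...
sky    = (0.5, 0.6, 0.9)  # nearby snow tells the brain the light is blue
sensed = (0.5, 0.6, 0.9)
print(factor_out_light(sensed, sky))  # (1.0, 1.0, 1.0): it's white

# ...while a blueberry under the same light still comes out blue.
berry = (0.15, 0.12, 0.81)
print(factor_out_light(berry, sky))   # blue channel dominates
```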
Our built-in automatic pigmentation solving is a pain for people trying to display images on screens. Assuming you aren’t using an e-ink display, all the colored squares on this page are made of tiny glowing lights. But your brain wants to interpret them as pigmented paper illuminated by the lights of your office (or bathroom, if my blog-reading habits are anything to go by). Your brain forcefully applies that context without you even noticing. If the lights of your room are a warm hue or the background of a page is off-white, then you’ll perceive all the colors as slightly bluer (and vice versa). People who need extremely accurate color sometimes use a booth with known D65 illumination or a monitor that adjusts its whitepoint based on ambient light.