
I think I understand what color profiles are. What I do not understand is the difference between manipulating a photo, for example in Photoshop, in 16 bpp sRGB versus 16 bpp Adobe RGB. My monitor can only show me sRGB.

  • Is there any difference in algorithms?
  • Maybe there is some preprocessing executed before the program displays the effects of my work (for example, Adobe RGB (0.3, 0.25, 0.82) is displayed as sRGB (0.301, 0.253, 0.819) on my monitor)?
  • Is there any sense in using different color profiles when I am not using the ICC profile of my monitor/printer?
  • In general: what should I do if I want to develop my own graphics-manipulation application that supports a profile other than sRGB (for example, in Qt)?

3 Answers


  1. There is no difference in the algorithms themselves, because you operate in an RGB color space and not in the XYZ color space. As you said, monitors show colors differently: the red on one monitor may not exactly match the red primary on another. In order to define different RGB color spaces in a common manner, the CIE 1931 XYZ color space is used as a reference. Every monitor or system converts RGB colors to XYZ according to the profile in use; for example, RGB (1, 0, 0) = XYZ (0.4358, 0.2224, 0.0139) in sRGB and XYZ (0.7977, 0.2880, 0.0000) in ProPhoto RGB.
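
    To make the XYZ link concrete, here is a minimal C++ sketch of that conversion. It assumes the D50-adapted sRGB matrix from Bruce Lindbloom's tables (the ICC profile connection space uses a D50 white point, which is why the red primary lands near the XYZ values quoted above):

    ```cpp
    #include <array>
    #include <cstdio>

    // D50-adapted linear sRGB -> XYZ matrix (Bradford adaptation),
    // values from Bruce Lindbloom's tables.
    constexpr double kSrgbToXyzD50[3][3] = {
        {0.4360747, 0.3850649, 0.1430804},
        {0.2225045, 0.7168786, 0.0606169},
        {0.0139322, 0.0971045, 0.7141733},
    };

    // Multiplies a *linear* (gamma-decoded) RGB triple by the matrix.
    std::array<double, 3> linearSrgbToXyz(double r, double g, double b) {
        std::array<double, 3> xyz{};
        const double rgb[3] = {r, g, b};
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 3; ++col)
                xyz[row] += kSrgbToXyzD50[row][col] * rgb[col];
        return xyz;
    }

    int main() {
        auto xyz = linearSrgbToXyz(1.0, 0.0, 0.0);  // pure sRGB red
        std::printf("XYZ = %.4f %.4f %.4f\n", xyz[0], xyz[1], xyz[2]);  // ~0.4361 0.2225 0.0139
    }
    ```

    An Adobe RGB or ProPhoto RGB profile supplies a different matrix (and a different transfer curve), which is the whole difference between the spaces.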

  2. The color space your image uses determines how your 16 bits per pixel should relate to the output produced by your monitor, i.e., it determines what colors the numbers actually represent.

    This can make a difference in how some algorithms should be implemented, if they are supposed to produce realistic, natural-looking, or consistent results.

    Let’s say you composite a semi-transparent yellow on top of a dark red background. What kind of brown do you get? If the algorithm always mixes the pixel data the same way, then even when the yellow and red look the same on your monitor, the brown you get might differ depending on your color space.

    A more ‘correct’ way to do mixing would be to transform your pixel data into a consistent color space, mix, and then transform back. If the original colors look the same on two monitors with different calibrated profiles, then they will transform into the same numbers in a consistent color space, and the mix result will transform back into results that look the same on both monitors even though the pixel values might be different.

    Natural-looking compositing with semi-transparency is a good example of an algorithm that has to take your color space into account in order to produce realistic results. Other effects that have to look ‘natural’, like specular highlights, shadows, etc., similarly need to do physically accurate math in a consistent color space.
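
    Here is a minimal sketch of that “transform, mix, transform back” idea in C++, using linear sRGB as the consistent working space; the yellow and red values are made up for illustration:

    ```cpp
    #include <cmath>
    #include <cstdio>

    // sRGB transfer function and its inverse (IEC 61966-2-1).
    double srgbDecode(double v) {
        return v <= 0.04045 ? v / 12.92 : std::pow((v + 0.055) / 1.055, 2.4);
    }
    double srgbEncode(double v) {
        return v <= 0.0031308 ? 12.92 * v : 1.055 * std::pow(v, 1.0 / 2.4) - 0.055;
    }

    int main() {
        const double yellow[3] = {1.0, 1.0, 0.0};  // semi-transparent layer
        const double red[3]    = {0.5, 0.0, 0.0};  // background
        const double alpha     = 0.5;

        for (int c = 0; c < 3; ++c) {
            // Naive: mix the gamma-encoded numbers directly.
            double naive = alpha * yellow[c] + (1 - alpha) * red[c];
            // Consistent: decode to linear light, mix, re-encode.
            double linear = srgbEncode(alpha * srgbDecode(yellow[c]) +
                                       (1 - alpha) * srgbDecode(red[c]));
            std::printf("channel %d: naive %.3f vs linear-light %.3f\n",
                        c, naive, linear);
        }
    }
    ```

    On the red channel the two methods give roughly 0.750 versus 0.802, which is exactly the kind of brown-mixing discrepancy described above.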

    To answer your specific questions:

    1. Yes, as explained, many algorithms should perform different calculations with different color spaces.

    2. Yes, there is. The image’s color space defines what the data means in terms of physical light. If you display it through a calibrated ICC profile, the data is transformed into the numbers your monitor needs to display your image accurately.

    3. It should make very little difference which color space you use for your image, except that some display software won’t take it into account. Making sRGB images is better for cross-system compatibility, but Adobe RGB has a bigger gamut and can represent some green colors that sRGB cannot. You should use printer and monitor calibration so that you can actually see what your image really looks like.

    4. I think I answered that above.

  3. Gamut mapping explained by analogy

    If you change color spaces, you may lose some information, because the mapping from one to the other may not be injective (one-to-one). You can choose among different rendering intents to pick the mapping that throws away only the information you find least useful.

    This analogy might illustrate the consequences of converting an image to a smaller color space when the original space is larger than that of your device: you can very well represent a 3D object in the computer, but you will never actually see it, because your screen is flat and thus able to display only 2D images. You can view projections of the object, you can view cuts through it, but you need a 3D printer to get something truly 3D out of it.

    Even if you have no 3D printer, it is worth representing the object in 3D and not as a fixed 2D projection. Otherwise, you would not be able to make all those 2D cuts and projections, and even if you bought a 3D printer in the future, you could not print the object anymore.

    The 3D object is a picture in the larger color space, a fixed 2D projection is a picture in the smaller one, the screen is a device with the smaller color space, and the 3D printer is a device with the larger color space. End of analogy.
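
    To put numbers on the information loss, here is a sketch (assuming the D65 matrices from Bruce Lindbloom's tables) that pushes the Adobe RGB green primary through XYZ into linear sRGB and then applies the crudest possible “intent”, plain clipping:

    ```cpp
    #include <algorithm>  // std::clamp (C++17)
    #include <cstdio>

    // Linear Adobe RGB (1998) -> XYZ and XYZ -> linear sRGB,
    // both relative to D65 (values from Bruce Lindbloom's tables).
    constexpr double kAdobeToXyz[3][3] = {
        {0.5767309, 0.1855540, 0.1881852},
        {0.2973769, 0.6273491, 0.0752741},
        {0.0270343, 0.0706872, 0.9911085},
    };
    constexpr double kXyzToSrgb[3][3] = {
        { 3.2404542, -1.5371385, -0.4985314},
        {-0.9692660,  1.8760108,  0.0415560},
        { 0.0556434, -0.2040259,  1.0572252},
    };

    void mul(const double m[3][3], const double v[3], double out[3]) {
        for (int r = 0; r < 3; ++r)
            out[r] = m[r][0] * v[0] + m[r][1] * v[1] + m[r][2] * v[2];
    }

    int main() {
        const double adobeGreen[3] = {0.0, 1.0, 0.0};  // Adobe RGB green primary
        double xyz[3], srgb[3];
        mul(kAdobeToXyz, adobeGreen, xyz);
        mul(kXyzToSrgb, xyz, srgb);
        // Negative channels mean the color lies outside the sRGB gamut.
        std::printf("linear sRGB: %.3f %.3f %.3f\n", srgb[0], srgb[1], srgb[2]);
        for (double& c : srgb) c = std::clamp(c, 0.0, 1.0);  // clip = lose information
        std::printf("clipped:     %.3f %.3f %.3f\n", srgb[0], srgb[1], srgb[2]);
    }
    ```

    The green primary comes out around linear sRGB (-0.398, 1.000, -0.043); once clipped to (0, 1, 0), the original color can no longer be recovered, just like the fixed 2D projection above.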

    ICC workflow

    If you take a photo, your camera should assign a profile to it, describing the camera’s device color space. The profile defines the mapping of the numbers inside the picture (coordinates in the device color space) to real-world colors (coordinates in an absolute color space). Without a profile, the numbers really have no meaning, and anyone is free to make up any mapping they like.

    If you shoot RAW, you do the color space conversion when developing the photo; if you shoot JPEG, the camera performs this task for you.

    In the opposite direction, when displaying or printing: if the output device is not calibrated and has no profile, the real-world colors stored in the image might not match what actually comes out of the device. Without a device profile, the mapping between the image color space and the output device space cannot guarantee that the colors will be preserved; it is somewhat arbitrary.
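
    If you want to perform this image-to-device mapping yourself rather than rely on the OS, a color management library can do it. Here is a minimal sketch using LittleCMS (lcms2); the profile file name is a placeholder, and error handling (e.g. a missing profile file) is omitted:

    ```cpp
    #include <lcms2.h>
    #include <vector>

    int main() {
        // "camera.icc" is a placeholder for a profile extracted from the image.
        cmsHPROFILE camera = cmsOpenProfileFromFile("camera.icc", "r");
        cmsHPROFILE srgb   = cmsCreate_sRGBProfile();

        // Build a transform from the camera's device space to sRGB.
        cmsHTRANSFORM xform = cmsCreateTransform(camera, TYPE_RGB_8,
                                                 srgb,   TYPE_RGB_8,
                                                 INTENT_PERCEPTUAL, 0);

        std::vector<unsigned char> in  = {255, 0, 0,  0, 255, 0};  // two RGB pixels
        std::vector<unsigned char> out(in.size());
        cmsDoTransform(xform, in.data(), out.data(), 2);  // size is in pixels

        cmsDeleteTransform(xform);
        cmsCloseProfile(camera);
        cmsCloseProfile(srgb);
    }
    ```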

    Actual answers

    1. The difference in manipulating the photo in sRGB and Adobe RGB is that Adobe RGB is larger and thus preserves more information for further processing.

    2. The difference in algorithms has already been explained by Matt Timmermans in another answer. Regarding color blending, you might want to know more about perceptually uniform color spaces (see e.g. a closed Q & A on SO).

    3. Yes, conversion from Adobe RGB to sRGB is not the identity and thus requires some processing. Where exactly this processing happens (device driver, OS kernel, image-processing software) depends on the source and target, the OS, and its settings. If you convert between the spaces in Photoshop, it does the computation itself. Windows has a built-in color management module that takes care of converting an image with a profile to the device color space of the output device.

    4. The image you want to display or print might be stored in some rather exotic color space. If the OS guesses it is sRGB (Windows would), you might get odd results. It is better to provide as much information as possible to the color management system. Even uncalibrated devices might be assigned generic profiles, and some guesswork may take place. And maybe you’ll calibrate and characterize your device someday, or you’ll send the image to someone with such a device.

    5. Qt itself does not support color management. However, KDE, which is built atop Qt, supports some color management via Oyranos.

      When should we expect complete color management for KDE?

      If we are talking about color management in Qt itself, not anytime soon. If we are talking about decent color management implemented in the compositor (KWin), sooner than that. It also depends on how quickly graphics applications adapt to these new color management facilities.

      You could use Oyranos or another color management system directly in your application; a sketch of that approach follows below. Google also turns up a thesis on bringing color management to Qt.
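
      For illustration, here is a sketch of that direct approach, pairing QImage with lcms2. The "adobe.icc" path is a placeholder: lcms2 does not ship an Adobe RGB profile, so you would load one from disk (or construct one from its primaries with cmsCreateRGBProfile):

      ```cpp
      #include <QImage>
      #include <lcms2.h>

      // Convert a QImage assumed to hold Adobe RGB data into sRGB for display.
      QImage adobeToSrgb(const QImage& src) {
          QImage in = src.convertToFormat(QImage::Format_RGB888);
          QImage out(in.size(), QImage::Format_RGB888);

          cmsHPROFILE adobe = cmsOpenProfileFromFile("adobe.icc", "r");  // placeholder
          cmsHPROFILE srgb  = cmsCreate_sRGBProfile();
          cmsHTRANSFORM xform = cmsCreateTransform(adobe, TYPE_RGB_8,
                                                   srgb,  TYPE_RGB_8,
                                                   INTENT_RELATIVE_COLORIMETRIC, 0);

          // QImage rows may be padded, so transform scanline by scanline
          // instead of treating the buffer as one contiguous run.
          for (int y = 0; y < in.height(); ++y)
              cmsDoTransform(xform, in.constScanLine(y), out.scanLine(y), in.width());

          cmsDeleteTransform(xform);
          cmsCloseProfile(adobe);
          cmsCloseProfile(srgb);
          return out;
      }
      ```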
