
I’ve written a program for converting images to a custom file format that uses a limited palette of 30 specific colors.

In my application I offer the option of working in the RGB or YUV color space, and a choice of Sierra, Jarvis, or Floyd-Steinberg dithering.

However, I have noticed that Photoshop’s Save for Web feature, using color tables to limit the palette, does a much better job than my program.

So I would like to improve my application to give better results.

Right now, taking Floyd-Steinberg dithering as the example, I’m essentially using this pseudocode:

for each y from top to bottom
    for each x from left to right
        oldpixel        := pixel[x][y]
        newpixel        := find_closest_palette_color(oldpixel)
        pixel[x][y]     := newpixel
        quant_error     := oldpixel - newpixel
        pixel[x+1][y  ] := pixel[x+1][y  ] + quant_error * 7/16
        pixel[x-1][y+1] := pixel[x-1][y+1] + quant_error * 3/16
        pixel[x  ][y+1] := pixel[x  ][y+1] + quant_error * 5/16
        pixel[x+1][y+1] := pixel[x+1][y+1] + quant_error * 1/16
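For reference, a hypothetical runnable Python version of the pseudocode above — adding the bounds checks the pseudocode omits, and using plain Euclidean distance in RGB for `find_closest_palette_color` (names here are my own, not from any particular library):

```python
def find_closest_palette_color(pixel, palette):
    """Nearest palette entry by squared Euclidean distance in RGB."""
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

def floyd_steinberg(pixels, width, height, palette):
    """Dither `pixels` (row-major list of [r, g, b] lists) in place."""
    # Error-diffusion neighbor offsets and weights (dx, dy, weight/16).
    kernel = [(1, 0, 7), (-1, 1, 3), (0, 1, 5), (1, 1, 1)]
    for y in range(height):
        for x in range(width):
            old = pixels[y * width + x]            # pre-quantization value
            new = find_closest_palette_color(old, palette)
            pixels[y * width + x] = list(new)
            err = [o - n for o, n in zip(old, new)]
            for dx, dy, w in kernel:
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:  # skip off-image neighbors
                    tgt = pixels[ny * width + nx]
                    for c in range(3):
                        tgt[c] += err[c] * w / 16
    return pixels
```

Every output pixel is an exact palette entry; only the not-yet-visited neighbors carry fractional error values.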

My pixels are stored in RGB format, and to find the closest palette color I am using the Euclidean distance in RGB or YUV.

I have been reading about the CIE94 and CIEDE2000 color-difference formulas, and these should work better for my “find_closest_palette_color” function.

To do these calculations I’ll have to convert from RGB to the CIELab color space. Can I also use CIELab when distributing errors in my dither algorithms by:

  1. Converting the whole image to the CIELab color space
  2. For each pixel find the closest color in my palette using CIE94 or CIEDE2000
  3. Calculate the error in the CIELab color space (L* , a*, b* instead of RGB).
  4. Distribute the error in accordance with whatever dither algorithm I am using, with the same weights I was using in RGB.

2 Answers


  1. Yes. In fact, Lab is much better suited for this purpose, because Euclidean distance between colors in Lab reflects the perceived difference between the colors, whereas the distance in RGB does not.

    1. Converting the whole image to the CIELab color space, with a cache.

    2. Clustering the pixels of the source image to find the best palette, using a ratio of CIE76 and CIEDE2000.

    3. Calculating the error in the CIELab color space (or in YUV instead of RGB).

    4. Mixing and matching the error with the aid of a blue-noise distribution.

    5. Using a Generalized Hilbert ("gilbert") space-filling curve, O(n), instead of Floyd-Steinberg dithering, O(n²), to diffuse errors while minimizing the MSE in RGB.

    Java implementation:
    https://github.com/mcychan/nQuant.j2se/
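    As a rough illustration of point 5 (this is my own sketch, not the linked implementation): a classic Hilbert curve for power-of-two squares — the "gilbert" curve generalizes this to arbitrary rectangles — with the quantization error carried to the next pixel along the curve, in the spirit of Riemersma dithering:

    ```python
    def hilbert_order(n):
        """Visit order of an n x n grid (n a power of two) along a Hilbert curve."""
        def d2xy(n, d):
            x = y = 0
            t, s = d, 1
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                 # rotate the quadrant as needed
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y
        return [d2xy(n, d) for d in range(n * n)]

    def dither_along_curve(pixels, n, palette, find_closest):
        """Quantize pixels in Hilbert order, carrying the full quantization
        error forward to the next pixel on the curve."""
        carry = [0.0, 0.0, 0.0]
        for x, y in hilbert_order(n):
            p = [pixels[y * n + x][c] + carry[c] for c in range(3)]
            q = find_closest(p, palette)
            carry = [p[c] - q[c] for c in range(3)]
            pixels[y * n + x] = list(q)
        return pixels
    ```

    A real implementation would decay the carried error over a short history of curve positions rather than pushing it all onto one neighbor, but the traversal idea is the same.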
