
I’ve been going over my code for a few hours now and I’m not sure why this contrast algorithm isn’t working.

Following this guide, I used the small algorithm given in the post. However, I did mine in the HSI color space because my pictures need to stay in color. The post notes the changes required for HSI, but it doesn’t give a step-by-step on exactly how to do it. Also, they’re using Pillow, whereas I’m using CImg.

My code compiles and runs with no errors. But the result is a very dark image.
[image: the resulting output, a very dark image]

I was hoping for an output similar to what I get when increasing contrast using the Camera Raw filter in Photoshop. This is the result of maxing out the Photoshop contrast slider:
[image: Photoshop output with the contrast slider maxed]

This is the tail of the modified intensity values, followed by the min/max values:

old Intensity 0.422222
new Intensity 0.313531
old Intensity 0.437909
new Intensity 0.353135
old Intensity 0.437909
new Intensity 0.353135
old Intensity 0.436601
new Intensity 0.349835
old Intensity 0.439216
new Intensity 0.356436
old Intensity 0.443137
new Intensity 0.366337
old Intensity 0.45098
new Intensity 0.386139
old Intensity 0.458824
new Intensity 0.405941
old Intensity 0.461438
new Intensity 0.412541
min 0.298039
max 0.694118
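
These numbers are consistent with the stretch formula newI = (I - min) / (max - min); for example, for the first pair above, (0.422222 - 0.298039) / (0.694118 - 0.298039) ≈ 0.313531. So the normalisation arithmetic itself appears to be doing what I intended.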

Hope someone can help, thanks.

#include <algorithm>
#include <iostream>
#include "CImg.h"

int main() {
  cimg_library::CImg<float> lenaCondec("./colors/lena_condec.jpeg");

  int width = lenaCondec.width();
  int height = lenaCondec.height();

  // enhancing contrast
  float minIntensity = 1.0f;
  float maxIntensity = 0.0f;
  cimg_library::CImg<float> imgBuffer = lenaCondec.get_RGBtoHSI();
  for (int row = 0; row < height; row++)
        for (int col = 0; col < width; col++) {
            const auto I = imgBuffer(col, row, 0, 2);
            minIntensity = std::min((float)I, minIntensity);
            maxIntensity = std::max((float)I, maxIntensity);
        }

  for (int row = 0; row < height; row++)
        for (int col = 0; col < width; col++) {
          auto I = imgBuffer(col, row, 0, 2);
          const auto newIntensity = (((float)I - minIntensity) / (maxIntensity - minIntensity));
          std::cout << "old Intensity " << (float)I << std::endl;

          imgBuffer(col, row, 0, 2) = newIntensity;
          I = imgBuffer(col, row, 0, 2);
          std::cout << "new Intensity " << (float)I << std::endl;
        }

  std::cout << "min " << minIntensity << std::endl;
  std::cout << "max " << maxIntensity << std::endl;

  cimg_library::CImg<float> outputImg = imgBuffer.get_HSItoRGB();

  // Debugging
  outputImg.save_jpeg("./colors/output-image.jpeg");

  std::getchar();

  return 0;
}

I have a repo for this here. Make sure you’re on the "so-question" branch.
Note: I modified line 389 of CImg.h from #include <X11/Xlib.h> -> #include "X11/Xlib.h"

3 Answers


  1. The algorithm above scales the image into the [0, 1] range.
    Namely, the pixels with the lowest value are mapped to 0 and the pixels with the highest value are mapped to 1.

    You need to apply this to an RGB image whose values are in the range [0, 1], and you need to apply it per channel.
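
    In code, that could look something like the sketch below. This is my own minimal CImg version (not the code from the question): it assumes a three-channel image whose values load in [0, 255], divides them down into [0, 1], and then stretches each channel independently.

    #include <algorithm>
    #include "CImg.h"
    
    int main() {
      cimg_library::CImg<float> img("./colors/lena_condec.jpeg");
      img /= 255.0f;                            // bring values into [0, 1]
    
      // Stretch each channel independently to the full [0, 1] range
      for (int chan = 0; chan < 3; chan++) {
        float lo = 1.0f, hi = 0.0f;
        cimg_forXY(img, col, row) {             // scan for this channel's min/max
          lo = std::min(img(col, row, 0, chan), lo);
          hi = std::max(img(col, row, 0, chan), hi);
        }
        if (hi > lo)                            // skip flat channels: avoids division by zero
          cimg_forXY(img, col, row)
            img(col, row, 0, chan) = (img(col, row, 0, chan) - lo) / (hi - lo);
      }
    
      (img * 255.0f).save("./colors/output-image.png");
      return 0;
    }

    Note that stretching each channel separately can shift the colour balance; the third answer below discusses that trade-off.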

  2. I think there may be an issue with the built-in JPEG implementation in CImg. I found that your code works fine if you save the output file as a PNG instead of a JPEG.

    [image: the same code's output saved as PNG, with the contrast correctly stretched]
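
    If you just want the minimal change to the question's code, swap the save_jpeg() call for a PNG save. CImg picks the output format from the file extension, so (keeping the question's output folder):

    outputImg.save("./colors/output-image.png");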

    Alternatively, you can force CImg to use the libjpeg implementation on your Mac with:

    clang++ $(pkg-config --cflags --libs libjpeg) -std=c++17 -Dcimg_use_jpeg -lm -lpthread -o "main" "main.cpp"
    

    As a prerequisite, you may need to install pkg-config and jpeg with Homebrew:

    brew install jpeg pkg-config
    

    Note also that, as long as you don’t want to use CImg’s display(), you can avoid needing all the X11 paths and switches in your compilation command by compiling with:

    clang++ -Dcimg_display=0 ...
    
  3. As you mentioned you might consider other ways of stretching the contrast, I thought I’d add another option that works in RGB colourspace. If you find the minimum and maximum of the red channel and stretch the reds, then do likewise for the other channels, you will introduce a colour cast. So an alternative is to find the minimum and maximum across all channels and stretch the three channels in concert by the same amount.

    Effectively, you are stretching the RGB histogram until any of the channels hits 0 or 255. My C++ is a bit clumsy, but it looks something like this:

    #include <algorithm>
    #include <iostream>
    #include "CImg.h"
    
    int main() {
      cimg_library::CImg<unsigned char> img("lena.png");
    
      int width  = img.width();
      int height = img.height();
    
      // Find min and max RGB values for whole image
      unsigned char RGBmin = 255;
      unsigned char RGBmax = 0;
      for (int row = 0; row < height; row++) {
          for (int col = 0; col < width; col++) {
              const auto R = img(col, row, 0, 0);
              const auto G = img(col, row, 0, 1);
              const auto B = img(col, row, 0, 2);
              RGBmin = std::min({R,G,B,RGBmin});
              RGBmax = std::max({R,G,B,RGBmax});
          }
      }
    
      std::cout << "RGBmin=" << int(RGBmin) << ", RGBmax=" << int(RGBmax) << std::endl;
    
      // Stretch contrast equally for all channels
      for (int row = 0; row < height; row++) {
          for (int col = 0; col < width; col++) {
              for (int chan = 0; chan < 3; chan++) {   // channels 0, 1, 2 are R, G, B
                  const auto x = img(col, row, 0, chan);
                  const auto newVal = 255*((float)x - RGBmin) / (RGBmax - RGBmin);
                  img(col, row, 0, chan) = (unsigned char)newVal;
              }
          }
      }
    
      // Debugging
      img.save("result2.png");
    }
    

    [image: result of stretching all channels in concert]
