
When you work with JPEG image properties (resolution, sampling, etc.) and then export the final product, are you ALWAYS double-dipping into ‘jpegification’?

From my understanding, when you load a JPEG image into an image manipulation tool (GIMP, Photoshop, ImageMagick, etc.), it goes like so:

  1. Import JPEG
  2. Decode JPEG into easier workable format (Bitmap)
  3. Manipulate the pixels
  4. Export back into JPEG (REDOING JPEG QUANTIZATION AGAIN; even if you copy the original JPEG parameters, it’s a double dip, as sketched below)
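
For example, with ImageMagick I picture the round trip roughly like this (the file names and the -negate edit are just placeholders I made up to stand in for any edit):

    convert original.jpg original.ppm          # steps 1-2: decode the JPEG into a lossless bitmap (PPM)
    convert original.ppm -negate edited.ppm    # step 3: manipulate the pixels (negate as a stand-in edit)
    convert edited.ppm -quality 90 final.jpg   # step 4: export back to JPEG, running the quantization again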

Am I correct in this?

Thanks!

2 Answers


  1. I think it depends on what you do after reading the image… but you can check for yourself, for any particular operation, whether it has re-quantised by using this feature of ImageMagick

    identify -format "%#\n" image.jpg
    bb1f099c2e597fdd2e7ab3d273e52ffde7229b9061154c970d23b171df3aca89
    

    which calculates the checksum (or signature as IM calls it) of the pixels – disregarding the header information.

    So, if I create a file of random noise, like this

    convert -size 1000x1000 xc:gray +noise gaussian image.jpg
    

    and get the checksum of the data, like this

    identify -format "%#\n" image.jpg
    84474ba583dbc224d9c1f3e9d27517e11448fcdc167d8d6a1a9340472d40a714
    

    I can then use jhead to change the comment in the header, like this

    jhead -cl "Comment" image.jpg
    Modified: image.jpg
    

    and yet the checksum remains unchanged, so I would say jhead has NOT re-quantised the data.

    I guess my point is that your statement that images are ALWAYS re-quantised is not 100% accurate: it depends on what you actually do to the image. Further, this gives you a way to readily check for yourself whether any particular processing has actually caused re-quantisation. HTH !!!
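
    For comparison, here is a quick sketch of an operation that does touch the pixels (re-encoding at a JPEG quality setting, just as an example), after which the two signatures should differ:

    convert image.jpg -quality 90 resaved.jpg        # decode and re-encode, which re-quantises the pixels
    identify -format "%#\n" image.jpg resaved.jpg    # print both signatures; they should no longer match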

  2. Any areas of the image that have changed would have to be quantized again anyway.

    In theory, an application could keep the quantized values lying around and then reuse them. However:

    1. That would require three times as much memory: the quantized values take 16 bits each to store, on top of the 8 bits for each pixel value.

    2. If you changed the sampling or quantization tables, the quantized values would have to be recalculated.

    There would be very few cases where it would make sense to hang on to the quantized DCT values.
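
    To put rough numbers on it: for a 1000x1000 greyscale image, that is about 1 MB of 8-bit pixel values plus roughly 2 MB more for the 16-bit quantized coefficients, so around 3 MB instead of 1 MB.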
