I just need some clarification on how to properly convert RGB pixel values in the range [0, 1] to the right range for an HDR format like OpenEXR.
So I know, for instance, that when working with low dynamic range formats like PNG or JPG that have only 8 bits per channel, you simply multiply each RGB value by 2^8 - 1 = 255 and clamp so that all your values are in the range [0, 255].
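For example, a quick sketch of that LDR quantization in Python with NumPy (illustration only, not my actual pipeline):

```python
# 8-bit LDR quantization: scale [0, 1] floats to [0, 255] integers.
import numpy as np

rgb = np.random.default_rng(0).random((2, 2, 3))               # floats in [0, 1]
ldr = np.clip(np.round(rgb * 255.0), 0, 255).astype(np.uint8)  # quantize + clamp
print(ldr.dtype, ldr.min(), ldr.max())                         # uint8, within [0, 255]
```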
OpenEXR and other HDR formats use a half-precision format with 16 bits per channel. So do I just do the same thing as before and multiply each channel by 2^16 - 1 = 65535, so that my new range becomes [0, 65535]?
EDIT
So I tried doing what I wrote above, but when I try to display the OpenEXR file in Photoshop it is completely white. It seems like any value greater than around 10 or so is too bright for Photoshop to display properly, which I find very odd. So this doesn't seem like the correct way to do it, unless there is something wrong with Photoshop's display.
3 Answers
We are dealing with three related but different issues at the same time.
The first issue is that of range: are channels stored as values in the range [0, 1], [0, 255], or [0, 65535]? Scaling (multiplying and possibly clamping) is what you do to convert from one range to another.
The second issue is that of raw sample size: how many bits you use to store the value. This doesn't necessarily have to be the binary logarithm of the size of the range. For example, if your range is [0, 255], you could store the values in 8 bits where the least significant bit represents increments of 1, or in 6 bits where the least significant bit represents increments of 4, or in 10 bits where the least significant bit represents increments of 0.25. In fact, as we will see in the next issue, the increments do not have to be fixed.
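A hypothetical sketch of that idea in Python (the helper names are mine; each variant stores the same range [0, 255] with a different increment per least significant bit):

```python
# Encode/decode with a fixed increment per least significant bit (LSB).
def encode(value, lsb_increment):
    return round(value / lsb_increment)   # the raw stored integer

def decode(raw, lsb_increment):
    return raw * lsb_increment            # the reconstructed value

for bits, lsb in [(8, 1), (6, 4), (10, 0.25)]:
    raw = encode(200, lsb)
    print(f"{bits:2d} bits, LSB = {lsb}: raw = {raw}, decoded = {decode(raw, lsb)}")
```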
The third and final issue is that of encoding: fixed point or floating point. When we say that we store values of [0, 255] in 8 bits or values of [0, 65535] in 16 bits, we usually mean integer encoding (a special case of fixed point where the least significant bit represents fixed increments of 1). When values are stored in the range [0, 1], however, regardless of the raw sample size, this usually implies floating point storage (where most bits are used to store the significand, while a few bits are reserved to store the size of the increment associated with the least significant bit). When we speak of “half precision”, “single precision”, “double precision”, “extended precision” and so forth we also invariably mean floating point encoding.
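To make the floating point case concrete, here is a hypothetical peek at the three fields of a half-precision value, using Python's standard struct module (its "e" format packs IEEE 754 binary16 and has been available since Python 3.6):

```python
# Unpack the bit fields of a half-precision (binary16) float.
import struct

bits, = struct.unpack("<H", struct.pack("<e", 1.5))
sign        = bits >> 15            # 1 bit
exponent    = (bits >> 10) & 0x1F   # 5 bits, biased by 15
significand = bits & 0x3FF          # 10 bits, plus one implicit leading bit
# 1.5 = (1 + 512/1024) * 2**(15 - 15)
print(sign, exponent - 15, significand)  # -> 0 0 512
```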
So here is the catch: OpenEXR uses floating point encoding, in a format that is not built into most programming languages. Most modern languages only have 64-bit floats, and if they offer anything else it's usually 32-bit floats (double and float respectively in the C family), but 16-bit floats are almost never available out of the box.

Half precision can represent values in the range [-65504, 65504], with 11 bits (slightly better than 3 decimal digits) of precision, while also being able to represent values as small as 2^-14. However, given that OpenEXR is an HDR format, you are probably not really expected to use the entire range, because the number encoding is chosen to accommodate (extreme) over- or underexposure. That is, unless your camera actually produces such an enormous dynamic range of values.
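You can check those limits with NumPy's off-the-shelf half type, numpy.float16 (my example, not part of the OpenEXR tooling):

```python
# Inspect the limits of IEEE 754 half precision via NumPy.
import numpy as np

info = np.finfo(np.float16)
print(info.max)    # 65504.0, the largest finite half
print(info.tiny)   # 6.104e-05 == 2**-14, the smallest normal half
# The scaled-up 65535 from the question does not even fit: it rounds
# to infinity (NumPy may emit an overflow warning here).
print(np.float16(65535.0))  # inf
```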
So you might not actually need to scale your channel values. However, given that you already start with values in [0, 1], you probably have floating point numbers stored in single or double precision, and you'll have to transcode them to half precision. Depending on the programming language, the libraries and even the hardware platform that you use, there might be an off-the-shelf solution or you might need to do some bit-fiddling of your own. As a starting point I can only offer you this DuckDuckGo search.
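If you happen to be in Python, a minimal sketch of that transcoding with NumPy (assuming your data is already a float32/float64 array in [0, 1]):

```python
# Transcode [0, 1] floats to half precision: no scaling, only a precision change.
import numpy as np

rgb = np.random.default_rng(0).random((4, 4, 3))  # float64 values in [0, 1]
half = rgb.astype(np.float16)                     # IEEE 754 binary16
print(half.dtype, float(half.min()), float(half.max()))
```

Note that, depending on your stack, an EXR writer (for example the OpenEXR Python bindings or imageio) may accept float32 input and perform the half conversion for you when writing.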
I’m too tired to think straight, but this may help you work it all out. I used ImageMagick to create three OpenEXR images, one white, one black and one red, all 1 pixel x 1 pixel.
Then I hexdumped them all. Here are the files:

White.txt
Black.txt
Red.txt

To compare two of the dumps:

diff white.txt black.txt
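If you want to reproduce a comparable dump yourself, here is a small stdlib-only Python sketch (Python 3.8+ for the hexlify separator; the .exr file names are my assumption for how the images above were saved):

```python
# Dump the first bytes of a file in hex; every valid OpenEXR file
# starts with the magic bytes 76 2f 31 01.
import binascii

def hexdump(path, n=64):
    with open(path, "rb") as f:
        data = f.read(n)
    for offset in range(0, len(data), 16):
        row = data[offset:offset + 16]
        print(f"{offset:08x}  {binascii.hexlify(row, ' ').decode()}")

hexdump("white.exr")  # then compare with hexdump("black.exr")
```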
The short answer is that this will probably not give you a useful result.
The moderately-long answer is that this approach makes a number of assumptions that are not at all "safe" with regard to image processing.
The primary assumption is that a Low Dynamic Range value of 255 represents an HDR value of 65535. The primary problem that HDR intends to solve is that real-world signals have to be compressed to a limited scope. Imagine taking a digital photograph of the sun directly: the input value for that light intensity is much greater than any imaging software supports, so it has to be compressed somehow. With LDR, the center of the sun’s disk and much of the bloom around it will all be clamped to 255. With HDR, you’re still clamping, but only to 65535. If you have a real-world signal which is giving values like 200, 255, 300 and 100,000, and then that gets clamped to 255, it should make sense that you cannot simply scale 255 to 65535 and get a reasonable result. That 255 might have been clamped from 256, or from 300, or from 100000; there’s no way to know. (This is a radical oversimplification of imaging, but it should be sufficient for understanding this limitation in particular).
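As a concrete, hypothetical illustration of that loss (using the numbers from the paragraph above, in NumPy):

```python
# Clamping destroys highlight information; rescaling cannot recover it.
import numpy as np

signal = np.array([200.0, 255.0, 300.0, 100000.0])  # real-world intensities
ldr = np.clip(signal, 0, 255)                       # LDR capture clamps to 255
fake_hdr = ldr / 255.0 * 65535.0                    # naive "upscaling" to HDR
print(fake_hdr)  # [51400. 65535. 65535. 65535.] -- 300 and 100000 both read as 65535
```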
A secondary assumption is that what you see on your monitor has anything to do with the values in your image. The 0–65535 range of values in the image must be represented on your display, which is almost certainly limited to the 8-bit sRGB range (0–255). Most likely, if you've simply scaled 0–255 to 0–65535, then 99% of your range is above 255. So regardless of how much information is in your image, it's easy for it all to get clipped to "white."
Furthermore, depending on how Photoshop (or the GPU driver or the OS or the display panel) is translating your HDR 65535 into sRGB, it’s just as likely that it’s AGAIN being clamped to 255, or perhaps scaled "dumbly," or perhaps scaled "smartly," or any combination thereof. If Photoshop shows your image as solid white, you’ll have to confirm a number of steps in the process to determine "who" in particular is transforming your color values, and how. Photoshop might be clamping before it sends to the OS. The OS might be clamping before it sends to the GPU. The GPU might be clamping when it sends to the display. The display might be clamping when it turns on the pixel. …or any combination of clamping, scaling, and transforming at any of those steps (I know, not all of those combinations are actually possible, but the point is that there is a lot going on between the file and your eyeball).
Transforming LDR values to an HDR color space (and vice versa) is a non-trivial pursuit that represents a discipline unto itself. Depending on your application, you may need to do a great deal more research before you come to a good solution.