-
Hi
Are you referring to the max output value in the limited range? If HyperHDR calibration detects the limited scale at the beginning, then all input values are upscaled to the full 0-255 range during the calibration process.

Shades of gray are especially interesting here, and the HyperHDR calibration focuses on them in particular because their individual components are equal. If I convert RGB to BT2020 (and vice versa) they remain equal; you can test it here: https://ajalt.github.io/colormath/converter/ However, the grabber returns a slightly disturbed green channel. These are not big differences, in absolute terms even more so than in percentage terms. Just look at the "raw" HDR results for grays captured as SDR, without any transformations or corrections: https://www.hyperhdr.eu/2022/04/usb-grabbers-hdr-to-sdr-quality-test.html

At the very end, calibration applies a green correction, but only and exclusively to the gray levels and with a very small margin, which it has every right to do because the evaluation function focused especially on them earlier.

I don't know if adding another intermediate level, e.g. between 220 and 255, will change much, as we do not map these levels directly but set general parameters of the inverse function and scale. But we can experiment by increasing the number of tested and read colors, and even by using AI to interpolate the obtained results (I have already seen such an attempt for HyperHDR).
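For illustration, the reason grays survive such a colorspace conversion can be seen directly from the matrix: every row of a linear-RGB conversion matrix sums to 1, so equal components stay equal. A minimal standalone sketch, using the Rec.709-to-BT.2020 coefficients from BT.2087 (the function name is just for this example):

```cpp
#include <cstdio>

// Illustration only: applying the Rec.709 -> BT.2020 linear-RGB conversion
// matrix (coefficients from ITU-R BT.2087) to a gray shows the components
// stay equal, because each matrix row sums to 1.
static void rec709ToBt2020(double r, double g, double b,
                           double& r2, double& g2, double& b2)
{
	r2 = 0.6274 * r + 0.3293 * g + 0.0433 * b;
	g2 = 0.0691 * r + 0.9195 * g + 0.0114 * b;
	b2 = 0.0164 * r + 0.0880 * g + 0.8956 * b;
}

int main()
{
	double r, g, b;
	rec709ToBt2020(0.5, 0.5, 0.5, r, g, b); // a mid gray
	printf("%f %f %f\n", r, g, b);          // prints 0.500000 0.500000 0.500000
	return 0;
}
```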
-
BTW, this branch (not merged yet) has a somewhat better approach for the limited colorspace: https://github.com/awawa-dev/HyperHDR/tree/422_limited_calibrator
-
Thank you for addressing this issue. I've noticed the scaling of values to the maximum, which makes sense given the basic formulas for converting YUV (limited) to RGB. In the attached log, you can see the white index value (23), corresponding to the color value 220, 220, 220. With limited-range scaling to full, even white seems to get slightly amplified, possibly leading to an overemphasis. This scaling could result in a white value somewhere between the original 219 and 255, hence the idea of adding more values for white balance.

Interestingly, the white correction values found in my setup (1.003385, 1.000000, 1.004014) do not alter green, which theoretically makes sense. However, if there's an inherent error in the green channel, perhaps this should be adjusted? The information about the green channel being slightly off from the grabber seems crucial. It could explain the slight overemphasis, especially noticeable in specific shades of blue and sometimes in white or yellow tones. This might be due to the limited-to-full scaling, where RGB values are slightly altered.

I'm still trying to fully understand the code, particularly what happens where and when, but maybe an initial RGB channel analysis after the EOTF adjustment could be beneficial. Adjusting the primary colors to minimize delta E, without over- or undersaturation, and then optimizing the rest based on these adjustments might be a way forward. I wonder if a different correction value or weighting for each channel, especially green, might be needed if the distribution is detected as incorrect.

Regarding the AI-based adjustment you mentioned, is there a link or a branch for that? Also, thanks for the limited colorspace branch. I've merged it, but the score is still higher than for FCC, and I haven't been able to delve into that method yet. I've attached the current calibration log, where I've tested using FCC and an HDR signal (not LLDV). I plan to test again with FCC specifically disabled to try this method. I've also temporarily used BT.2020 coefficients in tests to see the score, even though the color conversion would be incorrect with them. Your insights have been incredibly helpful, and I'm eager to explore any further suggestions you might have.

05:37:22.439 CALIBRATOR : LutCalibrator.cpp:939:correctionEnd() | Mean error for FCC is: 1868.309269
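For reference, this is the standard 8-bit limited-to-full expansion under discussion; a minimal sketch (not HyperHDR's actual code) showing how a limited-range 220 already lands at roughly 237 after expansion, i.e. between the original 219/220 and 255:

```cpp
#include <algorithm>
#include <cstdio>

// Illustration only: standard 8-bit limited -> full range expansion.
// Limited-range video puts black at 16 and nominal white at 235.
static int limitedToFull(int v)
{
	int full = (v - 16) * 255 / 219; // integer form of (v - 16) * 255/219
	return std::clamp(full, 0, 255);
}

int main()
{
	printf("220 -> %d\n", limitedToFull(220)); // 237
	printf("235 -> %d\n", limitedToFull(235)); // 255 (nominal white)
	return 0;
}
```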
-
Update with log:

16:14:11.029 CALIBRATOR : LutCalibrator.cpp:938:correctionEnd() | Mean error for FCC is: 1868.309269
-
Hey @awawa-dev , after more than two weeks of research, countless tests, and thanks to your amazing tip regarding hue mapping in the BT.2020 documentation, I finally figured out today why the colors are so shifted after conversion!!! Brace yourself: it's the wrong color space transformation matrix in the LUT calibrator, from BT.2020 to Rec.709 space! I actually discovered this on the first day, but since you mentioned that the green channel from the grabber could also be defective, I started looking in the wrong place and only came back to it in a roundabout way.

It all makes total sense, because it's simply the wrong color space being converted to Rec.709. The color space that actually needs to be used is P3-D65, which is currently being "sold" as BT.2020 and is of course covered almost 100% by any current TV, unlike BT.2020. This ultimately explains a lot to me. The BT.2020 color space encompasses a significantly wider color gamut, with corner coordinates, such as those of green (the primary source of the issue), positioned much further out. This defines all color coordinates further out as well, leading to their overemphasis when converting back to the smaller Rec.709 gamut. Thus, all colors that start very far out in BT.2020 are not at the correct coordinates, because it's actually the P3-D65 color space that has the correct color coordinates and with which current HDR / DV content is mastered. This may be why other users have also noticed these color shifts with HDR2SDR tone mapping in HyperHDR. Hue mappings and other adjustments unfortunately only led to further shifts of other colors, even with angle calculations of delta h and delta alpha and the color space differences and distortions...

Since I can't make a pull request in the project right now, here is the matrix I used. It's P3-D65 to Rec.709 with a Chromatic Adaptation Transform according to CMCCAT2000, at the highest resolution I could get:

```cpp
// P3-D65 -> Rec.709 (Chromatic Adaptation Transform: CMCCAT2000),
// used in place of the BT.2020 -> Rec.709 matrix
void LutCalibrator::fromBT2020toBT709(double x, double y, double z, double& r, double& g, double& b)
{
	r =  1.224940176280560 * x - 0.224940176280560 * y + 0.0 * z;
	g = -0.042056954709688 * x + 1.042056954709689 * y - 0.0 * z;
	b = -0.019637554590334 * x - 0.078636045550632 * y + 1.098273600140966 * z;
}
```

Maybe I'm completely off base, and perhaps there's a better matrix than this one, or we need another matrix within the DCI-P3 color space, but with it I was able to fix my issue, and possibly that of other users, and maybe yours too :))
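One quick way to sanity-check a matrix like this is to verify that each row sums to 1, which guarantees grays map to grays (the white-preservation property a colorspace conversion needs). A small standalone check, illustration only:

```cpp
#include <cstdio>

// Sanity check for the matrix above: each row should sum to 1 so that
// neutral grays (r == g == b) pass through unchanged.
int main()
{
	const double m[3][3] = {
		{  1.224940176280560, -0.224940176280560, 0.0 },
		{ -0.042056954709688,  1.042056954709689, 0.0 },
		{ -0.019637554590334, -0.078636045550632, 1.098273600140966 }
	};
	for (int row = 0; row < 3; ++row)
		printf("row %d sum: %.15f\n", row, m[row][0] + m[row][1] + m[row][2]);
	return 0;
}
```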
-
Oh wow, that sounds amazing, and I'm really looking forward to it :)) Thank you!

During these tests, I've also noticed a few other things, though it might be because I've tested a lot with (LL)DV content, since that's mainly what I watch, less so HDR except for games. With a few test images that I used to run all my calculations and conversions, I realized that the actual scope is larger than assumed, at least in my case. This makes total sense: I don't know how extensively you've tested, but I don't think the test patterns are sufficient in terms of values, because they should actually be full 10-bit values. With a grabber that cannot do HDR - so let's assume that anyone using the project does not use a real HDR grabber but the HDR tone mapping function ;) - all 10-bit values, as with the MS2130, are truncated/masked to 8 bits. Since I only had images from the preview for testing, which were already RGB images upscaled to the full 0-255 range in the conversion from limited YUYV to RGB, I found much higher values in the images than the ceiling definitions at calibration. I just printed out all the high values in a Python script; for HDR 10-bit, 1023 is the maximum (video data range: 4 through 1019).

```python
for x in range(870, 1024, 1):
    # 10-bit value, masked 8-bit value, stretched by 255/219, squeezed by 219/255
    print(x, (x & 0xFF), (x & 0xFF) * (255/219), (x & 0xFF) / (255/219))
```

Just to have a few values that explain some things: Y can be a maximum of 940 (limited) and is actually reduced to 876 (940 - 64), and U/V a maximum of 960, which converts to around 896 - at least if you had the original signal and converted the 10-bit limited YUV data to RGB. But we don't have that, so the limited signal is still there, only with masked values. This led me to realize that the maximum brightness value (already upscaled in my RGB image) must actually be around 200: theoretically, the maximum brightness value in the RGB environment should be about 200.274 (a small worked example follows at the end of this comment). I also compared all the values back and forth... since the 10-bit range fits 4x into 256, the whole cycle repeats and the values also go up to 255. Every step can be reproduced by adding 256, 512, or 768 to the 8-bit value, but that should be obvious ;) Therefore, for me, the maximum brightness value ends up being about, if not exactly, 200 - or, from the YUV signal, 172 for Y and 192 for U/V, i.e. about 224 after scaling.

So now I'm considering whether the calibration process should use the actual values, colors and maximum values, i.e. up to 940 / 960 etc., to determine floor and ceiling: pure white as 255, and then in the limited range 235 (theoretically), which I believe is actually represented by 940, or masked to 172 - hence I think the weighting of maximum brightness shifts a bit. Theoretically, the bottom 64 values are also black, but due to masking there are values at the bottom too, because from 256 the value starts again at 0... Unfortunately, the values can't be reconstructed, at least I don't know how. But I believe the low ceiling value of about 156 (I think it was about that for me) is too low: it increases brightness but also leads to further oversaturation / overexposure. A higher value would allow for more nuances, which reduces overall brightness a bit but also minimizes color and brightness peaks - keyword: white-only areas, yellow is too bright so everything turns white etc., because the brightness value is too high. A higher ceiling value would make the image more balanced. All just theory... I was able to test this in my setup, but I only have my setup and don't know how it behaves with other grabbers etc. So that's another input from me, noticed during all the tests, which could perhaps be considered/improved in the future.

And if I could make a huge request: would it be possible, when creating a screenshot through the UI (which is a copy of the canvas content), to also have the option to save the current raw frame, i.e. the original image captured by the grabber as it arrives in the system, as a raw file (YUV, in my case), unchanged - and maybe even send it back to the browser/UI as a download, or save it somewhere with a timestamp etc.? The original data directly from the grabber, with the original values, helped my tests in a way that already-modified RGB values could not. If that were possible, it would be amazing; it would have saved me a few nerves getting the raw data, because when HyperHDR is running, v4l2 and the grabber are blocked. In FrameDecoder, I've seen that there are screenshot possibilities, but only for testing and unfortunately not in raw format, which is actually available at that point, before processing etc. ;)

I know it's possible via the command line - that's how I did it last - but when I activate the grabber, the signal is briefly cut, and I always have to record several frames at once, then split the total file into individual frames and look for the frame that isn't black; and as soon as the grabber is ready again after the handshake, the image is recorded too. It's surely due to a setup in the chain, but it's annoying when you just want to quickly save a raw frame. If it could be done within HyperHDR via the UI - especially since you can already take screenshots of the preview, just unfortunately no raw frame directly from the grabber - it would be very helpful. Maybe even, parallel to the raw frame, a second one after HDR tone mapping, if HDR tone mapping is active.

So I think that's all the topics for now... Thanks anyway for everything you've done and considered here so far!
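A small worked version of the masking arithmetic above (an illustration mirroring the Python snippet; it assumes the grabber simply truncates 10-bit limited-range YUV to its low 8 bits, as observed on the MS2130):

```cpp
#include <cstdio>

// Nominal 10-bit limited-range peaks, truncated to 8 bits, then stretched
// by the same 255/219 factor the limited -> full RGB expansion uses.
int main()
{
	const int yMax10 = 940;                  // 10-bit nominal luma peak
	const int cMax10 = 960;                  // 10-bit nominal chroma peak
	int yMasked = yMax10 & 0xFF;             // 940 & 0xFF = 172
	int cMasked = cMax10 & 0xFF;             // 960 & 0xFF = 192
	printf("Y : %d -> %.3f\n", yMasked, yMasked * (255.0 / 219)); // ~200.274
	printf("UV: %d -> %.3f\n", cMasked, cMasked * (255.0 / 219)); // ~223.562
	return 0;
}
```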
-
Another progress update. The white color disturbance was a dead end resulting from the search for the optimal scale for the whole, which could have slightly distorted U and V. But now it has to be changed, because there is a better method: basically, we only need to determine the correct YUV coefficients, because they have a huge impact on the result. I moved the entire conversion to a new library that supports linear algebra; basically, the entire calibrator was rewritten. At the same time, I had to recall a lot from my studies (I studied mathematics/physics/computer graphics/CAD... all in one ;) but it was a long time ago).

The conversion of the BT.2020 signal to BT.709 by itself is insufficient, because there are some color distortions and I don't know for sure whether they result from the Windows mapping method / color profile or maybe from the grabber. Or maybe we are still missing something in the USB grabber processing that is not included in the ITU documentation... I'm not sure if the Rec.709 OOTF was applied to our signal. The grabber usually uses some internal LUT with which it can process ("correct") colors or apply the EOTF, because even if we capture an SDR signal, we won't receive the colors as they were rendered, at least on the MS2130. And even more so on the MS2109, where the saturation/contrast cannot be set in a completely neutral position. Unfortunately, this spoils our HDR to SDR mapping a bit.

I think the effect is very good. You can observe islands of disturbance here, e.g. for green (238, 0, 0), but this is almost certainly a disturbance introduced by the grabber and we cannot do anything about it. Fortunately, such cases are few and will be averaged out when calculating the LED colors anyway. We don't fully convert the video material from HDR to SDR to watch it; we only control the lights ;)

The log is attached; I think you will find some interesting information in it. Additionally, there are now many more test points. LCH mapping is extremely expensive in CPU resources; fortunately, we now only have to do it once for each coefficient set in the case of HDR10. Processing in LCH does not yet support mapping colors outside the sRGB gamut, and they are simply cut off at the limits (without projection onto the Rec.709 gamut), because we will not display them on 8-bit LEDs anyway. But that's how it has always worked. The calibration process uses sRGB-space colors encoded by the OS into Rec.2020, so theoretically it should be reversible, but this is not entirely the case with grabbers.
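To make "determine the correct YUV coefficients" concrete: these are the standard Kr/Kb luma coefficient pairs a limited YUV-to-RGB conversion can be built from, and building the conversion with the wrong pair visibly shifts colors. A sketch of the candidates (illustration only; names and structure are not the calibrator's actual data structures):

```cpp
#include <cstdio>

// Standard luma coefficients from ITU-R BT.601, BT.709 and BT.2020.
// Kg follows from Kr + Kg + Kb = 1.
struct YuvCoef { const char* name; double kr, kb; };

int main()
{
	const YuvCoef candidates[] = {
		{ "BT.601",  0.2990, 0.1140 },
		{ "BT.709",  0.2126, 0.0722 },
		{ "BT.2020", 0.2627, 0.0593 }
	};
	for (const auto& c : candidates)
		printf("%-8s Kr=%.4f Kg=%.4f Kb=%.4f\n",
		       c.name, c.kr, 1.0 - c.kr - c.kb, c.kb);
	return 0;
}
```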
-
Hi
-
Hey @awawa-dev,
I've reached a point where I'm unsure how to proceed and I desperately need some guidance. I'm experiencing an issue with the HDR2SDR conversion process where an excessive amount of green is introduced into the overall image. This becomes especially problematic during color correction for the LEDs, as compensating for it shifts the other colors too much and leaves a global lack of green, which otherwise seems about right. The issue is particularly noticeable in the blue color range, which ends up looking too green or turquoise instead of the intended blue.
I'm not sure if I can fully convey the technicalities here, but the problem isn't with the LEDs themselves. It's the color conversion process. The converted image from HDR to SDR already has incorrect colors. Adjustments like gamma corrections during the LUT calibration process or color / color temperature tweaks specifically for the LEDs haven't solved the issue. The balance and weighting of the colors seem off, with green either being overrepresented or another component lacking.
Importantly, this green tint issue is exclusive to the HDR2SDR converted image and not present in the normal SDR (YUV) signal, where everything seems perfectly balanced.
My setup involves a Raspberry Pi 4 with an MS2130, connected via USB 3.0 and using YUV limited to receive the pale/washed-out HDR signal. I've used both your MS2130 LUT and one created with your LUT Calibration Tool. It's important to note that your LUT is designed for the FULL YUV range, which causes my LEDs to light up even when the image is black, because my signal is limited (black having the values 16,16,16). Therefore, I had to create my own LUT to accommodate this difference. The input is from an Oppo player, and the signal is processed either as LLDV or as HDR from a Diva (I don't use the Diva's HDR2SDR - I'll explain why below). Regardless of whether the LUT is created with an LLDV or HDR signal, the excessive green issue persists. I'm wondering if this is a technical problem with the formulas or just a limitation of color reversion due to missing color information/metadata, especially since it's 8-bit instead of 10/12-bit.
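As an aside, the black-level mismatch described above is easy to see numerically; a minimal sketch (illustration only, contrasting a naive full-range LUT lookup with a limited-aware expansion):

```cpp
#include <cstdio>

// In limited-range video, black arrives as 16,16,16. A LUT built for the
// FULL range treats 16 as "above black", so the LEDs glow; a limited-aware
// expansion maps 16 to true 0 first.
int main()
{
	int limitedBlack = 16;
	int naiveFullLut = limitedBlack;                // full-range LUT sees 16 -> LEDs on
	int expanded = (limitedBlack - 16) * 255 / 219; // limited-aware: 0 -> LEDs off
	printf("full-range LUT sees %d, limited-aware sees %d\n",
	       naiveFullLut, expanded);
	return 0;
}
```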
I've attached three screenshots for reference: one without HDR2SDR active, one showing the conversion result (note the excessive green on the moon, making it look turquoise rather than blue), and a third showing the ideal result as seen on my TV.
I hope I'm not infringing on any copyright laws with these screenshots, which are from the opening logos of movies like 'Ready Player One' and 'Jurassic World 2'. If this is an issue, please inform me, and I will remove the screenshots or provide them through an appropriate channel.
without HDR2SDR:
with HDR2SDR:
optimal results:
Additionally, I'm curious if others in the community have faced similar issues, particularly with different capture devices like the Elgato HD60X or similar.
Regarding why I'm not using the Diva's HDR2SDR conversion: While many aspects of the Diva's color accuracy are satisfactory, its profiles for HDR2SDR conversion are not quite right. Your HDR2SDR conversion, despite the pale/washed-out colors, produces a much better image, except for the green/turquoise issue. It's frustrating, but true. HDFury's technical support hasn't planned any further LUT/profile adjustments, even after I explained the problem. For instance, the color red (like in PS5's Diablo IV) turns more orange than a rich, dark red, as compared to the direct TV signal. HDFury's team insists their profiles are optimal and won't be altered.
Any insights, suggestions, or explanations would be greatly appreciated. I'm seeking a solution or at least an understanding of this color conversion challenge. If more information is needed, please let me know. I'm looking forward to any assistance or advice the community can offer.
Thank you in advance!
Edit: I suspect the issue might be related to the white balance adjustment and/or the scale factor in the EOTF function. It's possible that each RGB channel might require its own specific weighting, similar to the coefficients, but I'm not entirely certain. Additionally, in addressing the potential white balance issue, a detailed examination of the gray levels in the upper range, especially around values 220,220,220 to 255,255,255, could be useful. My white balance index sits at the value 220,220,220, and the next available value is directly 255,255,255, with no intermediate options. This gap suggests that the headroom for adjusting to a higher value is limited. However, implementing finer gradations or increments between these two values, if feasible, could potentially offer a more precise way of balancing the colors (a small sketch of the idea follows). These are just my assumptions based on what I've observed and considered so far.
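A minimal sketch of what such intermediate gray test points could look like (hypothetical; this is not HyperHDR's actual test-point generator):

```cpp
#include <cstdio>

// Generate evenly spaced gray test levels between the current white-balance
// index at 220,220,220 and full white at 255,255,255.
int main()
{
	const int lo = 220, hi = 255, steps = 5;
	for (int i = 0; i <= steps; ++i)
	{
		int v = lo + (hi - lo) * i / steps; // 220, 227, 234, 241, 248, 255
		printf("gray test point: %d,%d,%d\n", v, v, v);
	}
	return 0;
}
```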