Greetings,

Some scientific cameras capture grayscale images at 10, 12, and 16 bits in addition to the common 8-bit format.
These formats are little-endian and encode each pixel into 2 bytes; the unused high bits are set to zero.
The 8-bit grayscale pixel, of course, only requires 1 byte.

The problem is that even with the newly introduced Grayscale16 QImage format (Qt >= 5.14), 10- and 12-bit-per-pixel grayscale images can't be properly displayed with QPainter without first processing the whole image.

The 8bit image is displayed just fine:
Qt Code:
painter->drawImage(
    transformedRect,
    QImage(
        (const unsigned char*) _image.memory,
        _image.width,
        _image.height,
        QImage::Format_Grayscale8));

Even the 16-bit shows no problems, decoding the char memory 2 bytes per pixel:
Qt Code:
painter->drawImage(
    transformedRect,
    QImage(
        (const unsigned char*) _image.memory,
        _image.width,
        _image.height,
        QImage::Format_Grayscale16));

However, I have no solution for what is in between. As expected, the Grayscale16 format loads the QImage just fine, but since the format's maximum brightness value of 2^16 is much higher than 2^12, and more so than 2^10, the displayed image is very dark.
Preprocessing the image pixels is not an option because it is far more time-consuming than simply handing the buffer to the Grayscale16 format.

The QImage::pixelColor method clearly shows how this decoding is done (although it is highly unlikely that this exact code path is used by QPainter, since it is not optimized at all):
Qt Code:
QColor QImage::pixelColor(int x, int y) const
{
    if (!d || x < 0 || x >= d->width || y < 0 || y >= height()) {
        qWarning("QImage::pixelColor: coordinate (%d,%d) out of range", x, y);
        return QColor();
    }
    QRgba64 c;
    const uchar *s = constScanLine(y);
    switch (d->format) {
    case Format_BGR30:
    case Format_A2BGR30_Premultiplied:
        c = qConvertA2rgb30ToRgb64<PixelOrderBGR>(reinterpret_cast<const quint32 *>(s)[x]);
        break;
    case Format_RGB30:
    case Format_A2RGB30_Premultiplied:
        c = qConvertA2rgb30ToRgb64<PixelOrderRGB>(reinterpret_cast<const quint32 *>(s)[x]);
        break;
    case Format_RGBX64:
    case Format_RGBA64:
    case Format_RGBA64_Premultiplied:
        c = reinterpret_cast<const QRgba64 *>(s)[x];
        break;
    case Format_Grayscale16: {
        quint16 v = reinterpret_cast<const quint16 *>(s)[x];
        return QColor(qRgba64(v, v, v, 0xffff));
    }
    default:
        c = QRgba64::fromArgb32(pixel(x, y));
        break;
    }
    // QColor is always unpremultiplied
    if (hasAlphaChannel() && qPixelLayouts[d->format].premultiplied)
        c = c.unpremultiplied();
    return QColor(c);
}

It's just how data is reinterpreted.
However, I don't really want to tamper with the built-in Qt classes.

The other idea I had is to alter the on-screen pixels with some sort of QPainter color-space transformation. I tried composition modes to enhance brightness and contrast but failed. It was computationally acceptable though, so maybe it is the right path to follow.

I feel like I'm missing something obvious, since 10- and 12-bit images are commonly used in medical devices.
What should I do to correctly display such images at the same speed as 16-bit images?

Thanks