I am building a QGraphicsScene that contains many thousands of small images. The scene has one direct child, a QGraphicsRectItem, that serves as the parent for all other items added to the scene. This rect item spans world coordinates (0, 0) - (50, 4000).
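For reference, the setup looks roughly like this (a sketch, not my actual code; the function name is just a placeholder):

```cpp
#include <QGraphicsRectItem>
#include <QGraphicsScene>
#include <QRectF>

// Roughly how the scene is set up: one scene with a single root rect
// item that parents everything else added later.
QGraphicsRectItem *makeRootItem(QGraphicsScene *scene)
{
    // The root rect spans the world coordinate range (0, 0) - (50, 4000).
    return scene->addRect(QRectF(0.0, 0.0, 50.0, 4000.0));
}
```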

The scene is intended to visualize many small blobs resulting from segmentation of an image (imagine many rice grains scattered on a surface). Each blob has some extent in the world coordinate space. Since the input data is discrete (for example, the x-axis is divided into 3600 units, the y-axis into 400,000 units), I create a QImage to represent each blob, where the image's width and height are the blob's extent in x units and y units (typically 10 x 20 pixels).

The pixels in each blob image are colored according to the z value of the data at that point. In 3D, the blobs look roughly cone-shaped.

I create the QImage, set the pixel colors, convert it to a QPixmap, then create the QGraphicsPixmapItem and set its offset to the upper-left coordinate of the blob's underlying data. So far so good.
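In condensed form, the per-blob pipeline looks roughly like this (a sketch: zToColor and the parameters stand in for my real data and lookup):

```cpp
#include <QColor>
#include <QGraphicsItem>
#include <QGraphicsPixmapItem>
#include <QImage>
#include <QPixmap>

// Placeholder for my real z -> color lookup.
static QColor zToColor(int x, int y)
{
    return QColor::fromHsv((x + y) % 360, 255, 255);
}

// One blob: build the QImage, color each pixel from the blob's z value,
// wrap it in a pixmap item parented to the root rect, and offset it to
// the blob's upper-left data coordinate.
QGraphicsPixmapItem *addBlob(QGraphicsItem *rootItem,
                             int widthPx, int heightPx,
                             qreal worldX, qreal worldY)
{
    QImage image(widthPx, heightPx, QImage::Format_ARGB32);
    for (int y = 0; y < heightPx; ++y)
        for (int x = 0; x < widthPx; ++x)
            image.setPixelColor(x, y, zToColor(x, y));

    QGraphicsPixmapItem *item =
        new QGraphicsPixmapItem(QPixmap::fromImage(image), rootItem);
    item->setOffset(worldX, worldY);
    return item;
}
```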

The problem is this: when the scene is displayed, the pixmaps appear at the correct positions on screen, but they are stretched horizontally, extending off to the right side of the scene.

[Image1.jpg: the pixmap items, stretched horizontally across the scene]

If I replace these image items with QGraphicsRectItem instances of the right size and position, I get the result shown in Image 2, with the small rects in the correct positions in the plot.

[Image2.jpg: the rect items at the correct sizes and positions]

So apparently some scaling is needed to tell the pixmap item how to map its width and height in pixel dimensions onto world dimensions. But I don't know how to specify that: QGraphicsItem has a setScale() method, but it takes a single factor, and I need a different scale factor for the x and y dimensions.
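Is setTransform() with a non-uniform QTransform the right route? Something like this (untested guesswork on my part):

```cpp
#include <QGraphicsPixmapItem>
#include <QTransform>

// Untested guess: scale the pixmap item so its widthPx x heightPx pixels
// cover worldWidth x worldHeight in the parent's coordinates.
void applyPerAxisScale(QGraphicsPixmapItem *item,
                       qreal worldWidth, qreal worldHeight,
                       int widthPx, int heightPx)
{
    item->setTransform(QTransform::fromScale(worldWidth / widthPx,
                                             worldHeight / heightPx));
}
```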

Can anyone give some help here?