- test-texture.c
gcc -o test-texture test-texture.c -I/usr/X11R6/include -L/usr/X11R6/lib -lX11 -lGL -lGLU
Help me fill in the question marks:
Depth:                |  8 | 16 | 24 |  32 |
Linux x86:            | ok | ok |  ? |  ok |
OSX PPC:              |  ? |  ? |  ? |  ok |
Sparc:                |  ? |  ? |  ? |   ? |
Linux x86 -> OSX PPC: |  ? |  ? |  ? |  ok |
Linux x86 -> Sparc:   |  ? |  ? |  ? |   ? |
Sparc -> Linux x86:   |  ? |  ? |  ? |   ? |
OSX PPC -> Linux x86: |  ? |  ? |  ? | BAD |
Tell me what it printed to stdout, and whether the colors in the window were right. It should look like this:

"Linux x86 -> OSX PPC" means "program is running on Linux x86, with $DISPLAY pointing at an X server running on OSX PPC".
"Depth 32" means 4 bytes per pixel.
"Depth 24" means 3 bytes per pixel (this is less common.) If your video driver supports it at all, you might need to turn it on by adding DefaultFbBpp 24 and/or Option "Pixmap" "24" to xorg.conf or XF86Config.
"Depth 8" means TrueColor, not PseudoColor / colormapped. Visual "TrueColor" will probably be needed.
What's the big idea?
- I've got this image data that came from the X server (via XGetImage), and it's in some arbitrary format. It might be in any bit order, byte order, and number of bits, depending on the whims of the architecture and video driver. I want to feed it to OpenGL for use as a texture.
Currently, I do this by iterating over the X data and constructing 32-bit RGBA native-endian data to hand to glTexImage2D or gluBuild2DMipmaps. But, copying/reformatting all that data is slow. (Slow enough to matter, it turns out.) So instead, I'd like to just hand it to OpenGL directly, and say, "hey GL, this data is 16 bit BGR bigendian, deal."
I'm trying to figure out how to express that to OpenGL. I'm having a hard time, because it's very poorly documented. Thus, this test program.
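For illustration only, here is roughly what "just hand it to GL" would look like for one common case: a 32-bpp TrueColor image, already in the client's byte order, with masks 00FF0000 / 0000FF00 / 000000FF. The function name is made up and the constants are just one plausible mapping, not necessarily what the test program does.

  /* Sketch: assumes GL 1.2 headers (for GL_BGRA and the _REV packed type). */
  #include <GL/gl.h>
  #include <X11/Xlib.h>

  static void upload_ximage (XImage *xi)
  {
    glPixelStorei (GL_UNPACK_ALIGNMENT, 4);       /* XImage rows are padded */
    glPixelStorei (GL_UNPACK_ROW_LENGTH,
                   xi->bytes_per_line / (xi->bits_per_pixel / 8));
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA,
                  xi->width, xi->height, 0,
                  GL_BGRA,                        /* layout of the X data */
                  GL_UNSIGNED_INT_8_8_8_8_REV,    /* ... and its packing  */
                  xi->data);
  }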
Update: Please try the new version here.
Damn, I went to compile this on my Alpha (it compiles fine, btw) and then remembered that the video card doesn't work in X at the moment; we're still trying to sort out some bugs here and there (using a friend's port of Fedora Core 2).
Sorry about that...
Benjamin
Apple X11 on OS X.
Root: 1024 x 768 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 00FF0000 0000FF00 000000FF
Endian: Big
Visual: 0x36 TrueColor
Texture: BGRA UNSIGNED_INT_8_8_8_8
And in 16-(well, 15-)bit mode:
Root: 1024 x 768 x 15
Image: 640 x 480 x 15
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 1280
bpp = 16
rgb = 00007C00 000003E0 0000001F
Endian: Big
Visual: 0x36 TrueColor
Texture: RGB UNSIGNED_SHORT_5_6_5
But I ran it on OSX X11 in 32 bit and it looked fine here! Same text output as yours.
The Mac I tried it on gets "illegal instruction" from any GL program if X is started after switching to "thousands of colors", and X won't start at all in "256 colors".
I get the same results as zetawoof here, with "X11 1.0 - XFree86 4.3.0" on "Mac OS X 10.3.7". Runs fine in millions and thousands, although the colors are messed up.
In 256 colors X11 seems to hang, but going into X11 Preferences and setting "256 colors" under Output gets X11 to run. However, I just get this output from test-texture:
Root: 1280 x 854 x 8
Image: 640 x 480 x 8
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 640
bpp = 8
rgb = 00000000 00000000 00000000
error: couldn't find a visual
I get the same yellow result as everyone else here (Powerbook w/Rage 128, Mac OS 10.3.7, XFree86 4.3.0).
Changing line 434 to GL_UNSIGNED_INT_8_8_8_8_REV fixes it.
stdout from working version:
I get EXACTLY the same on Mac OS X 10.3.7 (7S215) running Apple's X11 1.0 - XFree86 4.3.0 (always nice to include those) with my oh-so-powerful Rage 128.
Mmmm, CGA
LinuxPPC (client & server) ATI Rage 128 card, X.org
Same colors as above.
Root: 1024 x 768 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 00FF0000 0000FF00 000000FF
Endian: Big
Visual: 0x23 TrueColor
Texture: BGRA UNSIGNED_INT_8_8_8_8
Okay, it compiles fine under IRIX, with the following output:
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 000000FF 0000FF00 00FF0000
Endian: Big
Visual: 0x39 TrueColor
Texture: BGRA UNSIGNED_INT_8_8_8_8
Unfortunately, the colours of the window are quite wrong: the white is yellow, the red text is black, and the green text is blue.
If need be, I could take a screenshot for you.
Benjamin
Can you try other values for "type" and "format" (around line 434) until you find one that works? "man glDrawPixels" lists the possible values.
(Sorry, mis-pasted the first time.) Compiled on IRIX, displayed on Linux x86-64 at 32bpp (per xdpyinfo); it screwed up the colours (screenshot) and printed this:
Changed to use UNSIGNED_INT_8_8_8_8 and it displayed correctly (screenshot) and printed this:
Jwz, sorry, I just saw your new post; I will run that on IRIX when I get home this evening. Unfortunately, such are the wonders of time zones.
B
Regarding documentation, maybe you already worked this out, but I believe it's:
OpenGL 1.2 and later: all the packed pixel formats are supported (or it's a bug)
OpenGL 1.1 and earlier: if the GL_EXT_packed_pixels extension is defined, then only the packed pixel formats that don't end in "_REV" are supported.
OpenGL 1.1 and earlier, without EXT_packed_pixels: no packed pixel formats
Of course, due to driver bugs etc., maybe you still need to collect the data. But you could try checking this stuff via glGetString(GL_VERSION) and glGetString(GL_EXTENSIONS), unless you know everybody really just always has at least 1.2.
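For what it's worth, a minimal sketch of that check might look like this (it assumes a current GL context; packed_pixels_ok is a made-up helper, and the crude atof parse of the version string is deliberate):

  #include <stdlib.h>
  #include <string.h>
  #include <GL/gl.h>

  /* Nonzero if packed-pixel formats are usable at all; *rev_ok says whether
     the _REV variants are too, per the version/extension rules above. */
  static int packed_pixels_ok (int *rev_ok)
  {
    const char *version = (const char *) glGetString (GL_VERSION);
    const char *exts    = (const char *) glGetString (GL_EXTENSIONS);

    *rev_ok = 0;
    if (version && atof (version) >= 1.2) { *rev_ok = 1; return 1; }
    if (exts && strstr (exts, "GL_EXT_packed_pixels")) return 1;
    return 0;
  }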
I guess folks who don't have 1.2 won't be able to compile it at all, since the _REV symbols won't be defined?
Hey, good point. Although it's technically possible to compile it with different headers from what the server actually supports, I guess that doesn't seem likely in practice.
While I'm here...
It seems like with less than 24 bits, the OpenGL driver is just going to convert in software as well, especially if it has to generate mipmaps. Although, since you're asking for any old format (see below), the driver might be able to skip converting and use the data directly if it happened to be internally compatible. I guess it's also possible that the driver's converter is significantly faster than your own.
As I understand it, this comment is wrong; '3' is just a deprecated way of saying "GL_RGB", meaning "I want 3 components", and it doesn't imply anything about the size. "use 3 bytes internally" would be GL_RGB8.
Right, my thinking was that the format that X gives me back by default is likely to be the native format of the video driver, and therefore, is going to be the format that GL would be converting everything to internally anyway.
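For reference, the distinction being made here looks like this (w, h, and data are placeholders):

  /* Both upload the same RGB data; only the requested internal storage differs. */
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB,   /* "3 components, size unspecified" */
                w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB8,  /* "3 components, 8 bits each"      */
                w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);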
Stock Solaris with X doesn't seem to know about glx.h/glu.h, so that doesn't go anywhere either. Tried on vaguely sane Solaris 8 and 9 on SPARC. After installing OpenGL 1.3 on Solaris 8/SPARC, it's all kinds of confused:
24 bit display
"this text is black" is ok
"this text is red" is green
"this text is green" is red
"this text is blue" is black
First time lucky fixing it, though: I removed all the conditionals, and:
type = GL_UNSIGNED_INT_8_8_8_8_REV;
format = GL_BGRA;
gives the correct colours for everything. For completeness, the original code had "this is white" in yellow.
7:28am drake /home/jamie/test %gcc -o test-texture test-texture.c -I/usr/X11R6/include -L/usr/X11R6/lib -lX11 -lGL -lGLU
7:28am drake /home/jamie/test %./test-texture
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 000000FF 0000FF00 00FF0000
Endian: Big
Visual: 0x47 TrueColor
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 1 (X_CreateWindow)
Serial number of failed request: 6187
Current serial number in output stream: 6191
7:28am drake /home/jamie/test %uname -aR
IRIX64 drake 6.5 6.5.25m 07080049 IP35
7:29am drake /home/jamie/test %hinv
2 800 MHZ IP35 Processors
CPU: MIPS R16000 Processor Chip Revision: 2.2
FPU: MIPS R16010 Floating Point Chip Revision: 2.2
Main memory size: 2048 Mbytes
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Secondary unified instruction/data cache size: 4 Mbytes
Integral SCSI controller 2: Version IDE (ATA/ATAPI) IOC4
CDROM: unit 0 on SCSI controller 2
Integral SCSI controller 0: Version QL12160, low voltage differential
Disk drive: unit 1 on SCSI controller 0
Integral SCSI controller 1: Version QL12160, low voltage differential
IOC3/IOC4 serial port: tty3
IOC3/IOC4 serial port: tty4
Graphics board: V12
Integral Gigabit Ethernet: tg0, module 001c01, PCI bus 1 slot 4
Iris Audio Processor: version MAD revision 1, number 1
IOC3/IOC4 external interrupts: 1
I can display it to a remote Exceed session on my laptop, but it looks like what zetawoof posted above.
Hmm, so far the only form of depth 24 I've encountered was 4 bytes per pixel, RGB with no alpha. For x86, the Intel i740 AGP 2x video card would be an excellent alpha-incapable test unit.
There is a (small) chance some of your BGR and BGRA problems come from not checking the server's supported OpenGL version and extensions. GL_BGRA was promoted from the extension GL_EXT_bgra in version 1.2.
It's quite common for the headers available on the system not to match the server's implementation, one way or the other. For example, Linux ships with GL 1.3 headers regardless of host capabilities.
Solaris 9 (aka SunOS 5.9, aka Solaris 2.9, aka whatever the hell else their marketing droids decide to name it on a whim) on an UltraSPARC-IIIi workstation...
The color is all gimped up, with green being red, blue being green, red being black and white being yellow. For your amusement, I have also supplied a screenshot:
I'm running Apple X11 on OS 10.3.7.
Worked fine in "Millions of colors" and "Thousands of colors":
Root: 1152 x 768 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 00FF0000 0000FF00 000000FF
Endian: Little
Visual: 0x36 TrueColor
Xlib: connection to ":0.0" refused by server
Xlib: No protocol specified
Texture: BGRA UNSIGNED_INT_8_8_8_8
Not so much with the working in "256 colors"...
The terminal output is the same, but the X-forwarded image is wacky. Unfortunately I can't give you a screenshot, because the colors come out differently in the screen capture!
The upper-left and lower-left backgrounds are white. The upper-right and lower-right backgrounds are blue.
The first stripe fades from blue to cyan. The second stripe fades from blue to magenta. The third stripe is constant blue. The fourth stripe fades from blue to white.
On the left, the text is blue, cyan, magenta, and blue. On the right, the text is white, cyan, magenta, blue (which is invisible on the background).
I think that OS X wants BGRA if it's coming from a Linux box. I've worked on a robot teleoperation interface the last two summers, where the robots are P3s running Linux, and I'm developing on my PBG4/400 running OS X. The robots have cameras and capture cards, and broadcast frames as arrays of unsigned chars.
I've used both GTK+ and SDL as the basis of the interface in two different versions, and in both cases I had to reverse the ordering of the color bytes but leave the alpha byte where it is.
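For illustration, that kind of reorder is just an in-place R/B swap that leaves the alpha byte alone (function and argument names are mine, not code from either project):

  /* Swap R and B in 4-byte-per-pixel data, leaving G and A where they are. */
  static void swap_red_blue (unsigned char *pixels, unsigned long npixels)
  {
    unsigned long i;
    for (i = 0; i < npixels; i++)
      {
        unsigned char t = pixels[i*4 + 0];
        pixels[i*4 + 0] = pixels[i*4 + 2];
        pixels[i*4 + 2] = t;
      }
  }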
feynman:~/src mstyne$ uname -a
Darwin feynman.hq.voxel.net 7.7.0 Darwin Kernel Version 7.7.0: Sun Nov 7 16:06:51 PST 2004; root:xnu/xnu-517.9.5.obj~1/RELEASE_PPC Power Macintosh powerpc
feynman:~/src mstyne$ ./test-texture
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 00FF0000 0000FF00 000000FF
Endian: Big
Visual: 0x36 TrueColor
Texture: BGRA UNSIGNED_INT_8_8_8_8
Here's the screenshot on Solaris 8. The output from test-texture is:
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 000000FF 0000FF00 00FF0000
Endian: Big
Visual: 0x2c TrueColor
Texture: BGRA UNSIGNED_INT_8_8_8_8
I had to change line 434 to GL_UNSIGNED_INT_8_8_8_8_REV and line 435 to GL_RGBA, to come up with this screen, and this output:
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 000000FF 0000FF00 00FF0000
Endian: Big
Visual: 0x2c TrueColor
Texture: RGBA UNSIGNED_INT_8_8_8_8_REV
Here is the text output:
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = LSBFirst
bits = LSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 00FF0000 0000FF00 000000FF
Endian: Little
Visual: 0x27 TrueColor
disabling TCL support
Texture: BGRA UNSIGNED_INT_8_8_8_8_REV
The colors in the graphic looked correct.
Displaying from a Solaris 2.9 (04/04) box to a Debian sarge box:
Root: 1280 x 1024 x 24
Image: 640 x 480 x 24
format = ZPixmap
bytes = LSBFirst
bits = LSBFirst
pad = 32
bpl = 2560
bpp = 32
rgb = 00FF0000 0000FF00 000000FF
Endian: Big
Visual: 0x27 TrueColor
Texture: BGRA UNSIGNED_INT_8_8_8_8_REV
The colors of the image are mixed up:
The white background is yellow.
The red text is green.
The green text is red.
The blue text is black.
The same goes for the four color bars.
Rotated colors from an X app displayed to my desktop are not unusual; Veritas 3.4 has scrambled colors.
:wq
I tried linux/arm, but there are no GLX extensions on that X server (and I assume no libs).

client: linux/ppc; server: linux/ppc PASS
Root: 1024 x 768 x 16
Image: 640 x 480 x 16
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 1280
bpp = 16
rgb = 0000F800 000007E0 0000001F
Endian: Big
Visual: 0x23 TrueColor
Texture: RGB UNSIGNED_SHORT_5_6_5

client: linux/x86; server: linux/ppc FAIL
Root: 1024 x 768 x 16
Image: 640 x 480 x 16
format = ZPixmap
bytes = MSBFirst
bits = MSBFirst
pad = 32
bpl = 1280
bpp = 16
rgb = 0000F800 000007E0 0000001F
Endian: Little
Visual: 0x23 TrueColor
Texture: RGB UNSIGNED_SHORT_5_6_5

client: linux/ppc; server: linux/x86 FAIL
Root: 1280 x 1024 x 16
Image: 640 x 480 x 16
format = ZPixmap
bytes = LSBFirst
bits = LSBFirst
pad = 32
bpl = 1280
bpp = 16
rgb = 0000F800 000007E0 0000001F
Endian: Big
Visual: 0x23 TrueColor
Texture: RGB UNSIGNED_SHORT_5_6_5
[caveat: it's been a long time since I've looked at this stuff]
I don't think this will work for any 8 or 16 bit formats, at the very least. *_REV means a format-defined bit shuffle in these cases, so it won't undo the endian difference.
For 24 bit, will the X server ever add a pad byte? (I thought not.) OGL only understands 1-, 2- and 4-byte formats, so I can't see how that can ever work. I also think you need to interrogate the 32-bit X server pixel format to determine the R,G,B,A ordering. Assuming BGRA or ARGB may be incorrect, as previous posts have implied. ABGR is also supported by OGL and, I would imagine, is thus a possibility for the X server too.
see: http://www.opengl.org/documentation/specs/version1.2/1.2specs/packed_pixels.txt
Maybe the best bet is to just do the byteswap by hand in cases where the endianness is wrong, but the pixel format is otherwise native, and fall back to old code for all the really hard cases (24 bit, really odd pixel formats)? AFAICT, the mismatch always implies that the screensaver is being run remotely, so the speed will be dominated by the network overhead rather than the byteswap.
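A sketch of that by-hand swap for the 32-bit case (the names are made up; this is just the generic word swap, not anything from test-texture.c):

  #include <stdint.h>
  #include <stddef.h>

  /* Reverse the byte order of each 32-bit pixel in place, for when the
     client and server disagree on endianness but the layout is otherwise
     what OpenGL expects. */
  static void byteswap32 (uint32_t *pixels, size_t npixels)
  {
    size_t i;
    for (i = 0; i < npixels; i++)
      {
        uint32_t p = pixels[i];
        pixels[i] = (p >> 24) | ((p >> 8) & 0x0000FF00) |
                    ((p << 8) & 0x00FF0000) | (p << 24);
      }
  }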
I'm happy doing it the "slow" way in weirdo cases like remote displays. My goal here is just to avoid having to reformat a lot of data in the normal, default case. It seems likely that X's default packing and OpenGL's default packing are going to be the same when they're both running on the same box, right? So I need to A) handle all such "normal" cases, and B) detect when we're in a "weird" case. So far, I don't fully understand how to do either.
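One way to make that normal-vs-weird decision, as a sketch (the helper name and the "only 16 or 32 bpp count as normal" policy are assumptions, but the fields tested come straight from the XImage):

  #include <X11/Xlib.h>

  /* Nonzero if the XImage looks directly usable: server byte order matches
     this client, and pixels are a GL-friendly 2 or 4 bytes wide. */
  static int ximage_is_normal (XImage *xi)
  {
    union { int i; char c[sizeof (int)]; } u;
    int host_lsbfirst;
    u.i = 1;
    host_lsbfirst = (u.c[0] != 0);

    if (xi->byte_order != (host_lsbfirst ? LSBFirst : MSBFirst))
      return 0;   /* endian mismatch: remote display, take the slow path */
    if (xi->bits_per_pixel != 16 && xi->bits_per_pixel != 32)
      return 0;   /* packed 24-bpp or odder: slow path */
    return 1;
  }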
I think the key here is checking the color masks from the XImage:
This works for me under OSX, for 32 and 16 bit formats. It doesn't really change your code all that much, but it *does* mean that you can catch RGB555 format pixels. Also, GL_UNPACK_SWAP_BYTES does work for byte swapping any 16 or 32 bit packed pixel formats.
However, it doesn't work for an OSX server, linux client.
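The code that comment refers to isn't reproduced above; purely as an illustration of the mask-checking idea, a lookup like the one below is my guess at the approach, not the commenter's actual code:

  #include <GL/gl.h>
  #include <X11/Xlib.h>

  /* Map the XImage's channel masks to a GL format/type pair.
     Returns 0 for layouts we don't recognize (caller falls back). */
  static int format_from_masks (XImage *xi, GLenum *format, GLenum *type)
  {
    if (xi->bits_per_pixel == 32 && xi->red_mask == 0x00FF0000 &&
        xi->green_mask == 0x0000FF00 && xi->blue_mask == 0x000000FF)
      { *format = GL_BGRA; *type = GL_UNSIGNED_INT_8_8_8_8_REV; return 1; }

    if (xi->bits_per_pixel == 16 && xi->red_mask == 0xF800 &&
        xi->green_mask == 0x07E0 && xi->blue_mask == 0x001F)
      { *format = GL_RGB; *type = GL_UNSIGNED_SHORT_5_6_5; return 1; }

    if (xi->bits_per_pixel == 16 && xi->red_mask == 0x7C00 &&
        xi->green_mask == 0x03E0 && xi->blue_mask == 0x001F)     /* RGB555 */
      { *format = GL_BGRA; *type = GL_UNSIGNED_SHORT_1_5_5_5_REV; return 1; }

    return 0;
  }

Combined with glPixelStorei (GL_UNPACK_SWAP_BYTES, GL_TRUE) when the byte orders differ, as the comment says, that covers the 16- and 32-bit packed cases.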
Converted to a table like yours, the results so far are... anomalous:
I wonder if there's a bit of a problem here:
If the R, G, B components in your image are swapped, all that happens is that the 'This is red' text is blue, and vice versa. The image still appears superficially correct. This fooled me a couple of times.
The second 16 bit entry is almost surely wrong, as the sizes of the bitfields don't match the sizes of the masks.
This may or may not be slow (and, depending upon where the problems you're having lie, might also not work), but you could try:
GLXPixmap glXCreateGLXPixmap( Display *dpy, XVisualInfo *vis, Pixmap pixmap);
To create a pixmap with the same depth/visual as your source image that you can blit to via X, and read from using glReadPixels. That way you might be able to convince OpenGL to make all the choices for you (you just specify the visual of the source data, and the format/type of the data to read back and then blat to a texture).
I can't make that work; I get a BadMatch when I try to glXMakeCurrent (dpy, glx_pixmap, glxc). I'm using the same XVisualInfo that I used with glXCreateContext.
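For reference, the glXCreateGLXPixmap route being suggested would go roughly like this, as a sketch under the assumption that glXMakeCurrent on the pixmap succeeds (which, per the reply above, it may not); all names are placeholders:

  #include <GL/glx.h>
  #include <X11/Xlib.h>

  /* Let GL do the conversion: draw the X image into a GLX pixmap,
     then read it back in whatever format/type we actually want. */
  static void read_via_glx_pixmap (Display *dpy, XVisualInfo *vis,
                                   GLXContext glxc, XImage *xi,
                                   unsigned char *rgba_out)
  {
    Pixmap pm = XCreatePixmap (dpy, RootWindow (dpy, vis->screen),
                               xi->width, xi->height, vis->depth);
    GLXPixmap glxpm = glXCreateGLXPixmap (dpy, vis, pm);
    GC gc = XCreateGC (dpy, pm, 0, 0);

    XPutImage (dpy, pm, gc, xi, 0, 0, 0, 0, xi->width, xi->height);
    glXMakeCurrent (dpy, glxpm, glxc);   /* the step that got BadMatch above */
    glReadPixels (0, 0, xi->width, xi->height,
                  GL_RGBA, GL_UNSIGNED_BYTE, rgba_out);

    glXDestroyGLXPixmap (dpy, glxpm);
    XFreeGC (dpy, gc);
    XFreePixmap (dpy, pm);
  }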