OpenGL debugging, part 2

Ok, new version of the test program:

    test-texture.c
    gcc -o test-texture test-texture.c -I/usr/X11R6/include -L/usr/X11R6/lib -lX11 -lGL -lGLU

This will run through various permutations. Click on the "Bad" button if it looks wrong, and it will go on to the next test. Click on the "Good" button when it looks right, and it will print out some text.

If you get a good result, send me the text it printed. If you get no good result, tell me that. No need to post screen shots.

If at all possible, try it on a variety of machines (local and remote, both ways) and in a variety of bit depths.

Thanks!

Update: Ok, here are the results so far. I can't make any sense of this! Assuming people gave me good data, it seems just totally random what works and what doesn't.

The columns are: bits per pixel, GL pixel format, GL pixel type, machine architecture, and the GL version/renderer strings.

    32 BGRA INT_8_8_8_8_REV ppc 1.1 ATI-1.3.26 
    32 BGRA INT_8_8_8_8_REV ppc 1.5 ATI-1.3.36 
    32 BGRA INT_8_8_8_8_REV ppc 1.2 (1.5 Mesa 6.2.1) 
    16 BGRA SHORT_4_4_4_4_REV ppc 1.3 NVIDIA-1.3.36 
    16 BGRA SHORT_1_5_5_5_REV ppc 1.5 NVIDIA-1.3.36 
    32 RGBA INT_8_8_8_8_REV ppc 1.1 ATI-1.3.26 
    32 RGBA INT_8_8_8_8_REV sun4u 1.3 Sun OpenGL 1.3 
    32 BGRA INT_8_8_8_8 irix 1.5.2 NVIDIA 66.29 
    32 BGRA INT_8_8_8_8 sun4u 1.5.2 NVIDIA 66.29 
    32 RGBA BYTE parisc 1.3 Mesa 4.0.4 
    32 RGBA INT_8_8_8_8 irix64 1.5.2 NVIDIA 66.29 
    32 RGBA INT_8_8_8_8 sun4u 1.5.2 NVIDIA 66.29 
    32 BGRA INT_8_8_8_8_REV i686 1.3 Mesa 4.0.4 
    32 BGRA INT_8_8_8_8_REV i686 1.2 Mesa 4.0.4 
    32 BGRA INT_8_8_8_8_REV i686 1.2 Mesa 6.1 
    32 RGBA INT_8_8_8_8_REV i686 1.2 Mesa 4.0.4 
    32 RGBA INT_8_8_8_8_REV i686 1.4.1 NVIDIA 53.36 
    32 RGBA INT_8_8_8_8_REV i686 1.5.1 NVIDIA 61.11 
    32 RGBA INT_8_8_8_8_REV i686 1.5.2 NVIDIA 66.29 
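
(For reference, here's a rough sketch of where fields like the architecture and the GL version/renderer strings in those lines would come from; the real test-texture.c may gather and print them differently.)

    /* Rough sketch of where the reported fields come from; not the actual
       reporting code.  glGetString() needs a current GL context. */
    #include <stdio.h>
    #include <sys/utsname.h>
    #include <GL/gl.h>

    static void
    print_report_line (int bpp, const char *gl_format, const char *gl_type)
    {
      struct utsname u;
      const char *version;
      uname (&u);
      version = (const char *) glGetString (GL_VERSION);
      printf ("%d %s %s %s %s\n",
              bpp, gl_format, gl_type,
              u.machine,                   /* "i686", "ppc", "sun4u", ... */
              version ? version : "?");    /* GL version + driver string */
    }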

help me debug some X11+OpenGL stuff

Hey, if you have access to X11 running on non-x86 platforms, help me out by running a test program and telling me what it did.

    test-texture.c
    gcc -o test-texture test-texture.c -I/usr/X11R6/include -L/usr/X11R6/lib -lX11 -lGL -lGLU


Help me fill in the question marks:

                                  8    16   24   32
      Linux x86:                  ok   ok   ?    ok
      OSX PPC:                    ?    ?    ?    ok
      Sparc:                      ?    ?    ?    ?
      Linux x86 -> OSX PPC:       ?    ?    ?    ok
      Linux x86 -> Sparc:         ?    ?    ?    ?
      Sparc -> Linux x86:         ?    ?    ?    ?
      OSX PPC -> Linux x86:       ?    ?    ?    BAD

    Tell me what it printed to stdout, and whether the colors in the window were right.

    "Linux x86 -> OSX PPC" means "program is running on Linux x86, with $DISPLAY pointing at an X server running on OSX PPC".

    "Depth 32" means 4 bytes per pixel.

    "Depth 24" means 3 bytes per pixel (this is less common.) If your video driver supports it at all, you might need to turn it on by adding DefaultFbBpp 24 and/or Option "Pixmap" "24" to xorg.conf or XF86Config.

    "Depth 8" means TrueColor, not PseudoColor / colormapped. Visual "TrueColor" will probably be needed.

What's the big idea?

    I've got this image data that came from the X server (via XGetImage), and it's in some arbitrary format. It might be in any bit order, byte order, and number of bits, depending on the whims of the architecture and video driver. I want to feed it to OpenGL for use as a texture.
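
    Concretely, the data shows up looking something like this (a sketch with made-up arguments, just to show which XImage fields describe the pixel layout):

        /* Sketch: grab some pixels and look at the layout the server hands
           back.  The interesting fields are depth, bits_per_pixel,
           byte_order, and the RGB masks. */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <X11/Xutil.h>

        static void
        describe_image (Display *dpy, Window win, int w, int h)
        {
          XImage *xim = XGetImage (dpy, win, 0, 0, w, h, ~0L, ZPixmap);
          printf ("depth %d, bits/pixel %d, byte order %s\n",
                  xim->depth, xim->bits_per_pixel,
                  xim->byte_order == MSBFirst ? "MSBFirst" : "LSBFirst");
          printf ("masks: R=0x%08lx G=0x%08lx B=0x%08lx\n",
                  xim->red_mask, xim->green_mask, xim->blue_mask);
          /* xim->data holds the pixels in whatever format that describes. */
          XDestroyImage (xim);
        }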

    Currently, I do this by iterating over the X data and constructing 32-bit RGBA native-endian data to hand to glTexImage2D or gluBuild2DMipmaps. But, copying/reformatting all that data is slow. (Slow enough to matter, it turns out.) So instead, I'd like to just hand it to OpenGL directly, and say, "hey GL, this data is 16 bit BGR bigendian, deal."
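
    The slow path is roughly the following (a simplified sketch, not the actual code; the shifts assume a common 0x00RRGGBB pixel layout):

        /* Sketch of the slow path: unpack every pixel into native 32-bit
           RGBA, then hand the copy to GL.  XGetPixel does the bit/byte
           order math, which is exactly the per-pixel work I want to skip. */
        #include <stdlib.h>
        #include <X11/Xlib.h>
        #include <X11/Xutil.h>
        #include <GL/gl.h>

        static void
        texture_from_ximage_slowly (XImage *xim)
        {
          unsigned char *rgba = malloc (xim->width * xim->height * 4);
          int x, y, i = 0;
          for (y = 0; y < xim->height; y++)
            for (x = 0; x < xim->width; x++)
              {
                unsigned long p = XGetPixel (xim, x, y);
                rgba[i++] = (p >> 16) & 0xFF;  /* R */
                rgba[i++] = (p >> 8)  & 0xFF;  /* G */
                rgba[i++] =  p        & 0xFF;  /* B */
                rgba[i++] = 0xFF;              /* A */
              }
          glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, xim->width, xim->height,
                        0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
          free (rgba);
        }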

    I'm trying to figure out how to express that to OpenGL. I'm having a hard time, because it's very poorly documented. Thus, this test program.
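
    What I'd like to be able to write instead is something along these lines. This is a guess: the unpack settings and the choice of format/type tokens from the XImage fields are exactly what this test program is trying to pin down.

        /* Sketch of the fast path: no conversion, just describe the XImage's
           layout to GL.  The format/type tokens below are examples only;
           which combination is actually correct on a given server/client
           pair is the open question. */
        #include <X11/Xlib.h>
        #include <GL/gl.h>

        static void
        texture_from_ximage_directly (XImage *xim)
        {
          union { int i; char c; } e = { 1 };
          int host_is_little_endian = e.c;

          /* Row stride in pixels, in case bytes_per_line includes padding. */
          glPixelStorei (GL_UNPACK_ROW_LENGTH,
                         xim->bytes_per_line / (xim->bits_per_pixel / 8));
          glPixelStorei (GL_UNPACK_ALIGNMENT, 1);

          /* Have GL swap bytes when the image's endianness differs from the
             CPU's -- whether that is the whole story for remote displays is
             part of what I'm trying to find out. */
          glPixelStorei (GL_UNPACK_SWAP_BYTES,
                         (xim->byte_order == LSBFirst) != host_is_little_endian);

          /* For example: 32bpp with masks R=0xFF0000 G=0xFF00 B=0xFF might be
             GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV; 16bpp 5-6-5 might be
             GL_RGB + GL_UNSIGNED_SHORT_5_6_5.  (GL 1.2 tokens.) */
          glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, xim->width, xim->height,
                        0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, xim->data);
        }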

Update: Please try the new version above, in "OpenGL debugging, part 2".
