OpenGL question

I want to blend a pixel as fmod(a+b+c,1) (wrapped) instead of as min(a+b+c,1) (truncated) or as a/3+b/3+c/3 (scaled). Is there any way to do that in an OpenGL 1.3-ish world?

That means I can't use GLES2 shaders. "Hack the pixels by hand and create a new full-screen texture 30x/second" is also a non-starter.

Previously.


15 Responses:

  1. Sean Barrett says:

    I believe the answer is "no". (Since this is a negative answer, it's a bit hard to prove, so here are my bona fides: I am a professional graphics programmer and my primary expertise is fixed-function (non-shader) OpenGL.)

    There is some ambiguity here: are you talking about blending in the texture unit, or blending in the framebuffer? Since the latter only nominally supports two inputs (plus alpha), it sounds more like the former, but we don't usually call that "blending" -- and I would guess you are more likely to be concerned with the latter anyway. Either way I think the answer is no, but I'm having to spread my thinking over both cases, so I'm not positive I've thought of everything.

    The stencil buffer does allow adding 1 modulo 256 (or whatever power of 2 your stencil depth provides) with GL_EXT_stencil_wrap, which I would hope would be widely deployed, but it's only one channel, the value is fixed per draw call (no Gouraud shading), and it would take 8 more passes to propagate that back to a color channel, so that's probably not really what you're looking for.
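
    For concreteness, a minimal sketch of that stencil-wrap setup, assuming GL_EXT_stencil_wrap is present and an 8-bit stencil buffer (draw_quad() is a hypothetical helper):

        /* Each covering quad bumps the stencil value, wrapping
           instead of saturating at the maximum. */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_INCR_WRAP_EXT, GL_INCR_WRAP_EXT);
        draw_quad();   /* stencil now holds coverage count mod 256 */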

    But other than that, I think you're right out. The blend unit is the least flexible part of the pipeline, even with all the programmable shading going on. Note that you can't even just do a plain subtract in the blend unit. But even if you were talking about the texture unit rather than the blend unit, I don't think it was supported even by any of the fixed-function extensions (like the 'register combiner' extension).

    Another potential trick to explore, if you didn't need that much range pre-mod: divide all the values by four, so that effectively the framebuffer's [0,1) range stands in for [0,4). Then you'd post-mod somehow, but I haven't thought of any good way--if you could replicate a color channel into the z-buffer it would be possible to do more stencil trickery, but I don't think there's any good way to do that either. I can't make this work, but it seems more promising than anything else.

    • jwz says:

      Thanks. And bummer.

      I suppose I'm talking about the texture unit, since what I'm dealing with are several overlapping grayscale quads, and I want their brightness to add and then wrap, instead of being truncated or scaled with alpha-blending.

      On another topic, is there a simple trick to colorize a grayscale image using OpenGL? E.g., pretending the image is HSV and altering H and S while leaving V alone, or are the RGB channels the only ones that can be manipulated in isolation? Putting a color-plus-alpha quad on top of a grayscale scene almost works, but either the color or the intensity is washed out, depending on the alpha.

      • Sean Barrett says:

        The straightforward way to colorize is just to multiply. E.g. draw bright colors (normalized so one channel is 255) with glBlendFunc(GL_DST_COLOR, GL_ZERO). This tends to darken everything, so you can double the brightness while drawing using glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR).
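
        In code, the brightness-doubling version looks something like this (draw_fullscreen_quad() being a hypothetical helper):

            /* dst = tint*dst + dst*tint = 2*tint*dst: colorizes
               and doubles brightness in a single pass. */
            glEnable(GL_BLEND);
            glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR);
            glColor3f(1.0f, 0.6f, 0.2f);   /* bright tint, max channel = 1.0 */
            draw_fullscreen_quad();
            glDisable(GL_BLEND);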

        (And overlapping quads means you were talking about the blend unit, yeah.)

    • Tom Seddon says:

      These fixed function pipeline problems infuriate me, but I find them strangely compelling nonetheless. Off the top of my head, so it could be nonsense...

      I suppose, once you have the 0-4 values in the frame buffer, you can draw a full-screen untextured quad with colour (63,63,63,63) using the AND logic operation, to strip off the top 2 bits.
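
      Untested, but I'd expect it to look something like this (draw_fullscreen_quad() is a hypothetical helper):

          /* AND every channel with 0x3F to strip the top two bits. */
          glEnable(GL_COLOR_LOGIC_OP);   /* RGBA logic ops need GL 1.1+ */
          glLogicOp(GL_AND);
          glColor4ub(63, 63, 63, 63);
          draw_fullscreen_quad();
          glDisable(GL_COLOR_LOGIC_OP);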

      Then you could use glCopyPixels to copy the screen on top of itself. (As far as I can tell, this is actually allowed.) Set the red, green and blue pixel transfer scales to 4, to scale the values up. I've never actually used glCopyPixels, though, so who knows what will happen in practice. Maybe a combination of glReadPixels and glDrawPixels would also do the trick, though I don't ever recall hearing anybody say that glReadPixels is especially quick.
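
      The copy itself might look roughly like this (also untested; width and height are the window size, and the raster position assumes a projection that maps (0,0) to the bottom-left corner):

          /* Scale each color channel by 4 during the copy. */
          glPixelTransferf(GL_RED_SCALE,   4.0f);
          glPixelTransferf(GL_GREEN_SCALE, 4.0f);
          glPixelTransferf(GL_BLUE_SCALE,  4.0f);
          glRasterPos2i(0, 0);
          glCopyPixels(0, 0, width, height, GL_COLOR);
          /* Restore the default scales afterward. */
          glPixelTransferf(GL_RED_SCALE,   1.0f);
          glPixelTransferf(GL_GREEN_SCALE, 1.0f);
          glPixelTransferf(GL_BLUE_SCALE,  1.0f);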

      If you have a quick way of getting the frame buffer into a texture, another option would be to draw it using the texture environment in COMBINE mode, using the MODULATE operation to scale up the colours. You could put the scale in the texture environment colour.
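
      A sketch of that setup, assuming fb_texture already holds the framebuffer contents (one wrinkle: the environment colour is clamped to [0,1], so scaling up probably has to go through GL_RGB_SCALE, which allows 1, 2, or 4):

          glBindTexture(GL_TEXTURE_2D, fb_texture);
          glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
          glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
          glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
          glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);
          glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE, 4.0f);   /* 1, 2, or 4 */
          glColor3f(1.0f, 1.0f, 1.0f);
          draw_fullscreen_quad();   /* hypothetical helper */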

      (You could use non-pow2 scales too; both the pixel transfer and combine approaches support arbitrary scales. Instead of using the logic operation to clamp the frame buffer values, you could use the MINMAX blend extension. This might be safer with different bit depths, too, I suppose.)

      • Tom Seddon says:

        (BTW as far as I can tell from the 1.5 spec, this stuff is all available in OpenGL 1.3.)

      • Sean Barrett says:

        Ah geez, I had totally forgotten that logic ops exist, since I've never once used them.

        Note though that OpenGL 1.1 is required; from the GL 2.1 docs: "Color index logical operations are always supported. RGBA logical operations are supported only if the GL version is 1.1 or greater."

        You can scale up the colors by drawing white and using glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR), which computes white*dest+white*dest, thus doubling the color in the framebuffer. Then do it a second time to get the 4x.
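
        That is, something like (draw_fullscreen_quad() again being a hypothetical helper):

            glEnable(GL_BLEND);
            glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR);
            glColor3f(1.0f, 1.0f, 1.0f);   /* white */
            draw_fullscreen_quad();        /* dst = 2*dst */
            draw_fullscreen_quad();        /* dst = 4*dst */
            glDisable(GL_BLEND);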

        This is all assuming the limit of 4x "overdraw" is sufficient.

        • jwz says:

          I hadn't played with glLogicOp before; this is interesting! Drawing a quad over the top of the scene with a value of around 0.5-1.0 using GL_AND_REVERSE does something kind of like adjusting the contrast. But not quite.
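
          That experiment, more or less (draw_fullscreen_quad() being a hypothetical helper):

              glEnable(GL_COLOR_LOGIC_OP);
              glLogicOp(GL_AND_REVERSE);      /* result = src & ~dst */
              glColor3f(0.75f, 0.75f, 0.75f); /* somewhere in 0.5-1.0 */
              draw_fullscreen_quad();
              glDisable(GL_COLOR_LOGIC_OP);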

  2. The ARB assembly shading language is approximately this vintage.
    Good luck porting that to work on ES though.

    Is there any reason you can't upgrade to at least 2.0? That is completely backwards compatible with fixed function 1.3.

    Though even there, the methods are going to approximate "intelligent hack."

    • jwz says:

      XScreenSaver runs on every god damned computer in the world. I do not drop support for systems that are in use lightly, and yet, I do not know how to answer the question, "are there systems out there that people are still using that support OpenGL 1.3 that do not also support OpenGL 2.0."

      And good luck finding a list comparing the functions that are supported in OpenGL 2.0 versus OpenGL 1.3, to know whether your suggestion even makes sense. Good luck even figuring out what the initial release date of OpenGL 2.0 was. Their historical documentation is an utter atrocity. I went into detail about this mess in the post linked under "Previously," above. When I generated my grid of supported functions per version, I had to copy text out of the indexes in PDFs and parse it, and most of those weren't even the original specs; they were 5-years-later "revisions," so I have no idea whether they were accurate from a backward-compatibility perspective. And I had to just hope that being mentioned in the index implied "supported". It was a horror show.

      From my notes in a comment at the top of xscreensaver/hacks/glx/jwzgles.c, here's a summary of what I've been able to piece together:

      OpenGL    1.0  1992
      OpenGL    1.1  1997  (improved texture support)
      OpenGL    1.2  1998  (nothing interesting)
      OpenGL    1.3  2001  (multisampling, cubemaps)
      OpenGL    1.4  2002  (added auto-mipmapping)
      OpenGLES  1.0  2003  (deprecated 80% of the language; fork of OpenGL 1.3)
      OpenGL    1.5  2003  (added VBOs)
      OpenGLES  1.1  2004  (fork of OpenGL 1.5)
      OpenGL    2.0  2004  (a political quagmire)
      OpenGLES  2.0  2007  (deprecated 95% of the language; fork of OpenGL 2.0)
      OpenGL    3.0  2008  (added FBOs, VAOs, deprecated 60% of the language)

      So, honestly, I don't even know what you're suggesting with any precision.

      I'm guessing what you're saying is, "why don't you just stop using OpenGL 1.x and start requiring this completely different system called OpenGLES 1.0 that happens to have a similar name but not much else."

      • No. ES1 is also (unless my memory lies) fixed function.

        ARB assembly shaders are an extension to OpenGL 1.something which gives you a relatively low-level but capable shading system.

        OpenGL 2 and ES2 came along later and introduced GLSL, the high level shading language which we all know and maybe love.

        Even with this, it would still involve reading the previous framebuffer as a texture, because the framebuffer blending hardware is still fixed function. The only other method involves compute shaders... and, well, that's GL4.
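
        As a sketch of what that might look like (assuming GL_ARB_fragment_program, with the previous frame already copied into texture unit 0; the glGenProgramsARB/glBindProgramARB/glProgramStringARB entry points have to be fetched through the extension mechanism, and strlen needs <string.h>):

            /* Fragment program: sample the previous frame and the
               incoming layer, add them, and keep the fractional
               part -- i.e. fmod(a+b, 1). */
            static const char *wrap_fp =
                "!!ARBfp1.0\n"
                "TEMP prev, cur, sum;\n"
                "TEX prev, fragment.texcoord[0], texture[0], 2D;\n"
                "TEX cur,  fragment.texcoord[1], texture[1], 2D;\n"
                "ADD sum, prev, cur;\n"
                "FRC result.color, sum;\n"
                "END\n";

            GLuint prog;
            glGenProgramsARB(1, &prog);
            glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
            glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB,
                               GL_PROGRAM_FORMAT_ASCII_ARB,
                               (GLsizei) strlen(wrap_fp), wrap_fp);
            glEnable(GL_FRAGMENT_PROGRAM_ARB);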

        But would this really require dropping support for older systems? Couldn't you just have that one hack require higher?

        For reference, some crappy Intels aside (and those only on Windows), everything post-GeForce FX should do GL2 (which is roughly equal to D3D9).

        • jwz says:

          But would this really require dropping support for older systems? Couldn't you just have that one hack require higher?

          That means having that one hack either fail to compile or fail to run on some unknown number of systems. I hate going down that path. It tends to be a maintenance headache.

          For reference, some crappy Intels aside (and those only on Windows), everything post-GeForce FX should do GL2 (which is roughly equal to D3D9).

          You seem to be saying "GeForce FX" as if it means something more significant than a single product from a single vendor. I do not speak your code. I also have no idea what D3D9 means.

          There are a ton of truly ancient Linux desktops out there in the world, and I've got the bug report emails to prove it. Also, there are an increasing number of people trying to run Linux desktops on what used to be considered embedded systems, like Raspberry Pis, and from the reports I get, it's like they've actually gone backward in time, with their goofy computers with GHz CPUs but no GPU at all.

          I don't make a habit of keeping up on the state of the "art" in Windows-and-Linux-oriented commodity hardware -- one of the many fringe benefits of drinking the Apple kool-aid -- so when I get bug reports saying "Hey there's this insane graphics glitch on this hobbyist board I'm using as if it's an actual computer", I usually just sigh and say "try a different video driver."

          • D3D9 = Direct3D 9. It's Windows stuff. Direct3D is often used on Windows rather than OpenGL, for no good reason other than its often better support (historically, at least; I'm not sure about the modern day). That's the extent of my knowledge of it, other than that it flips the coordinate system in some funny way.

          • Zygo says:

            The Pi has a GPU, but it only does OpenGL ES, and nobody's bothered to do GLX-on-GLES yet (or will, ever). So the GPU is there, and the GPU driver is there, but you can't use either from X11.

            Ironically, the GPU vendors on embedded platforms tend to start out by writing Linux GPU drivers, then with great effort port them to Android and Windows CE and everything else. Only X11-on-Linux doesn't get a working GPU driver out of this. Maybe Weston/Wayland will fix that some day...or maybe the Sun will run out of photons first.

            The Pi will run the 2D X11 xscreensavers just fine...ish. The experience is remarkably similar to my 486 in 1993, if my old 486 could connect to my 47" TV set and pump out 1080p frames over HDMI while powered from one of the TV's USB ports.

        • Nvidia GeForce FX (circa 2003) and anything ATi R300-based (circa 2002) support it.

          I would have no clue about the state of support in software renderers - my guess is "LOL".

          • The only two cases JWZ cares about are OS X and *nix. Every x86 Mac does at least GL2.1; and on the X11 side of things, Mesa does GL2.1 in software.

            Every iOS device you can compile for these days and still support the currently shipping hardware also supports GLES2. Essentially, you can use GLSL shaders everywhere.

            Except... GLSL ES and GLSL differ in just enough ways to be annoying...