XScreenSaver 4.20

XScreenSaver 4.20 out now! Fun stuff:

  • glslideshow should be a lot faster now (that's partly what all those OpenGL test programs I was having you run were about.)
  • carousel is a new slideshow program that draws 7 images at once that is chock full of the awesome.
  • starwars uses real fonts instead of that crappy line-segment font it had been using; it looks a zillion times better now.
  • bsod -only nvidia (watch it change)
  • sonar shows ping times.
  • substrate -circle-percent 100
  • boing -scanlines
  • boing -smooth -lighting
  • boxfit (click in the window to restart early)
  • fiberlamp (move the window around)

Help wanted:

    I'm really happy with how carousel turned out, but there's some weird alpha-blending glitch that I can't figure out. Click and drag with the mouse until the text label of one of the images is on top of another image, so that you can see the text on a non-black background. Sometimes it looks right, but sometimes there are black boxes around the characters. Now here's the weird part: the same piece of text will have boxes when over one image, but not when over another! You can drag it left and right and see the boxes appear and disappear depending on what's underneath it.

    So, somehow alpha-blending isn't working right, but I can't figure out why. It's very confusing that it's intermittent like this: I'd expect it to always work or always not!

    Maybe I'm creating the texture wrong? I'm using INTENSITY / LUMINANCE / BYTE and handing it 8bpp data (where the values are either 0 or 255.) I'm trying to say, "create a texture whose R, G, B, and A are all the same value." texfont.c line 81.
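
    For the record, the call in question is shaped roughly like this (a sketch, not the literal texfont.c code; width, height, and pixels stand in for the real variables):

      /* Build an intensity texture from 8bpp data (values 0 or 255).
         GL_INTENSITY replicates the single channel into R, G, B, and A. */
      glPixelStorei (GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed */
      gluBuild2DMipmaps (GL_TEXTURE_2D,
                         GL_INTENSITY,          /* internal format          */
                         width, height,
                         GL_LUMINANCE,          /* format of incoming data  */
                         GL_UNSIGNED_BYTE,      /* type of incoming data    */
                         pixels);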


    Thanks everybody, I fixed carousel by drawing the images, then making a second pass to draw the text. (It'll be in the next release.)

    When dealing with alpha in OpenGL, you have to render your scene back to front, from the point of view of the observer. I knew this at some point, but I forgot.

    You don't have to jump through that hoop with normal polygons, because GL has a depth buffer. That means that for every pixel on the screen, it stores not only its color, but also its depth (distance from the observer.) So when you tell it to draw a "far" pixel, and there's a "near" pixel on the screen already, it doesn't overwrite it, and life is easy.

    But, apparently alpha blending was an afterthought or something, so GL doesn't store the transparency of the pixels it writes, only the color. That is, when combining two pixels, it knows the transparency level of the pixel being written, but not of the pixel that's already there. So you can draw "50% transparent" on top of "100% opaque", but not the other way around (which, in a sane world, would give you the same result, assuming the transparent pixel was closer to the observer.)
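
    (For reference, the usual workaround is: draw everything opaque first, then draw the transparent stuff sorted far-to-near with depth writes turned off. A rough sketch, where the two draw_* calls are made-up stand-ins for whatever emits the geometry:)

      glEnable (GL_DEPTH_TEST);
      draw_opaque_stuff ();                     /* depth buffer sorts these out  */

      glEnable (GL_BLEND);
      glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
      glDepthMask (GL_FALSE);                   /* still test depth, don't write */
      draw_transparent_stuff_back_to_front ();  /* sorted by hand, far to near   */
      glDepthMask (GL_TRUE);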

    Not storing alpha in the color buffer seems pretty goofy to me, since they are almost certainly using 32bpp instead of packed 24bpp anyway, meaning there's an unused byte just sitting there...


48 Responses:

  1. baconmonkey says:

    Sorry I can't be more exact than this, but I believe that the draw order can do things similar to what you describe.

    Sometimes I've seen games suffer from that. For example, a tree with alpha-blended leaves in front of another tree: the areas that are partial-alpha see through the second tree to the background, while the non-poly areas show the second tree just fine. One game's editor had some order flags you could tweak to try and get around that on a per-entity basis.

    Granted, this may very well have been an artifact of optimization in the game intended to reduce the number of overlapping polys rendered at a single pixel. i.e. it says "I have found the front-most Decorative, Non-Landscape mesh, I will stop further Decorative render tests for this pixel as this obscures all behind it."

    So perhaps try altering the draw order so that the ones closest to the camera are drawn last.

    • jjk says:

      It seems that there is a definite pattern to the misdrawn text; the caption for picture N looks OK when drawn over pic N-x, but blocky over pic N+x (or perhaps the other way round; I didn't check the source).

      A quick hack might be to draw all pictures first, then all captions. Or perhaps messing with the depth buffer / depth test may be required.

    • ajaxxx says:

      my understanding is that GL more or less requires back-to-front rendering when doing alpha blending, due to the way GL munges together the concepts of fragment alpha and fragment coverage.

    • duskwuff says:

      For what it's worth, Tranquility (tqworld.com) has this issue too.

    • nothings says:

      Re the games:

      Besides alpha blending, there's also alpha test, which simply suppresses drawing the pixel if the alpha doesn't meet some constraint (for example, isn't opaque). So it's possible to use a non-sorting pipeline that looks something like this:
      1. draw the transparent object with z-writes disabled, alpha-test disabled;
      2. draw the transparent object again with z-writes enabled, alpha-test enabled, set to nearly entirely opaque.

      This results in the nearly-entirely-opaque parts of transparent objects updating the z-buffer, preventing transparent objects behind them from drawing. But it means the partially-transparent regions look inconsistent with the opaque regions.
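
      In GL terms that trick looks roughly like this (a sketch; draw_object() is a made-up stand-in):

        /* Pass 1: blend normally, leave the depth buffer alone. */
        glEnable (GL_BLEND);
        glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask (GL_FALSE);
        glDisable (GL_ALPHA_TEST);
        draw_object ();

        /* Pass 2: redraw, writing depth only where the object is nearly opaque. */
        glDepthMask (GL_TRUE);
        glEnable (GL_ALPHA_TEST);
        glAlphaFunc (GL_GEQUAL, 0.95);          /* "nearly entirely opaque" */
        draw_object ();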

  2. go_team_ari says:

    Oh, and starwars doesn't seem to work for me in 4.20 anymore. It quits after a:
    starwars: gluBuild2DMipmaps (1024 x 1024) error: invalid value

    Here's my glxinfo output if you need it.

    • go_team_ari says:

      If I comment out line #849 of starwars.c, it works fine.

      • jwz says:

        You mean "glEnable(GL_TEXTURE_2D)"? Turning that off should cause it to not actually draw characters, only boxes.

        I'm afraid I don't have any ideas on this one. I assume you can use 1024x1024 textures in other programs (e.g., run "flipscreen3d" in an at-least-1024-square window)? Maybe it doesn't like GL_INTENSITY at line 82 of texfont.c? Though that seems unlikely for an SGI.
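
        (A quick way to check whether a 1024x1024 GL_INTENSITY texture is even legal on that visual, assuming a current GL context; this is just a diagnostic sketch, not anything in texfont.c:)

          GLint max_size = 0, probe_w = 0;
          glGetIntegerv (GL_MAX_TEXTURE_SIZE, &max_size);
          /* Probe the format with the proxy texture, without allocating anything. */
          glTexImage2D (GL_PROXY_TEXTURE_2D, 0, GL_INTENSITY, 1024, 1024, 0,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
          glGetTexLevelParameteriv (GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,
                                    &probe_w);
          fprintf (stderr, "max texture size %d; 1024x1024 intensity %s\n",
                   max_size, (probe_w ? "ok" : "rejected"));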

        • go_team_ari says:

          You mean "glEnable(GL_TEXTURE_2D)"?
          That was the one I commented out, but interestingly enough, that workaround doesn't seem to be working anymore, so scratch that.

          Also, I'm not actually on an SGI, I'm using Xorg + DRI CVS. Which reminds me, if MergedFB (somewhat emulated Xinerama in Xorg) is turned on for my video card but a second head isn't actually configured, xscreensaver (not the hacks) thinks that I actually have two screens in the space of my primary (and only) display, so it starts up two hacks, each taking up about half my screen. I haven't actually looked into why this is happening, but only xscreensaver and firefox seem to act weird when having MergedFB enabled. I put a screenshot up of it here.

          But anyway.. i ran ltrace on starwars, and here's the last bit of the output before starwars exits:

          glGenTextures(1, 0x8233cf0, 0x8234ac8, 662, 765) = 0
          glBindTexture(3553, 1, 0x8234ac8, 662, 765) = 0x8096dd8
          XGetImage(0x805faa0, 0x3a00008, 0, 0, 704) = 0x8234ad8
          calloc(1024, 1025) = 0xb53a4008
          gluBuild2DMipmaps(3553, 32841, 1024, 1024, 6409) = 0
          sprintf("gluBuild2DMipmaps (1024 x 1024)", "%s (%d x %d)", "gluBuild2DMipmaps", 1024, 1024) = 31
          glGetError(0, 0, 0, 0xbfffed98, 0) = 1281
          fprintf(0x45f59d40, "%s: %s error: %s\n", "starwars", "gluBuild2DMipmaps (1024 x 1024)", "invalid value"starwars: gluBuild2DMipmaps (1024 x 1024) error: invalid value
          ) = 63
          exit(1 <unfinished ...>
          +++ exited (status 1) +++

          • jwz says:

            That MergedFB behavior is fucked up, but it has to be a bug along the lines of: MergedFB is reporting two Xinerama monitors when there is only one.

            1. build and run xscreensaver/driver/test-xinerama;
            2. note that it prints a pack of lies;
            3. report a bug to the MergedFB folks.

            • go_team_ari says:

              Well, it does seem to be reporting that screen 0 is the correct size:

              test-xinerama: 11:50:30: XineramaQueryExtension(dpy, ...) ==> 0, 0
              test-xinerama: 11:50:30: XineramaIsActive(dpy) ==> True
              test-xinerama: 11:50:30: XineramaQueryVersion(dpy, ...) ==> 1, 1
              test-xinerama: 11:50:30: 2 Xinerama screens
              test-xinerama: 11:50:30: screen 0: 1400x1050+0+0
              test-xinerama: 11:50:30: screen 1: 640x1050+760+0

              • jwz says:

                It says you have two monitors. You don't have two monitors. It says they occupy the same physical space. You don't live in a cubist house. (I'm guessing.)

          • jwz says:

            ltrace is useless, compile with -g and run under gdb.

            But, that error message comes from the call to gluBuild2DMipmaps in bitmap_to_texture. I guess it might be a stale error code from earlier, though; try this:

              diff -u -1 -r1.5 texfont.c
              --- texfont.c 23 Feb 2005 09:20:02 -0000 1.5
              +++ texfont.c 24 Feb 2005 06:52:01 -0000
              @@ -121,2 +121,4 @@

              + check_gl_error ("stale error");
              if (!res || !*res) abort();
              @@ -210,2 +212,3 @@
              glBindTexture (GL_TEXTURE_2D, data->texid);
              + check_gl_error ("load_texture_font");
              data->tex_width = w;
              @@ -392,2 +395,4 @@
              + check_gl_error ("print_texture_string");
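
            (For anyone playing along at home: check_gl_error is just a wrapper around glGetError, roughly like the following; the real one in xscreensaver's GL utilities may differ in detail.)

              /* needs <stdio.h>, <stdlib.h>, <GL/glu.h> */
              static void
              check_gl_error (const char *where)
              {
                GLenum e = glGetError ();               /* pops one error flag */
                if (e == GL_NO_ERROR) return;
                fprintf (stderr, "%s error: %s\n", where, gluErrorString (e));
                exit (1);
              }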

  3. parkrrrr says:

    While it's true that just storing the alpha along with the color would work in the simple case you're seeing, consider the case of three objects. How would it deal with the case where you insert an opaque pixel between two partially-transparent ones? Suddenly, you can't just use a z-buffer anymore; you have to know RGBAZ for each individual contribution to the result so far. Worst case, that could be a list as large as the list of polygons, for each pixel.

    • mackys says:

      Don't we already have massive CPUs, 16 pixel pipelines, and half a gig of RAM in our $400 graphics cards to deal with this kind of problem? I really don't think this is that hard a nut to crack...

      Here's an obvious first pass at the solution: As triangles are coming into the card, the card checks if they're opaque (no alpha, or alpha = 1). If so, it renders them as normal, respecting the Z buffer. If they're transparent, it projects them into screen space, and compares their bounding boxes to the bounding boxes of any other alpha enabled polygons. This comparison is done by two greater-than and two less-than comparisons, all of which can be performed in parallel with each other, and are easily done entirely in hardware (four comparators plus an AND gate) at a truly screaming rate of speed. For the moment, I'll skip the other obvious optimizations (like cutting the screen into ninths or sixteenths) to reduce the number of comparisons you have to do.

      99% of the time, the current triangle won't overlap any others. But if it does, the card performs a BSP operation on the offending triangle. That is, the current triangle's plane is used to cut the other one into two pieces, and we now have three polygons that are guaranteed not to overlap and thus be unambiguously depth-sortable. (This idea blatantly stolen from DOOM 1 - thank you John Carmack.) After this step is done, all resulting triangles are binary (and/or by hardware) sorted into a list that's kept sorted by depth, so it'll only take log2(3) time to insert them.

      When triangles stop coming in for the frame, all your opaque polygons have been rendered already, and the Z buffer contains, at each pixel, the forward-most opaque polygon's depth value. At this point, you render the depth-sorted transparent polygons off the list (still respecting the Z buffer, so you ignore alpha-blended pixels behind the farthest-forward opaque pixel) and you're done.

      I ran into this exact problem with alpha a year or so back while coding a small game engine. I wondered then why the hardware wasn't doing this for me. And I still wonder now. Avoiding costly depth sorts in software is a bleedingly obvious way to keep the framerate high, and the algorithm to make it happen is easily pulled out of any text book on computer graphics. I'm sure the guys at NVidia and ATI know how to do this... the question is why haven't they?

      • nothings says:

        It's a total mismatch for the way GPUs actually work. In fact, it's been done, and it wasn't a big success. Google for 'powerVR' and 'tiling'.

        • mackys says:

          The keyword to search with on Google is "kyro", which was the name of a short-lived graphics card series by the company PowerVR.

          Frankly, I'm not seeing what the problem is here. This kind of architecture seems to drastically reduce overdraw - which is still a big problem for modern graphics cards. It also drastically reduces memory bandwidth consumption on the bus, which is a huge win, at least until PCI Express becomes common. And you get very cheap fullscreen anti-aliasing with this approach - again, still a huge problem for even the latest GeForce 6800 Ultra. It seems to dominate as you go to higher resolutions and 32-bit color. It's also significantly cheaper than the competition's cards.

          This doesn't seem like the kind of technology that failed due to lack of merit. It appears to me PowerVR lost because they insisted on a nonstandard graphics API (which sucked ass), and failed to incorporate hardware T&L (stupid decision), but not because their architecture was poor.

          If you want to argue that current PCs have crappy busses, and need more memory bandwidth, I'll certainly agree. The big trend right now is ever increasing amounts of *fast* RAM on the GPU board to combat shitty PC bus bandwidth. Only SGI seems to have gotten the bus right.

          In short, I don't get it. Why is this approach a total mismatch for the way GPUs actually work?

          • gen_witt says:

            Overdraw is not really a problem for modern graphics cards, as most modern game engines are multipass, and start with a Z-fill pass. That is, drawing only to the Z buffer; it's so common that the GF6 architecture is twice as fast when doing only Z and Stencil operations. Because the Z buffer contains the final Z values, only those fragments contributing to the final image are drawn. Yes, we create a lot of extra fragments in successive passes, but they are culled with only a single Z test, which uses little memory bandwidth (the major expense).
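
            A Z-fill pass is basically this (a sketch; draw_scene() is a stand-in for the engine's geometry submission):

              /* Pass 1: lay down final depth values only, no color writes. */
              glEnable (GL_DEPTH_TEST);
              glDepthFunc (GL_LESS);
              glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
              draw_scene ();

              /* Pass 2: shade only fragments that match the depth already there. */
              glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
              glDepthMask (GL_FALSE);                 /* depth is already final */
              glDepthFunc (GL_LEQUAL);
              draw_scene ();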

            The PowerVR architecture does _not_ give you cheap full screen AA because the major cost in doing fs AA is the extra texture lookups (which are memory bandwidth limited), which cannot be avoided. Seeing as a modern engine will do the same number of texture hits as the PowerVR would, it would be no faster at fs AA.

            Sorting, especially in hardware, is expensive, but more importantly it causes a pipeline stall. In order to do it in hardware we have to stall the pipeline until we get all of the triangles, then we have to sort, and then we can restart the fragment end of the pipeline. So in the PowerVR architecture (assuming it did hardware T&L) we'd have 3 separate operations that have to wait on one another instead of a smooth continuous assembly line (pipeline) like we have currently.

            Memory (lots of it, very fast, and in tightly coupled multiway configurations) has to be on the video card. Look at it this way: a single texture lookup requires accessing 8 pixels (trilinear filtering, aka GL_LINEAR_MIPMAP_LINEAR), each weighing 4 bytes. There are 16 pixel pipelines, each with 4 texture units; that's 2KiB/clock just to feed the texture units, never mind the Z or stencil reads, and the writes. At 400MHz, the 35.2GiB/sec of theoretical memory bandwidth the GF6 has doesn't hold a candle to what the texture units can use, and the numbers get worse with anisotropic filtering. Of course the video card caches, and cheats, but it's still essentially memory bandwidth limited, and we aren't going to see 35GiB/sec going across any system bus anytime soon.
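
            Spelling that arithmetic out with the figures above (back-of-the-envelope, ignoring caches):

              \[ 16 \times 4 \times 8 \times 4\ \mathrm{bytes} = 2048\ \mathrm{bytes/clock} = 2\,\mathrm{KiB/clock} \]
              \[ 2\,\mathrm{KiB/clock} \times 400\,\mathrm{MHz} \approx 763\,\mathrm{GiB/sec} \gg 35.2\,\mathrm{GiB/sec} \]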

            Also, a modern game engine spits out a lot of the polygons in front-to-back order anyway; asking it to output some things back to front generally isn't too much of a burden. Sorting belongs in software, where the program has much more information about the visibility culling algorithms, which often give ordering information.

            • nothings says:

              To be fair, sorting in software still sucks; a magic solution would be great. For example, stencil shadows don't work with transparent receivers (same problem as the original response to jwz: only one depth value in the frame buffer); shadow depth maps don't work for transparent casters (only one depth value in the shadow buffer), etc.

              Solving this automagically in hardware would be great. And maybe it'll happen after another 100x overall performance increase when further performance increases aren't that useful. But yeah, right now it's not the right trade-off, for the reasons you said. Having to retain all polygons sucks.

              Also, tiling architectures actually interoperate poorly with multipass, where you're intentionally relying on effects happening in a certain order. This could be addressed, but it would require some sort of API change to express the extra sort info, which means it wouldn't just work with existing code, which could be problematic.

              • jwz says:

                To be fair, sorting in software still sucks.


                Just about all of my code allows the observer to be at any arbitrary point relative to the scene, so the only way to figure out what to draw first would be to... well, I'm not even really sure! Make a list of every polygon of every object and do a dummy run through the modelview and projection matrixes, I guess?

                Maybe there's some simple way to do that that I don't know, but that's always sounded like such a PITA that I haven't even considered it.

                Here's an example of where "DWIM alpha" would be nice: putting transparent "electron shells" in molecule, that is, overlay "molecule -no-bonds" on top of the default display. I think the only way to accomplish that would be to decompose the spheres into triangles and sort those triangles by depth, right? Ugh.

                • nothings says:

                  Yeah, the game industry has huge swaths of lore about how to sort efficiently, since before hardware accelerators sorting was way faster than z-buffering. But mostly it involves having some higher-level structure to things, e.g. you have a "world" made out of lots of "objects", and the objects move around the world but are static (don't deform or anything, so you can precompute their sorting), and _don't_ interpenetrate each other.

                  So yeah, overlapping intersecting spheres: nope. You just have to sort every triangle--and you really don't want to be doing O(NlogN) work for N=triangles--and if they interpenetrate, which is the whole point here, you'll still have artifacts at the interpenetration point. To handle interpenetration, you have to actually cut objects up into pieces, and sphere intersection, man, I just can't even begin to think how to do it. You're better off just writing a freaking ray tracer.

                • mackys says:

                  Just about all of my code allows the observer to be at any arbitrary point relative to the scene, so the only way to figure out what to draw first would be to... well, I'm not even really sure! Make a list of every polygon of every object and do a dummy run through the modelview and projection matrixes, I guess?

                  Maybe there's some simple way to do that that I don't know, but that's always sounded like such a PITA that I haven't even considered it.

                  For objects whose bounding boxes don't overlap each other in 3D space, you can sort them into correct depth order pretty easily by doing a 3D dot product on the view direction vector and a vector from the eyepoint to any point in the bounding box. (Which you can get easily by subtracting the view position vector from the object's position vector.) The resulting value is proportional (but not equal) to the object's distance from the view point. The vector subtract takes three subtraction ops, and the dot product calculation takes three multiplies and two adds per object, so this costs you eight floating-point ops per object per frame. (I'm assuming you don't waste the cycles to normalize the "view pos to object" vector, because that isn't actually necessary.)

                  Once you have an array of values proportional to the viewer-relative depth of each object, you can sort and render the objects in depth order easily. Assuming there aren't a million objects on the screen, this works surprisingly well and quickly most of the time. And it handles transparent objects correctly too. This is the workaround I came up with to handle alpha objects correctly in the proto game engine I wrote. It's trivial code to write, but if for some reason you want a copy of the function to do the calculation, email me.
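
                  (A sketch of that in C, with made-up struct names; the real thing would use whatever vector types the engine already has:)

                    #include <stdlib.h>                 /* qsort */

                    typedef struct { float x, y, z; } vec3;
                    typedef struct { vec3 pos; float depth; } obj_t;

                    /* Farthest object first, so back-to-front blending works. */
                    static int cmp_far_to_near (const void *a, const void *b)
                    {
                      float da = ((const obj_t *) a)->depth;
                      float db = ((const obj_t *) b)->depth;
                      return (da < db) - (da > db);
                    }

                    void sort_for_blending (obj_t *objs, int n, vec3 eye, vec3 dir)
                    {
                      int i;
                      for (i = 0; i < n; i++) {
                        vec3 d;                          /* eye-to-object vector */
                        d.x = objs[i].pos.x - eye.x;
                        d.y = objs[i].pos.y - eye.y;
                        d.z = objs[i].pos.z - eye.z;
                        /* dot with the view direction: proportional to depth */
                        objs[i].depth = d.x*dir.x + d.y*dir.y + d.z*dir.z;
                      }
                      qsort (objs, n, sizeof *objs, cmp_far_to_near);
                    }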

                  For objects that do overlap each other in 3D space, you're pretty well fucked. Z buffer will save your bacon with opaque objects, but with transparent objects there doesn't seem to be a good solution as of yet.

  4. jjk says:

    Sigh... screen_to_ximage() is gone completely. I was using that for a little private hack.
    (Textures are useless for that hack, as it needs to get at individual pixels.)

    • jwz says:

      If you're not using the data as a texture, then load_random_image/XGetImage should be all you need, I think?

      What's your hack do?

      • jjk says:

        I was planning to look at load_random_image() next, so thanks for confirming that I'm on the right track.

        The hack loads an image and randomly samples it (continuously), drawing a small textured blob at the sampled location in the sampled color. The result looks like a pointillistic painting, but constantly changes. It's a bit like the GIMPressionist plug-in for the GIMP.

        Oh, and it also has a Matrix mode :-)

  5. gen_witt says:

    OpenGL stores both source (incoming) and destination (current) alpha, and you can use them in a myriad of ways (all that glBlendFunc stuff). You can often get acceptable, albeit incorrect, results by rasterizing all your opaque polygons, turning off Z-buffer writing (but keeping the test on), and then writing out the rest of your polygons out of order. The real problem, as alluded to above, is that the compositing integral (summation) cannot be computed by comparing only two random fragments at a time.

    • nothings says:

      seconded. you may have to request a destination alpha channel, and as previously noted, having one alpha and one depth value in the framebuffer is insufficient to draw an object that is behind several things that have already been drawn.
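
      Requesting a destination-alpha visual up front looks roughly like this under GLX (a sketch; dpy stands for your Display pointer):

        int attrs[] = { GLX_RGBA,
                        GLX_RED_SIZE, 1, GLX_GREEN_SIZE, 1, GLX_BLUE_SIZE, 1,
                        GLX_ALPHA_SIZE, 1,        /* the destination-alpha part */
                        GLX_DEPTH_SIZE, 1,
                        GLX_DOUBLEBUFFER,
                        None };
        XVisualInfo *vi = glXChooseVisual (dpy, DefaultScreen (dpy), attrs);
        /* with that, blend factors like GL_DST_ALPHA / GL_ONE_MINUS_DST_ALPHA
           actually have something to read from */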

  6. The depth sorting problem isn't just for OpenGL. I'm quite sure DirectX and any other real-time pipelined graphics system has this shortcoming.

    As a number of people have pointed out, correctly handling translucent polygons transparently (so to speak) is very hard. The idea you are suggesting would probably work a lot of the time for small things like antialiased lines where artifacts wouldn't be obvious. I could imagine having multiple frame buffers that get composited on buffer swap, but to have enough for four layers of overlap, you'd need 15 buffers and of course you'd need a lot of logic to send those fragments to the right buffer.

    The best solution I've seen is depth peeling. With this technique you only see the first n layers of overlapped transparency, but other than that it is pixel perfect, even with intersecting polygons. It works something like this:

    1. render everything, writing alpha to the buffer, but not doing alpha blending,
    2. render to another buffer the objects that are obscured by exactly one layer,
    3. render to a third buffer the objects that are obscured by exactly two layers,
    4. etc. (The paper suggests stopping at 4.)

    After doing this, composite the resulting buffers with standard alpha blending. It turns out, through some crazy hackery, that “obscured by exactly n layers” can be done using a graphics card's shadow mapping facilities.

    But yeah, this really is a hard problem to do efficiently, despite first impressions, without making the performance something other than O(max(geometry size, area to fill)).

  7. That boxfit one is really nice. The http://www.complexification.net version applied to images would seem to be a natural extension.

  8. gen_witt says:

    The problem with carousel is not a blending issue, per se. The problem is you're writing to the depth buffer even when alpha = 0. You can get around the problem by not using mipmaps, turning off alpha blending, and turning on the alpha test (glEnable( GL_ALPHA_TEST ); glAlphaFunc( GL_GREATER, .5 );).

    You lose all the mipmap goodness, however.

    Alternately (and this is what I would do): render the images and frames first, disable depth writing ( glDepthMask( GL_FALSE ); ), and then spit out the text tags. This gives you correct rendering except when two text tags overlap, and in that case it's close enough (i.e. it doesn't have a box around one text label blocking the other). Getting rid of the artifact in your frames (black spots in the corners) is another can of worms, one I'm not sure of the correct solution for; I have spots where my smooth lines come together too.

    Also, is there a process for adding a new hack to XScreenSaver, or am I supposed to just distribute it separately?

    • jwz says:

      That's what I'm doing now, except I don't disable depth writing before pass 2 -- why would I have to do that?

      I guess the aliasing on the outlines around the images is related to this alpha business somehow, too; at least, GL_LINE_SMOOTH and GL_POLYGON_OFFSET_FILL don't seem to help much.

      One gets new hacks added to xscreensaver by sending them to me, and them not sucking.

      • gen_witt says:

        Consider the following case. You have two labels, A and B. A is in front of B and overlaps the left-hand side of B. Poor ASCII drawing follows:

        +-------+
        |       |--------+
        |   A   |        |
        |       |   B    |
        +-------+--------+

        Further imagine A gets drawn first, with Z writing on. All fragments, including those with alpha = 0, get their Z value written to the buffer. So when we draw B, the part of B occluded by A will not be drawn, even though you can see through parts of A. However, if we were to draw B first, we would not get this error. We leave depth testing on, because we still want to Z test A and B against the pictures and frames, just not against each other. I hope this is somewhat clear. The rendering is wrong because the blending is in the wrong order, but it's close.

        The problem with smooth lines: the line gets turned into a bunch of fragments, some of which have small alpha values. The same thing happens; the Z buffer gets written to with the line's position even if the alpha is very small. If you draw the lines back to front it's all gravy, but when you draw the front line first (imagine black lines on a white canvas) you get gray pixels, and when you go to draw the line in back, the fragments that should mix with those gray fragments get Z culled. So you get gray spots on the back line where the lines come together. That's not clear, but it's the best I can do.

        Also, for starwars, you should really enable anisotropic filtering on video cards that support it. Basically, anisotropic filtering helps for things like infinite planes. It is outlined in the GL extension GL_EXT_texture_filter_anisotropic. The filtering makes the text crisper, although it does add some popping. A picture of the difference is at http://www.firebomb-w3c.org/lj/anistropic.png; the top is trilinear, and the bottom is trilinear with 16-tap anisotropic filtering. Yeah, I guess this is sorta subtle.

        • nothings says:

          It would help if both had the same text.

          Look at this.

          • jwz says:

            That does look better. You should send me a patch!

            • nothings says:

              I'd have to be running Linux to test a patch!

              Put this after creating a texture (while GL_TEXTURE_2D is still bound to the texture), for every texture that needs it:

              #define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
              glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16);

              You won't need the first line if it's already #defined (e.g. if you're already including a general GL extensions header).

              Technically you're supposed to query whether the anisotropy extension is available, and if not, don't call it; but I'm pretty sure that if it's not available you'll just get a GL_INVALID_ENUM error from glGetError(), which can be safely ignored.

              The number (16 here) is supposed to be how many samples it will take; the higher the number, the better the quality, but the slower. This seems like a place where high quality is worth the tradeoff. If hardware doesn't support the number you specify, it's automatically clamped to the physical max.

              Details here: http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_filter_anisotropic.txt
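
              If you do want the polite version, with the extension check and the hardware maximum, it's roughly this (sketch; the strstr check needs <string.h>):

              #define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
              #define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
              GLfloat max_aniso = 1.0;
              const char *exts = (const char *) glGetString (GL_EXTENSIONS);
              if (exts && strstr (exts, "GL_EXT_texture_filter_anisotropic"))
                {
                  /* query the hardware limit, then clamp to 16 taps */
                  glGetFloatv (GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
                  if (max_aniso > 16) max_aniso = 16;
                  glTexParameterf (GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                                   max_aniso);
                }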

  9. anonymizer says:

    A small question that came up while we were discussing screensavers at my lab:
    Why doesn't xscreensaver renice itself? Some of the more graphical screensavers take up a lot of CPU time, and there's no reason for them to be at the same priority as the user's tasks.

  10. kakaze says:

    Hi, I found your blog through google while searching for info on xscreensaver on OS X.

    I don't suppose you're up to a bit of troubleshooting, or know a place for it for xscreensaver on OS X?

    I installed it through Fink but there's no text on anything. It works despite the lack of text, though XQuartz seems to crash at the drop of a hat.

    Fink says version 4.18-1 is installed. I'm on 10.3.8 and I have xcode 1.5 installed.

    • jwz says:

      I haven't heard of that one, but yeah, my experience has been that the Mac X server crashes constantly.