You've got an app that accumulates 2D graphics onto a canvas, without recomputing the scene every time. You need to blit that canvas onto the screen every time drawRect is called. How?
I've done some searching and it seems that lots of people have asked this question, but I haven't found any answers that work. Several vague theories, no real explanations.
It turns out that I can't get better than about 4 frames per second on an iPad 3 (the one with the 2048×1536 screen), even if I only draw at 1024×768. Oddly, on an iPad 1, with a native 1024×768 screen, I'm getting 15 FPS, which is still super slow, but I don't know why it's faster.
This is for non-OpenGL stuff. The GL stuff is fine.
On a really simple saver like Julia I'm spending 70% of my time actually rendering into the backbuffer, just drawing dots, and 25% of my time inside CGContextDrawImage. This is crazy, because on MacOS and in the simulator this takes basically zero time. CPU utilization is 0.1% instead of 60+%.
The X11 savers typically draw a few lines each frame and expect them to accumulate. This is no problem on MacOS because the CGContext of a given NSView never changes, and you can keep adding to it. But on iOS, you can't depend on UIGraphicsGetCurrentContext() having the same value across calls to drawRect. What's worse is that in practice, you can't depend on it retaining its bits. Among other things, it gets cleared by orientation changes, and there's double-buffering going on so you get different buffers on alternate frames. Even when it doesn't clear on you, you get constant flicker.
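A quick way to see this for yourself (just a diagnostic sketch, not part of any fix): log the context pointer from inside drawRect and watch it change from frame to frame:

- (void) drawRect:(CGRect)rect
{
  // On iOS this pointer typically alternates between buffers on
  // successive frames, so nothing you drew last frame is guaranteed
  // to still be there.
  CGContextRef cgc = UIGraphicsGetCurrentContext();
  NSLog (@"drawRect context = %p", cgc);
}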
So, that means that if your app does not fully re-compute its scene every frame, you need to draw to an off-screen buffer and then splat that on the screen at the end. Annoying, but not too unusual a situation.
So I was doing this:
// One-time setup of the off-screen backbuffer:
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef backbuffer = CGBitmapContextCreate (NULL, w, h, 8, w*4, cs,
                                                 kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease (cs);   // the context retains the colorspace

// ...draw more stuff into backbuffer...

// Then, on every drawRect:
CGContextRef cgc = UIGraphicsGetCurrentContext();
CGContextConcatCTM (cgc, t);   // rotate for orientation
CGImageRef img = CGBitmapContextCreateImage (backbuffer);
CGContextDrawImage (cgc, target, img);
CGImageRelease (img);          // don't leak an image per frame
When the orientation changes, the underlying code sees e.g. a 640×480 screen "resize" into 480×640, and I re-create the backbuffer at the new size and copy what I can of the old one's bits to the new one:
CGImageRef img = CGBitmapContextCreateImage (old);
CGContextDrawImage (backbuffer, rect, img);   // copy old bits into new backbuffer
CGImageRelease (img);
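Fleshed out, the resize path looks roughly like this. This is a sketch, assuming backbuffer is an instance variable holding the current bitmap context; make_backbuffer and resizeBackbuffer: are hypothetical names, not anything in the actual file:

static CGContextRef
make_backbuffer (size_t w, size_t h)
{
  CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
  CGContextRef c = CGBitmapContextCreate (NULL, w, h, 8, w*4, cs,
                                          kCGImageAlphaPremultipliedLast);
  CGColorSpaceRelease (cs);
  return c;
}

- (void) resizeBackbuffer:(CGSize)new_size
{
  CGContextRef old = backbuffer;
  backbuffer = make_backbuffer (new_size.width, new_size.height);
  if (old) {
    // Copy as much of the old frame as fits into the new one.
    CGImageRef img = CGBitmapContextCreateImage (old);
    CGRect rect = CGRectMake (0, 0, CGImageGetWidth (img),
                              CGImageGetHeight (img));
    CGContextDrawImage (backbuffer, rect, img);
    CGImageRelease (img);
    CGContextRelease (old);
  }
}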
So, as I said, that is inexplicably slow on Retina iPhones and non-Retina iPads, and intolerably slow on Retina iPads, even when the backbuffer is half the real size of the screen (that is, the size of the screen on an iPad 1).
So then I tried this:
Lots of people seemed to say that if you wave the chicken of CGLayer at your CGBitmapContext code, things magically get better:
// Init this from under the first call to drawRect, else UIGraphicsGetCurrentContext() is null
CGLayerRef backbuffer = CGLayerCreateWithContext (cgc, backbuffer_size, NULL);
To copy it, first into the new backbuffer on a resize, then onto the screen each frame:

CGContextDrawLayerInRect (CGLayerGetContext (backbuffer), rect, old);   // old layer into new backbuffer
CGContextDrawLayerInRect (cgc, target, backbuffer);                     // backbuffer onto the screen
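Putting the layer version together, drawRect ends up looking something like this. Again, just a sketch of how the pieces might fit: backbuffer is now a CGLayerRef ivar, and backbuffer_size, t, and target are computed elsewhere as before:

- (void) drawRect:(CGRect)rect
{
  CGContextRef cgc = UIGraphicsGetCurrentContext();
  if (! backbuffer)
    // Can't create the layer until we have a context to create it from.
    backbuffer = CGLayerCreateWithContext (cgc, backbuffer_size, NULL);

  // ...draw more stuff into CGLayerGetContext (backbuffer)...

  CGContextConcatCTM (cgc, t);                        // rotate for orientation
  CGContextDrawLayerInRect (cgc, target, backbuffer);
}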
That improved things by like, 10%, but not enough to matter.
Bizarrely, performance is about 30% worse in landscape than in portrait, and as far as I can tell, the only difference there is whether I appended a non-zero rotation to the CTM when copying.
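For reference, the transform in question is nothing exotic. A sketch of the landscape case, where w and h are the backbuffer dimensions and orientation is a stand-in for however you track the current UIInterfaceOrientation:

// Portrait: identity. Landscape: rotate the w×h backbuffer 90°
// into an h-wide, w-tall destination.
CGAffineTransform t = CGAffineTransformIdentity;
if (UIInterfaceOrientationIsLandscape (orientation)) {
  t = CGAffineTransformMakeTranslation (h, 0);
  t = CGAffineTransformRotate (t, M_PI / 2);
}
CGContextConcatCTM (cgc, t);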
If you have a functional iOS development environment and want to play around with it, the code is in xscreensaver/OSX/XScreenSaverView.m. Email me and I'll send you my latest version of that file with my recent attempts. In the app, click on the options arrow on the "Julia" line and turn on "Show frame rate" to see the numbers.