You've got an app that accumulates 2D graphics onto a canvas, without recomputing the scene every time. You need to blit that canvas onto the screen every time drawRect is called. How?
I've done some searching and it seems that lots of people have asked this question, but I haven't found any answers that work. Several vague theories, no real explanations.
Background:
It turns out that I can't get better than like 4 frames per second when running on an iPad 3 (with the 2048×1536 screen, even if I only draw at 1024×768). Oddly, on an iPad 1, with a native 1024×768 screen I'm getting 15 FPS, which is still super slow, but I don't know why it's faster.
This is for non-OpenGL stuff. The GL stuff is fine.
On a really simple saver like Julia I'm spending 70% of my time actually rendering into the backbuffer, just drawing dots, and 25% of my time inside CGContextDrawImage. This is crazy, because on MacOS and in the simulator this takes basically zero time. CPU utilization is 0.1% instead of 60+%.
The X11 savers typically draw a few lines each frame and expect them to accumulate. This is no problem on MacOS because the CGContext of a given NSView never changes, and you can keep adding to it. But on iOS, you can't depend on UIGraphicsGetCurrentContext() having the same value across calls to drawRect. What's worse is that in practice, you can't depend on it retaining its bits. Among other things, it gets cleared by orientation changes, and there's double-buffering going on so you get different buffers on alternate frames. Even when it doesn't clear on you, you get constant flicker.
So, that means that if your app does not fully re-compute its scene every frame, you need to draw to an off-screen buffer and then splat that on the screen at the end. Annoying, but not too unusual a situation.
So I was doing this:
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef backbuffer = CGBitmapContextCreate (NULL, w, h, 8, w*4, cs, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease (cs);
- (void)drawRect:(CGRect)rect
{
...
UIGraphicsPushContext (backbuffer);
// ...draw more stuff into backbuffer...
UIGraphicsPopContext();
CGContextRef cgc = UIGraphicsGetCurrentContext();
...
CGContextConcatCTM (cgc, t); // rotate for orientation
CGImageRef img = CGBitmapContextCreateImage (backbuffer);
CGContextDrawImage (cgc, target, img);
CGImageRelease (img);
When the orientation changes, the underlying code sees e.g. a 640x480 screen "resize" into 480x640 and I re-create the back buffer at the new size and copy what I can of the old one's bits to the new one:
CGImageRef img = CGBitmapContextCreateImage (old);
CGContextDrawImage (backbuffer, rect, img);
CGImageRelease (img);
CGContextRelease (old);
So, as I said, that is inexplicably slow on Retina iPhones and non-Retina iPads, and intolerably slow on Retina iPads even when the size of the backbuffer is half the real size of the screen (that is, the size of the screen on an iPad 1).
So then I tried this:
Lots of people seemed to say that if you wave the chicken of CGLayer at your CGBitmapContext code, things magically get better:
// Init this from under the first call to drawRect, else UIGraphicsGetCurrentContext() is null
CGLayerRef backbuffer = CGLayerCreateWithContext (cgc, backbuffer_size, NULL);
To copy it:
CGContextDrawLayerInRect (CGLayerGetContext (backbuffer), rect, old);
CGLayerRelease (old);
And drawing:
CGContextDrawLayerInRect (cgc, target, backbuffer);
That improved things by like, 10%, but not enough to matter.
Bizarrely, performance is worse by about 30% in landscape than portrait, and as far as I can tell, the only difference there is whether I appended a non-zero rotation to the CTM when copying.
Any ideas?
If you have a functional iOS development environment and want to play around with it, the code is in xscreensaver/OSX/XScreenSaverView.m. Email me and I'll send you my latest version of that file with my recent attempts. In the app, click on the options arrow on the "Julia" line and turn on "Show frame rate" to see the numbers.
I answered a similar question on Stack Overflow (go ahead and hurl now).
In a nutshell: keep your backbuffer as it is, and each time you want to push it to the screen, call CGBitmapContextCreateImage to create a CGImageRef.
Then, to get it on the screen, you have a couple of options:
1. Easy, but you have to make a subview: put that CGImageRef into a UIImageView, by doing
imageView.image = [UIImage imageWithCGImage:yourCGImageRef]
2. Harder, but you have more control: put the CGImageRef into a view's CALayer's contents, by doing
yourView.layer.contents = (id)yourCGImageRef
(If you do #2: your view should not implement -drawRect:, and you should not call -setNeedsDisplay: on it.)
Ignore CGLayers -- they are an entirely different idea. They're an old API that turned out to be useful only in a few limited cases, but not this one. Lots of people think they are magic fairy dust that makes everything fast, but those people are idiots.
CALayers are an entirely different thing. They're the way to get to the fast path. If you use UIImageView, it will take advantage of them.
Let me know if this is too much text and not enough code.
This basically works. 15 FPS on Julia, iPad 3, debug build. (I can't get a release build to load any savers.)
Remove -drawRect: entirely, and then:
- (void) animateOneFrame
{
// Render X11 into the backing store bitmap...
NSAssert (backbuffer, @"no back buffer");
UIGraphicsPushContext (backbuffer);
[self render_x11];
UIGraphicsPopContext();
// Then push it to the screen
CGImageRef img = CGBitmapContextCreateImage (backbuffer);
self.layer.contents = (id)img;
CGImageRelease (img);
}
Rotation is screwy though. I'd have to delve into how you're implementing that to make it work. Normally you let UIKit handle rotation, so it resizes your view to fit the new orientation, and applies any transform needed to the layer tree. Then you'd want to change the back buffer size to match, but keep pushing it into the layer the same way. But I'm sure it's just not that simple in your case.
Thanks very much, I'll take a look!
"Normally" you let UIKit handle rotation, unless you are using GL, in which case you can't. Since sometimes I'm using GL and sometimes I'm not, I had to do it myself.
Figured something like that. You probably want to set your view.transform (or, at a slightly lower level, view.layer.affineTransform) to a rotation -- that way it gets done in hardware. A CALayer is just a texture on a quad, really.
The key to fast graphics on iOS is to do as little work with the software renderer (basically anything in a CGContext) as you can.
(BTW at this point the bottleneck in Julia is XDrawPoints. Hacking that to draw only one dot gets me around 35 FPS, so the underlying display stuff is doing OK. If I get bored I'll see if there's a better way to do XDrawPoints.)
Awesome, I would really appreciate optimization tips on XDrawPoints. If you really want a challenge, take a look at "Moire" or "Kumppa" and see if you can figure out how to make XCopyArea go fast. I spent a lot of time on that and didn't get very far...
You might as well let UIKit handle the rotation in both cases. No performance problems noted. I was given all sorts of dire warnings about the risks of doing this, but I haven't seen any of them actually happen. (I've only really been doing this stuff since iOS 4.2 though - I don't doubt it could have been poorly-supported before that.)
This was a while ago now, but from looking at my code, I think you just need to call -[EAGLContext renderbufferStorage:fromDrawable:] once the rotation is done, and the renderbuffer gets the right width and height, and you can take it from there. I do this from my view controller but I guess it would work if you're polling for rotation changes too.