I finally got the iPhone/iPad port working.
It was ridiculously difficult, because I refused to fork the MacOS X code base: the desktop and the phone are both supposedly within spitting distance of being the same operating system, so it should be a small matter of ifdefs to have the same app compile as a desktop application and an iPhone application, right?
Oh ho ho ho.
I think it's safe to say that MacOS is more source-code-compatible with NextStep than the iPhone is with MacOS. It's full of all kinds of idiocy like this -- Here's how it goes on the desktop:
NSColor *fg = [NSColor colorWithCalibratedHue:h saturation:s brightness:v alpha:a];
[fg getRed:&r green:&g blue:&b alpha:&a];
[fg getHue:&h saturation:&s brightness:&v alpha:&a];
But on the phone:
UIColor *fg = [UIColor colorWithHue:h saturation:s brightness:v alpha:a];
const CGFloat *rgba = CGColorGetComponents ([fg CGColor]);
// Oh, you wanted to get HSV? Sorry, write your own.
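And "write your own" really is the answer: you get the RGBA components back from CGColorGetComponents and do the arithmetic yourself. A minimal sketch in plain C of the missing conversion (the function name is mine; components in [0,1], hue reported as 0 for grays where it's undefined):

```c
/* The conversion UIColor wouldn't do for you: RGB back to HSV.
   All components are in [0, 1]. */
static void rgb_to_hsv(float r, float g, float b,
                       float *h, float *s, float *v)
{
    float mx = r > g ? (r > b ? r : b) : (g > b ? g : b);
    float mn = r < g ? (r < b ? r : b) : (g < b ? g : b);
    float d  = mx - mn;

    *v = mx;                          /* brightness is just the max */
    *s = (mx == 0) ? 0 : d / mx;      /* saturation 0 for black */

    if (d == 0)       *h = 0;                      /* achromatic: hue undefined */
    else if (mx == r) *h = ((g - b) / d) / 6;      /* between yellow and magenta */
    else if (mx == g) *h = ((b - r) / d + 2) / 6;  /* between cyan and yellow */
    else              *h = ((r - g) / d + 4) / 6;  /* between magenta and cyan */
    if (*h < 0) *h += 1;                           /* wrap negative hues */
}
```

Feed it the first four floats from CGColorGetComponents (assuming an RGBA color space, which is its own adventure) and you've reimplemented getHue:saturation:brightness:alpha: by hand.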
It's just full of nonsense like that. Do you think someone looked at the old code and said, "You know what, to make this code be efficient enough to run on the iPhone, we're going to have to rename all the classes, and also make sure that the new classes have an arbitrarily different API and use arbitrarily different arguments in their methods that do exactly the same thing that the old library did! It's the only way to make this platform succeed."
No, they got some intern who was completely unfamiliar with the old library to just write a new one from scratch without looking at what already existed.
It's 2010, and we're still innovating on how you pass color components around. Seriously?
You can work around some of this nonsense with #defines, but the APIs are randomly disjoint in a bunch of ways too, so that trick only goes so far. If you have a program that manipulates colors a lot, you can imagine the world of #ifdeffy hurt you are in.
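The standard damage-control move is to quarantine the divergence: wrap each operation in one portable C helper so the #ifdefs pile up in a single file instead of metastasizing through the program. A sketch of the shape of it (the Cocoa/UIKit calls are shown as comments, since they obviously won't compile outside Apple's frameworks; the pure-C HSV-to-RGB fallback below them is my own illustration):

```c
typedef struct { float r, g, b, a; } color_t;

/* Callers see one API; the NSColor-vs-UIColor wankery lives
   behind a single #ifdef instead of at every call site. */
color_t color_from_hsv(float h, float s, float v, float a)
{
#if defined(USE_IPHONE)
    /* UIColor *c = [UIColor colorWithHue:h saturation:s
                             brightness:v alpha:a]; ... */
#elif defined(USE_COCOA)
    /* NSColor *c = [NSColor colorWithCalibratedHue:h saturation:s
                             brightness:v alpha:a]; ... */
#endif
    /* Portable fallback: do the HSV-to-RGB math ourselves. */
    color_t c;
    float i = h * 6;
    int   k = (int) i % 6;        /* which 60-degree sector */
    float f = i - (int) i;        /* position within the sector */
    float p = v * (1 - s);
    float q = v * (1 - f * s);
    float t = v * (1 - (1 - f) * s);
    switch (k) {
    case 0:  c.r = v; c.g = t; c.b = p; break;
    case 1:  c.r = q; c.g = v; c.b = p; break;
    case 2:  c.r = p; c.g = v; c.b = t; break;
    case 3:  c.r = p; c.g = q; c.b = v; break;
    case 4:  c.r = t; c.g = p; c.b = v; break;
    default: c.r = v; c.g = p; c.b = q; break;
    }
    c.a = a;
    return c;
}
```

This only helps where the two APIs are at least doing the same job under different names; as noted above, where they're genuinely disjoint the trick runs out.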
Preferences are the usual flying circus as well. I finally almost understood bindings, and had a vague notion of when you should use NSUserDefaultsController versus NSUserDefaults, and now guess what the iPhone doesn't have? Bindings. Or NSUserDefaultsController. But it does have NSUserDefaults. I can't explain.
I basically gave up on trying to have any kind of compatible version of either Cocoa or Quartz imaging that worked on both platforms at the same time -- my intermediate attempts were a loony maze of #ifdefs due to arbitrary API wankery like the above, scathing examples of which I have mercifully forgotten -- so finally I said "Fuck it, the iPhone runs OpenGL, right? I'll just rewrite the display layer in GL and throw away all this bullshit Quartz code." (Let's keep in mind here the insanely complicated thing I'm doing in this program: I have a bitmap. I want to put it on the screen, fast, using two whole colors. And the colors change some times. This should be fucking trivial, right? Oh, ho ho ho.)
So I rewrote it in OpenGL, just dumping my bitmap into a luminance texture, and this is where some of you are laughing at me already, because I didn't know that the iPhone actually runs OpenGLES! Which has, of course, even less to do with OpenGL than iPhones have to do with Macs.
I expected the usual crazy ifdef-dance around creating the OpenGL context and requesting color buffers and whatnot, since OpenGL never specified any of that crap in a cross-platform way to begin with, but what I didn't expect -- and I'm still kind of slack-jawed at this -- is that OpenGLES removed glBegin() and glVertex().
No, really, it really did.
That's like, the defining characteristic of OpenGL. So OpenGLES is just a slight variant of OpenGL, in the way that a unicycle is a slight variant of a city bus. If you can handle one, the other should be pretty much the same, right?
Again, what the hell -- I can almost understand wanting to get rid of display lists for efficiency reasons in an embedded API (I don't like it, because my screen savers tend to use display lists a lot, but I can sort-of understand it), but given that you could totally implement glBegin() and glVertex() in terms of glDrawArrays() why the hell did they take them out! Gaah!
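And it really is only a few dozen lines: accumulate vertices client-side, then hand the whole batch to glDrawArrays() at glEnd(). A sketch of that shim, with the GL types and the draw call stubbed out so it stands alone (a real version would include <GLES/gl.h>, call glVertexPointer to point GL at the array, and then call the real glDrawArrays):

```c
typedef float        GLfloat;
typedef unsigned int GLenum;

#define SHIM_MAX_VERTS 4096

static GLfloat shim_verts[SHIM_MAX_VERTS * 2];   /* x,y pairs */
static int     shim_count;
static GLenum  shim_mode;

/* Stand-in for the real glDrawArrays(mode, first, count); a real
   build would first do glVertexPointer(2, GL_FLOAT, 0, shim_verts). */
static void shim_draw_arrays(GLenum mode, int first, int count)
{
    (void) mode; (void) first; (void) count;
}

void shim_glBegin(GLenum mode)
{
    shim_mode  = mode;
    shim_count = 0;
}

void shim_glVertex2f(GLfloat x, GLfloat y)
{
    if (shim_count < SHIM_MAX_VERTS) {
        shim_verts[shim_count * 2]     = x;
        shim_verts[shim_count * 2 + 1] = y;
        shim_count++;
    }
}

void shim_glEnd(void)
{
    /* The entire begin/end batch becomes one array-draw call. */
    shim_draw_arrays(shim_mode, 0, shim_count);
}
```

Which is presumably roughly what the drivers were doing under the hood all along; the ES committee just pushed the work onto every application author instead.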
Anyway, where was I?
Oh, yeah. So Dali Clock works on the iPhone and iPad now, I think. I can't actually run it on my phone, because I haven't gotten over my righteous indignation at the idea that I'm supposed to tithe $100 to Captain Steve before I'm allowed to test out the program I wrote on the phone that I bought. I imagine I could manage it if I jailbroke my phone first, but the last time I did that it destabilized it a lot and I had to re-install.
So if one of you who has supplicated at the App Store trough would like to build it from source and let me know if it runs on your actual device, that'd be cool.
Oh, PS, I just noticed that since I rewrote it in OpenGL, it's now too slow to get a decent frame rate when running full screen on an 860MHz PPC G4. I mean, that machine is only 53× faster than a 16MHz Palm Pilot, and only 107× faster than an 8MHz Mac128k.
This is why I sell beer.
So, the OpenGL thing...
What you think of as the "defining characteristic" of OpenGL is called "immediate mode" and is a legacy feature we've been failing to beat out of new developers (and even new tutorials) for over a decade.
It's the OpenGL equivalent of XDrawPoint, which I'm sure is a "defining characteristic" of X. Immediate mode is easy to understand... and has completely unsatisfactory performance. In a big workstation API, there's room for that; even if most of those tutorials are giving really bad advice (imagine if the average "Linux GUI" tutorial advised using XDrawPoint inside a for loop to render images), someone out there probably has a halfway plausible use case for it. But in an embedded system the API needed shrinking, so away goes the deprecated immediate mode, and with it glBegin().
So, can you recommend any tutorials (or, failing that, books) that teach OpenGL programming the officially-blessed way? I currently know no OpenGL at all, fwiw.
Good question, seeing that the OpenGL red book (4th edition), which covers OpenGL 1.4 (which OpenGL ES is based on), has you working in immediate mode for the first 7 chapters. Well, that edition is over 6 years old now, so maybe that's why.
Joe Groff is in the process of writing what is apparently the first modern OpenGL tutorial.
Seeing him reimplement the projection/modelview matrices from scratch where once one would call glTranslate et al., I really hope this isn't an accurate view of the present-future. That his ending note (I do confess it's getting late and I skimmed that last chapter) is about then shifting this over to C/the CPU and passing the results through to avoid per-vertex recomputation... jeez.
Give someone a massively parallel hammer, and suddenly they want to redefine the world in per-nail terms. Or something.
As soon as you need to do visibility culling or make a scene graph (in other words, if you do anything more than make a spinning cube), the OpenGL 1.x matrix functions become useless, and you need to reimplement them from scratch anyway. By killing them in OpenGL 3.x they're doing you a favor.
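The reimplementation in question is at least short, for the easy operations. Here's a sketch of what the retired glTranslatef() was doing on your behalf: build a column-major 4x4 matrix (GL's memory layout) and apply it to a point. The names are mine, not from any particular library:

```c
/* Hand-rolled replacement for glTranslatef(): a column-major 4x4
   translation matrix, plus a routine to push a point through it. */
typedef struct { float m[16]; } mat4;   /* column-major, GL-style */

mat4 mat4_translate(float tx, float ty, float tz)
{
    mat4 t = {{ 1,  0,  0,  0,
                0,  1,  0,  0,
                0,  0,  1,  0,
                tx, ty, tz, 1 }};       /* translation in the last column */
    return t;
}

/* Transform (x, y, z, 1) by m; w stays 1 for affine matrices. */
void mat4_xform(const mat4 *m, float *x, float *y, float *z)
{
    float nx = m->m[0] * *x + m->m[4] * *y + m->m[8]  * *z + m->m[12];
    float ny = m->m[1] * *x + m->m[5] * *y + m->m[9]  * *z + m->m[13];
    float nz = m->m[2] * *x + m->m[6] * *y + m->m[10] * *z + m->m[14];
    *x = nx; *y = ny; *z = nz;
}
```

Scale and rotation matrices are similarly small; the part that actually takes care is keeping a matrix stack and multiplication order straight, which is exactly what the fixed-function API used to do for you.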
Mmm...while I'll defer to anyone who works on commercial game engines for a living, I've had a perfectly good portal-vis engine going with the stock matrix operations.
And, you know, some people want spinning cubes—pretty much every xscreensaver hack I can think of, for a topical example, has no need for visibility culling. Taking away the easy path to a common use case is never "doing you a favour".
In this regard, pandering to the likes of Epic and id et al.'s engine coders is perhaps not entirely the best direction, since their interests are always going to be in getting maximum performance out of the API, damn the accessibility, because they wrap it up in their engines and license them out anyway. If the only people who can feasibly write directly to OpenGL are full-time professional graphics engine developers, that suits their business model fine.
You may as well say that ACPI was a good thing because it got rid of that nasty old APM, which wasn't flexible enough. Oh, by the way, hobbyist OS developers now need to write a bytecode interpreter if they want working power management. Enjoy your 450-page specification.
Yeah, this new OpenGL API has a bad code smell. I'm reminded of the post-mortems on Taligent and OpenDoc, where people talked about how Hello World was ten pages long... Is DirectX this bad?
I'm a hobbyist 3D developer and I don't find OpenGL 3 onerous.
Mainly because it removes the stuff I didn't use anyway. The OpenGL rotation ops are an example: My engine precalculates the matrices for efficiency reasons (They end up being used for culling and such anyway).
To put it simply - they're removing the old and slow methods. They're not going away any time soon - if you want, nothing is stopping you from using OpenGL 2 or 1.4, and even if the driver developers remove them, I'm sure someone will create a gl1/2-to-gl3 adaptor.
As a former commercial game developer and someone still involved in the game industry, I would guess that it is true that pretty much all commercial games are going to have their own matrix library, and probably even have a camera class, and so are highly unlikely to use those.
But the whole world isn't AAA games. As an indie game developer, I use the OpenGL matrices nearly exclusively. If nothing else, I start with OpenGL matrix operations and eventually replace them with my own when I start needing more sophisticated control over orientation (e.g. wanting to use quaternions or just Descent-like controls). And, indeed, if I were developing a commercial game from scratch (not using someone's engine), I would still probably do the same thing--start with the built-in matrices and only replace them when needed.
(If the argument is that it's a standard library that everyone is going to already have, or at least be able to reuse from their previous projects, maybe it's a standard enough library it should just be supplied with the API?)
The product I'm currently working on is videogame middleware, and we may or may not support OpenGL in the long run; in the short run, D3D is the main target on Windows, but I continue to develop it using a "placeholder" OpenGL implementation, because OpenGL and its immediate mode and matrices are just so much easier to use for doing bespoke stuff... and in particular it uses the existing OpenGL matrix stack (particularly because it makes it easy in the pixel shader to access the world and view matrices separately or together).
Also, for 2D applications with zoom and scroll, glScale and glTranslate are pretty much perfect.
If you learn OpenGL ES 2.0, you'll learn the parts of OpenGL 2.0 you're supposed to use (modulo slightly different GLSL syntax). The book "OpenGL(R) ES Programming Guide" is as good as any.
I strongly recommend the Sony vector/matrix library. Unfortunately, it's not packaged, but you can find a copy inside o3d (http://code.google.com/p/o3d/source/browse/#svn/trunk/googleclient/third_party/vectormath/files/vectormathlibrary) or Bullet Physics.
I enjoyed http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html which gives a very simple introduction. YMMV.
Indeed. OpenGL 3 has removed immediate mode and display lists too (and, indeed, the fixed-function pipeline). It's also removed client-side vertex arrays, but that's because they've been replaced by something entirely less awkward to use: vertex buffer objects.
Display lists are also gone except by legacy extension. But if you detect an nVIDIA card, you probably want to use said legacy extension where possible, because they're somewhat faster than VBOs for their problem domain.
Ah, so that's why nVIDIA insisted on re-incoporating the legacy stuff in the form of an extension.
glBegin()/glEnd() are not the OpenGL equivalent of XDrawPoint.
(I have no clue who your "we" is.)
I'm not clear how the OpenGL community has been successfully converted to the belief that immediate mode should go away. (I can understand that driver authors are put out by the complexity of handling it, but by 'the community' I mean the users, not the driver authors.) Indeed NVidia is promising to never get rid of the "legacy" features of OpenGL, so they're presumably not convinced either.
Immediate mode has enormous advantages for tutorials, prototyping, and things that don't need to be very efficient because you don't have that many triangles (e.g. text overlays and the like, or something like Dali Clock). It allows you to write the code in a simple way and come back and optimize it later. It has the inherent advantages of being an immediate mode (less management so simpler code).
Immediate mode isn't even necessarily less efficient than using vertex buffers; when I wrote the 100,000-sprite engine for Indie Game Jam 0 back in 2002, which drew 100,000 dynamically-computed quads every frame, I had to go through long, heroic optimization efforts to get vertex arrays (using the then-equivalent of vertex buffers, NVidia's VAR extension) to match the performance that the trivial immediate-mode code got from NVidia's drivers (which were obviously already heroically optimized and doing similar batching under the hood). And that seems great to me--just do the heroic optimization once in the drivers and spare authors the trouble, unless they really want to take it on. (Yes, for non-dynamic data it works out differently.)
Yes, getting rid of immediate mode makes drivers simpler and may make it possible for them to become more optimal, but it does so by throwing away a giant ecosystem of existing books and tutorials, throwing away ease of learning and prototyping and fast development of small projects (not all OpenGL apps are $10M videogames or CAD software). It seems entirely the wrong tradeoff.
(And I've yet to see a D3D app that didn't have its own immediate mode simulation that batched up some primitives and then dispatched them to DrawPrimitiveUP. That may just be because I haven't seen enough D3D apps, but it's a pretty common freaking idiom--especially, as I said, for UIs and text.)
It seems entirely the right tradeoff to me. At the expense of ease of learning the API, you gain efficiency, which in turn gives you better battery life, which is paramount for mobile devices. OpenGL ES, being an embedded API, has to make different tradeoffs from its desktop cousin.
The same tradeoff has been made, sort of, in the desktop cousin -- OpenGL 3.1 and later have core and compatibility profiles, where the core profiles are similar to OpenGL|ES 2.0: no immediate mode, and no fixed-function lighting. Whether there will ever be a core-only desktop implementation is another question - it seems more likely that the compatibility profile stuff will turn into a standard library on top of the core profile, like glu. (At one point, at least, that was the explicit intention, but these things change.)
WebGL, being based on ES2, is going the same way, meaning that to get a basic demo working you need to pack a bunch of attribute arrays and write a couple of shaders before you can see anything. Various people have questioned whether WebGL will gain any more traction than VRML or custom 3D plugins ever did; ease of use certainly isn't a point in its favour.
The only saving grace with WebGL is that, being the web, most people will just use C3DL or whatever other wrapper libraries get written, and that's just fine. The lower-level APIs will still be there for anyone who wants to write at that level. The nice thing about that is that library authors can experiment with alternative APIs a lot faster than browser vendors can.
Although I suppose efforts are being made by at least three browser vendors to make it a real, grown-up, optimized language now. And if it becomes a de-facto standard, maybe its API will become implemented natively as another way for browsers to fight over performance.
Thing is, immediate mode's interface (glVertex3f, glColor3f, glNormal3f and so on) doesn't work properly with the shader pipeline - and GLES2/GL3 removed the fixed function pipeline, so the only option is the shader pipeline.
And once you start getting into that kind of state, the best option is to just pre-compose stuff and use glDrawArrays. It ends up faster, too.
To be fair, you should be able to use CGColor identically on both OS X (10.3 or later) and iPhone OS, and CGImage to draw.
You also have the option of sticking a spork deeply in your ear and twiddling it around a bit.
Spoilt for choice, see?
Part of the problem with being spoiled for choice is that nowhere is there documentation -- that I would have dearly loved to find when I was just starting to learn this stuff -- that says things like, "look, you really don't want to use anything beginning with NS, just use the CG ones, that's the maximally-compatible way." It's damned near impossible to even find statements like, "CG-based stuff will work on 10.3-present, and CG is officially not-yet-deprecated, so you've got at least a few more releases before you'll be expected to rewrite it all."
I've been doing this for several years now and I couldn't even really tell you which of Apple's 5+ APIs for doing the same thing are the officially blessed ones.
Oh, certainly. This pops up in various places, but the graphics APIs are among the worst. Want to read the actual data in a PNG? Oh, can't do that, because we like premultiplied alpha. Want to load a PICT? Well, you can partially do that, as long as you stay 32-bit forever. Want to draw this FooImage into that BarContext? Oops, we didn't provide a Foo->Bar path, but if you draw it into a BazContext and convert that to a BazImage, you can then call FooImageFromBazImage, as long as it's in one of three pixel formats all three APIs have in common. And that's without even touching the iPhone SDK.
It could be worse, though. It could involve CoreAudio.
Or BarImageFromBazImage. Thus demonstrating part of the problem...
Same here. Fedora, FF 3.5.9. The error console tells me "this.canvas is undefined".
There is nothing but a black box at the top of the page, so it doesn't even look like it's falling back on the animated GIF (I said fuck quicktime a long time ago, so I'm not surprised it's not working as an object).
Yeah, I remembered my HTML a little while ago and figured that out. Too fucking early in the morning to look at angle brackets, I guess. :(
fwiw, these are also true of FF 3.6.3 on osx10.5.8, but I bet you knew that already. Chrome, they both work fine. I think this is something I've seen before with firefox's support of canvas being weird and spotty, but I don't have any useful information for you beyond "weird and spotty".
got it, sort of, I think: Somehow, your (to me seemingly more correct) "if (this.canvas.getContext) launch()" rather than simply "launch()" is what's breaking it. I'm not sure why, though.
Sent. IMHO what's "wrong" with what you have is that firefox's implementation of canvas is still kind of broken. You're doing it right, but the browser hasn't caught up.
Aren't you worried that if you go too crazy with the #ifdefs you'll run afoul of Apple's TOS banning writing to abstraction layers? They don't want it being too easy to port to other platforms, you know.
In case you're wondering why anyone still downloads the Palm OS version, I've been using this and an old Treo in a dock as a bedside clock. It's not bad.
Thanks for continuing to support the old platform.
That's a capital idea. Maybe I should dig up that Palm IIIc I've got somewhere...
If you decide to go that route, you'll want to pair it with an app called "AlwaysOn" that keeps the device from sleeping.
Well, I have some bad news for you.
To compile a PalmOS-classic executable, you need two things: a C cross-compiler that emits m68k code, and a set of tools that know how to build .prc files and their ancillary data. And, if you're lucky, a Palm emulator. So, on OSX there's the prc tools OSX package, and there's also a "prc-tools" MacPorts package that includes the cross-compiler.
But what there isn't... is a version of GCC that has been ported to Intel Macs that can also cross-compile to emit m68k code. For example.
So, I no longer have a development environment that can compile changes to the PalmOS-classic version. If that binary ever stops working, that's it.
(And that's just for PalmOS applications written to the pre-2001 4.x API, which means they're m68k programs running in CPU-emulation on ARM processors! There has long been no way at all to build PalmOS-4.x applications on any platform other than Windows, which is why the Dali Clock PRC was never compiled natively for Treo/Centro.)
You can't compile changes to the Palm OS version (which is fine, certainly no problem using 2.30 in perpetuity), but the download page lists the Palm OS port at 2.31. Just a little confused as to why.
("Why" the version number is what it is, not why you can't compile ofc.)
Side-effect of how my Makefile works. Version number inside that prc is still 2.30.
(I can actually rebuild the PRC file, so long as I never delete the .o files currently on my disk.)
I'm giving it a shot with the official Windows devel environment, but it's apparently been four years since I've dealt with this, and I've completely forgotten how to drive Eclipse in that time. And "Protein" (ARM?) is a completely different project type to 68K rather than another build configuration for some reason.
Right. I've got it to the point of running, then crashing out because the resources are missing. It looks like you've got them in some text format, whereas the IDE's resource editor works with an XML format; trying to convince "smart" build management to use pilrc('s output) instead is beyond my current patience boundary.
Absolute worst case, if 2.30 ever actually needs updating (AFAICT 2.31 makes no changes to the Palm tree anyway?), that could probably be manually translated. (The docs make reference to a conversion wizard, in fact, but I can't track it down offhand.) It's not as if Dali Clock has an extensive and complicated UI.
Back when I was the maintainer of prc-tools, I didn't actually have access to a Mac OS X machine, so it was left to various third-party morons to package it for OSX. Hence the crapulent PPC OSX packages that were semi-broken even back when they appeared.
The fundamental problem is that m68k prc-tools consists of patches to GCC 2.95, which pretty much predates OSX. We backported just enough PPC-darwin host support, but all this long predated Intel-darwin, so there's no support for that in the "current" 7-years-old prc-tools.
The sane approach would be to port the patches (mainly PIC stuff, implementing the Palm OS system call trap, the multi-section code disaster area) to modern GCC, and about five years ago I was interested in doing that. There was a big discontinuity for the multi-section stuff between 2.95 and 3.0, which was where I lost interest. But if one dropped that (and everybody except me hated it anyway) porting the rest to current GCC probably wouldn't be too awful even now.
But... it's pretty hard to imagine how anyone could find the motivation to do any of that in 2010...
I suspect at this point the right solution is probably just to create a virtual machine with the right archaic bits needed to build for ancient versions of PalmOS, and then never touch it again. Some random Linux distribution (ideally with reasonably long term support and the right bits packaged -- Debian or Ubuntu LTS perhaps), which network-mounted a build directory from the host, so that one could just fire up the VM, tell it to rebuild, and shut down the VM again.
Porting 7 year old patches to a new compiler to support a platform end-of-life'd by the manufacturer seems too much like hard work. (Creating a build VM is bordering on too much like hard work. I still have my Palm III, and still occasionally use it, but can't remember when I last installed software on it.)
It wouldn't surprise me to find that this was exactly their reasoning--to make it hard to port existing OS X software to the iPhone. It certainly fits with their stance from the recent SDK, where they want everyone writing "iPhone apps" and not "apps that have been ported to the iPhone".
I'm as eager as the next guy to attribute Apple's actions to intentional malice, but that just doesn't make any sense at all, not even from a moustache-twirling-villain perspective.
I don't think it's necessarily malice, despite citing their recent SDK decision. I think they just really do believe that the iPhone is a different enough platform from desktop OS X that developers should be writing completely new UIs for the phone. As their own docs on porting say:
The fact that they went through all the trouble to write a new framework instead of just porting AppKit seems to support that. Then again, the fact that a large portion of UIKit is just AppKit with different prefixes and seemingly arbitrary API changes certainly makes their decision look questionable.
What you're saying is true, and even makes sense for actual UI. This shouldn't really impact DaliClock, though. Graphics and utility classes, like NSColor, are collateral damage.
Assigning an intern to write it from scratch gives you the effect of the malice with plausible deniability. Especially if you tell them "Hey, look, this is a new platform, so make sure the namespaces don't collide."
So: 'Oh, look, an innocent SNAFU,' and/or 'we went through it method by method to get the bloat down,' but Jobs has been around the block enough to understand the result when it comes to sheep-herding.
I suspect that eventually (and around the time Android or the CLR have any success with it the way Java and other runtimes never did) they'll bring iApps to the Mac to "keep it relevant," and maybe give you the option to VNC-from-iPad into a rent-a-VM in the Apple cloud.
[Which, when you get down to it, couldn't possibly be more expensive than trying to keep out of obsolescence with physical Apple hardware at this point. And once the initial revs of the hardware and display tech are done with, the 'dumb' pad display hardware has a much longer shelf life.]
The future is sort of like RMS predicted, but only because the alternative is sysadmining your television and the UNIX arts-and-crafts movement has no financial interest in actually making that easy.
sysadmining your television and UNIX arts-and-crafts movement are now my favorite new expressions.
The former is pure jwz... The latter can't possibly be my own invention, but describes the little piece of me that dies every time someone figures "I'll just do ___, it's not like anyone expects binaries to keep working anyway."
Amazon's MP3 downloader is a good example, especially in light of upcoming developments: they're *trying* to keep it current, they certainly have the resources and familiarity, and yet on average it still ends up targeting Ubuntu two releases behind what could charitably be called 'current.'
I think the new 'cadence' is that, every 2 years or so, someone in a position of relative power and authority actually notices and bemoans that this is kind of horrible (Linus, Shuttleworth, ...) and then proceeds to do nothing substantive about it. Since the herd of cats has never worried about it before, it's hard to get them to start now (but Python and Mono have become somewhat effective band-aids, and I can't complain when they work).
sysadmining your television
Amusingly enough, many modern HDTVs have reasonable MIPS (they have to decompress whatever that standard is) and in fact run stripped down versions of Linux. Predictably, there are people who are modifying this; if you feel a compulsion to telnet into your Samsung HDTV, these folks can hook you up.
Since I'm emptying my snark-bladder:
"Oh, hey, is that the wireless Audrey?"
The section on "Flash & the Arrow Keys" should help explain why things are the way they are. Incompetence is not the reason for these "arbitrary" changes to the API between the Mac and the iPhone. This is a deliberate decision. It's about having a fresh start, and only keeping APIs that made sense to be kept. Steve, and the people he hires, want developers to write new apps for the iPhone, not do a straightforward port from the Mac.
Anything NS* is a signal to the developer that it's backend-stuff, that one could port from the Mac. Anything UI* signals the developer to rewrite, not port, because it is a completely new UI paradigm, which most developers should rethink/redesign anyway.
A competent engineer says "We can't sell this car without brakes!"
A competent businessman says "The survivors will line up for our new 'stopping' feature in 2.0!"
FYI, this post got posted to Slashdot, apparently via this twit from Tim Bray.
Now try getting the program to work with multitasking in the iPhone 4.0 OS beta if you're brave. Talk about non-conventional threading models...
And I thought that Apple would be smarter than Microsoft. That looks like the API differences between normal Windows and Windows CE. Except with potentially the SUCK slider moved a little farther towards suck, because I think that's even worse than some of the Windows CE quirks.
I've got the project compiling in Xcode, but it's crashing with my ever most favorite EXC_BAD_ACCESS for some reason or another.
Also, I left my god damned sync cable at home I think, so I'll have to hunt one down. Hopefully I'll have a photo or something for you shortly.
Looks like fg and bg are null for some reason and it's causing it to crash in [DaliClockView drawRect] at:
fgr = rgba[0]; fgg = rgba[1]; fgb = rgba[2]; fga = rgba[3];
Because rgba = CGColorGetComponents ([fg CGColor]); is returning null.
I don't completely understand what's going on here, but I'll continue to poke around until I either get it to work or give up in utter frustration.
So if I replace the two lines referencing fg and bg with:
fgr = 1.0f; fgg = 1.0f; fgb = 1.0f; fga = 1.0f;
bgr = 0.0f; bgg = 0.0f; bgb = 0.0f; bga = 1.0f;
It works. Obviously not the most elegant of solutions, but it'll build and run now at least.
As the kids say: brb ipod touch.
Awesome, thanks! Does this fix it?
That appears to have done the trick.
Sweet! Did you try this on an actual device? What version? Do you have an iPad?
I'm a little unclear on what build options I should use for maximal compatibility. I would assume that I should build it against the oldest SDK I have, in this case 3.0, but when I do that I get a warning: "building with 'Targeted Device Family' that includes iPad ('1,2') requires building with the 3.2 or later SDK." Does that mean that if I don't build against the 3.2 SDK, it won't launch on an iPad? Does that mean I need to distribute two binaries?
In my post below this but in a separate thread I have a photo of me w/ Dali Clock on my iPod Touch (so, yes :) ... running iPhone 3.1.3 at the moment.
No iPad just yet, still saving my pennies and I'm in Canada, so boo-urns on that angle.
The option "iPhone OS Deployment Target" I believe is what you set to make Xcode do magic things to make sure your app doesn't use any newer SDK calls.
As for the iPad version, I haven't done a universal build yet, but there's a way to make it so it munges all the stuff for the iPhone and iPad into a "fat" .app bundle. As for requiring 3.2, that version of the SDK is currently iPad Only so I'm sure there's a way to build the iPhone part to support 3.0.
Herp derp, responding to my own post...
Anyway I just decided to run this in the iPad simulator and it (so help me for using this) "Just Works" at the native resolution so I'm not sure what else you need to do to make this universal.
I meant, did you try the patched version on your device? Cause I've been seeing things work in the emulator but not on the device.
Adur, sorry about that.
I have no mouth and I must... TELL THE TIME IN GLORIOUS COLOR:
Here's a picture of me looking like a dork... also of Dali Clock running on my iPod Touch.
Inexplicable error as described previously still exists and has been defeated by previously described hack.
Also, if you'd like I could post Dali Clock to the App Store for you (as a free app, naturally... once this crash stops crashing). If not, also cool, but I figure it would be the least I could do in return for all of this awesome cynicism about the software industry you've instilled in me :D
Also also, I am the first person in the world to run Dali Clock on an iDevice... thing. Probably maybe. This is awesome; that is all.
That digital signing certificate from an Apple Ayatollah allowing you to run your own code on your own iPhone/iPod costs $100, and it's payable every year, ffs.
Maybe a one-time charge, but every year?!?
Nay. Nay, I say.
Think of it as like the rental you pay every year just so you can have an entry kept in a DNS database so you can have a vanity domain like jwz.org. Only more useful.
Whinging about rental on app distribution rights while paying rental on a database entry for DNS strikes me as somewhat inconsistent.
(And DNS should have been done on a 'want a change to the entry and we'll charge you' model.)
DNS is a directory service. The cost is running the directory service. If you want to be listed in the directory, you pay; stop paying, and you stop being listed. The database is purely an implementation detail.
Because DNS is a distributed system, you are free to make other arrangements. If you think your "charge to change" model makes sense, and are willing to enter into an indefinite contract in which you are obligated to provide a service without receiving any income - go for it. In contrast there is no alternative to paying $100 to Jobs in the iPhone ecosystem.
The cost of updating a database is trivial. Commercial registrars have made ridiculous amounts of profit through rent-seeking behaviour, rather than service-based behaviour where they'd simply charge for pushing the updates.
Why should a domain entry expire just because a credit card has been reissued and its details need updating? (This is the most common cause of domain lapses.) Because that's the rental model.
By analogy, I have a leak, I get a plumber in to fix it. He fixes it, I pay for the repair. I do not have the plumber on a yearly retainer.
The cost of operating the DNS infrastructure is not zero.
Man, I would kill to have a plumber on yearly retainer.
You don't even know.
The thing about analogies is that they only work when the situation is analogous.
DNS, as I already explained, is a service, not a one-time call to the plumber. Let's try to imagine that we wanted to provide an actual DNS service with your funding model. What are our costs? Well, we need a bunch of servers around the world to answer queries 24/7. The servers need network access (costs $) and a secure data centre ($$) with on-call technicians ($$$). Some of these costs are fixed and some are variable, but they have one obvious thing in common - they have nothing to do with how many changes our customers make. So if (as seems likely) many of our customers choose never to do any updates at all, we are on the hook to provide an expensive service for those customers indefinitely, and we get no income to pay for it. Oops.
Most likely all that's wrong is that you didn't know anything about DNS, in which case here's a lesson: your opinion about things you don't understand is worthless.
You missed the fact that under this model, customers would have to pay the change costs every time they changed their network provider or hosting... I like to think of this as promoting stability in the overall network, but others would think of it as the major part of the revenue opportunity.
All businesses have fixed and variable costs that have nothing to do with the number of customers.
HOSTS.TXT worked quite happily without a business model, and large parts of the DNS namespace do the same without rent-seeking from customers (mostly second-level and down, where infrastructure costs are admittedly lower than at the top level - unlike the worldwide root-zone infrastructure you appear to be describing).
"large parts of the DNS namespace do the same without rent-seeking from customers."
This probably requires a very creative definition of "large parts". Sites like LJ, YTMND and so on don't actually have their customers in the namespace, they just use a wildcard. The "free" DNS from ISPs in certain places is conditional on you remaining a paying subscriber.
My country has several 2LDs which are operated by a user owned not-for-profit, it is generally agreed to be the 4th largest DNS registry. Despite having no motive for "rent seeking" they operate the same recurring payment model, almost as if they (unlike you) have done the maths and realised it's the only sensible way to defray the costs.
You're in the UK, so here's a UK example.
JANET customers have their first domain name in the .uk domain registered free of charge. If any subsequent domain names are required the standard charge of £94 including VAT applies. However, they will not be charged any ongoing maintenance charges, as long as they remain connected to the JANET Network.
JANET customers will not be charged the maintenance fee for keeping their ac.uk/gov.uk registrations in the DNS, as long as they remain connected to the JANET Network.
feel free to use the term without the quotes.
Are you hung up on the phrase "However, they will not be charged any ongoing maintenance charges"? That would be missing the central point altogether, because JANET is government funded: the institutions won't be charged because the whole thing goes on the taxpayer's bill. You chose not to quote their rates for outside commercial entities, which are, as you'd expect, the same type of periodic maintenance fee as the famous TLDs charge, and somewhat more expensive.
Your position makes no damn sense, but I doubt there's any further point in trying to talk you out of it. If you ever go into the DNS business and lose your shirt on your topsy-turvy way of charging, look me up, I think I'd pay for lunch to hear that story.
So you're offering me a one-off payment for lunch while you explain to me why one-off payments couldn't possibly work as a business model?
Well, immediate mode isn't really a defining feature of OpenGL anymore, and in ES they started clean and decided not to support it. OpenGL 3 wants you to use VBOs. As for defaults, NSUserDefaultsController is something you don't have to worry about, since it's just for binding interface elements to the defaults system, and bindings aren't on the iPhone (I guess they felt they were unneeded).
I'm curious, is there a reason you didn't want to fork the project? OS X and iPhone OS are based on the same core, but the actual application frameworks are pretty different, and it might have made things simpler to use different solutions for each platform. As far as I know, Apple hasn't really claimed the two platforms to be compatible. I don't think UIKit/AppKit are *that* arbitrarily different API-wise, but they do behave differently and expect different things. UIColor doesn't abstract CGColor as much as NSColor because the iPhone interface is entirely Core Animation based while OS X is not, so you have to work with CGColors more.
I'm still waiting for iPhone OS to catch up and get garbage collection. Maybe when iPhone goes dual-core...
It didn't seem sensible to fork it because the vast majority of this program is not GUI.
All of the apologia that people are presenting along the lines of, "well, iPhone apps have to work totally differently than desktop apps because it's such a different device" are arguably true if you're talking only about user interaction. That has absolutely nothing to do with putting raw bits on the screen. There's nothing about the difference between desktop and phone interfaces that says that you need to allocate colors differently!
I have actually four different Cocoa-based versions of Dali Clock now: there's the Mac desktop app; the Mac screen saver; the Mac "widget"; and now the iPhone version.
For the first three, that worked out fine: there's a DaliClockView, that just knows how to render the clock in an arbitrary rectangle. That's almost the entirety of the program. On top of that, the various other versions just do their boilerplate startup, and then instantiate a DaliClockView. The desktop app is a subclass of NSWindow that contains a single DaliClockView; the saver is a subclass of ScreenSaverView that contains a single DaliClockView; and the widget is a piece of HTML that embeds a plugin that in turn embeds the DaliClockView.
I didn't need any #ifdefs in DaliClockView to make the first three work. They actually shared the same .o file. It was all working nice and modularly until the iPhone came along and screwed the pooch.
Fair enough. Keep in mind that you're essentially writing for a new framework that just happens to run on top of some of the same underlying components as the other framework. UIKit does a lot of things totally differently from AppKit, owing to the interface and the lack of backwards compatibility to worry about. That admittedly makes it difficult to have a single, cross-platform project for both, though I also don't remember Apple promising such a thing.
I don't know why UIColor doesn't have NSColor's -get... methods. It's a smaller class and may be just a thin wrapper over CGColor, which Core Animation takes for its arguments. Some of UIKit's design is something Mac developers are envious of (e.g., no NSCells, windows are views, animation everywhere). Hopefully the two kits will converge more in the future. I hear iPhone OS 4 got Snow Leopard's C blocks extension.
But I'm not writing for a "new framework" because if I was, 85% of it wouldn't Just Work without changes. The differences are actually very small -- pervasive, annoying, arbitrary, and unnecessary -- but small.
If UIKit worked on the desktop, I probably would have just converted everything from AppKit to that in all versions. But that doesn't work either.
I guess it depends on what you're referring to as similar. The Foundation stuff is the same, but the two application frameworks have very different class hierarchies and behaviors, too many to list. I agree that compatibility could be made better, but I don't ascribe it to design deficiencies in the frameworks.
Things are slowly converging. I wouldn't even be able to port my app to the current iPhone OS because it doesn't have NSAttributedString, which now exists in iPhone OS 4 beta.