There are 15 million original iPads out there. Should everyone just throw them in the trash because Apple has decided they should no longer be supported three years later?
I never said anything of the sort.
But you shouldn't expect the latest software to run on it, either. And you shouldn't expect free bug fixes for years, either, and certainly not for $10. That was the point I was making. You bought something for $10. The price of the iPad itself is irrelevant, because we're talking about a single one of the many applications on the device. And besides, that's Apple, not in any way, shape, or form Lespaul. You bought an application for $10, probably a good while ago. You got that application, and some support for a while. Maybe not everything you wanted was fixed, or addressed, or improved (certainly), but... Dude, it was $10.
If you made the decision to standardize on a bunch of first-gen devices from Apple (buying any of which is against my personal rules, by the way)... That's kinda the price you pay. Yes, Apple drops support for hardware, and they typically drop it quicker than Microsoft does. Three and a half years or so is maybe a little on the low end for Macs, but it is within the margin, especially for an oddball first-gen device. And there's nowhere near 15 million first-gen iPads still in full-time use. If nothing else, they're all probably getting a little rough in the battery department.
Your thing still works. But you do not get the new shiny. That is not, by any means, unusual in software in general, and it is especially common in the Apple ecosystem (whether Mac, mobile, whatever).
As far as the separate legacy version goes, I've made this point a bunch of times (including way before this thread): to me, that's pretty useless. I don't like JRemote because it is a remote control; that is its least important function to me. I use that part a bit, and I love it when I do. But JRemote is a heck of a lot more than a simple, dumb remote control app. Still, hey, if he can just re-release the old code under a new name (perhaps with a tweak here or there, ripping out the streaming and such if that's causing problems), I say do that. If he makes you pay for it again, rather than releasing it for free, then you should probably get some new bug fixes and whatnot too, for a few months or maybe a year.
The rest...
Even if I accept that the functionality of JRemote is "so simple any old thing could do it" (which I reject out of hand, by the way)... Ignoring that, and talking generally about software development on iOS: older hardware is difficult to support for a whole bunch of practical reasons. Do you have a pile of the old devices around? Do you regression test every change on every device? Different devices over the years had wildly different amounts of RAM, for example. This can (and often does) lead to low-memory "crashes" on older, much more RAM-starved devices (not really crashes, but iOS going "nope, you get no RAM anymore"). When might this happen? Well, when you're loading and showing a grid of thousands upon thousands of thumbnail images, which you have to cache so that the user can scroll around and have "the snappy" (and because iOS gives your app no swap, so everything has to live in RAM). Oh yeah, and us crazy users are going to feed it huge data sets too, and you have no control over that. That's a pretty good example right there.
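To make the thumbnail-caching bind concrete, here's a minimal sketch, in Python just for brevity (a real iOS app would reach for something like NSCache or its own image cache): a tiny LRU cache with a device-dependent item limit that evicts the stalest thumbnails and dumps everything on a low-memory signal. All names and limits here are hypothetical, not from any actual app.

```python
from collections import OrderedDict

class ThumbnailCache:
    """LRU cache sketch: evicts the least-recently-used thumbnails
    once a device-dependent limit is hit, the way an app must shrink
    its working set on RAM-starved hardware (hypothetical design)."""

    def __init__(self, limit):
        self.limit = limit           # smaller on 256MB-class devices
        self._items = OrderedDict()  # insertion order == recency order

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as recently used
        return self._items[key]

    def put(self, key, thumb):
        self._items[key] = thumb
        self._items.move_to_end(key)
        while len(self._items) > self.limit:
            self._items.popitem(last=False)  # drop the stalest entry

    def memory_warning(self):
        # On a low-memory signal, dump everything rather than get killed.
        self._items.clear()
```

The point of the sketch: scrolling stays snappy only because thumbnails stay resident, so the cache limit has to be tuned per device class, and a huge user library blows past any limit you pick on old hardware.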
This is just one small example of how coding for old devices quickly becomes extremely challenging in practice. The same issues apply to PCs too; they've just been stagnant, so the window is way longer. It is about 1000x worse on mobile devices (especially older ones that were extremely resource-limited to save power). The current platform (all the hardware, not just the A7) is not just faster than, but also wildly different from, previous-generation platforms. The OS is different. The API is different. The CPU and GPU are wildly different. It isn't just about "the same junk, faster." They were doubling performance while staying in the same power budgets (or reducing them, in Apple's case, typically). That takes core instruction set and platform design changes, because you can't just throw power at it like Intel did in the late '90s and early 2000s. To support all this old, sometimes buggy, discontinued hardware, you are forced to
limit what you can do, to serve the least common denominator. Or you build in a million "special cases" (which turns your code into a monster). You can't adequately test, or even find bugs, without that stack of old devices running a range of OSes. And since you aren't using them as your daily driver, you probably won't learn about the bugs until someone is angry. But then do you rip out the new feature you just added because you can't figure out how to write it so that the iPad 1, with its 256MB of RAM (most of which the modern OS already eats up), doesn't crash all the time?
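To show how those "special cases" pile up, here's a hedged sketch (Python again, with entirely made-up numbers and thresholds; a real app would gate on actual device and OS checks) of the kind of device-class branching that accumulates. Every supported RAM tier and OS version grows its own branch, and every new feature multiplies them:

```python
def thumbnail_budget(ram_mb, os_major):
    """Pick a thumbnail cache budget per device class.
    All tiers and numbers are hypothetical illustrations."""
    if ram_mb <= 256:      # first-gen iPad class: almost nothing left
        budget = 50
    elif ram_mb <= 512:    # mid-generation devices
        budget = 200
    else:                  # current hardware
        budget = 2000
    if os_major < 6:       # old OS: assume a long-since-fixed decoder bug
        budget //= 2       # ...so cache less to steer around it
    return budget
```

One function, and it already has four paths to test on four device/OS combinations; add a second feature with its own gating and the test matrix doubles again. That's the monster.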
And so you're presented with this choice: You have an aging code base. You need to go back and re-write a big hunk of the UI anyway to stay current. To really do a good job, you need to re-think a bunch of the paradigms of the application as well, and there is also a bunch of forward pressure to stay current with the platform (especially to ease future device support, which people are going to want the instant new devices are released). And, by the way, the older APIs, like all software written by humans, are absolutely riddled with bugs that you already have a bunch of special cases for. Those bugs have since been fixed, but you can't use the fixes, so this creaky, hacky, hard-to-change code is gumming up the works. And you can't support that old stuff adequately anyway. So, what do you do?
If you're going to re-do it anyway, you target the latest (or close to the latest) methods, so that
next time, you're not starting so far back. If it is worth doing, it is worth doing right. Every developer has to make the call for themselves about when it is worth doing.
* For the record, generally... Acting like "just wiring up the UI" is the easy part of development (especially on a modern mobile device that is all UI) betrays naivety about software development. It isn't your fault if you haven't tried it before. It is a very common misconception among people who aren't developers (especially power users who don't program). It seems like it is just graphics and pasting things together, like putting a new theme on a PowerPoint slide deck. But... It isn't. The UI is hard. It is very hard. Often way more difficult than the other stuff. First of all, it is extremely time-consuming and usually involves a ton of math. But also... it is where the beauty of your underlying object model meets the messy, confusing, and unpredictable "world" of the darn user. If your goal is a really good UI, it is usually just the opposite: the back end is the easy part.