The Care and Feeding of the Android GPU
bonch writes "Charles Ying argues that Android's UX architecture has serious technical issues because, unlike the iPhone and Windows 7, Android composites the interface in software to retain compatibility with lower-end devices. Additionally, Android runs Java bytecode during animation, and garbage collection blocks the drawing system. Google engineers dismiss the desire for hardware-accelerated compositing, citing more powerful hardware and the avoidance of temporary objects as the solution to collection pauses, but Ying argues that this will just lead to [a lower] average battery life compared to devices like the iPhone."
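The "avoidance of temporary objects" advice the summary mentions is the classic recommendation of not allocating inside a per-frame drawing path. A minimal plain-Java sketch (names are hypothetical, no Android APIs) of that idea: reuse one preallocated object each frame instead of creating a new one, so the animation produces no garbage for the collector to pause on.

```java
// Sketch of the "avoid temporary objects" advice: the draw loop
// reuses a single preallocated Point instead of allocating per frame.
public class DrawLoopSketch {
    static final class Point {
        float x, y;
    }

    // Allocated once, up front; reused every frame.
    private final Point scratch = new Point();

    float drawFrame(float t) {
        scratch.x = (float) Math.cos(t); // reuse, no "new Point()"
        scratch.y = (float) Math.sin(t);
        return scratch.x * scratch.x + scratch.y * scratch.y; // ~1.0
    }

    public static void main(String[] args) {
        System.out.println(new DrawLoopSketch().drawFrame(0.5f));
    }
}
```

The trade-off is the usual one: pooling avoids GC pauses during animation but makes the code less idiomatic and the object's lifetime manual.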
And yet, somehow, it's WORKING!? (Score:3, Interesting)
OMG Android is making a play that's designed to let lower cost, highly capable devices subsist in the marketplace? How horrible is that?
I switched from Evil Major Network (TM) to Metro PCS a little over a year ago, and haven't regretted it for a SECOND. It is so nice, getting what you paid for, rather than wondering how much you'll be overcharged for what you aren't even sure you got... it's the ONLY way to survive teen children!
And even Metro PCS, the low-price leader, offers a couple of Android phones that are highly capable and useful. For less than $300 I was able to upgrade my wife's shatty phone to a nice, capable Android phone with GPS, navigation, browser, email, games, full-screen YouTube, Facebook, Marketplace et al (AKA "the works") and a good, full day of battery life. She LOVES the phone! In case you are wondering, it was the LG Optimus. And the network cost went from $40/month to (gasp!) $50...
Talk about having your cake and eating it too?
Say what you want, Android's strategy is working, as demonstrated by its skyrocketing market share.
Re:Doesn't Optimizing for GPU Exacerbate Fragmentation? (Score:5, Interesting)
Debian seems to handle it just fine, and (based on gcc) they're compiling for 14 different platforms* and 3 different kernels (Linux, Hurd, FreeBSD).
Is it that difficult to set up a similar thing in the app store? "Oh, it looks like you're running an ARMv5 CPU and a PowerVR GPU. We'll give you this binary."
Or you do what Apple has always done with fat binaries: 68k to PPC, PPC to PPC64, PPC* to i386, i386 to x86_64. You could have one single fat binary that supported ppc, ppc64, i386, and x86_64, and it "just worked"; they were literally checkboxes in Xcode. How many GPU and CPU combinations are there for Android? This isn't low-level assembly code; it's compiled Java.
*alpha amd64 armel hppa hurd-i386 i386 ia64 kfreebsd-amd64 kfreebsd-i386 m68k mips mipsel powerpc s390 sh4 sparc sparc64
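Either way, the dispatch the parent describes is straightforward. A minimal plain-Java sketch (the library names are hypothetical, purely illustrative): look at the reported CPU architecture and pick the matching native build, with a portable fallback for anything unrecognized.

```java
// Sketch of per-architecture binary selection, as a store (or a fat
// package's loader) might do it. Library names are made up.
public class AbiDispatch {
    static String pickNativeLib(String arch) {
        switch (arch) {
            case "armv5": return "libfoo-armv5.so";   // hypothetical names
            case "armv7": return "libfoo-armv7.so";
            case "i386":
            case "x86":   return "libfoo-x86.so";
            default:      return "libfoo-generic.so"; // portable fallback
        }
    }

    public static void main(String[] args) {
        // On a real device, the key would come from the runtime:
        System.out.println(pickNativeLib(System.getProperty("os.arch")));
    }
}
```

A fat binary just moves the same switch from the store's servers into the package loader; the selection logic is identical.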
Re:I've complained about this more times than I can count (Score:5, Interesting)
And you say that as an unbiased observer with no axe to grind, right? :-)
Right. I still own all my Google shares. However, I am now properly disillusioned about a number of Google myths. But don't trust me: ask any Googler, former or otherwise. In the latter case, make sure to do it out of earshot of other Googlers.
There are smart people at Google, and if they are really smart they learn early to keep their heads down. This seems to be the main sequence for large tech companies. Microsoft is far advanced on that path and Google seems more than a little determined to follow. The stack ranking system is nearly a carbon copy of Microsoft's, which in turn was copied from GE, and look how well that worked out. The result is inevitable degradation of the engineering culture. Now, warning about the negative consequences is not the same as axe grinding, quite the opposite.
Re:Java, the original sin (Score:4, Interesting)
Interesting theory. I've been working with Java since version 1.0, on devices as slow as an embedded 100 MHz device with 128 MB RAM, and I never remember GC taking seconds.
Just because I was curious, I pushed our Java application (a data integration engine) to use both CPUs at 100% and dump the garbage collection stats to disk. Here's a typical sample:
133.091: [GC 30567K->10559K(60160K), 0.0052000 secs]
133.447: [GC 34943K->10347K(64832K), 0.0036360 secs]
133.873: [GC 39659K->10347K(63872K), 0.0028940 secs]
134.286: [GC 38699K->10531K(63104K), 0.0033140 secs]
134.674: [GC 37923K->10263K(61952K), 0.0019690 secs]
135.072: [GC 36759K->10351K(61184K), 0.0024490 secs]
135.462: [GC 36015K->10339K(60352K), 0.0022610 secs]
135.797: [GC 35171K->10739K(59840K), 0.0039780 secs]
136.134: [GC 34803K->10679K(59008K), 0.0033120 secs]
136.479: [GC 33975K->10567K(58048K), 0.0029140 secs]
136.801: [GC 33159K->10647K(57472K), 0.0026420 secs]
Note that this is without incremental garbage collection enabled. It might be possible for a graphics-intensive application to notice pauses of a few milliseconds, but something tells me that just isn't the case.
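Numbers like these can also be gathered from inside the JVM rather than by parsing logs. A small sketch using the standard java.lang.management API: churn some short-lived objects, then read cumulative collection counts and pause time from the registered GarbageCollectorMXBeans.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Churn garbage, then report cumulative GC counts and pause time
// via the standard GarbageCollectorMXBean interface.
public class GcStats {
    static String stats() {
        for (int i = 0; i < 500_000; i++) {
            byte[] junk = new byte[512]; // dropped immediately: pure garbage
        }
        long collections = 0, pauseMs = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            collections += gc.getCollectionCount(); // may be -1 if unsupported
            pauseMs += gc.getCollectionTime();      // cumulative milliseconds
        }
        return "collections=" + collections + " totalPauseMs=" + pauseMs;
    }

    public static void main(String[] args) {
        System.out.println(stats());
    }
}
```

The reported time is cumulative across all collections, so dividing it by the collection count gives an average pause roughly comparable to the per-GC figures in the log above.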
Not about battery life (Score:5, Interesting)