
Android ICS Will Require 16GB RAM To Compile

ozmanjusri writes "New smartphones may be lightweight, compact objects, but their OSs are anything but. Ice Cream Sandwich will need workstations with no less than 16 GB RAM to build the source code, twice the amount Gingerbread needed. It will take 5 hours to compile on a dual quad-core 2+GHz workstation, and need 80GB disk space for all AOSP configs. Android developers are also being warned to be cautious of undocumented APIs: 'In almost every case, there's only one reason for leaving APIs undocumented: We're not sure that what we have now is the best solution, and we think we might have to improve it, and we're not prepared to make those commitments to testing and preservation. We're not claiming that they're "Private" or "Secret" — How could they be, when anyone in the world can discover them? We're also not claiming they're forbidden: If you use them, your code will compile and probably run.'"
This discussion has been archived. No new comments can be posted.

  • Re:Of Course. (Score:3, Informative)

    by lolcutusofbong ( 2041610 ) on Sunday October 23, 2011 @08:42PM (#37813200)
    probably using the -pipe CFLAG.
  • not true (Score:5, Informative)

    by MrCrassic ( 994046 ) on Sunday October 23, 2011 @09:06PM (#37813324) Journal
    Here's what the article *actually* says:

    16GB RAM recommended, more preferred, anything less will measurably benefit from using an SSD.

    Emphasis mine. Still pretty beast, though.

  • by dwheeler ( 321049 ) on Sunday October 23, 2011 @09:09PM (#37813338) Homepage Journal
    Unless the build system is screwed up, recompiling after a change should be relatively fast. Usually source code is stored as lots of smaller files, and each file is compiled separately to produce a separate object file (e.g., .o). The next time a rebuild is requested, the system should notice what changed and only rebuild the needed parts. Some steps take the same time each run (e.g., a final link), but a rebuild shouldn't take anywhere near as long as a full build. There are lots of build tools, including make, cmake, and so on. If you use the venerable "make" tool, you might want to read Miller's "Recursive Make Considered Harmful". Cue the lovers and haters of "make", here :-).
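    The timestamp check described above can be sketched in a few lines (a minimal illustration of make-style dependency logic, not AOSP's actual build system; the file names are hypothetical):

```python
import os

def needs_rebuild(target, source):
    """Rebuild 'target' if it is missing or older than 'source' --
    the same timestamp test make applies to each .c/.o pair."""
    if not os.path.exists(target):
        return True
    return os.path.getmtime(source) > os.path.getmtime(target)
```

    A real build tool applies this test across the whole dependency graph, so one changed file triggers only its own recompile plus the final link.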
  • shitty /. summary (Score:5, Informative)

    by petermgreen ( 876956 ) <plugwash.p10link@net> on Sunday October 23, 2011 @09:18PM (#37813380) Homepage

    TFA: "5+ hours of CPU time for a single build, 25+ minutes of wall time, as measured on a workstation (dual-E5620 i.e. 2x quad-core 2.4GHz HT, with 24GB of RAM, no SSD)."
    /. Summary: "It will take 5 hours to compile on a dual quad-core 2+GHz workstation"

  • by Sycraft-fu ( 314770 ) on Sunday October 23, 2011 @09:24PM (#37813412)

    While it is a lot of RAM compared to what many systems have, it really isn't a big deal these days. 4GB DDR3 sticks are $25 or less each, and that is for high-quality RAM. Regular, consumer-grade LGA1155 boards support 4 of them. So for $100 you can have 16GB on a normal desktop system. The home system I'm typing this on has 16GB for that reason. It was so cheap I decided "Why not?"

    They can actually support more: with 8GB sticks you can have 32GB on a standard desktop, but those are still expensive.

    The enthusiast X79 LGA2011 boards coming out will have 8 sockets and thus handle 64GB. Of course, beyond that there are workstation boards, which cost a lot more, but not as much as you might first think.

    At any rate, 16GB is now a "regular desktop" amount of RAM. Standard boards, the likes of which you get in cheap ($1,000 or less) towers, support that much, and it only costs about $100 to get there. It is quite a realistic requirement for something high end.

  • by bucky0 ( 229117 ) on Sunday October 23, 2011 @09:40PM (#37813482)

    No, you can perform better optimizations if you know, for instance, that a function can be inlined. You can't get that if some of the uses are in other compilation units.

  • Re:Of Course. (Score:4, Informative)

    by kidgenius ( 704962 ) on Sunday October 23, 2011 @09:48PM (#37813516)
    Here's the original source over at Google Groups from JBQ
  • Re:Of Course. (Score:5, Informative)

    by PopeRatzo ( 965947 ) * on Sunday October 23, 2011 @10:02PM (#37813576) Journal

    And if you read that original source, you'll see that they are recommendations for building future development machines:

    -6GB of download.
    -25GB disk space to do a single build.
    -80GB disk space to build all AOSP configs at the same time.
    -16GB RAM recommended, more preferred, anything less will measurably benefit from using an SSD.
    -5+ hours of CPU time for a single build, 25+ minutes of wall time, as measured on my workstation (dual-E5620 i.e. 2x quad-core 2.4GHz HT, with 24GB of RAM, no SSD).

  • Re:Of Course. (Score:5, Informative)

    by evilviper ( 135110 ) on Sunday October 23, 2011 @10:28PM (#37813712) Journal

    To me, that sounds like it takes 5 hours after compiling the code in parallel. So if it was a single threaded compilation job, in theory, the task would take much much longer.

    Yes, it does SOUND that way... It's very "truthy" that way...

    Relying on /. summaries just makes you look like an idiot when you're just one quick and easy click away from the source. Surely, if you can't be bothered to put in that much effort, you must not have enough time to write up a response, either...

    Verbatim quote from TFA:
        "5+ hours of CPU time for a single build, 25+ minutes of wall time"
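    Dividing the two TFA numbers shows they are consistent rather than contradictory: 5+ hours of CPU time finishing in 25+ minutes of wall time means roughly 12 cores' worth of work running concurrently, which is plausible for the quoted dual quad-core with HT (16 hardware threads) at less-than-perfect efficiency. The figures are from TFA; the division is mine:

```python
cpu_seconds = 5 * 3600    # 5+ hours of CPU time (from TFA)
wall_seconds = 25 * 60    # 25+ minutes of wall time (from TFA)

# Effective parallelism: how many cores' worth of work ran at once.
parallelism = cpu_seconds / wall_seconds
print(round(parallelism))  # -> 12
```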

  • by Intropy ( 2009018 ) on Sunday October 23, 2011 @10:52PM (#37813820)
    Yes, there certainly are. The most obvious reason is code optimization. If your target device is something relatively light on resources, like a mobile phone, then you probably want to optimize very aggressively. All forms of optimization require context. For something like "false && statement", all the context required to optimize away the statement is very nearby. Something like the return value optimization needs to know about the entire function. So far we're considering the easy stuff. If you want to go all out and get into whole-program optimization, then some optimizations cannot be guaranteed to be safe without knowing the entire program.

    Now if "compile" refers to the entire build process, then we're also probably talking about some serious static analysis. Checking for things like "can this function ever throw?" or "is this code reachable?" or "is the memory allocated here always eventually freed?" also requires an awful lot of context to check. In the worst case each of these questions requires knowing all of the code to answer.
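    The "false && statement" case above can be illustrated with a toy dead-branch eliminator (a minimal sketch using Python's ast module, purely for illustration; real compilers like the ones in an Android toolchain do this at a much lower level):

```python
import ast

class DeadBranchEliminator(ast.NodeTransformer):
    """Drop 'if' branches whose condition is a literal constant --
    the local analogue of optimizing away 'false && statement'."""
    def visit_If(self, node):
        self.generic_visit(node)
        if isinstance(node.test, ast.Constant):
            # Keep only the branch that can actually run.
            live = node.body if node.test.value else node.orelse
            return live or None  # None removes the node entirely
        return node

def strip_dead_branches(source):
    """Parse source, remove constant-condition branches, re-emit code."""
    tree = DeadBranchEliminator().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)
```

    As the parent says, this only works because the whole condition is locally visible; proving a branch dead across compilation units needs whole-program context.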

  • Re:Of Course. (Score:5, Informative)

    by Calos ( 2281322 ) on Monday October 24, 2011 @06:06AM (#37815168)

    Mmm, no. Third-party modders do a lot of work and make some really awesome builds, with all kinds of customizations and new features. CyanogenMod, for instance. Far from working for a large company with resources, their developers are now actually being hired by big companies because of their freelance work.
