
Intel Salivates Over Virtual World Processing Demands

CNet has up an article looking at the lucrative virtual world market for processor companies. The Intel Developer Forum, held in San Francisco this week, highlighted the opportunities for selling hardware to both consumers and vendors in the VW marketplace. "[Chief Technology Officer Justin Rattner] showed statistics that indicated a PC's processor bumps up to 20 percent utilization while browsing the Web, while its graphics processor doesn't even break above 1 percent. But running Second Life--even with today's coarse graphics--pushes those to 70 percent for the main processor and 35 to 70 percent for the graphics processor, he said. The Google Maps Web site and Google Earth software pose intermediate demands. Running a virtual worlds server is vastly more computationally challenging, though, when compared with 2D Web sites and even massively multiplayer online games such as Eve Online. An Eve Online server can handle 34,420 users at a time, but Second Life maxes a server out with just 160 users."
  • by jollyreaper ( 513215 ) on Thursday September 20, 2007 @03:24PM (#20687281)
    ...is the lack of optimization in the code. "Yes," says Br'er Intel. "Please throw more processors at the problem! Optimization is for pussies!"
    • Agreed. What's so different about Second Life that Eve Online can get 200 to 300 times as many people on its servers? It definitely sounds like the code to me.
      • by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Thursday September 20, 2007 @03:39PM (#20687507) Homepage
        Nah, the article is just plain wrong and uses differing meanings for "server".

        SL - "Server" appears to mean a single CPU or box
        EVE - "Server" is used to describe the entirety of the Tranquility cluster, which has at least 150-200+ dual or quad-core blades that handle the solar systems, plus some serious database servers.

        EVE can handle around 150-200 users on a single machine before things start getting laggy; things get massively painful in the 500-700 range, and much above that, nodes start dropping. EVE has an architectural limitation in that processing for a given solar system cannot be spread across multiple CPUs, so if a single solar system in EVE has 200+ players, they're all on the same CPU. Meanwhile, 10 systems with 5 users each will likely share a CPU, and 50 systems with zero users probably share one as well.
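
        A minimal sketch of the constraint described above (hypothetical numbers, Python rather than CCP's actual stack): each solar system is pinned to exactly one CPU, so one busy system cannot be split up, while many quiet systems can share a CPU.

            import heapq

            def assign_systems(systems, num_cpus):
                """Greedily pack (system, player_count) pairs onto CPUs by load."""
                cpus = [(0, i) for i in range(num_cpus)]   # min-heap of (load, cpu)
                heapq.heapify(cpus)
                placement = {}
                # Place the busiest systems first so they land on the emptiest CPUs.
                for name, players in sorted(systems, key=lambda s: -s[1]):
                    load, idx = heapq.heappop(cpus)
                    placement[name] = idx
                    heapq.heappush(cpus, (load + players, idx))
                return placement

            # A 600-player system still lands on one CPU: the bottleneck above.
            systems = [("Jita", 600)] + [("quiet-%d" % i, 5) for i in range(10)]
            print(assign_systems(systems, 4))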
    • Re: (Score:2, Insightful)

      Well, you do not know what type of background processing Eve is doing and what type Second Life is doing. If the server for Eve is just recording your position and health while Second Life is recording position, health, money, surroundings, friends online, etc., then one is doing more processing than the other. Optimization can only fix so much. As the problems get harder you have to increase the number of processors and their speed. Just like you can not run Fedora 6 in a GUI on a computer with 64mb of ram and 800mhz. Maybe Fedora should optimize their code?
      • Wouldn't removing the constant calls to check your money/friends online/etc. be considered optimization? I don't believe those things constantly need to be checked. So basically, you just stated Eve is optimized while SL isn't.
        • Re: (Score:3, Insightful)

          by Unoti ( 731964 )
          Users can create objects, and put scripts into those objects. They routinely do this. All those scripts run concurrently. So while it might not really be 'necessary' to run those scripts, they make the world what it is. Say I have a dragon avatar. It might seem silly to have 80 scripts running on my avatar's body at a time, but those scripts let me move like a dragon, blow smoke rings out of my nose at regular intervals, and so on. The content is created by the users, so that reduces the kind of optim
          • Wait a second, you can write scripts that are executed on the server? What's to keep you from blowing 10000 smoke rings per second and crashing the server?
            • by jandrese ( 485 )
              Your scripts get a limited number of cycles in which to execute during each "tick" of the server; however, it was still trivial (at least the last time I messed with it) to create self-replicating objects that quickly crash the server so hard that they force a rollback. A lot of developers have even done that accidentally, and griefers use it from time to time. However, it's not a particularly smart thing to do, since crashing the server is an easy way to be banned from SL (or at least put out in the cornfield)
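
              A minimal sketch of that per-tick cycle budget (Python generators standing in for LSL scripts; the budget numbers are made up, not Linden Lab's): each script gets a bounded slice of work per server tick, so a script blowing 10000 smoke rings per second gets throttled rather than monopolizing the sim.

                  from collections import deque

                  TICK_BUDGET = 1000       # hypothetical total cycles per server tick
                  SCRIPT_BUDGET = 50       # hypothetical cap for any single script

                  def run_tick(scripts):
                      """scripts: deque of generators; each yield costs one cycle."""
                      spent, rounds = 0, len(scripts)
                      while scripts and rounds and spent < TICK_BUDGET:
                          script = scripts.popleft()
                          rounds -= 1
                          used = 0
                          try:
                              while used < SCRIPT_BUDGET and spent < TICK_BUDGET:
                                  next(script)         # run one step of the script
                                  used += 1
                                  spent += 1
                              scripts.append(script)   # unfinished: resume next tick
                          except StopIteration:
                              pass                     # script finished its work

                  def smoke_rings():
                      while True:
                          yield                        # one scheduled step per ring

                  run_tick(deque(smoke_rings() for _ in range(3)))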
      • by Amouth ( 879122 )

        Just like you can not run Fedora 6 in a GUI on a computer with 64mb of ram and 800mhz. Maybe Fedora should optimize their code?
        If that is true, then yes.

      • > Just like you can not run Fedora 6 in a GUI on a computer with 64mb of ram and 800mhz.
        > Maybe Fedora should optimize their code?

        I ran Windows 98 on such a machine for a long time with no problems.
        Sounds to me less like an 'optimisation' problem and more like Fedora could do with fewer fancy, unnecessary processor-eating computations. -_o
      • Just like you can not run Fedora 6 in a GUI on a computer with 64mb of ram and 800mhz.

        You can't? Even if you run it headless? Why the hell not?
        • by argent ( 18001 )
          OK, I missed the GUI, but even then, if you ran Window Maker or twm it should work. You don't have to use Enlightenment or whatever insane memory-hog window manager they use.
  • As there is only one gaming instance in Eve Online.

    Now that I think of it, Second Life is a single instance too, so in this case, to make a fair comparison, it would be more apt to say that Eve Online handles around 100 users with what could be considered normal performance on a single (solar system) server.
    • Eve is one large cluster of servers; certain places like Jita (the major trade center) tend to have more of the cluster dedicated to them than other parts.
      • Re: (Score:3, Interesting)

        No. I actually happen to know a great deal about their setup, and that solar system handles around 600 users on average, with great strain. Due to the bad architectural legacy Eve Online has (they started developing around 1998), only one machine is responsible for a given solar system, and their server-side code doesn't have multiprocessor support.

        This causes lots of lag in high-load, high-usercount systems, as they cannot scale by putting in more hardware with increased demand, to the point which I wouldn't call
  • Errors in article? (Score:4, Informative)

    by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Thursday September 20, 2007 @03:31PM (#20687409) Homepage
    I find it hard to believe that SL doesn't allow more than 160 users to be logged in simultaneously. 160 users per CPU or per chassis blade, maybe, but 160 total all at once, or even 160 per shard?

    EVE does not have 34,000 people on one server. It has one shard, which people call a "server", but the Tranquility cluster is some SERIOUS hardware. I think they're up to something like 160-200 dual- or quad-core blades, at least.
    • The article could be accurate so long as SL and EVE have their own definitions of what a 'server' is. The more important question, IMO, is how many people they can support for $X in hardware.

      -Rick
    • I think they're up to something like 160-200 dual or quad-core blades, at least.

      So, a small to medium university cluster? I doubt the server purchases for Second Life or Eve Online over the last year amount to a tenth of a percent of Intel's weekly sales. It's the *clients* they are thinking about.

      Hell, Intel should be *donating* CPUs to EVE Online then. If buying a company 200 blades means that 34,000 users upgrade their game boxen regularly, the investment probably pays for itself.

    • by Caerdwyn ( 829058 )
      Second Life is not sharded. It's clustered (they refer to the cluster as "The Grid"). Everyone is on the same grid (though there is a teen grid and a test grid, they aren't used much, and they are 100% independent except for authentication/transaction systems).

      In Second Life, the game world is broken up into "sims"... sections of the virtual world that represent 256 x 256 in-game "meters". Each sim has its own master process, two of which run on each server within a cluster... everything that goes on within
      • Because Second Life is connected directly to your credit card read-write, there are significant hazards associated with a client built from any source other than what Linden Lab has vetted.

        The recent URL exploit on Windows (it was a command-line parsing bug, so it wouldn't have impacted Mac or Linux, since applications there don't parse their own command lines the same way) wasn't from any open-source build. :p
        • I never claimed that any particular exploit to date was in the open-source client. It is, however, a fact that malicious code embedded in an unchecked build of the client can drain your Lindens, all at once or a few at a time.

          • Buy $9.95 worth of Lindens. Why $9.95? It's the subscription amount, so it won't look suspicious at first glance, assuming you ever look at your transaction history (most people don't).
          • Suppress the purchase confirmation dialog while sending the confirmation.
          • Transfer the Lindens to the fraud recipient.
          • by argent ( 18001 )
            It's certainly possible, and a modicum of care to make sure you're getting a client from someone who has standing in the community is wise, but demanding that Linden Lab vet every client is overkill.

            All it would take is one person noticing extra transfers in their transaction history to totally expose something like that, and with a direct link from the modified client to the distributor it would be relatively easy to track down.

            And it's a lot harder to hide changes in patches to a client than you think, especial
      • by jafuser ( 112236 )
        Because of the very high overhead of script processing, the pipe dream of player-created "mini MMOs" has never materialized.

        The main limitations I encountered in this area were:
        • Scripts are limited to 16 kilobytes of memory in a very high level language which is not very memory efficient
        • Communications between objects are very crude and unreliable (see the sketch after this list)
        • Communications to the outside world are very unstable and bandwidth-limited, preventing you from developing a reliable "core" server which controls the global aspects
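
        A minimal sketch of one way scripters work around the second limitation (Python rather than LSL; unreliable_send is a made-up stand-in, not an SL API): retransmit each message until delivery is confirmed.

            import random

            def unreliable_send(msg):
                """Stand-in for a lossy object-to-object channel; drops ~30%."""
                return random.random() > 0.3   # True if the message got through

            def send_reliably(msg, max_retries=10):
                """Retry until the (simulated) channel delivers the message."""
                for attempt in range(max_retries):
                    if unreliable_send(msg):
                        return attempt + 1     # delivered; tries used
                raise TimeoutError("gave up on %r after %d tries" % (msg, max_retries))

            print(send_reliably("door: open"))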
    • Sounds like it. (Score:3, Informative)

      by argent ( 18001 )
      I find it hard to believe that SL doesn't allow more than 160 concurrent users to log in simultaneously.

      There's no "shards". The world is contiguous: you pause less than a second crossing from one sim to another and it's even possible to fly planes across multiple region boundaries at 25 meters a second (hitting a new region every 7-10 seconds depending on the direction you're flying) without losing control... you *can* still "outfly" the sims and crash but it's gotten a lot better than it has been.

      Typicall
    • by jafuser ( 112236 )
      Generally, each 256x256 meter region of SL runs on its own core (it used to be one region per CPU prior to multicore hardware). This limits each region to about 40 or so users, but the entire grid of regions can theoretically scale indefinitely. In reality, though, there are grid-wide services that need to work too (e.g., the asset server), and those have some serious problems scaling with a growing population.

      Unfortunately, SL was developed in a very "duct tape and baling wire" style early on, so
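
      A minimal sketch of the partitioning described above (illustrative only, not Linden Lab's code): world coordinates map to a 256x256 m region, and each region hashes to one core somewhere in the grid.

          REGION_SIZE = 256  # meters per region edge

          def region_for(x, y):
              """Map world coordinates (meters) to a region grid cell."""
              return (int(x) // REGION_SIZE, int(y) // REGION_SIZE)

          def core_for(region, num_hosts=100, cores_per_host=4):
              """Hash a region to a (host, core) pair; hypothetical layout."""
              h = hash(region)
              return (h % num_hosts, (h // num_hosts) % cores_per_host)

          print(region_for(300.5, 1020.0))   # -> (1, 3)
          print(core_for(region_for(300.5, 1020.0)))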
  • Wirth's Law (Score:4, Funny)

    by pizza_milkshake ( 580452 ) on Thursday September 20, 2007 @03:32PM (#20687421)
    "Software is decelerating faster than hardware is accelerating."
    http://en.wikipedia.org/wiki/Wirth's_Law [wikipedia.org]
    • by jo42 ( 227475 )
      I blame it all on OOP (e.g. Java) programming...
      • by sg7jimr ( 614458 )
        While Object Oriented Programming is an easy target because it's an unnecessary-fluffy-overhyped process layer put on top of traditional structured programming (that did the job quite well when followed), and is a process layer which tends to overcomplicate programming for no real benefit and in fact is counterproductive because it hides how stuff actually works and gets in the way of debugging and optimization...

        (Yes the bloated and inefficient sentence structure is intentional and reflects the topic)

        I thi
  • by Sciros ( 986030 ) on Thursday September 20, 2007 @03:36PM (#20687465) Journal
    from the put-your-cock-back-in-your-pants department

    The other day Kohls ... business peoples.. got together and talked about how people wear way more clothes when they go outside than when they stay at home. "This whole 'real world' is frigging nuts as far as how much clothing the average person needs to wear when being active in it." Turns out that performing simple tasks like scratching one's belly or sitting around doing jack squat requires no more than a pair of shorts. But demanding real world tasks like walking outside and buying groceries requires no less than 200% more clothing. "We are gonna make a killing with this new realization," said a Kohls business dude to a hobo on the street pretending to be a news correspondent.
    • by vux984 ( 928602 )
      Turns out that performing simple tasks like scratching one's belly or sitting around doing jack squat requires no more than a pair of shorts.

      Uh-oh. /looks around for shorts...

      At least it explains why my belly scratching was all laggy. I didn't meet the requirements.
      • by Sciros ( 986030 )
        Hehehe that's right I don't want to picture naked slashdotters sitting around at the compootar scratching their hairy bellies. It's bad enough with just shorts.
  • One of the core components of AI is that it runs a virtual world that is an imagination of our real world. It doesn't have to be complete; it only has to know enough to get around. It's funny how games spur the development of faster computers. Hasn't it been this way ever since arcade games? All those quarters didn't go to waste. At least that's what I like to think.
  • A single EVE server ("node") handles about 700 users. The EVE universe ("cluster") handles 30,000+ users.
  • PC's processor bumps up to 20 percent utilization while browsing the Web.


    Web sites without Flash, maybe.

    The "processor" vs. "coprocessor" arguments has been going on forever. Meanwhile, people like me are still happily running Pentium3 systems at home at probably will for the next 5 years.
  • by Tom ( 822 )
    160? On the server, where you don't even have to render graphics? Either they're running 160 self-aware AIs on there, or the SL server code sucks so badly it might as well have been written in Visual Basic.

    • Re: (Score:3, Informative)

      by cowscows ( 103644 )
      The way SL works (per my second- or third-hand reading) is that the whole world (The Grid) is made up of a bunch of squares of virtual land (sims). Supposedly each sim is its own server/blade/whatever. That sim is responsible for everything that happens within its virtual land space. This includes dealing with all of the players who are currently in that sim, but also all the interactions of physical objects within that sim, and, probably more importantly, all of the scripted objects within that sim. When
    • 400, actually. That's 4 regions (called sims) on a 4-core server with up to 100 avatars per region.

      That's with each sim doing concurrent physics calculations for 100 avatars interacting concurrently with 15,000 unique objects in a 256x256x768 meter simulated volume, with each avatar running up to 1000 concurrent scripts. Anything from 10 to 1000 objects are independent actors that have to be taken into account for object-object collisions within a 1/45th of a second quantum. Maybe 1000 objects are running the
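
      For scale, a minimal sketch of the kind of loop that workload implies (an illustration, not Havok): a fixed 1/45 s quantum, with a spatial-hash broad phase so each actor is only tested against nearby objects instead of all 15,000.

          from collections import defaultdict

          TICK = 1.0 / 45.0   # simulation quantum from the figures above
          CELL = 8.0          # hypothetical broad-phase cell size, meters

          def cell_of(pos):
              return (int(pos[0] // CELL), int(pos[1] // CELL), int(pos[2] // CELL))

          def broad_phase(objects):
              """objects: dict name -> (x, y, z); returns candidate collision pairs."""
              grid = defaultdict(list)
              for name, pos in objects.items():
                  grid[cell_of(pos)].append(name)
              pairs = set()
              for bucket in grid.values():
                  for i in range(len(bucket)):
                      for j in range(i + 1, len(bucket)):
                          pairs.add((bucket[i], bucket[j]))
              return pairs

          objs = {"avatar": (10.0, 12.0, 30.0),
                  "chair": (11.0, 13.0, 30.5),
                  "far-wall": (500.0, 0.0, 0.0)}
          print(broad_phase(objs))   # only the nearby pair survives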
      • by Tom ( 822 )
        Ok, that makes more sense now.

        So what they badly need is more sensible physics, and limits on what the amateurs do. Not limits in the sense of restricting what they can do, but, say, some automatic creation of collision zones, bounding boxes, etc.

        Or simply a better system: 90% of the objects everywhere I went in SL don't have any visible physics. They're just walls, for example.
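
        A minimal sketch of the automatic bounding-box idea (illustrative; not SL's builder API): derive an axis-aligned box from an object's vertices, so a plain wall can be collided against cheaply with no author-supplied physics hints.

            def bounding_box(vertices):
                """vertices: iterable of (x, y, z); returns (min_corner, max_corner)."""
                xs, ys, zs = zip(*vertices)
                return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

            wall = [(0, 0, 0), (4, 0, 0), (4, 0.2, 3), (0, 0.2, 3)]
            print(bounding_box(wall))   # -> ((0, 0, 0), (4, 0.2, 3))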
        • by argent ( 18001 )
          Or simply a better system: 90% of the objects everywhere I went in SL don't have any visible physics. They're just walls, for example.

          More than 90%, surely; 10% would be 1500 physical objects in a sim, and that's already a pretty heavy load. Even the non-physical ones still have to be included in physics calculations.

          I don't know what kinds of internal optimizations Havok (the physics engine) performs, but the point is that it's not able to get any useful guidance from the builders in doing so.

        • by jafuser ( 112236 )
          You hit on a good point here. Most people in SL are amateurs. They don't know how to optimize, or are unwilling to expend the effort. These people don't have the same skill as a team of professional game designers, who tweak every polygon for maximum speed.

          Even those in SL with some skill are limited by the tools that they are given.

          Professionally designed games take into account the maximum number of movable objects that the physics engine will have to deal with, and they probably design for a maxim
        • They are working on a better system. Check out the Architecture Working Group: http://wiki.secondlife.com/wiki/Architecture_Working_Group [secondlife.com] and a comparison (with pictures!) of the current setup and what they hope to transition to: http://wiki.secondlife.com/wiki/Proposed_Architecture [secondlife.com] Tao Takashi reported this info, but spent more time focusing on the fact that they are trying to develop the whole architecture openly. If done right, I think it could even help games like WoW and EVE perform better (imagine the performance improvement if ever
  • I'm surprised they're not trying to capitalize more on this. The piece about the "hard science" of gaming [slashdot.org] was mostly fluff, but it did highlight an interesting point: if you want ultra-realistic graphics and physics, you need a crapton of CPU power. Intel, or some other enterprising folks with a lot of computers hanging around, should take up the challenge.

    When I play WoW (and I use "I" figuratively, since I don't actually), my computer doesn't have to process everything to do with Azeroth. I let Blizzard

    • by pafein ( 2979 )

      Why not take this one step further, and farm out the physics and graphics processing to a remote supercomputer cluster?
      Bandwidth.
    • Why not take this one step further, and farm out the physics and graphics processing to a remote supercomputer cluster?

      Let's say I play a game where the goal is to knock down a building. I want every brick, tile and support beam in that building to be represented by an object that is controlled by a physics engine, which in turn will be able to simulate every stress, strain and force at work. My CPU certainly can't do that-- but if the CalculatePhysics() routine farms out to a beowulf-whatever-- returning to my CPU only the resultant vectors-- maybe it's doable.

      Isn't that going to put significantly more load on the network code for the game? (It depends on how many actors are actually animated on your end.) I would imagine that having a server send realtime status on the thousands of bits of your building as they fly around and interact... well, that would need one fat pipe. If more than one player is viewing the scene, the outbound traffic multiplies per player. It gets even worse when you factor in players interacting with the scene you're ta
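
      Rough arithmetic behind that objection, with assumed numbers: streaming per-object state for a collapsing building scales with object count, tick rate, and viewer count.

          objects    = 10_000   # bricks/tiles/beams in flight (assumed)
          state_size = 24       # bytes: position + velocity as six floats (assumed)
          tick_rate  = 20       # state updates per second (assumed)
          players    = 4        # viewers of the same scene

          bps = objects * state_size * tick_rate * players * 8
          print("%.0f Mbit/s" % (bps / 1e6))   # ~154 Mbit/s before any compression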

    • by mikael ( 484 )
      Some companies did propose that: you would have a thin client which connected to the server, and the server would render the final frame and send it (or just the changes) back down to the client. This worked with itty-bitty 3D graphics windows on an X window server, but on a full-screen HDTV system you would need to do full movie-style compression on the data. Since the latest DVD compression methods take hours to run and reference the last sixteen frames or more, this isn't practical just now.
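
      Back-of-envelope support for the compression point (assuming 1080p, 24-bit color, 30 fps): an uncompressed full-screen HD stream is far beyond a home pipe, so heavy, slow-to-encode video compression would be mandatory.

          width, height, bytes_per_px, fps = 1920, 1080, 3, 30
          bps = width * height * bytes_per_px * fps * 8
          print("%.2f Gbit/s uncompressed" % (bps / 1e9))   # ~1.49 Gbit/s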
    • by dbIII ( 701233 )

      Let's say I play a game where the goal is to knock down a building. I want every brick, tile and support beam in that building to be represented by an object that is controlled by a physics engine

      No you don't. You want to model it with the coarsest objects you can get away with. The mesh does not have to be of uniform size and shape - triangles work as well as rectangles, and tetrahedra as well as rectangular prisms. Also, the shape of the model can change over time, for when you need to model a single brick.
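
      A minimal sketch of that adaptive coarseness in 2D (illustrative only): model the wall as large blocks and split a block into smaller ones only where something hits it, so single bricks exist only near the impact.

          def split(block):
              """Split one (x, y, w, h) block into four equal children."""
              x, y, w, h = block
              hw, hh = w / 2, h / 2
              return [(x, y, hw, hh), (x + hw, y, hw, hh),
                      (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

          def refine(blocks, impact, min_size=0.25):
              """Recursively split any block containing the impact point."""
              out = []
              for b in blocks:
                  x, y, w, h = b
                  hit = x <= impact[0] < x + w and y <= impact[1] < y + h
                  if hit and w > min_size:
                      out.extend(refine(split(b), impact, min_size))
                  else:
                      out.append(b)
              return out

          wall = [(0.0, 0.0, 8.0, 8.0)]
          print(len(refine(wall, (1.0, 1.0))))   # -> 16 blocks, fine near the impact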

  • If by one server you mean 200+ multicore blades with a 400 GB RamSan database behind it...
  • Subject says it all.

    That is, if my post were as comprehensive as some of the figures in this story are.

    Most web browsers are massively memory-hungry. Circular JavaScript references leave Internet Explorer practically hemorrhaging unreachable allocations, and in Firefox, numbers are often reference-counted and allocated on the heap.

    My CPU doesn't break above 20% when browsing the web either. But I'd be getting the same performance with four times as much RAM and a CPU one-fifth as fast.
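
    The cycle problem is easy to demonstrate in Python which, like the reference counting described above, cannot reclaim circular structures by refcounting alone and needs a separate cycle collector (a minimal illustration, not how either browser is implemented):

        import gc

        class Node:
            def __init__(self):
                self.other = None

        a, b = Node(), Node()
        a.other, b.other = b, a   # circular reference: refcounts never reach zero
        del a, b                  # now unreachable, but not freed by refcounting...
        print(gc.collect() >= 2)  # ...until the cycle detector runs -> True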
