Sandia's 20-Million-Pixel, 130-Square-Foot Screen

schauba writes: "Cipherwar has an article describing Sandia National Laboratories' new 10-foot by 13-foot, 20-million-pixel screen. The screen was created to allow scientists to view extremely complicated systems without sacrificing detail. The images are created through a parallel imaging system using 64 computers to generate the output. This makes my 17" monitor suddenly seem so inadequate." You can also view the same text with pretty pictures on Sandia's site.
  • by Anonymous Coward
    You can't tell me they aren't playing some righteous games of BZFlag on that bad boy!
  • by Anonymous Coward
    Do you think this thing has anti-aliased fonts? Otherwise, on such a big display, fonts would render horribly.

    [penny sized pixels] ewww..
  • by Anonymous Coward
    SiliconGraphics has made a lower-resolution "portable" model for a while now. Not quite the same thing, but neat nonetheless.

    http://www.sgi.com/realitycenter/rc3300w.html [sgi.com]
  • Isn't a screenshot of a 20 million pixel screen kind'a useless without 20M of your own? [Like the "demonstration pictures" on television ads . . . ]


    :)


    hawk

  • > I like to print my code on legal sized paper (8.5x14in) in very small
    > type, tape the sheets end to end, and work on it with a pen


    Sounds convenient. It's a pity we can't convince a company to make paper that's already attached like that. Maybe they could even perforate it a bit so that it would fold nicely without wrinkling the text . . .


    :)


    hawk

  • Although this sounds good at first glance, it's actually quite low resolution. To get 20 million pixels you'd need about 5000x4000 pixels. On a 10 foot screen, that's only around 40dpi. Why did they need to make it so large? A 4 or 5 foot display would have shown the same detail in a more palatable display area. I don't see what having it so big buys you...
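    For anyone checking that math, here's a quick sketch of the arithmetic (the 4x4 array of 1280x1024 projectors is a figure other posters quote, not something stated in this comment):

        # Rough check of the "around 40dpi" claim, assuming a 4x4 array of
        # 1280x1024 projectors filling a 13ft x 10ft screen (the per-projector
        # resolution is an assumption, not from the article).
        tiles_x, tiles_y = 4, 4
        proj_w, proj_h = 1280, 1024
        width_px = tiles_x * proj_w              # 5120
        height_px = tiles_y * proj_h             # 4096
        total_px = width_px * height_px          # ~21 million pixels

        width_in, height_in = 13 * 12, 10 * 12   # screen size in inches
        print(f"{total_px / 1e6:.1f} Mpixels")
        print(f"{width_px / width_in:.0f} dpi horizontally")   # ~33 dpi
        print(f"{height_px / height_in:.0f} dpi vertically")   # ~34 dpi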
  • by dustpuppy ( 5260 ) on Tuesday July 17, 2001 @12:20AM (#80458)
    Imagine you were surfing the web using that monitor and you accidentally clicked on that damn goatse.cx link ... urgghhhhh ... not a pleasant site/sight!!

  • Until now, I was thinking of a front projection screen for my home theater, but this would blow it away (I hope)! Does anyone know the quality of this thing?
    --
    Join my fight against Subway's new cut!
    http://spine.cx/subway/ [spine.cx]
  • by rpk ( 9273 )
    Anybody got a screenshot?
  • Another example of this fallacious research, back when nicer monitors first started being made, was the claim that the eye couldn't see the difference above a 72Hz refresh rate (the idea being that the eye simply doesn't grab images that frequently anyway). This "magic number" was heralded by PC magazines everywhere as the point where the image on a monitor becomes "perfect" and any further increase is mere waste. Is there anyone who believes that now? The target on most monitors now, to produce an image that most people can tolerate, is >100Hz (and that is coupled with a phosphor whose persistence is long enough that the fade is extremely limited at 100Hz; in other words, with a quicker-fade phosphor the desired rate would be even higher). I guess humans are just evolving really quickly...

  • I concur that the major reason people push for hyper-framerates in Quake is the low end: I don't want a GeForce3 to get 200fps for its own sake, but rather so the game doesn't bog down during a big firefight. Having said that, simply running around an empty level with the frame rate indicator showing a steady FPS (it isn't jumping up and down; I once tested this with Detours [the MS Research tool] by logging every page flip via a detour in the OpenGL library, so I could see the low and median framerates) betrays a very perceivable difference even between the high framerates (60, 100, 120).

    Having reasonable framerates in demanding situations is a necessity for surviving in the game. Having higher framerates in normal situations makes the game more immersive and "smoother" feeling.

  • From the article: The eyeball is the limiting factor, not the screen

    Yeah, sure it is. Throughout the history of time someone with a hard-on for a technology has slobbered away about how it's the last upgrade they'll ever have to do because damnit, it's more than the human eye/ear/senses can detect anyways. How many times have we had the moronic "The human eye can only detect below 60FPS!" arguments on Slashdot (yet I can refute that instantly as there is no doubt that Quake 3 feels smoother at 100+FPS than it does 60FPS, and the sense of natural motion blur is dramatically improved). How many times have people ranted that humans can only see X colors or hear X clarity of sound (both continually being defied).

    The next time someone wants to sell their bosses on the idea that this is the last upgrade they'll ever need because it don't get any better practically, they need to stop and pick a different excuse. That particular one has just been proven wrong so many times it is now completely laughable.

  • Wow. I take it you're in research and it just burns you when research isn't taken as a religion that the unwashed masses simply absorb and believe. There are those of us who simply don't believe (believe in the religious sense of "just because that's what you say") when Scientist XYZ, with loads of documentation to back up their claims, proclaims the truth about something (usually proving exactly what they set out to prove), and months or years later Scientist B, with loads of documentation to back up their claim, absolutely overrides the original suppositions and conclusions. This has happened in science countless times, but each time it is presented as absolutely, positively true: Look at our methodologies!

    A perfect example of this is the number of "images" that the human eye can process per second, with various researchers attempting to come to a static number that quantifies and definitively states what the maximum perceivable FPS is. Of course they almost invariably fail to take into account persistence of vision, which is the concept that even if an entire scene isn't perceived, the effects of the "sub-frames" merge together to form a common frame (natural motion blurring). That is what I mentioned about Quake (and it's funny how quickly you'll discount an observation: Don't you simply believe? Should I make some tables and package it in a whitepaper? Does that make it more credible?): Any Quake 3 player with a good system would have ZERO difficulty discerning between 60fps, 100fps, and possibly 200fps (or more), yet still there are those who will conclusively state that the human eye cannot see more than 46 FPS, etc. It is quite laughable though. In the case of pixel accuracy, simply measuring the number of rods and cones in the eye would be insufficient and a half-measure: The "picture" that we see is the end result of a very intelligent system which may, for instance, do sub-pixel integration via "jitter" (i.e. you may have 20,000,000 "pixels" in your eye, but your eye is never absolutely still, which means that the light hitting your retina is constantly from a slightly different source: When looking at a leaf you are getting information from trillions of rays of light).

    P.S. The post is interesting because most people in computers have seen this shit a million times before: Someone stating unequivocally that the human ear/eye/nose/etc. can only see/hear/feel/taste/smell XYZ measures. CD is apparently beyond the absolute limit of human hearing (I won't get into the fools who believe that MP3 is beyond the limits of human hearing...), yet strangely they're coming out with DVD audio at 24 bits per sample/96kHz (versus 16 bits per sample/44.1kHz).

  • ...they're not going to be watching the Super Bowl on it then (or are they)?
  • The next time someone wants to sell their bosses on the idea that this is the last upgrade they'll ever need because it don't get any better practically, they need to stop and pick a different excuse. That particular one has just been proven wrong so many times it is now completely laughable.
    Why? My boss doesn't read slashdot.

    \//
  • It's expensive because there are low yields in manufacture.
  • It could be done, but it would be expensive as hell if you wanted it to work perfectly. The viewing screens in movie cameras (you know, the kind that they actually make movies with ;-) are actually bundles of tightly-packed fiber-optic cables. They are quite expensive, and most of them have at least one broken fiber in them, resulting in a tiny dark spot. I imagine that doing something like this on a large scale would cost tens of thousands of dollars for just one monitor. Probably cheaper to just use LCDs.
  • No comment on most of this post, but I think that by now we all realize that the reason Quake feels smoother at 110 fps is the drop in the framerate during times of intense on-screen action, such as rendering 20 people shooting rail guns at each other, etc.

    By starting at 100, the game has farther to go before the rate drops below 60, which is the point it will feel less smooth at. In other words, you are kind of supporting the statement that 60fps is where it feels smooth, as when it starts there and drops below 60, you don't like the way it plays.

    I do have to admit that being that silly about one part of your post does kinda throw a shadow on the rest.


  • It has its inspiration from Spaceballs.
    It's their equivalent of the Force and the lightsaber, all in one; a sample line from the movie is, "I see your Schwartz is bigger than mine!"
  • they're browsing at 1 like good moderators should(n't).
  • Some researchers at the University of Minnesota put together something called the "PowerWall" for Supercomputing '94. It was an array of 4 1600x1200 projectors combined on a single 8'x6' screen for 3200x2400 resolution. There were two SGI Onyx machines with RealityEngine graphics, each driving two of the screens. In addition we had a big stack of RAID 3 disk arrays (Ciprico), so we had enough bandwidth through the system to stream 24 bit/pixel images at full resolution at 20-30 frames/second. The cool thing was that if you stand next to the screen it fills your whole peripheral vision. I tried a couple of times but could never quite get glquake to work on it.

    For more info see:

    PowerWall link [umn.edu]
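    If you're wondering why the RAID stack was needed, here's a rough bandwidth estimate for that setup (frame size and rates are from the post above; the arithmetic is only illustrative):

        # Uncompressed bandwidth to stream 3200x2400, 24-bit frames.
        width, height, bytes_per_pixel = 3200, 2400, 3
        frame_bytes = width * height * bytes_per_pixel   # ~22 MB per frame
        for fps in (20, 30):
            print(f"{fps} fps -> {frame_bytes * fps / 2**20:.0f} MB/s sustained")
        # Roughly 440-660 MB/s, far beyond a single disk of that era,
        # hence the stack of RAID arrays.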

  • In the pic that I am looking at (http://www.sandia.gov/media/NewsRel/NR2001/images/jpg/VisLab.jpg)
    the lady is holding a light bulb. You can see the power cable. Also note the shadows from the guy's legs. I don't think that the projector on the ceiling is even switched on (too short a focal length).

    It would be great for playing Road Runner-type "I'll just go through this open doorway *BUMP* oops, it's just painted on a wall" gags.

  • the high resolution of the monitor coupled with the extremely low resolution of the goatse.cx image would probably make the image smaller than the human eye can see... so no problems there.

  • Some folks at Argonne [anl.gov] wrote a nice introductory paper on all this. If you're still willing to read PDF files:

    Introduction to Building Projection-based Tiled Display Systems [anl.gov]

    Nice pictures inside, but of course it can't compare to actually seeing a massive OpenGL fractal bouncing around a huge crisp display...

  • I don't like to program on the computer screen. I like to print my code on legal sized paper (8.5x14in) in very small type, tape the sheets end to end, and work on it with a pen. Why?

    The single long continuous printout makes it easier for me to visualize the flow of the code, how the separate parts relate to one another. I get a snapshot of the flow by looking at a single long printout that I just can't get from a scrolling window with a max of about 80 lines. The pen allows me to quickly make annotations and draw relationship lines with a speed and simplicity that is impossible on the screen. In short, the printout contains more data per square inch than my brain can pick out immediately, but being able to see the whole picture in one shot allows me to see relationships and flows that I would otherwise miss. I'm then able to immediately zoom in on important sections while ignoring the rest.

    I would assume that these scientists are aiming for the same effect: being able to have any single piece of the information immediately available while looking at the whole picture. The point is to quickly draw out what might be important in the whole picture without being distracted by the mechanics of zooming into a detailed section. Instead of saying, "Computer, zoom into grid section E5", the scientist just has to look more closely.

  • Ooh, what an idea. And with any luck we could convince them to make the edges with little holes that a tractor feed can pick up and guarantee that the paper goes through without getting twisted. Man, that would be great.

    Unfortunately, all the manufacturers think that the only thing anyone wants is 8.5x11 sheets (it's actually a hack to use 8.5x14 on my HP930C). If I want to use tractor-fed, fanfold paper, I have to revert to a house-shaking dot matrix. Ugh!

  • by dierdorf ( 37660 ) on Tuesday July 17, 2001 @12:40AM (#80478) Homepage
    Although this sounds good at first glance, it's actually quite low resolution. To get 20 million pixels you'd need about 5000x4000 pixels. On a 10 foot screen, that's only around 40dpi. Why did they need to make it so large? A 4 or 5 foot display would have shown the same detail in a more palatable display area. I don't see what having it so big buys you...

    RTFA(rticle). This is an intermediate step to the REAL display, which will be 69MPixels. Note also that this gadget isn't really that expensive - 64 PCs and 16 projectors add up to maybe a quarter million bucks. A couple of years from now, maybe 300 MPixels for no more money. Digital IMAX theaters, anyone?

    The article mentions that one of these toys is under construction down the road at UT. I can see I'm going to have to be extra nice to my lapsed contacts in the CS department!

  • After assuming Sandia and Lawrence Livermore were actually government facilities, I was surprised to see this at the bottom of that page:
    Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major research and development responsibilities in national security, energy and environmental technologies, and economic competitiveness.

    Owned by a private company???
  • It also seems like they are using this with whatever their latest ASCII cluster is (Red? White?) to display data off it (I just noticed that one of the project names had ASCII in it - so I am making an assumption here - probably a wrong one).

    What I still tend to wonder is - why do those damn projectors still cost so much? I mean, sure - prices have come way down, with higher resolution - but why don't they offer "low-res" consumer models - ie, a 640x480 projector for $500-800? The panels should be dirt cheap to make - and I would bet there is a market for higher-res TV projection systems (people still buy normal - ie, non-HDTV - rear-projection systems), right?

    It just irritates me that one can't go out and get a new projection system cheaply (actually, I have yet to even see the high-end projectors being sold at a place like Best Buy or Fry's).

    I recently set up a cheesy Fujix P401 video projector, coupled to an Avermedia VGA->TV converter. Good enough to watch VCDs, anyways - and it was inexpensive ($250)...

    Worldcom [worldcom.com] - Generation Duh!
  • Hrm...you may be right.

    But how do you account for the weird squared-off edges of his shadow? I was assuming the crops were because some other projector was putting a bright light over top, making it visually disappear.
  • Why did they need to make it so large?

    A lot of theoretical science is all about collaboration and sparking ideas through talking with colleagues. (My father-in-law is a theoretical physicist, and visiting the Aspen Center for Physics I was amazed at the number of chalkboards EVERYWHERE... in the halls, outside of offices, etc. They're there so that people can talk and collaborate wherever the idea hits them.)

    If you look at the pictures, you'll see that many people can stand and discuss the high-resolution image together. Sure, it may be only 40dpi, but that's 20 million pixels that 5 people can stand in front of, talk about, walk up and point at, etc.

  • by Phrogz ( 43803 ) <!@phrogz.net> on Tuesday July 17, 2001 @06:03AM (#80483) Homepage

    When I saw the high-res image (the first one) and saw it was an array of projectors, I said "Eeeeuw! How can they get them all aligned at the edges well?"

    When I looked at the 2nd hi-res image, and saw the color mismatch down the vertical center, I nodded to myself and said "Thought so. Bleah!".

    But then I looked at that first hi-res image again, and noticed the bizarre shadow. Why is it all squared off? And then I realized--those projectors aren't just aligned at the edges, they're actually overlapping and registering correctly at 40dpi! Have you ever tried to get your company's LCD projector to project a reasonably orthogonal image? I can't get it even close. Now imagine getting two projectors to OVERLAP perfectly at the edges.

    Color me impressed.

  • ...but it's only off by about one order of magnitude instead of two. Their 10x13' dimensions yield a diagonal measurement of about 197 inches. A cluster (Beowulf, perhaps?) of maybe a few dozen of these would get you the drive-in-movie-in-your-own-living-room experience...
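    For anyone checking that figure, the diagonal follows directly from the 10x13-foot dimensions (a quick sketch, nothing here beyond the numbers in the post above):

        # Diagonal of a 10ft x 13ft screen, in inches.
        import math
        width_in, height_in = 13 * 12, 10 * 12
        print(f"{math.hypot(width_in, height_in):.1f} inch diagonal")   # ~196.8 inches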
  • We have the same thing here at PPPL (Princeton Plasma Physics Lab). We call it the "high resolution wall." Big whoop. It's a pile of projectors, each creating a chunk of the screen. The software isn't the difficult part... the practical difficulty is getting all the images from the projectors to line up. We have several interns working on this at the moment. This is hardly news, as systems like this have been around for quite some time.

  • Heh, I work right up the hall from you, in the computational group. :)

  • In the brief meeting I had to discuss the setup on one occasion, it seemed as if the eventual plan was an automagic alignment setup. I believe for the time being they are still using manual adjustments, although I am not sure. I haven't really talked to anyone involved with the project directly in several months. I work in the computational group, not visualization. There was another /. poster elsewhere in this discussion who also works at Princeton, and he seems to be directly involved with the project. Perhaps you should ask him.

  • Does it run Linux ;) Actually, methinks that this display just might make AA obsolete! Maybe X is more cutting edge than we all thought...
  • You've confused a number of variables in these cases (it's probably because you seem to merge "research" and "marketing" -- the two use entirely different vocabularies.)

    You seem a little skeptical of scientists in general. That's a good thing until you toss out all claims only because something is claimed at all. ;)

    There is an inconsistency at the heart of your post that I'd like to weed out.

    (and it's funny how quickly you'll discount an observation: Don't you simply believe? Should I make some tables and package it in a whitepaper? Does that make it more credible?)

    I never said anything about being published ... only about providing sufficient evidence. If you'll go back and read my post you'll notice that I said that you've got to actually debunk a claim if you're going to dismiss it. None of this has anything to do with publication. You did not make a detailed observation. This is the matter in dispute. Even if you'd at least said specifically what changes are visible between an animation at 30fps and 60fps you'd have been contributing something.

    Any Quake 3 player with a good system would have ZERO difficulty discerning between 60fps, 100fps, and possibly 200fps

    You'll also note that this isn't necessarily proof that the eye is picking up each frame. If you've ever programmed a time-relative game (that is, a game that doesn't use the lock-step method for applying world physics), you'll see that the finer-grained the latency times are between frames, the closer the objects get to optimum movement. It's possible to base a character's movement off of a step function ... and in that case you'll see jumps through time/space regardless of the frame rate.

    The point is that you've got to be precise about what you're "seeing."

    In the case of pixel accuracy simply measuring the number of rods and cones in the eye would be insufficient and a half-measure: The "picture" that we see is the end result of a very intelligent system which may, for instance, do sub-pixel integration via "jitter"

    You mean interpolation ... and this is the brain's way of accounting for "jumps" between "frames" ... that much is true. However, this is just a smarter way of catching up to live data. It's not adding any extra processed frames. It's like the difference between the state of the video buffer and the state of the monitor's surface. The "tearing" that you see in animated graphics is the digital version of our brain's motion blur. Of course, even if you do assume that interpolated frames count ... there's a limit in the brain's processing capabilities. Your original post decried the whole notion of finding limits and that was the original issue in dispute.

    I hope you're not going to suggest that we can process photons faster than photons can move. ;)

    The notion that there are no limits to our capacity for visual interpretation is silly.

    P.S. The post is interesting because most people in computers have seen this shit a million times before

    Yes, many people on Slashdot talk out of their ass but that doesn't mean that it's interesting.

    CD is apparently beyond the absolute limit of human hearing, yet strangely they're coming out with DVD audio at 24 bits per sample/96kHz

    I suppose that you're going to say that we can hear sound at any frequency for any amplitude. ;)

    This is entirely off of the original topic anyway. It's fine to take issue with a statement like, "humans can only process 30 frames per second," but unless you prove otherwise (note: you don't even have to prove the actual limit) then your statement amounts to, "I don't think so" (which is also fine, though much less interesting than proof.) What you did was even worse than that though. You essentially said, "ignore all measurements of human capabilities -- they're inherently flawed because people have made mistakes in the past." That's the issue. That's bad reasoning.

    Thank you.
    ____________________
  • How exactly is this moderated as interesting? The only interesting thing about this post is how a person can be so deliberately obtuse.

    There are actual quantitative experiments that have demonstrated the limits of photon reception in the human eye. To just dismiss all of this work with, "there is no doubt that Quake 3 FEELS smoother ..." is ridiculous! Go ahead then, prove that there's no physical limit.

    The next thing you know, he'll be saying that there's no limit to how much energy we consume in a given instant (after all, if the capabilities of the eye aren't fine-grained, then neither is the structure of the body -- surely!)

    The next time someone wants to sell their bosses on the idea that this is the last upgrade ... has just been proven wrong so many times it is now completely laughable.

    The next time that someone purports to defy many many years of research, make sure that you have more than "his word" to go on ... especially if he says that it's been "proven wrong so many times it is now completely laughable."

    If you think that there are no limits to our capabilities for processing colors, sounds, and variations through time, you'll have to ignore an awful lot of biology.
    ____________________
  • by selectspec ( 74651 ) on Tuesday July 17, 2001 @03:18AM (#80491)
    While the Sandia monitor has 20 million pixels, IBM has a monitor [ibm.com] that has 9 million pixels but is only 17".
  • They're not overlapping; the images meet at the edge... If they did, the shadow of the edge of his hand wouldn't be in line with the shadow of his leg. Also, the edges where they meet would be noticeably brighter than the rest of the image, not to mention they'd lose all the overlapped pixels. The alignment isn't perfect, but the guy *was* checking the alignment anyway...

    So no, the projectors are just aligned at the edges, they're not overlapping.

  • Working for the summer at Sandia, I receive the Sandia Daily News. I thought it was ironic that I heard about this on /. first, but what's better is that today's SDN has a blurb about the article showing up on /.

    There's a loop for ya,
    JD
  • I'm pretty sure it was Arthur C. Clarke.
  • ...of the technology. After all, it's just a bunch of standard projection units hooked together so the image spans neatly. It's actually a no brainer to do this stuff since the MacOS (for a decade now) and Windows (98? and up) support monitor spanning.

    What they should focus on is the video card technology used to drive this display. Not too many video cards that I know of that can go up to 5120 x 4096.
  • I suppose there's not too much we can spot with the naked eye that we can't find with a couple thousand computer-years these days, but it'd be really nice to visualise complex protein interactions with these. Could be the visualisation tool that allows DNA-computer design to take on new dimensions.

    Think about it: Massive numbers of ddNTP radioactive markers on DNA molecules (bear with me - especially if I've got the acronym wrong) all flashing when the nucleotides bind to other molecules. By inferring where those markers are and projecting the rest of the molecule accordingly, you could get a slowed-down real-time picture of multiple molecules interacting at massive numbers of points at once! Not just poxy small fragments like RAPDs, but mystery proteins released beside a suspended target cell with marked cell receptors. Yes, I know these are amino acid chains as opposed to DNA molecules.

    Ok maybe this doesn't take a 69 MegaPixel monitor, but it'd be fun, wouldn't it? Maybe better than crystallography, which normally breaks the protein..

  • Hm. That was more confused than I thought when I wrote it. That'll teach me not to skip the Preview button. OK, the point was that all reactions make sense if you talk amino to amino and DNA to DNA, not mixing stuff up strangely. Though you could use this to monitor introns and exons in gene expression as well..
  • > No more squinting at tiny Media Player windows!

    I think my 56k modem might struggle to get 24fps of 20 million 24 bit pixels, and anyway, I'm not sure my 1Mb S3 ViRGE is compatible with it.
  • Some people must really like watching their DivXs :) hehe
  • You'd really need to use a separate fibre-optic cable for each individual pixel - the image would blur together somewhat if you just used a huge fat line. You'd also need to get the fibres really close to the surface to stop them picking up light from adjacent pixels. This would be more inconvenient to produce than what you suggest (though, actually, I'm not sure how easy it would be to produce a line with such a large diameter).

    Yes, it should be equally simple to do this with a curved screen, if you get a snug fit. You could even have all the screens sitting in separate places and just string the cables close to each other - no need to be limited by having them a foot apart. Assuming you could get it working, of course...

    There's probably also some good reason why individual fiber-optic cables have a round cross-section too (I'd guess something to do with refraction), but I didn't really pay as much attention in my optics lectures as I should have done ; )

    Finally, setting all this up in an array as you suggest might be a bit tricky. It probably could be done, but whether it'd be worth the effort is questionable. It may well be more appealing (and cheaper) to just buy a bigger screen (or a high resolution projector if you're that desperate) and wait until the technology makes this simpler to do.

  • Well, as I said, I wasn't the most diligent optics student in my class, but...

    First up - I meant really, really close to the surface. The monitor I'm using at the moment has a few millimetres of glass between the fluorescent bit and the surface. This is going to be an issue with most screens, but more expensive ones seem to have a thinner layer here.

    My comment about the display blurring is based solely on my experiences playing around with fibre-optics - it could just be because I was using low quality fibre or something. However, I do recall that when separate signals are simultaneously sent down fibre-optic cables (as you mention) they use separate frequencies for each channel (frequency division multiplexing) and de-multiplex them at the other end, so perhaps they do get mixed up. Could be that it's a combination of the two, or that it wouldn't be an issue over a straight 1 foot connection. I'm sure there's someone here on Slashdot who knows more about the subject than me.

    Oh, and fair point about the projector.

    Hope I made myself a little clearer. If anyone wants to shoot down my answers then I'm all ears (or should that be eyes?).

  • The point of this screen is to display an unprecedented amount of data, too much to be taken in at once even by the subconscious, thus flooding our sensory input and leaving our brains free to do what they do best - recognise patterns, holes and anomalies in the massive amount of data represented in new visual ways. Much like a graph lets you visualise a trend easily.
    Is it? I didn't notice that in the article - I thought they just wanted a really big clear screen so I was wondering how well they actually managed to take in the display. I can see that the screen could be used like you suggest, but I didn't think that was why they built it.
  • Seem like an intention to use the screen as a visualisation system to me.
    Heh; I never suggested they wouldn't be using it as a visualisation system (what else would they use a screen for? :)).

    I was questioning whether the intention had been to simply get a bigger clearer display (motivated by the same reasons that make people swap their 800x600 15 inch screen for a 1024x768 17 inch unit, though to a much greater extent), as opposed to "flooding our sensory input leaving our brains free to do what they do best - recognise patterns, holes and anomalies in the massive amount of data" (as a previous poster put it) - intentionally overwhelming the user's brain to force it to pick out patterns and work differently to how it would on a smaller screen.

  • Please note - The offensive parent post wasn't written by me. Why on Earth would I post as an AC but still leave my sig in there? I never post anonymously anyway since my karma's high enough to take negative moderation. (The real) Dr_Cheeks
  • Please ignore the offensive post - it wasn't me. An AC copied my sig - I always post as myself. Thanks for the info.
  • by Dr_Cheeks ( 110261 ) on Tuesday July 17, 2001 @01:29AM (#80506) Homepage Journal
    OK, this is very cool and all, but surely when you start getting such a huge detailed image the limitations imposed upon our vision by evolution will become more obvious; we can only see this sort of clear image in the centre of our field of vision - our peripheral vision is considerably more fuzzy.

    IANAO(ptician), but I recall that it's down to the distribution (thanks to evolution) of rods and cones (light receptors) on the retina - near the centre there's one sort (can't remember which) that's good for recognising colours and shapes (useful when examining objects), and round the edge there's the other sort that's more sensitive to light/dark and movement (useful for spotting something with big teeth sneaking up on you).

    On a normal computer screen we only have to focus on a small part of the image at one time (try reading the text at the top/bottom of this page while staring at the centre). Even on movie screens (which are a comparable size to this screen) we typically only need to look directly at one small part of it at a time and let our peripheral vision pick up the rest. But if this screen is going to be running hi-res images across its whole surface (i.e. you want to watch the whole thing instead of focusing on one small part) then anyone using it is going to have difficulty seeing the whole image at once, unless they sit really far away or run the thing over and over so you get a chance to see everything.

    I could just be talking out of my ass here, so I'd be interested to know if anyone here has used something like this and noticed any problems.

    Oh, and before any wise-asses reply - I know my eyes move - I'm talking about trying to see the whole thing at once instead of focusing on different parts in rapid succession and getting a killer headache.

  • Meanwhile, the throughput of consciousness is about 40 bits/s...

    I'd be interested to know where you got this number... Really. Just curious.
  • At Princeton (not Main Campus, but the Plasma Physics Lab) we've got one, too... 7 million pixels, but roughly about half the size (7.5' x 10' or so). We just put in a new screen, and we use 9 Proxima 9250+ projectors attached to a Beowulf (yeah!) cluster of 11 dual 733's. Unlike many of the other walls, we do not run custom software... instead, we run WireGL (from Stanford, too lazy to post a link), soon to be upgraded to Chromium (again, look on SourceForge), which just alpha-ed 5 days ago. The uses of this thing are amazing. Visualization is the official use, and it does that amazingly well (our resolution is 3072 x 2304). Things like UT or Quake also kick @$$... in fact, one summer student in our lab is designing a walkthrough of the reactor space using UT. But the awesomest part-- tuxracer! Talk about immersive... I've often seen people from tours wince if I accidentally fly into a wall at 150 km/hr. It rocks ;)
  • The article says there are 16 projectors. You don't need a video card that can go up to 5120*4096. One that will drive the projector at 1600x1200 is sufficient.

    For a single frame of video at 5120*4096 at 24-bit color, you'd need 60 MB of memory. That doesn't include anything like Z-buffering, or double or triple buffering, so that's one reason why you don't see the GeForce3 running at that resolution.
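    That 60 MB figure checks out; here's the quick arithmetic, with double buffering and a 32-bit Z-buffer added as illustrative assumptions (those extras aren't figures from the article):

        # Memory for one 5120x4096 frame at 24-bit color, plus what double
        # buffering and a 32-bit Z-buffer would add (assumed for illustration).
        width, height = 5120, 4096
        color_bytes = width * height * 3             # one 24-bit frame
        z_bytes = width * height * 4                 # 32-bit Z-buffer
        double_buffered = 2 * color_bytes + z_bytes

        mib = lambda n: n / 2**20
        print(f"single frame: {mib(color_bytes):.0f} MB")             # ~60 MB
        print(f"double-buffered + Z: {mib(double_buffered):.0f} MB")  # ~200 MB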
  • it's actually quite low resolution

    Keep in mind that this is meant to be viewed by a whole room-full of scientists, which can't all sit/stand next to the thing. As stated in the article, at 10 feet away the limiting factor is human vision.


  • We have several interns working on this at the moment.

    Developing an automatic alignment system, or manually adjusting the alignment?

    In the early 1950s, director Michael Todd did some tests in Cinerama, which used three aligned 35mm cameras and three aligned 35mm projectors. Cinerama was actually a holdover from a WWII multi-projector training system for aircraft gunners. After the headaches of that project, he went to the head of American Optical and said "Doctor, I want a system where everything comes out of one hole". The result was Todd-AO, the first good 70mm system. Hollywood then dumped multiprojector systems and never went back.

    Still, Cinerama, with a 152 degree field of vision, was truly impressive.

  • I've been working at Sandia National Labs over the summer and had the privilege of visiting this facility today. I found it to be very cool, but there is still much to be done. The resolution and graphics didn't actually seem all that great. I dunno if it was the video they were showing or what, but it was getting pretty choppy. The synchronization works pretty well from side to side due to overlapping, but the top-to-bottom still has issues. There were also a couple of times where a rogue projector would decide to do its own thing and it would throw the whole thing out of sync. They were saying that a major goal of theirs is to get up to about 65 million pixels, as that would really match what the eye is capable of noticing. One other thing is that because the light bulbs in all of the projectors go out at different times, the different intensities of the lights cause discolorations throughout the different projections.

    Overall, very cool and I'm glad I was able to see it (and the teraflops too :), but it looks like they still have quite a bit of work to do before they'll really be able to do everything they want to do with it.
  • The main viewer on the Enterprise bridge.
  • I kind of like the idea of using twin or splitvt to create enough vconsoles to see the logs of the 130 systems you manage simultaneously. A little color highlighting from coloriz.pl and it might actually be usable enough to cause permanent eye strain and file a workman's comp claim.
  • None of the Slashdotters really seem to understand why this is cool. The purpose is to render large sets of data as gazillions of pixels all viewable at once. If I had one, I would definitely be rendering fractals on it. (For the totally clueless, fractals are infinitely complex images created from simple equations.) It would be nothing short of mindblowing to be able to see so many levels of detail at once. Think Mandelbrot set.
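    Since the Mandelbrot set comes up, here's a minimal sketch of rendering it to a grayscale image; the resolution, region and iteration count are made-up parameters, and a wall-sized render would just scale them up (and want something faster than plain Python):

        # Minimal escape-time Mandelbrot render, written out as a PGM file.
        def mandelbrot(width=800, height=600, max_iter=100,
                       re_min=-2.5, re_max=1.0, im_min=-1.25, im_max=1.25):
            rows = []
            for y in range(height):
                im = im_min + (im_max - im_min) * y / (height - 1)
                row = []
                for x in range(width):
                    re = re_min + (re_max - re_min) * x / (width - 1)
                    c, z, n = complex(re, im), 0j, 0
                    while abs(z) <= 2.0 and n < max_iter:
                        z = z * z + c
                        n += 1
                    row.append(255 * n // max_iter)   # escape time -> gray level
                rows.append(row)
            return rows

        def write_pgm(path, rows):
            with open(path, "w") as f:
                f.write(f"P2\n{len(rows[0])} {len(rows)}\n255\n")
                for row in rows:
                    f.write(" ".join(map(str, row)) + "\n")

        write_pgm("mandelbrot.pgm", mandelbrot())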
  • That might be.... but can anyone tell me this? "What is the Matrix?"
  • Look at the picture half-way down the article.

    I'm convinced their 'l33t screen is displaying a quality WinAMP plug-in, and the guy on the right is saying
    "If you look closely you can see the little people, dude!"

    ... and the one on the left is just awe-struck at the realisation his head is floating in space.

  • I think what you wanted to write is "mein-schwanz-ist-laenger", which means "my d*ck is longer". What you did write is "my black is longer"...
    Anyway, why are you US Americans using German words for emphasis so often? "Uber", which should be "Über", and so on?
  • by Kjella ( 173770 ) on Tuesday July 17, 2001 @12:22AM (#80519) Homepage
    - Wow! Imagine how great it would be to watch pr0n on this thing.

    - I need to replace my 19/21/23/50" CRT/LCD/Plasma whatever.

    - Imagine a Beowulf cluster of these

    - Someone give the researchers the goatse.cx link.

    Kjella
  • I have news for you, buddy. Your 17" monitor became inadequate about a year ago.
  • "Any sufficiently advanced technology is indistinguishable from magic." -- Isaac Asimov

    Wasn't it Arthur C. Clarke who said that?

  • Actually, if you remember this story [slashdot.org], you might be interested in viewing the 110-million-particle simulations [npaci.edu] in detail on such a big display...
    But then, they'll have to produce very-high-res movies first, which might be much more expensive in terms of supercomputing power.
    --
  • The article mentions that one of these toys is under construction down the road at UT. I can see I'm going to have to be extra nice to my lapsed contacts in the CS department!

    CS people have one? That's soooooo not right! It needs to go in the EE building... I mean, there's already that *GIANT* capacitor in there, why not a giant screen? :)

  • The upgrade they are talking about will be 16 1600x1200 projectors configured in a 4 x 4 matrix.

    How many GeForce 3 cards can you put in a high-end system? One projector for each card, carefully laid out to minimize lines along the edges, and you've got a big display. I use Windoze 2000 at work with a Matrox G100 dual-headed card. There is native support for multiple cards in Win98 and 2000 (I haven't tried Me). Guiltily, I admit that I haven't used Linux for a while (work, and home life with my wife and 1.5-year-old). What support does Linux have for multiple display adapters? Also, a high-end card would best interface to a projector via a digital line, to minimize crosstalk on all those video cables.



  • more Slashdotters leaving their houses and going outside...

    I'm going to bed... this reality vs nonreality is just too tiring today

  • I wonder how these things are kept aligned. I would assume that once they are aligned (so as not to create any lines/breaks in the image) they are kept bolted to the floor. But it didn't look like that in the picture.

    How are they making sure that those projectors stay in the right spot?

    weylin

  • by K45 ( 207177 )

    I just downloaded the 300dpi JPEG [sandia.gov]. It's pretty sweet! You really can see all those pixels in that image if you look at your monitor close enough! :)

    K45

  • Your underlying facts are basically right but your conclusion is, fortunately for all of us who want one of these, somewhat mistaken. As you described, there are two kinds of photoreceptors in the eye: rods and cones. The cones are the ones that come in three varieties (the peaks of their sensitivity curves are not organized in a simple "red-green-blue" fashion, incidentally) and enable us to see color. (Simple memory aid: c = color = cones.) Cones are concentrated in the central 2 or 3 degrees (IIRC) of the retina, the area known as the "fovea" that is responsible for all our high-resolution vision. Outside of the fovea are rods, which function in low-light circumstances but come in only a single variety and thus produce monochromatic vision, as well as a sparser distribution of cones.

    You mention that you worry about the possibility of "focusing on different parts in rapid succession and getting a killer headache." Here's the thing: Your eyes "focus on different parts in rapid succession" all the time when viewing real-world images. That doesn't normally cause killer headaches. (The exception is usually if you're, say, farsighted and insist on looking at a close-up image all day long -- the muscles that focus your eyes spend all day working hard, which like any protracted muscle work gets tiring.)

    Admittedly, a display like this is probably best when you need to see lots of detail in static images rather than in movies. I think the idea is to be able to visualize lots of spatial detail in extremely complex systems -- to be able to look closely at one part of the image while still maintaining a sense of what's in the periphery. That's hard with current displays, where zooming in means you have to discard stuff outside your immediate field of focus. You're right that if you wanted to watch a moving image, a lot of this resolution would probably be wasted -- in fact, I'm led to believe that some professional flight simulators and similar devices use this fact to their advantage by performing eye-tracking and showing full detail only in the area that the user is actually focused on, while showing lower-resolution imagery in the periphery to save CPU cycles. (Of course, that only works if you have a single or very small number of viewers, all of whose eyes are being tracked.)

    One other funny perceptual thing: it's unlikely that "the limitations imposed upon our vision by evolution will become more obvious" when using this or any other display. We cope with those limitations in a very high-resolution environment (the real world) every single day and rarely notice them unless we really take the time to think about them and/or do experiments. We all have a fairly sizable blind spot in each of our eyes, for example (caused because there are no receptors whatsoever where the optic nerve exits the eyeball), and yet we never notice that gaping hole in our field of vision. The combination of unconscious eye movements and the fact that the brain maintains a basically continuous picture of the environment around us do a pretty good job of convincing us that we see everything in the world around us fairly well even when we don't. If anything I suspect what this display will show is how good a job evolution has done at making us ignorant of all the visual limitations we actually have!

  • Clusters of computers, or "render farms," used for many years in the movie industry, may take a half-hour or more to render a frame -- the equivalent of the Sandia screen -- but they cannot handle the data set sizes or the interactive rates of the Sandia cluster, which renders huge data sets in seconds.

    The Sandia images are created through massively parallel imaging, which could be thought of as the kid brother of massively parallel computing -- a method of orchestrating the individual outputs of many desktop computers to produce a combined output faster than a very complex, single supercomputer. In this case, the image is not created from a single graphics card but instead through the orchestrated outputs of 64 computers splitting data into 16 screens arranged as a 4 by 4 set.
    I was going to comment with something else, but this is just too good to resist: right now, as I'm about to hit submit, at the top is a banner flashing "YOUR VIDEO CARD SUCKS" and "UNLESS IT'S THIS ONE" href'd to an nVidia GeForce 3 specs page.

    Now who's stupid? -Homer J.
    ~
  • (and now is the perfect time to post)
    You know how sometimes when you take off the front of a computer, there are little fiber-optic-looking things that 'translate' the location of the blinkers, so the LEDs actually shine from a different location on the front of the case (when the face-plate is on) from where they are actually located? Well, this is kind of what I'm talking about:

    What if you took a really big version of a fiber-optic line, like one that's 19 inches in diameter, but square instead of round, and very short, and cut both ends at such an angle that you could push one end of it up against the face of a 19" monitor, and attach it (at the sides) so it's right up against it, and every pixel goes in one end and comes out the other, and snaked it so that the other end would also be a 19" square plate, but because of the snaking the new image would be translated by a foot (and, of course, be a foot or so away from the old monitor). The two faces (the two ends of the line) should be parallel. Okay, have you got that pictured? You could sit in front of this thing and the visible face of the fiber-optic cable would look just like a normal monitor. Okay: so now we've "translated" the image of the monitor by a foot.
    Now imagine that this monitor is really sitting in a box in a huge structure of boxes, and right above it is another 19" monitor, whose image is also "translated", but in such a way that the very bottom of the translated image from this second monitor meets the very top of the image from the first monitor, with only like 1 dead pixel. You probably don't even need a dead pixel, if you push them tightly enough together and their edges are precise enough...

    Anyway, to the left you have a box holding a monitor, and to the right you have a box holding a monitor, and each of their images is translated to match up with the one next to it.



    In other words, it's like one of those huge displays made up of lots of individual monitors [hantarex.co.uk], only without the dead space (which you translate around). The limit to the size of this whole thing is the limit to how far you can translate the image from the monitors farthest from the center of the Giant Image.


    So, can someone who knows about fiber-optic lines tell me whether this is something that's possible to build? And another thing: I'm looking at a flat CRT, so that's what I had in mind, but would it be possible to translate a curved image into a flat one?

    For anyone who's at a workstation, it's the DPI that's more important, not the size of your monitor, and this 40 DPI that people mention for the display linked from this article is...uncompelling. So, I'm looking at a 19" monitor at 1600 x 1200 right now. I can imagine another monitor on top of mine, two to the left and two to the right (I mean the whole thing would be 3 monitors by 2 monitors). Since right now mine costs about $300, this would mean:
    For a base price of $1800 I could get a 3 foot by 4 foot display running at 4800 by 2400 with a nice flat screen, which I could easily take apart and carry in 7 trips? (1 per monitor, one for the ultra-light fiber-optic thing that pastes them together?)

    Since fiber is flexible, I really don't see why this shouldn't be a possibility...4800 by 2400 is already close to doable by run-of-the-mill $350 agp video cards, so that's not an issue...and for big walls made with this system, you'd use a cluster of computers...so why not? why not just master the image together from a bunch of 15" monitors, and get a huge wall with DPI of whatever each monitor maxes out at?
    ~
  • Thanks for the extra time answering me... unfortunately, no Slashdotters seem to care :) Perhaps an Ask Slashdot is in order...?
    ~
  • by 3-State Bit ( 225583 ) on Tuesday July 17, 2001 @12:37AM (#80532)
    The facility's digitized images, created of 20 million pixels, approach the visual acuity of the eye itself. "The eyeball is the limiting factor, not the screen," says manager and program leader Philip Heermann. "From ten feet away, the image is as good as your eyes are able to see."
    From 10 feet away, so is 1600 x 1200. The issue here isn't DPI, it's TOTAL resolution (i.e. 1600 x 1200 is crystal clear on my 19" monitor, where I have to strain to see a single pixel, but crank it up to a 35" monitor and you can see the pixelization a lot more easily). This thing is massively huge, that's the point. Ten feet high, thirteen feet wide, like the person said. Hehe, cool.
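    The "as good as your eyes are able to see" line holds up if you take roughly one arcminute as typical visual acuity; that acuity figure is a textbook rule of thumb, not something from the article:

        # Smallest pixel pitch the eye can resolve at 10 feet, assuming
        # ~1 arcminute of visual acuity (rule of thumb, not from the article).
        import math
        viewing_distance_in = 10 * 12
        resolvable_in = viewing_distance_in * math.tan(math.radians(1 / 60))
        print(f"eye-limited pitch: {1 / resolvable_in:.0f} dpi")   # ~29 dpi
        print(f"wall pitch: {5120 / (13 * 12):.0f} dpi")           # ~33 dpi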
    ~
  • HOT TEENS with incredible resolution!

    A screen like that might even make VirtuaGirl watchable :-)

    cmclean

  • Well, It's about time for me to upgrade my home theater... This time around I wanted a monitor rather than a TV anyway...

    I guess all I'd need is to install LiVid [linuxvideo.org]. Which would be the most impressive to see, Ronin [imdb.com] or Titanic [imdb.com]... hmmm. Definitely Ronin.

    Imagine playing Quake on this bad boy!
    --
  • 17" high maybe...

    With 200 pixels per inch and more than 9 million pixels in total on its 22.2-inch screen, the T220 monitor displays photographs with a degree of realism not previously possible.

  • Since the obvious uses for the porn industry have been mentioned...just add the following:
    1. Cluster to generate multiple 5.1 audio streams.
    2. One machine to handle the artificial scent generator.
    Well, maybe the scent generator is a little TOO real... :)

    /* ---- */
    // Agent Green (Ian / IU7)
  • As for the human eye and visual systems, the eyes are continually scanning and moving around as you look. This occurs as the rods and cones fire off and send information through the nerves. The rods and cones have a certain amount of time before they can fire again, so the eye moves around to continue receiving data, which the brain assembles into an image.

  • I read your post. You stated yourself that you do not know much about eyes.

    I presumed that you were talking about the person voluntarily moving their eyes over the image to view the picture.
    I thought I was clarifying, in denoting the INVOLUNTARY movement of the eye in the process of the brain gaining the data at the behest of the human moving the eye. Try looking at something without moving your eyes.

    I apologise if I did not understand the distinction you made of the movement of the eyes. I was attempting to clarify.

    Another point of interest is the difference between male and female eyes and the degree of peripheral and central acuity.

  • Crap image quality dude, I can see the lines between the projectors!

    So, uh, is that some kind of back projection or are you supposed to peep through the projector rack? Great potential for shadow puppets though.

  • Non-interlaced? I somehow don't think so. *smile*
  • Low res or not, I bet a bunch of people will jump at the chance to buy a 196.8" viewable screen for just a bit more than a GMC Hummer.
  • Using projectors on a backlit display is a nifty twist, but using a compute farm to drive multiple integrated displays is nothing new. Check out the work done by Hank Dietz (formerly with Purdue, now with U. Kentucky, author of the Linux Parallel Processing HOWTO). His stuff is based on the PAPERS parallel architecture. The folks with the REALM project at U Maryland ported Hank's stuff to MPI as well. All open source.
  • A friend of mine owns a GeForce3 dual head also. He told me that he could enable the dual-head setup without much trouble in Linux.

    As far as I know, you need XFree86 4.x, which supports the Xinerama extension.

  • That's exactly what I was thinking.. A 17" monitor has sucked for quite some time.. Hell, my 21" is starting to seem too small.
  • www.howstuffworks.com/internet-odor1.htm [howstuffworks.com].

    DigiScents has been working on this very thing for years now; I've talked to people who've tested it, and they claim it's damn spooky just how well it works.

  • by snake_dad ( 311844 ) on Tuesday July 17, 2001 @12:34AM (#80546) Homepage Journal
    A beo... err... which industry will succeed in making this technology profitable?

    HOT TEENS with incredible resolution! Hotter than you've ever seen before!
    No more squinting at tiny Media Player windows!

  • We have a number of developers sharing a long desk facing a wall. Imagine them all sharing a large screen at once, each with their own 'window'. Want to show something to your colleague? Just 'drag-n-drop' the window beside your buddy's for a quick little chat, then drag it back.

    For a backdrop, have a large virtual fish tank bubbling away.

    I'd almost pay to work in an environment like that...

  • It seems that bulbs for these types of projectors are very expensive (for a bulb).. if any of them blew, you lose 1/16th of your image.. if all of them blow, you need to pay 16*whatever-the-bulbs-cost. If the display is on constantly, wouldn't the costs to keep it running go through the roof?
  • Why did they need to make it so large? A 4 or 5 foot display would have shown the same detail in a more palatable display area.

    Have you thought that perhaps they couldn't make it that small? They're already using a lot of projectors in a relatively small area, and planning to add thirty-two more. As with much of electronic and computing technology, the challenge has been making it small enough to be convenient, not making it larger.

  • by bartlett's ( 465717 ) on Tuesday July 17, 2001 @12:26AM (#80559) Homepage
    Is there a practical, scientific use for this? Sure, I love drive-in movies, but why not just use a projector? What could possibly need a 20 million pixel display?

    Well, if you actually read the article it would have answered your question:

    The images are expected to allow scientists a better view of complicated systems. Sandia's immediate needs are to improve understanding of complex situations like crashes and fires, but the facility is also valuable for microsystems, nanotechnology and biological explorations. Says Heermann, "It does not make sense to view a detailed 20- or 100-million-cell simulation on a standard one-million-pixel display." Data presented as columns of numbers would be a numbing amount of information for the brain to comprehend.

  • by q-soe ( 466472 ) on Tuesday July 17, 2001 @12:35AM (#80560) Homepage
    Maybe the lower resolution (40dpi, which I would be surprised by) is not a problem. The fact is this would be useful when dealing with larger images; you're not standing or sitting right in front of this mother, after all. The pixel size would be larger, I suspect, thus making small detail (such as stars etc.) easier to see. Maybe they are looking for field of view rather than super resolution?
