
Euclideon Teases Photorealistic Voxel-Based Game Engine

MojoKid writes Few would argue that current console and PC graphics technologies have reached a level of "photo-realism." However, a company by the name of Euclideon claims to be preparing to deliver that holy grail, based on laser scanning and voxel-engine technologies. The company has put together a six-minute video clip of its new engine, and it's genuinely impressive. There's a supposed-to-be-impressive reveal around the two-minute mark where the announcer declares he's showing us computer-generated graphics rather than a digital photo, something you'll probably have figured out long before that point. Euclideon's proprietary design purportedly uses a laser scanner to create a point-cloud model of a real-world area. That area can then be translated into a voxel renderer and drawn by a standard GPU. Supposedly this can be done so efficiently and with such speed that there's no need for conventional load screens or enormous amounts of texture memory; instead, data is simply streamed off conventional hard drives. Previously, critiques have pointed to animation as one area where the company's technique might struggle. Given the ongoing lack of a demonstrated solution for animation, it's fair to assume this would-be game-changer still has some challenges to solve. That said, some of the renderings are impressive.
This discussion has been archived. No new comments can be posted.

  • by DoofusOfDeath ( 636671 ) on Thursday September 25, 2014 @11:45AM (#47993907)

    Not only are we talking about voxels, but we're actually Slashdotting an origin server.

  • by Anonymous Coward
    LOOK! Look at how bad these OLD and BORING graphics look when we zoom really, really close. Now look at OUR awesome graphics moderately zoomed out! Aren't you impressed???
    • by i kan reed ( 749298 ) on Thursday September 25, 2014 @11:55AM (#47994011) Homepage Journal

      The thing about voxel raytracing is that it usually requires less fancy programming and design work to get your graphics up to snuff.

      Rasterization, while extremely efficient, requires layers upon layers of programming cleverness and hours of skillful modeling and texture creation to pull off a "photorealistic" look (let's be honest, it's not that good either). If you could just throw out all the lightmapping, the real-time self-shadowing hacks, the ambient light simulation, and a bunch of other stuff that's cropped up over the years to make up for the fact that we're not raytracing, you might choose to.

      • Re:"Photorealistic" (Score:5, Informative)

        by Dutch Gun ( 899105 ) on Thursday September 25, 2014 @12:41PM (#47994559)

        This company with its impressive-looking but completely static scenes shows up every few years. Honestly, I didn't see anything in that video that couldn't be done with any modern engine targeting high-end video hardware. It's a bit of a cheat if you only have to show the terrain. I'll be more impressed when I see a demo with physics, animation, and dynamic lighting, because that's where things tend to get tricky. They mentioned in the video that they do have animation working - I'll be curious to see how it looks in the next video.

        This company seems to be trying to solve the problem of how to accurately capture and reproduce the real world, but how many games actually want to capture real-world data? If you're in the business of creating fantasy worlds of any sort - and that's precisely what most games are - there's nothing in the real world for you to scan. There's a reason no one else is working this way, I think. As far as the game industry goes, I'm guessing it will probably remain a very niche product, if it's viable at all. I just don't see them throwing away 15 years' worth of maturing polygon-based tools and technologies anytime soon.

        • I am, in no way, endorsing this company. I'm just saying that overall visual impressiveness might not be the primary target.

        • by Delwin ( 599872 )
          It's voxel raycasting. That means that lighting is trivial. Physics gets easier too, as you don't have to deal with single-sided triangles and can trivially subdivide objects. ... Animation, however. That's the killer with voxels. I'm curious to see what happens with them.
        • You're missing the point. Their technology isn't solely for the purpose of gaming. They are soon to start pursuing it for games, but this is already being commercially leveraged for a problem domain that sorely needed the help: point-cloud mapping of the real world for engineering and planning.
          • I'm a game developer, and they're claiming their engine to be suitable for game development, so I'm just evaluating it on that basis. If it works well in other domains, good for them, but I know nothing about those industries. However, game engines have a lot of *very specific* requirements in order to make them practical. At the moment, generating game assets by 3D scanning objects is simply a non-starter for all but a very specific set of games.

            Polygonal-based tools and pipelines are deeply entrenched

        • This company with its impressive-looking but completely static scenes shows up every few years.

          It's nothing particularly special. Unigine [youtube.com] has done something similar, Atomontage [youtube.com] is also similar, Voxel Farm [youtube.com] is lower resolution but supports more interactivity, there's various SVO [youtube.com] demos, this one [youtube.com] is particularly cool for high resolution detail.

        • how many games actually want to capture real-world data?

          It would work for more than real world data. You could take any CGI scene from any movie done today, turn it into voxels and use it in their engine.

        • Even if it isn't real, you might want it to look realistic.

          Otherwise why do they have special effects in movies that are works of fiction?

      • They are only raytracing primary ray intersections though. www.Caustic.com already does this on the new PowerVR Wizard chip. You could LIDAR-scan a scene and tessellate the point cloud into a mesh and you would get just as high quality as this voxel system. The only reason this looks even slightly acceptable is because they're just projection mapping real-world lighting directly onto the voxels. But baked-in lighting would work with projection-mapped geometry too. Without a real shader mo

      • Comment removed based on user account deletion
  • by Anonymous Coward

    I was waiting for the "photorealistic video" to show up and it didn't show its face.
    Hit the snooze button. The stuff at the beginning has crappy texturing and the stuff at the end has better texturing but it's *not* photorealistic.

    • Re:Meh (Score:5, Insightful)

      by Em Adespoton ( 792954 ) <slashdotonly.1.adespoton@spamgourmet.com> on Thursday September 25, 2014 @12:49PM (#47994673) Homepage Journal

      I think the stuff at the end is fairly close to photorealistic, assuming the photo is low-resolution. What it *isn't* is camera-realistic. Their camera engine uses pans and zooms that in no way reflect how a physical camera would move through real space -- this makes the entire effect look fake.

      What they needed to do to make the demo a "wow" demo is put the camera inside the physics engine, and give it the mass and movement of a real camera. The results would have been much better.

      The one bit of the video where I thought "hey, that's actually decent" was on the zoom-in on the stair tread, as the zoom was similar to what you'd get on a camera on a tripod, and the stair looked pretty photorealistic. For the rest, our brains enter the "uncanny valley" not because of the images presented, but because of the surreal way they are presented.

      • What they needed to do to make the demo a "wow" demo is put the camera inside the physics engine, and give it the mass and movement of a real camera. The results would have been much better.

        That's why head tracking demos that allow you to shift your head to get a slightly different viewpoint on screen are so effective.

      • I thought the messed up street with gravel looked real.

        The stairs and most of the rest looked "close... but not real".

        Plus, it replaces artists with

        a) on-site camera requirements.
        b) site licensing fees and gaining permission to "film".

    • the stuff at the end has better texturing but it's *not* photorealistic.

      That's because what it is, is basically a high-resolution 3D image. They use laser scanning to produce a high-resolution point cloud and then display it. The ability to display that much data is impressive, but currently it's using baked lighting (whatever is captured), no animation, and seemingly no volumetric data. It's all well and good to display an image, but it's not particularly good from an interactive perspective if you don't have any context, like what's a tree, a rock or grass? You could individually s

  • They essentially haven't made any progress since they turned to laser scanning a few years ago.
    Oh, and no, not really impressive graphics. They're just showing a cloud of points with some color. No dynamic lights and shading, no dynamic geometry/skeletal deformation here...

    • They're just showing a cloud of points with some color.

      Array of dots.

      Because we are wise and we're ignoring the bullshit marketing words.

      struct Voxel {
          unsigned long Id;
          vector4 Colour;     // r, g, b, alpha
          vector3 vLocation;  // position in world space
          float fScale;
      };

      • A point cloud [wikipedia.org], while similarly named, has nothing to do with cloud computing.

        • "Point Cloud". Where have I heard that term before?

          Ah yes
          Time Scanners [pbs.org]

          For a PBS show, it's surprisingly repetitious, and a lot of the dialogue tends towards the "gee whiz--look at the cool technology we have". It has the flavor of a Discovery Channel type show. Despite this, there are interesting bits and pieces throughout.

  • by MobyDisk ( 75490 ) on Thursday September 25, 2014 @11:48AM (#47993943) Homepage

    Here's the direct YouTube link, BTW: http://www.youtube.com/watch?v... [youtube.com]

    Another potential problem here might be dynamic lighting.

    • by MobyDisk ( 75490 ) on Thursday September 25, 2014 @11:51AM (#47993971) Homepage

      Oh wait, use the higher-quality video link at https://www.youtube.com/watch?... [youtube.com]
      The 360p version of that beats the 1080p version of the original.

      • by MobyDisk ( 75490 )

        Ahh, sorry. That video probably isn't using the voxel engine. That's why it looks so good, but they can't really move around freely. It's more like Microsoft Photosynth.

        • Look at the corner of the wall in the cathedral. They ARE using the voxel engine in that video. It doesn't look as great as they try to make it sound though. Lots of artifacts from scanning etc. Perhaps it could be made to work if they went in and hand-edited those faults. Meanwhile, in the forest footage, which seems to be scanned at higher resolution, you can see how nothing at all moves. As for animation, a while ago, sadly on a different computer, I actually got a video of an animated hummingbird from
      • Here I was watching it at 144p...

    • by UnknownSoldier ( 67820 ) on Thursday September 25, 2014 @12:19PM (#47994263)

      Indeed! Lighting has always been their biggest drawback. Their static lighting, even using lightmaps, is a FAR cry away from Dynamic Global Illumination.

      I'm much more impressed with Unreal Engine 4's realtime global illumination [youtube.com] and real-time architecture [youtube.com] demos.

      Another thing the Euclideon guys don't get is that polygons are "good enough." Photorealism is a red herring of computer games. All the photorealism doesn't make a game FUN; it only has the potential to NOT break immersion. When game design breaks immersion, all the photorealism in the world won't save it -- ironically, it makes it worse! What do you mean I can't climb that fence?

      Cool tech of course with their compression for voxel data, but until they have real time dynamic lighting and global illumination ... *yawn*.

      • As important as the rendering engine may be, the major hold-up on making worlds "photorealistic" is that it takes a lot of work on the part of the artists and level designers to create those worlds. While the video harped on how current-generation engines have lots of repeating textures and models, voxel-based engines aren't going to magically solve that problem. Be they polygon- or voxel-based, the worlds will still have to be put together by artists. Even with voxels, the level designers will still reuse

      • by Creepy ( 93888 )

        First off, many genres of games don't need dynamic global illumination, so that is less of an issue than you make it. In fact, the only games I've played lately that have a global illumination model were Guild Wars 2, The Sims 4 (and yeah, 3 is generally better, but it isn't terrible) and Tomb Raider (2013 version). I have dozens of strategy, adventure, RTS, and racing games that have no global illumination at all. I don't play a lot of shooters outside of the few I know don't make me sick, but yeah that ma

        • > The problem id had with it in RAGE (which was also voxel based)

          How do you come to THAT conclusion?? The world was represented and stored with a **polygon** mesh.

      • Cool tech of course with their compression for voxel data, but until they have real time dynamic lighting and global illumination ... *yawn*.

        Yup, wake me when it runs something non-static and real-time immersive. Until then get off my lawn and let me go back to sleep.

      • Indeed! Lighting has always been their biggest drawback.

        This! Granted, I used to do a bit of 3D work and have a more critical eye, but the lighting screams out NOT photorealistic. I could appreciate the imagery, but again, it had no animation, merely camera movement. So in the end, I was pretty unimpressed. At no point did I even think that any of the images were photos.

        Photorealism is a red herring of computer games. All the photorealism doesn't make a game FUN, only the potential to NOT break immersion.

        When I think of Photorealism, for some perverse reason I think of the ancient text game Zork. No photo-anything, yet I'd get and keep total immersion. In games, I liken photorealism to featur

        • Your reference to text adventures is spot on! We were lost in the imagery that the text prompted our brains to create!

          And yup, agreed that the lack of animation seems conspicuous too!

          • They released a video 6 or 7 years ago claiming "yes, we can do animation"; now, many years later, they still can't show animation.
      • by mjwx ( 966435 )

        Indeed! Lighting has always been their biggest drawback. Their static lighting, even using lightmaps, is a FAR cry away from Dynamic Global Illumination.

        I'm much more impressed with Unreal Engine 4's realtime global illumination [youtube.com] and real-time architecture [youtube.com] demos.

        Another thing the Euclideon guys don't get is that polygons are "good enough." Photorealism is a red herring of computer games. All the photorealism doesn't make a game FUN; it only has the potential to NOT break immersion. When game design breaks immersion, all the photorealism in the world won't save it -- ironically, it makes it worse! What do you mean I can't climb that fence?

        Cool tech of course with their compression for voxel data, but until they have real time dynamic lighting and global illumination ... *yawn*.

        This. There is too much emphasis on photorealism and not enough on making environments believable and immersive.

        I'd rather a building look a little less real if I can blow a decent hole in it with a rocket launcher where I want to. This is something that surprised me about the original Operation Flashpoint: you could actually blow holes in buildings that you could walk through. Of course only in certain spots, and as long as the building had an actual interior (this was just swapping out the model for a p

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Thursday September 25, 2014 @11:49AM (#47993951)
    Comment removed based on user account deletion
    • However, I could still easily tell that these were not real world images.

      I was thinking they weren't blurry enough. The camera pans should have depth-dependent motion blur to make them look more convincing. Since they have the 3D model, this is simply a matter of having the time to program the algorithm; all the parts are well-known. They'll be hearing from Hollywood when they get that and perhaps a bit more accurate lighting (but, hey, throw the thing on a tracing farm for Hollywood money).

  • by awrc ( 12953 ) on Thursday September 25, 2014 @11:54AM (#47994003)

    Graphics so real you could almost be there although we can't figure out why you'd *want* to be there, exciting architecture-based gameplay. Defeat enormous boss structures such as gothic cathedrals and terrifying office blocks, advance to higher levels and face ever-more-powerful types of inanimate building...

  • by OzPeter ( 195038 ) on Thursday September 25, 2014 @11:54AM (#47994007)

    Where the hell do I find a dinosaur in this day and age so that I can laser scan it?

    More reasonably: one thing that leapt to mind when watching the video is that laser scanning inherently "can't see behind the curtain". So how do you generate data for all those hidden surfaces? Several of the examples in the video showed fields of rocks, and I can't imagine there would be enough time to scan the field from all possible view points that would ensure that all surfaces have been scanned. Or is this product mainly targeted at fly-throughs along well defined paths?

    I also saw in one of the comments on the site that all of the video data shown was static, i.e., no animation.

    • Where the hell do I find a dinosaur in this day and age so that I can laser scan it?

      Same place the movie studios get one.

      You have your model builders build you a high quality model you can then scan.

      From what I can tell, most stuff done in CG starts out life in the hands of model builders and sculptors.

      • I agree 100% and was just about to chime in and say the same thing. Now especially with 3D printers being cheap, you can print out models of any kind. Scan them in, skin them and go nuts.

    • by Anonymous Coward

      Just find whoever is maintaining Slashcode.

    • by sl3xd ( 111641 )

      The museum of natural history?

      If Skyrim has skeleton archers and berserkers, why not skeleton dinosaurs?

    • You don't scan everything at once. That is not what they appear to be doing.

      Instead you scan each thing separately and then put them together as needed. Sort of like what they do for polygon models today.

      Half of what is being dismissed is not what they were saying...

      Not that I think it will work; they don't give enough info to know.

  • I claim that I had photorealistic, real-time voxel graphics running on an iPhone 2 seven years ago. I just didn't release it for... reasons. Until they release a demo that runs on someone else's hardware, it's just a worm on a hook for investors.
  • by gurps_npc ( 621217 ) on Thursday September 25, 2014 @12:02PM (#47994089) Homepage
    Yes, with this new product, the fashion and cosmetic industry will be able to make videos with models whose waists are thinner than their ankles.
    • Yes, with this new product, the fashion and cosmetic industry will be able to make videos with models whose waists are thinner than their ankles.

      I think Walmart has already solved that design problem. They're called "cankles".

  • Didn't John Carmack of id Software dismiss voxel-based engines back in the 1990s as being technically inferior -- or maybe impractical -- with the video cards of the day?
    • > Didn't John Carmack of id Software dismiss voxel-based engines back in the 1990s as being technically inferior

      /sarcasm That is why Mojang was bought for $2.5 billion by Microsoft: because it had an inferior voxel engine. :-)

      Oh wait. Voxel engines are NOT the problem. It is the world interaction that either breaks or makes the games. Successful games are about FUN first, graphics second, regardless of what sequelitis EA & Ubisoft is trying to shovel this year.

        > Didn't John Carmack of id Software dismiss voxel-based engines back in the 1990s as being technically inferior

        /sarcasm That is why Mojang was bought for $2.5 billion by Microsoft: because it had an inferior voxel engine. :-)

        Oh wait. Voxel engines are NOT the problem. It is the world interaction that either breaks or makes the games. Successful games are about FUN first, graphics second, regardless of what sequelitis EA & Ubisoft is trying to shovel this year.

        Play control. It's been the defining factor of a good game since the NES days. Without good play control allowing you to interact in a game, it will suck. That's why WoW and COD do so well: they are responsive to your actions. The Battlefield games, on the other hand, are clunky and slow even if they look realistic.

        • Indeed. There are a few more terms that should be included for good games, but I didn't want this to turn into an essay. :-)

          Play Control is definitely one of the key ones. In contradistinction to "floaty controls". Your example of BF feeling clunky while WoW being polished is spot on. I'm not sure I would praise COD, but I can see why you did. It is all relative. :-)

    • Didn't John Carmack of id Software dismiss voxel-based engines back in the 1990s as being technically inferior -- or maybe impractical -- with the video cards of the day?

      Yes. He also tweeted about Euclideon [twitter.com] in 2011. He seems somewhat optimistic about the concept but sees hardware requirements and production issues as possible blockers.

    • by Jeremy Erwin ( 2054 ) on Thursday September 25, 2014 @03:34PM (#47996681) Journal

      Here's a 2011 interview [pcper.com]

      I’ve revisited voxels at least a half dozen times in my career, and they’ve never quite won. I am confident in saying now that ray tracing of some form will eventually win because there are too many things that we’ve suffered with rasterization for, especially for shadows and environment mapping. We live with hacks that ray tracing can let us do much better. For years I was thinking that traditional analytical ray tracing intersecting with an analytic primitive couldn’t possibly be the right solution, and it would have to be something like voxels or metaballs or something. I’m less certain of that now because the analytic tracing is closer than I thought it would be. I think it’s an interesting battle between potentially ray tracing into dense polygonal geometry versus ray tracing into voxels and things like that. The appeal of voxels, like bitmaps, [is that] a lot of things can be done with filtering operations. You can stream more things in and there is still very definitely appeals about that. You start to look at them as little light field transformers rather than hard surfaces that you bounce things off of. I still wouldn’t say that the smart money is on voxels because lots of smart people have been trying it for a long time. It’s possible now with our current, modern generation graphics cards to do incredible full screen voxel rendering into hyper-detailed environments, and especially as we look towards the next generation I’m sure some people would take a stab at it. I think it’s less likely to be something that is a corner stone of a top-of-the-line triple A title. It’s in the mix but not a forgone conclusion right now.

      In 1999 [floodyberry.com], he was working with 3d "light maps".

    • by Creepy ( 93888 )

      Early graphics hardware was designed to texture triangles and didn't contain a bunch of highly parallel general-purpose units like today's graphics hardware. Since raycasting (and raytracing) and voxels can be done in parallel, voxels are making more sense again. You can make entirely unique terrain without overlapping and blending a bunch of textures, using disk streaming methods and some structure (Carmack used a sparse voxel octree in RAGE; here's a BSD-licensed example of a sparse voxel octree [google.com]).

      But as

  • Comment removed based on user account deletion
  • Fraudulent.

    They've shown up numerous times presenting this technology in an attempt to garner easy money. They already suckered the Australian government with a $2 million grant [startupsmart.com.au]. They haven't really produced anything new and are just presenting the same polished demos without making efforts to tackle any of the issues inherent in such voxel-based engines.
  • So as we all know... the better it looks, the worse the gameplay will be - and the storyline will be even worse.
  • Looks like National Geographic already scanned the universe from Earth.
    https://www.youtube.com/watch?... [youtube.com]
  • I've seen demos of what I believe to be this technology before, but what it seems to lack is any kind of interactivity with the environment/objects in the environment. From what I can tell in this latest video they've added an FPS handgun overlay and some poorly animated ferns.

    The point is: Cool, you can render a nice point cloud. Can you actually do interesting things with it / what we want in most games or virtual environments, or can you simply render a nice point cloud?

  • 1) They still haven't explained how they solved the memory-bandwidth issues inherent to point-cloud rendering. As far as I'm concerned, they're probably a scam just because of this. I can't say with 100% certainty, but their refusal to demonstrate it actually running in real-time is extremely suspicious.

    2) How do they plan to work with dynamic content? Animations? Dynamic lights/shadows? So far I've only seen static scenes - unless they just want to make a new Myst, this is basically useless for games.

    3) Ho

    • by Creepy ( 93888 )

      1) While I don't know specifics, I'm sure it involves streaming, probably octrees, and possibly compression/decompression of nodes in the octree.

      2) You need to add or modify the scene in an editor and create animations if you want those. No dynamic lighting or shadows at this time, pending hardware advances (it can be done, just not very well in realtime).

      3) Laser scanning can quickly build rooms identical to real world rooms without having to go in and model everything. The flaw with it is anything behind any bloc

  • The people turning their noses up at this are failing to see the implications. We can 3D-model reality in precise detail with this and then replay the model realistically.

    that is amazing. What is more, the data is precise enough that you could reconstruct the whole thing exactly. That is completely amazing.

    As to games... I look forward to them. I remain a little skeptical as to the animations but maybe they really did sort it all out.

    • It's more complicated than that.

      Polygon-based engines support (and modern games heavily depend upon) things like:
      * Dynamic lighting and shadows
      * Deformable environments
      * Transparency
      * Reflections
      * Fast collision detection
      * AI route planning

      Now go back and look at that demo video and tell us where you see those things.

      Also, polygon based engines are still pretty efficient because of:
      * Texture re-use
      * Bump mapping to improve realism
      * Shaders to implement things like motion blur, ambient lighting, etc.
      * LOD m

    • by ledow ( 319597 )

      Don't be fooled by the hype.

      In that same way that some have taken hi-res scans of the Mona Lisa in every spectrum (visible, UV, etc.), there are companies capable of taking these laser scanners and doing just the same - without the voxel bollocks.

      At no point is that engine rendering "hundreds" of voxels in between every point that the laser scanner scanned. What they've done is taken several laser scans, merged them together to get an almost-3D representation (of the backs of objects the laser can't pene

    • I think this has a lot more application than just games. I mean, there's archival of buildings, objects, knowledge, record keeping, procedures, etc. People forget how to manufacture items when record keeping isn't done properly or is lost. Now you can accurately scan objects, deconstruct them virtually and figure out how to rebuild things once those who originally built them have long passed on. It's happened already with engine parts with NASA. Military parts and items from WW1 and WW2 even. Think about an

  • Dubbing (Score:5, Insightful)

    by donscarletti ( 569232 ) on Thursday September 25, 2014 @12:28PM (#47994359)

    "Oh, looking at the time I've used up one minute and twenty seconds of this video (audible snigger) and according to Youtube statistics, I've only got about one minute and forty seconds to show you something really interesting and get your attention before the majority of you decide you're bored and move on to something more interesting (slight snigger at the end of sentence). How inefficient of me to use my time to show you all of these video clips of all of these nice real world places we filmed when we should have been showing you some amazing new lifelike computer graphics!"

    This guy has an annoying, self satisfied way of speaking that just makes me want to beat the snot out of him.

    Voxel graphics are interesting and the laser measurement plus automatic texturing from a real world scene is cool, but this just does not compare in detail or framerate to a mesh generated by the exact same laser scanner and a little bit of pre-processing, all of which has been possible for over a decade now.

    Plus, what are you going to do with this 3D scene? An interactive game? But games need dynamic objects, which cannot really be done well with voxels and will contrast dramatically with the scene's lighting model. You don't have any light probes, or spherical harmonic coefficients, or anything useful for statically lighting dynamic objects, let alone dynamically lighting static objects.

  • You just need to mine a lot of stuff beforehand

  • Absolute GARBAGE (Score:2, Informative)

    by Anonymous Coward

    "Dead Meshing" a real environment, and then shooting enough JPG compressed digital photographs to texture the mesh from most possible viewing positions is the very WORST way to handle the problem of real-time rendering of open world environments. And BTW, this method has ZERO to do with 'voxels'. But periodically we get this same nonsense sold to fools alongside the cretinous "ray-tracing uber alles".

    John Carmack- an individual with intelligence vastly greater than the con-man behind Euclideon- attempted to

  • I thought they were gone for good after the last, oh, I don't know, 4 attempts of selling a shitty static-everything voxel engine as THE MOST REVOLUTIONARY THING EVER? Ask anybody who knows fuck all about computer graphics what they think of these people, and prepare to get laughed out of the room.
  • Voxelization is a 4-D calculation. Every five years, computers get 16x faster, which only doubles the effective resolution (2^4 = 16). After 20 years you see significant progress.
  • Photogrammetry (Score:5, Interesting)

    by acid_andy ( 534219 ) on Thursday September 25, 2014 @01:08PM (#47994889)
    I was much more impressed with this technology [theastronauts.com], photogrammetry, given that they're already using it to develop a game (called the Vanishing of Ethan Carter). Rather than brute force laser scanning to create voxels, they are building 3D models using photographs from many different angles and to me the results look as good or even better than those in TFA.
  • Early in the video, the narrator said "our eyes just know that these (shown on the screen) videos are real", with the point being that later on he was going to surprise us that they were in fact renditions by his product.

    But when I was looking at those images, I was actually thinking that they didn't look real to me. For some reason, I found myself thinking of Half-life 2.

    • One of the reasons the videos look fake is because the lighting is static. Light is never 100% static. Most people's eyes pick up on this and know that there is something wrong, but can't explain why. They just know it is fake because something looks wrong.

      The other problem is that the movement is too linear. Even with a steady cam and rails, you still get small irregularities in movement and shaking. The video was perfectly going from point A to point B without any flaws at all.

      It is these, and probab
  • ...to link to them, instead of the damn video in the first place?

    http://www.youtube.com/watch?v... [youtube.com]

    (warning: unbelievably unctuous narrator included)

    I'm curious how they deal with occlusion, as the demonstrated environments are fully-realized, yet - unless you're popping that laser-scanner in 100+ locations - there's no way that there aren't going to be surfaces occluded from the scan?

  • Their demos are not very nice and mostly vaporware. For people interested in non-vaporware next-generation rendering technology, I suggest you check out this video, based on path tracing, which is a form of stochastic ray tracing. https://www.youtube.com/watch?... [youtube.com] It looks to me like this technology is getting closer and closer to the mainstream, and the results are eons ahead of any raster-based engine.
  • This tech is great for maps. If Google Maps implemented it, that would be awesome. Instead of seeing through a lens mounted on their car, you could walk around the environment and see everything recorded from their car. It could be useful for film and television, as buildings and backgrounds are often CG that gets touched up anyway. This would just streamline the process.

    But this will never be used in games until animation and full shader support is available on a per-object or per-face basis. That
