NVIDIA's G-Sync Is VSync Designed For LCDs (not CRTs)

Phopojijo writes "A monitor redraws itself top to bottom because of how the electron guns in CRT monitors used to operate. VSync was created to align the completed frames, computed by the video card, to the start of each monitor draw; without it, a break (horizontal tear) would appear on screen midway through the monitor's draw, between two time-slices of animation. Pixels on LCD monitors do not need to wait for the lines above them to be drawn, but they do anyway. G-Sync is a technology from NVIDIA that makes the monitor's refresh rate variable: the monitor times each draw to whenever the GPU finishes rendering. A scene that takes 40ms to render is shown at a smooth 'framerate' of 25FPS instead of being forced into some fraction of 60FPS." NVIDIA also announced support for driving three 4K displays at the same time, for a combined resolution of 11520×2160.
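The 40ms example works out like this: with a fixed 60Hz refresh, a finished frame has to wait for the next ~16.7ms scan-out boundary, so a 40ms frame is not shown until 50ms (an effective 20FPS, with judder as render times vary), while a variable-refresh panel simply starts its draw the moment the frame is ready. Below is a minimal sketch of that arithmetic, not NVIDIA's implementation; the frame times are hypothetical.

```python
# A minimal sketch of fixed-refresh quantization vs. variable refresh.
# Frame times are hypothetical; this is timing arithmetic, not a real API.
import math

REFRESH_INTERVAL_MS = 1000 / 60          # fixed 60 Hz scan-out period

def present_fixed_vsync(render_ms):
    """Classic VSync: the frame is held until the next refresh boundary."""
    return math.ceil(render_ms / REFRESH_INTERVAL_MS) * REFRESH_INTERVAL_MS

def present_variable_refresh(render_ms):
    """G-Sync-style pacing: the display refreshes when the GPU finishes."""
    return render_ms

for render_ms in (14.0, 20.0, 40.0):
    fixed = present_fixed_vsync(render_ms)
    variable = present_variable_refresh(render_ms)
    print(f"render {render_ms:5.1f} ms -> "
          f"vsync shows it at {fixed:5.1f} ms ({1000/fixed:5.1f} FPS), "
          f"variable refresh at {variable:5.1f} ms ({1000/variable:5.1f} FPS)")
```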
  • In English (Score:2, Interesting)

    by girlintraining ( 1395911 ) on Friday October 18, 2013 @04:35PM (#45169409)

    Okay, can someone who isn't wrapped up in market-speak tell us what the practical benefit is here? The fact is that graphics cards are still designed around the concept of a frame; the rendering pipeline is based on it. VSync doesn't have much meaning anymore; LCD monitors just ignore it and bitblt the next frame straight to the display without any delay. So this "G-Sync" sounds to me like just a way to throttle the graphics card's pipeline so it delivers a consistent FPS... which is something we've been able to do since DirectX 9.

    So what, then, is the tangible benefit realized? Because I smell marketing, not technology, in this PR.
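On the "throttle the pipeline" point above: a plain software frame limiter delivers frames at a steady cadence, but a fixed-refresh LCD still scans out on its own 60Hz clock, so delivery and display stay unsynchronized, which is the gap G-Sync targets. Here is a rough sketch of such a limiter, using illustrative names and timings rather than any real graphics API:

```python
# A toy frame limiter: it only controls when frames are *finished*,
# not when a fixed-refresh panel actually scans them out.
import time

TARGET_FRAME_MS = 1000 / 60

def render_frame():
    # Stand-in for real rendering work; here we just burn ~5 ms.
    time.sleep(0.005)

def run(frames=10):
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed_ms = (time.perf_counter() - start) * 1000
        # Sleep off the remainder so frames are delivered at a steady cadence.
        if elapsed_ms < TARGET_FRAME_MS:
            time.sleep((TARGET_FRAME_MS - elapsed_ms) / 1000)
        print(f"frame delivered after {(time.perf_counter() - start) * 1000:.1f} ms")

if __name__ == "__main__":
    run()
```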

  • Finally (Score:5, Interesting)

    by Reliable Windmill ( 2932227 ) on Friday October 18, 2013 @04:37PM (#45169431)
    Now we just have to wait until they figure out how to employ a smarter protocol than sending the whole frame buffer over the wire when only a tiny part of the screen has changed. It would do wonders for APUs and other systems with shared memory.
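A toy sketch of that "send only what changed" idea, in the dirty-rectangle style; this is not any shipping display protocol (eDP Panel Self Refresh is arguably the closest real-world analogue), and the framebuffer model below is deliberately simplistic:

```python
# Dirty-row transmission sketch: compare two framebuffers and send only
# the rows that differ. Framebuffers are modelled as lists of row tuples.
def dirty_rows(previous, current):
    """Return (row_index, row_data) for every row that differs."""
    return [(y, row) for y, (old, row) in enumerate(zip(previous, current))
            if old != row]

def transmit(previous, current):
    updates = dirty_rows(previous, current)
    full_cost = len(current)              # rows sent if we push the whole frame
    print(f"sending {len(updates)} of {full_cost} rows")
    return updates

# Example: a 1080-row frame where only a 20-row cursor region changed.
prev_frame = [tuple([0] * 16) for _ in range(1080)]
next_frame = [tuple([1] * 16) if 500 <= y < 520 else row
              for y, row in enumerate(prev_frame)]
transmit(prev_frame, next_frame)          # -> sending 20 of 1080 rows
```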
  • Re:In English (Score:5, Interesting)

    by Anonymous Coward on Friday October 18, 2013 @06:35PM (#45170479)
    Well, I don't want to be "that guy", but I am "that guy". The real reason for vsync in the days of CRTs was to give the energy in the vertical deflection coils time to swap around. There is a tremendous amount of apparent power (current out of phase with voltage) circulating in a CRT's deflection coils.

    Simply "shorting out" that power results in tremendous waste. They did it that way early on, but they quickly switched to dumping that current into a capacitor so they could dump it right back into the coil on the next cycle. That takes time.

    An electron beam has very little mass and can be put anywhere on the face of a CRT very quickly. It's just that the magnetic deflection used in TVs is optimized for sweeping at one rate in one direction. CRT oscilloscopes used electrostatic deflection, and you could, in theory, have the electron beam sweep as fast "up" as "left to right".

    So why didn't they use electrostatic deflection in TVs? The forces generated by an electrostatic deflection system are much smaller than those of a magnetic system; you'd need a CRT a few feet deep to get the same size picture.

    Ta dah! The wonders of autism!
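The retrace-time argument in the comment above can be made concrete with a back-of-the-envelope calculation: during retrace the deflection yoke and a capacitor ring as a resonant circuit, so reversing the scan current "for free" takes about half a resonant period. The component values below are guesses for illustration, not measurements from any particular CRT:

```python
# Back-of-the-envelope numbers for the coil-energy argument. During retrace
# the yoke inductance L and a flyback capacitor C resonate, so the fastest
# the current can reverse without wasting the energy is half a resonant
# period, t = pi * sqrt(L * C). All values are assumptions for illustration.
import math

L_VERTICAL = 20e-3     # vertical yoke inductance, henries (assumed)
C_FLYBACK = 2e-6       # resonating capacitance, farads (assumed)
I_PEAK = 1.0           # peak deflection current, amperes (assumed)

stored_energy_j = 0.5 * L_VERTICAL * I_PEAK ** 2          # E = 1/2 * L * I^2
retrace_s = math.pi * math.sqrt(L_VERTICAL * C_FLYBACK)   # half resonant period

print(f"energy swapped each retrace: {stored_energy_j * 1000:.1f} mJ")
print(f"minimum retrace time: {retrace_s * 1e6:.0f} microseconds")
```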

  • by Miamicanes ( 730264 ) on Saturday October 19, 2013 @01:49AM (#45172817)

    Actually, the problem is even bigger. Somewhere around 200fps you start flying into "uncanny valley" territory. 200fps is faster than your foveal cones can sense motion, but it's still less than half the framerate at which your peripheral rods can discern motion in high-contrast content. When it comes to frame-based video, Nyquist makes a HUGE mess thanks to all the higher-order information conveyed by things like motion blur. That's why so many people think 24fps somehow looks "natural" while 120fps looks "fake". Motion-blurred 24fps video has higher-order artifacts that can be discerned by BOTH the rods AND the cones equally. It's "fake", but at least it's "consistent". 120fps video looks flawless and smooth to the cones in your fovea, but still has motion artifacts as far as your peripheral rods are concerned. Your brain notices and screams, "Fake!"
