This discussion has been archived. No new comments can be posted.

X.Org Server 1.11 Released

  • by Anonymous Coward

    Is it just me, or are most open source project "official" releases getting rather humdrum?

    If I have a bug that needs fixin', I use the beta (or just apply the patch(es) manually). If I don't, I generally don't upgrade until a feature seems interesting, or my package manager says "omg you need this!".

  • by Gaygirlie (1657131) <(gaygirlie) (at) (hotmail.com)> on Saturday August 27, 2011 @06:04AM (#37226350) Homepage

    I mentioned this a while back on OSNews when I got my new laptop and noticed that it has two graphics cards instead of one: one is a higher-powered card able to churn away at games, 3D modeling and whatnot at acceptable speeds, and the other is a very low-powered one that can barely handle regular 2D. The system switches between the two when I plug/unplug the AC adapter, though it also lets me switch between them at will.

    The thing here is that the low-powered one saves HUGE amounts of battery compared to the high-powered one, even if I go to such drastic measures as downclocking the latter. Using two separate chips, instead of incorporating both in the same chip or just giving the more powerful chip more aggressive power-saving capabilities, is not the same thing, for several reasons: being able to buy and use the chips separately means the manufacturer may save money by buying different batches of chips from different places, and it obviously allows the manufacturer to mix and match at will. And adding more aggressive power-saving capabilities to a chip always means making compromises that could otherwise be avoided. It simply makes some sense to use two chips to save battery, and I've noticed several manufacturers trying that lately. It remains to be seen whether it'll actually become a trend, though, or just a passing fad.

    Unfortunately, though, X.org doesn't support such a scheme. You can't just switch between cards on the fly; you must muck around first and then restart the whole X server, thereby defeating the whole idea. And there don't seem to be any plans for remedying this, or at least I can't find anything relevant.
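For reference, on kernels of this era the "mucking around" is typically done through the kernel's vga_switcheroo debugfs interface. A sketch, assuming debugfs is mounted at /sys/kernel/debug and the graphics drivers support switching; note that the switch only fully takes effect around an X restart, which is exactly the complaint:

```shell
# Show the GPUs and which one is currently driving the display
cat /sys/kernel/debug/vgaswitcheroo/switch

# Ask for the discrete GPU (DIS) or the integrated one (IGD); the
# "D"-prefixed forms (DDIS/DIGD) defer the switch to the next X restart
echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch

# Power off whichever GPU is not in use, to save battery
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
```

This requires root and the right hardware/driver combination, and it still can't migrate a running X session between cards.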

    • but have you filed a bug report?
      • No. It would only stir up the hivenest, generate a few angry replies, and nothing would happen.

        I'm rather hoping for someone more influential to pick it up and raise some interest in the issue.

        • No. It would only stir up the hivenest, generate a few angry replies, and nothing would happen.

          I don't think so. It would be good to raise some discussion on the subject.

        • No. It would only stir up the hivenest, generate a few angry replies, and nothing would happen.

          I'm rather hoping for someone more influential to pick it up and raise some interest in the issue.

          So you think complaining about it on Slashdot is more likely to get the result you want? Really? My guess is that anyone who works on the project who sees your post is likely to think, "Here's someone who can't be bothered to follow the most basic procedures, so to hell with it, why should we care what they say they want?" It's like dealing with your computer-illiterate friend who calls you up for free tech support but refuses to understand the difference between the monitor and the hard drive.

          • It is more likely to reach a wider audience here than on the mailing lists, so yes, this is my approach. I'm just raising general awareness of and interest in the issue; I'm not really expecting anything to happen as a result. Feel free to disagree, as you clearly are doing already.

    • by maxume (22995)

      Does it have an Intel i3/i5/i7?

      In that case, the explanation is that Intel built a reasonable GPU into the CPU (but those GPUs have nice drivers and make easy work of 2D, so maybe it isn't one of those).

    • by Anonymous Coward

      Take a look at Bumblebee[1]. It will turn your discrete GPU off and on at will. For example, to start Quake using your discrete GPU, you would call:

      $ optirun quake

      and that window, and only that window, will take advantage of your discrete GPU; when the process dies, the GPU is turned off again.

      [1]: https://github.com/MrMEEE/bumblebee

    • by sgt scrub (869860)

      You're asking for HotPlug support for graphics devices. That wouldn't be part of X.

      • You're asking for HotPlug support for graphics devices. That wouldn't be part of X.

        Oh really? What would it be part of then, pray tell? Because when you change graphics adapters on the fly, the X environment would still have to adjust its settings appropriately (the supported features, for example, could and likely would change when you change the adapter), transfer any necessary memory contents from the previous one to the new one, and so on, without killing off the running apps in the process.

        • by sgt scrub (869860)

          There isn't HotPlug support for graphics devices; HotPlug support lives in the driver. If support existed, then the X developers could add code for it just as they do for input devices (Option "AutoAddDevices"). The supported features for graphics devices would be in the config when you create them. A graphics device not being available would (should) not delete them. Switching from one device to another shouldn't be any more difficult to implement than switching from one view to another on a lap
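For context, the input-device analogue the parent mentions is an existing xorg.conf convention; a minimal sketch of what that knob looks like (there is no equivalent option for graphics devices):

```
Section "ServerFlags"
    # Existing input hotplug option; no graphics-device counterpart exists
    Option "AutoAddDevices" "on"
EndSection
```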

    • by Paradigm_Complex (968558) on Saturday August 27, 2011 @10:05AM (#37227262)

      The two-graphics-card scheme you're talking about was developed by nVidia; it is called "Optimus."

      There is an open source project to get this stuff to work with Linux/X11, called bumblebee. See here:

      https://github.com/MrMEEE/bumblebee/ [github.com]

      If you want a more specific guide for using bumblebee with your specific laptop/distro combination, you may be able to find one if you look around. For example:

      http://ubuntuforums.org/showthread.php?t=1763742 [ubuntuforums.org]

      I can't vouch for Bumblebee; I've never actually tried it myself. However, it seems to be exactly what you're looking for. Let's hope it's a solid project, as Optimus is becoming more and more popular and nVidia doesn't seem to have any plans to support it on Linux, with an open source driver or otherwise.

      • The two-graphics-card scheme you're talking about was developed by nVidia; it is called "Optimus."

        I know that at least Intel offers a similar thing, with integrated graphics and the option of switching between that and a separate card. And I have a system with two Radeon cards, so it's not only "Optimus" that does this.

        There is an open source project to get this stuff to work with Linux/X11, called bumblebee. See here:

        https://github.com/MrMEEE/bumblebee/ [github.com]

        Aye, someone else also mentioned Bumblebee here. I don't know how I totally missed Bumblebee when I Googled around. It looks interesting, but it has a flaw in that it is chipset-specific: it doesn't even try to provide a general-purpose solution. With at least 3 different manufacturers

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Nvidia's implementation is called Optimus [wikipedia.org], and Nvidia has already said "go fuck yourself" in response to "will you support this on Linux".

      Initial Linux support is being carried out in the Bumblebee Project [github.com]; the bleeding-edge branch is called Ironhide [github.com]. I have no idea about the AMD version because I'm not affected by it.

      Ha, captcha is "ashamed", as Nvidia should be for releasing this shit.

    • What this is is the new generation of Intel CPUs, which include a built-in GPU and frame buffer, usually advertised as an "Intel 3000". They are capable of providing moderate graphics as well as GPU off-load of some operating system functions.

      The daughter cards, as far as I know, are all "Optimus" versions of nVidia GPUs, which don't have frame buffers or other basic capabilities but share the on-board GPU for those, while providing much higher-powered graphics for games and other

  • Does this mean we are finally going to eliminate some of the layers of X and go with a saner, more modern approach to video display, or did we just throw in a few new bells and do some performance tweaks? We really need a better, more responsive/modern display system.

    • Re: (Score:2, Informative)

      by VVelox (819695)

      The performance issues are not an X issue, but a driver issue.

      The major issue when it comes to performance and X is the drivers, which are largely crap. Unfortunately there is very little information on the internals of most cards, which makes writing good drivers complex or damn near impossible. This also requires a nice bit of programming and math knowledge in various areas.

      Changing graphics server technology won't fix this issue.

    • What layers? Last I looked, Xlib calls are all there are for low-level graphics. Can you show me a system with fewer layers than X?

      • by siride (974284)

        Wayland, but only because it isn't asked to do what a modern windowing system needs to do. Once it goes "live" and has all of the same requirements that X had to deal with foisted upon it, it'll likely become bloated like every other software project that has to do anything useful. Except, instead of having a good architecture, even if it's large and complex, they went for simple for the sake of simple. We'll see how that works out. Already, things like network transparency will have to be bolted

        • But what is the use case for network transparency in a modern setup?

          It was originally designed so you could log into a powerful graphics workstation/server, do your work there, and send the output to your own computer. But nobody does that today, because cheap, powerful workstations are everywhere.

          I have been running Linux for the last 10 years, and I have not used the ability to show remote X sessions/windows on my own desktop in the last 5. And I can't come up with a reason to ever do it again.

          It mi
          • by siride (974284)

            What are the design problems you speak of? It's pretty simple.

            Maybe RD doesn't have a lot of use for the average desktop user, but it is used in the corporate world and it is used by power users. Just because *you* don't use it doesn't mean nobody does.

            • by TeknoHog (164938) on Saturday August 27, 2011 @10:39AM (#37227440) Homepage Journal

              Maybe RD doesn't have a lot of use for the average desktop user, but it is used in the corporate world and it is used by power users. Just because *you* don't use it doesn't mean nobody does.

              Here are some of the use cases where remote X has been important to me:

              • Compiling and running student projects on the university's Solaris machine
              • Computational fluid dynamics on a supercomputer, situated in another city
              • Just today: running Firefox on my x86-64 machine to access a Flash site, displayed on my Powerbook (no flash for PPC Linux)

              You could summarize these by saying that, for power use(r)s, the number of users is very different from the number of computers. For starters, I'm not going to buy extra monitors, keyboards and mice for all my machines just because some desktop user thinks remote X is obsolete. In the case of supercomputers and similar specialist machines, it is physically impossible for all users to sit by the same computer. Plus it would be expensive (money, time, environment) for everyone to get there.

              Many people argue that remote X can be replaced by more platform-independent systems like VNC. In some cases that is true; in fact, there are cases where remote X does not work, for example when the OpenGL/CL code needs to run on the same machine as the rest of the program. On the other hand, VNC is often much heavier on the network, since it needs to transfer the entire bitmapped screen. For example, my fluid mechanics work involved relatively simple 3D modelling, and it worked fine over 1-megabit ADSL and cable, but VNC is often sluggish even on a LAN.
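The bandwidth gap is easy to ballpark. A minimal sketch with assumed numbers (real VNC sends compressed deltas, but the raw cost shows why a 1-megabit line struggles with a bitmap stream):

```python
# Back-of-envelope cost of streaming a full bitmapped screen, VNC-style.
# All numbers are assumptions for illustration, not measurements.
width, height, bytes_per_pixel = 1024, 768, 3  # 24-bit desktop (assumed)
fps = 10                                       # modest update rate (assumed)

frame_bytes = width * height * bytes_per_pixel
raw_mbit_per_s = frame_bytes * fps * 8 / 1e6   # uncompressed stream

print(f"one frame: {frame_bytes / 1e6:.1f} MB")
print(f"raw stream: {raw_mbit_per_s:.0f} Mbit/s vs. a 1 Mbit ADSL link")
```

Remote X, by contrast, ships drawing commands, which for a simple 3D model is a tiny fraction of pushing every pixel.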

              • by bgat (123664)

                VNC is a screen-scraper, with all the issues that come with that. If that's all you have then it's at best only tolerable. The rest of the time, it's a crappy alternative. Windows Remote Desktop falls into the same category, as far as I'm concerned.

                It's far better that X work the way that it does, and we use it that way. X's client-server model contributes very positively to system stability, portability, and maintainability; and when the client and server are on the same machine, as is the case with th

                • by TeknoHog (164938)

                  It's far better that X work the way that it does, and we use it that way. X's client-server model contributes very positively to system stability, portability, and maintainability; and when the client and server are on the same machine, as is the case with the OP, the "overhead" really isn't there at all. Any objection to X on this basis is pure and ignorant FUD.

                  Oh, and by the way, since X is client-server, we can move the two onto different machines. And add more machines into the mix.

                  This. I thought the merits of modular coding would be widely acknowledged by now. We use higher-level languages with object orientation, even though assembler might be a little faster.

                  Just today, I've been discussing how to make a cluster of FPGAs for a certain parallel job. I then realized that the same ideas of modularization would help my code even on a single chip. (Partly because the async links would help with some clocking issues, making each module independent clock-wise).

            • by TheSunborn (68004)
              The design problems mainly have to do with the timing of animations.

              Since you need support for running over a network, things such as vblank support, and anything in general that requires knowledge of the update frequency/stats of my screen, can't really be done*.

              *Except by extensions which don't work over the network.
              • by siride (974284)

                I don't see why vblank information can't be sent over the wire like everything else. Or, as you mention, it can be done outside the wire, just like shared memory images are done. The problem we've had with vblank is that neither the protocol nor the toolkits have any infrastructure for dealing with it. There have been vblank and sync extensions for years, but nobody uses them. Meanwhile, DRI and DRI2 have been used extensively by applications and desktop frameworks/window managers, and DRI and DRI2 are c

          • by bgat (123664)

            You can also think of the "network transparency" part as being a side-effect of the client-server model implemented by X, which fully isolates applications from the graphics hardware. That isolation contributes in a very positive way to system stability and portability.

            And, once you have a client-server model, it doesn't really matter how far apart the two are. Hence the "network transparency" part.

            Regardless, anyone who argues against X because of its "network transparency" feature is arguing from a poin

          • by rmcd (53236) *

            I want to do a quick calculation in Mathematica. I don't have a Mathematica license on my personal machine. So I log in to the research server, launch Mathematica remotely, do my thing, and log off.

            Are you really claiming this use case is no longer important? At my university I see it all the time.

            Maybe I'm missing something.
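For what it's worth, that workflow is usually just X11 forwarding over ssh (hostname hypothetical):

```shell
# -X enables X11 forwarding; remote GUI windows open on the local display
ssh -X research.example.edu

# ...then, in the remote shell:
mathematica &
```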

            • Actually, I rather like the point that X is to Wayland as http: is to file: for the web. It's a silly comparison, however.

              Personally, I like the philosophy behind Wayland, as it repartitions the problem in a way that fits better. I use remote X all the time and assume I will continue to use remote applications when Wayland is popular. Modern X toolkits use techniques that are a poor match for remote X anyway (mostly rendering to bitmaps internally and then pushing them out via X). Things are changin

          • by fikx (704101)
            So, you don't use it, which means nobody does? My guess is that if you don't use it, it's because you don't understand how to use it, or because the only way you talk to other machines over the network is via a browser.
            I use it daily, not for troubleshooting or admin, but just for average use. Running a browser from one machine on another is a great way to avoid worrying about a locked-down machine with a bad default browser, and I get to keep all my shortcuts and plug-ins as-is no matter where I am. Yes, there
          • by Eric Green (627)

            One thing I'll point out is that RDP (using the current Windows clients and servers) is extremely efficient compared to "network-transparent" X. When I use Wireshark to look at what's on the wire, opening a Firefox window on Windows and displaying it on my desktop uses roughly the same bandwidth as X's "network-transparent" windowing, but it happens much quicker because of latency: the X client issues multiple requests to the X server and then *waits for the response* before continuing on. Furthermore, RDP is
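The round-trip effect described here is easy to model. A toy sketch with assumed numbers (not a measurement of X or RDP):

```python
# A client that waits for each reply before issuing the next request
# pays the link's round-trip time once per request; a pipelined client
# pays it roughly once in total. All figures below are assumptions.
rtt_s = 0.05      # 50 ms WAN round trip (assumed)
n_requests = 200  # synchronous requests to map one window (assumed)

serialized = n_requests * rtt_s  # request, wait, request, wait, ...
pipelined = rtt_s                # stream requests, collect replies later

print(f"serialized: {serialized:.0f} s, pipelined: ~{pipelined:.2f} s")
```

The bandwidth is identical in both cases; only the number of latency stalls differs, which matches the Wireshark observation above.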
