Compiz Project Releases C++ Based v0.9.0
werfu writes "Compiz 0.9.0, the first release of Compiz rewritten in C++, has been announced on the Compiz mailing list. See the announcement for more info." Compiz has for years been one of my favorite ways to make Windows users envious, despite my (Linux) systems' otherwise low-end graphics capabilities. Besides the switch to C++ from C, this release "brings a whole new developer API, splits rendering into plugins,
switches the buildsystem from automake to cmake and brings minor functionality improvements."
Wow! (Score:5, Funny)
I'm excited to learn about more software using this new programming language of the future!
Re: (Score:2)
Jealous of first poster or what?
Re:Wow! (Score:4, Funny)
troll or idiot?
People with preternatural foresight will often look like idiots or fools.
I think the grandparent sees the potential of C++ and a bright future for this new and advanced language!
Re: (Score:2)
Truer words haven't been spoken! I am filled with jubilant delight to hear that the Compiz team could exploit the wildly successful merge of the object-oriented and functional programming paradigms of C++!
Great - Time to hold off upgrading Compiz (Score:2)
The language and dependency changes aside, how much do you want to bet there will be packaging problems in every distro?
After two and a half years of getting Compiz sorted out in SuSE, RH, and Slackware, so that you have a 50% or better chance of it working out of the box when you install a distro without having to dig through massive tweaking to get it operating... I'm expecting a step or two backwards in the "installability" department for a while.
Re: (Score:2)
After two and a half years of getting Compiz sorted out in SuSE, RH, and Slackware, so that you have a 50% or better chance of it working out of the box when you install a distro without having to dig through massive tweaking to get it operating... I'm expecting a step or two backwards in the "installability" department for a while.
Nobody should be putting Compiz 0.9 into a shipping distribution. Hopefully by the time 0.10 comes out they'll have it unfucked again.
Fedora might do it, of course. But I don't see it until some point releases have gone by.
Summary Fail (Score:5, Interesting)
The relevant words from the announcement are "complete rewrite". Or in simpler terms for the users, you do not want to run this until it reaches 0.10 (also as per the article.) This is a development and not stable release. (Sure would be nice if they would go 1.0 instead of .10 if it's going to be a stable release...)
Here's the stuff from the announcement interesting to users:
Everything else is of interest only to developers...
Re: (Score:2)
Sure would be nice if they would go 1.0 instead of .10 if it's going to be a stable release...
1.0 = 100%. When they reach 1.0, there can never ever be any more releases. ;)
Speed (Score:2)
Would the language switch bring any speed increase?
Since the enforced change from the ultra-fast, ultra-stable Beryl to the not-very-fast Compiz, I have not been very impressed with Compiz. The developers told me they didn't change anything when merging the Beryl fork back into Compiz, but the facts on _MY_ system are simple.
With Beryl I could run whatever effect I wanted, and even multiple effects at the same time, and the CPU was barely used; about 98% of the work was offloaded to the graphics card. Now with C
Re: (Score:2)
You're not going to see any speed gain from *just* switching to C++ from C. A direct translation of code from C to some other language almost never accomplishes this. Compiling Compiz will also be slower if it was just a language change, anyway.***
*** Unless the authors also did a major refactor and performance enhancement job while they were sifting through the code, which is what I always strive to do when I have to refactor an entire project from scratch, but in a time crunch or to get new
Re: (Score:2)
First release of merged branches (Score:5, Informative)
So.. what is it? (Score:5, Insightful)
Re:So.. what is it? (Score:5, Funny)
It sucks the paint off your house and gives you and your family a permanent orange afro.
Re: (Score:2)
What, a Valderama [inthestands.co.uk]?
Re: (Score:2)
Re:So.. what is it? (Score:4, Informative)
I use the cube desktop switcher and that's it. For some reason I find the idea of a cube easier to map out in my mind when I have several windows open than a chain of 4 desktops.
Re:So.. what is it? (Score:4, Insightful)
Nothing useful. It's eye candy, like a turbo-charged Aero Glass with 3D effects. I use the cube desktop switcher and that's it. For some reason I find the idea of a cube easier to map out in my mind when I have several windows open than a chain of 4 desktops.
So in other words, you find at least one aspect of it to be very useful. While some window effects are just pure eye-candy (e.g., wobbly windows), many of the added desktop effects provide various degrees of enhanced functionality. This includes:
Don't dismiss the suite as just eye-candy; if the main perception of Compiz is that it exists only to make things more fun and prettier, then its overall value to the desktop is understated.
Re: (Score:3, Insightful)
Don't forget window grouping and tab groups. I use that a lot. Expose is nice for managing multiple desktops as well.
You mean, nothing for you, right? (Score:3, Interesting)
I have to agree that the cube is useless (and I don't use it).
There are a number of plugins that increase productivity a lot though, namely the scale, desktop wall, expo, app switcher and zoom plugin. Problem is: the default configuration is not designed to be useful, but to be easy.
When installing new systems, I install the CompizConfig Settings Manager and then set up the plugins for efficiency: I basically map common window functionality to screen-edge/corner clicks with the mouse.
Base setup is 6 (2x3)
Re: (Score:2)
oops..
I meant: do mouse+keyboard activity with either the mouse or the keyboard, but not both at the same time.
Re: (Score:2)
Re: (Score:3, Interesting)
Ah, sorry, wrong wording. Actually, I just wanted to expand your comment. In my experience (and I spent quite some time on it) much of the usefulness of compiz is a matter of configuration.
So I'd rephrase my leading comment to "the desktop cube is useless to me". But that's what I like about being able to adapt compiz to my bidding. People are different, and I can adapt compiz to my preferences while not bothering you. :)
People should be aware of how the tools work and see how they can adapt them to make the
Re: (Score:2)
Re: (Score:3, Interesting)
I think it's because if you want all the shiny bits of objects and encapsulation then you use Java. If you want raw speed & dirty tricks then you use C.
I favor C++ myself, but I'm a huge fan of breaking encapsulation.
Re: (Score:3, Funny)
Fun fact: I knew somebody who added a preprocessor step to his compile process to make every class a friend of every other class, because he was tired of "not being able to use the pesky private stuff in coworkers' code".
Re: (Score:3, Interesting)
The point is that it's niche. The range of situations where you need raw speed, yet by-the-book OOP doesn't slow you down too much, is very, very small. Games and large commercial desktop apps are basically it. Line-of-business apps will usually go .NET or Java; web apps will go PHP, .NET, Java, Perl, Python, whatever. Drivers will go C/assembly, specialized backend systems will go C/assembly, etc.
There are exceptions to everything, and I realize this is a gross generalization, but overall it stands, leaving C++
Re: (Score:3, Insightful)
I suspect the efficiency gap between C and C++ is smaller than you think. Even if you are very strict about encapsulation of objects, you'd be very unlikely to add more than 10% to the run time. And as others have pointed out, making use of features such as templating can actually help the compiler generate more efficient code.
C++ was designed so that it adds no overhead to imperative code, while OOP constructs such as member functions have only one extra parameter (and one level of indirection for
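To make that point concrete, a minimal sketch (Point and point_translate are made-up names, nothing from Compiz):

struct Point {
    double x, y;
    // Non-virtual member function: no vtable, no hidden indirection.
    void translate(double dx, double dy) { x += dx; y += dy; }
};

// The C-style equivalent: the same operation with "this" passed explicitly
// as the first argument. Compilers emit essentially the same code for both.
void point_translate(Point* self, double dx, double dy) {
    self->x += dx;
    self->y += dy;
}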
Re: (Score:2)
C++ can be faster than C... This [stanford.edu] is an old one, which proves the point...
Remember, C++ is not just OO - that's one of the paradigms it supports, but not the only one.
Re: (Score:3, Insightful)
And Java can be faster than C++, if you write sufficiently good Java code and sufficiently bad C++ code. That you manage to find a single instance of this is true doesn't prove anything.
Re:Objects... (Score:5, Insightful)
I understand, but for speed I expect that C++ still outperforms Java, and while C should outperform both of them, C doesn't feature encapsulation, polymorphism and all the other goodies that OOP provides.
No, C is exactly as fast as C++. C++ only becomes slower if you use certain features that have a performance impact. Example: if you use exceptions, there is a performance penalty. If you don't, you don't get the performance penalty. That is one of the design principles of C++: nothing can be included in the language that slows down code that does not use/need it.

The main slowdowns you will see in your average C++ program, over the corresponding C, are the use of the string class as opposed to the nasty but fast strcpy and friends, and the extra indirect function calls due to virtual functions (which cause a branch misprediction and hence a pipeline flush on modern CPUs, costing you a bunch of clock cycles). Still, you only pay for virtual if you choose to use it, and manually implemented virtual function calls are used all over the place in good old C, with the same effect.

Furthermore, C++ templates allow code reuse with exactly 0 performance loss, and while the error messages are ugly, they're still a whole load prettier than doing the same thing the C way with recursive includes and lots of preprocessor madness. And you can link to existing C code/libraries without any problems. Frankly, there is no valid reason for starting a new program in C in this day and age.
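To put the template point in code, a simplified sketch comparing type-preserving std::sort with type-erasing qsort (sort_ints_cpp and sort_ints_c are made-up wrapper names):

#include <algorithm>
#include <cstdlib>

// C++: std::sort is instantiated for int*, so the comparison can inline away.
void sort_ints_cpp(int* a, std::size_t n) {
    std::sort(a, a + n);
}

// C: qsort erases the element type to void* and pays an indirect
// function call for every comparison.
static int cmp_int(const void* x, const void* y) {
    int a = *static_cast<const int*>(x);
    int b = *static_cast<const int*>(y);
    return (a > b) - (a < b);
}

void sort_ints_c(int* a, std::size_t n) {
    std::qsort(a, n, sizeof(int), cmp_int);
}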
Re: (Score:2)
C++ only becomes slower if you use certain features that have a performance impact.
And virtually every useful feature of C++ that is not in its common subset with C is one of those.
Example: if you use exceptions, there is a performance penalty.
And if you use operator new, you use exceptions.
The main slow downs you will see in your average C++ program, over the corresponding C, is the use of the string class
That and <iostream> [yosefk.com]. Once, I tried programming in GNU C++ for a system with an ARM7 CPU and 288 KiB of RAM. Even after applying all the link-time space optimizations I could find, Hello World statically linked against GNU libstdc++'s <iostream> still took 180 KiB [pineight.com]. (Dynamic linking wouldn't even have worked because libstdc++.so itself is bigger than RAM
Re: (Score:2)
"As I understand it, C++ compilers implement templates by making a copy of the object code for each type for which the template code is instantiated. Once you instantiate a template numerous times, your binary gets bigger, and it slows down because it has to keep loading data from storage instead of caching it in RAM."
Not really. GCC reuses the same code from different instantiations. And of course, if you follow ODR then you'll have at most 1 template instantiation for each combination of type parameters.
A
What Newlib++? (Score:2)
if you follow ODR then you'll have at most 1 template instantiation for each combination of type parameters.
The problems come when A. programmers become unaware of how many combinations of type parameters they're actually using, or B. programmers can't decipher template type names in compiler diagnostic messages.
Also, libstdc++ is a beast. But so is glibc. If you compile for embedded devices - don't use it.
Newlib is better than glibc for embedded devices. What C++ standard library implementation do you recommend for these?
It's certainly possible to make 'Hello world' to be about 1kb in C++.
I've done so with std::fputs of <cstdio>, but there are still a lot of self-proclaimed C++ purists who apply the no-true-Scotsman fallacy on C++ code using <cstdio>, claiming t
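For reference, a <cstdio>-only hello world of the sort being described is just (a minimal sketch, not the poster's exact code):

#include <cstdio>

// No <iostream>, so none of its static stream objects or locale
// machinery get pulled into the binary.
int main() {
    std::fputs("Hello, world\n", stdout);
    return 0;
}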
Re: (Score:2)
C++ only becomes slower if you use certain features that have a performance impact.
And virtually every useful feature of C++ that is not in its common subset with C is one of those.
What is the performance overhead of namespaces, typesafe object creation, references, function and operator overloading, use of const ints for array sizes (more efficient than C), non-virtual methods, STL (the word "virtual" does not appear anywhere in the STL sources), support for wide characters, protected/private modifiers, etc.? While features like templates and metaprogramming hav
C++ as better C vs. no-true-Scotsman C++ (Score:2)
you can always use nothrow new
As I understand it, the standard library uses throw new, not nothrow new. So if you use the standard library, you get the exception handlers linked in.
What is the performance overhead of namespaces, [...] references, [...] use of const ints for array sizes (more efficient than C), non-virtual methods, protected/private modifiers
True, these features allow one to use C++ as "a better C". But a lot of C++ fanboys will claim that if a program doesn't use virtual, throw, and <iostream>, it's not in the spirit [wikipedia.org] of C++.
typesafe object creation, STL (the word "virtual" does not appear anywhere in the STL sources)
Exception overhead. Or is the entire C++ standard library also available in a nothrow version?
function and operator overloading
No runtime overhead, but especially operator
Re: (Score:2)
As I understand it, the standard library uses throw new, not nothrow new. So if you use the standard library, you get the exception handlers linked in.
The standard library allows you to specify allocators for everything in it that requires memory allocation, precisely so that you can use your own allocation mechanisms. Writing one that does new(std::nothrow) is trivial.
Of course, this assumes that you want to ignore any OOM errors (which, given the existence of things such as Linux "OOM killer", is a reasonable default), since there's no way for, say, std::string to report a memory allocation error other than just propagating the exception. If you really
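A rough sketch of the kind of allocator described above, written against the later C++11 minimal-allocator interface for brevity, and aborting on allocation failure instead of throwing (matching the ignore-OOM caveat); nothrow_allocator is a made-up name for illustration:

#include <cstdlib>
#include <new>

template <typename T>
struct nothrow_allocator {
    using value_type = T;

    nothrow_allocator() = default;
    template <typename U>
    nothrow_allocator(const nothrow_allocator<U>&) {}

    T* allocate(std::size_t n) {
        // new(std::nothrow) yields a null pointer on failure instead of
        // throwing std::bad_alloc.
        void* p = ::operator new(n * sizeof(T), std::nothrow);
        if (!p) std::abort();  // give up rather than propagate an exception
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const nothrow_allocator<T>&, const nothrow_allocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const nothrow_allocator<T>&, const nothrow_allocator<U>&) { return false; }

A container would then be declared as, e.g., std::vector<int, nothrow_allocator<int> >.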
Re: (Score:2)
this assumes that you want to ignore any OOM errors (which, given the existence of things such as Linux "OOM killer", is a reasonable default)
I was referring to embedded systems and handheld devices, not PCs. I specifically had Nintendo DS (4 MB RAM, single-tasking) in mind.
If you really want to check for OOM without exceptions, then, yes, you'll have to stay clear from STL and other bits of C++ standard library.
Would it be safe to say that common STL implementations operate under the assumption that allocate() throws std::bad_alloc rather than returning 0?
What does addAll have to do with operator+=? I mean, sure, you could overload the latter that way
std::string does exactly this. I was under the impression that std::vector did the same, calling std::vector::insert() at the end, but now I guess not.
C++ or C+Template? (Score:2)
Classes (not virtual)
Sugar for functions that take this as their first argument. But as Micropolis showed, these are useful for taking legacy code that uses global or module-scope variables and allowing it to be instantiated multiple times (see the sketch after this comment). I'll grant you this one.
References
Sugar for pointers.
You have other problems when your code runs out of memory that often
Only if you consider running on a microcontroller or a handheld device a "problem". In such a case, running out of memory means the allocator has to purge items from the cache. Then you run into other classes that use new as their factory, for which
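The sketch promised above for the Micropolis point: the same module-scope state, first as C globals and then wrapped in a plain (non-virtual) class. Simulation and g_tick_count are made-up names, not Micropolis code:

// Legacy C style: one set of module-scope state, so only one "instance"
// of the simulation can exist per process.
static int g_tick_count = 0;
void legacy_tick() { ++g_tick_count; }

// The same state wrapped in a plain (non-virtual) class: each object
// carries its own copy of the former globals, so several simulations
// can run side by side.
class Simulation {
    int tick_count;
public:
    Simulation() : tick_count(0) {}
    void tick() { ++tick_count; }
    int ticks() const { return tick_count; }
};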
Sugar? (Score:2)
References
Sugar for pointers.
And C is sugar for assembler, which is sugar for writing machine code directly using a hex editor.
The whole point of any language feature is to make it easier to use machine features. Calling them "sugar" doesn't negate that.
Re: (Score:2)
References
Sugar for pointers.
And C is sugar for assembler
In what situations would one use C++ references where pointers do not suffice?
The whole point of any language feature is to make it easier to use machine features. Calling them "sugar" doesn't negate that.
I didn't necessarily mean "sugar" in a negative way. I do remember writing that classes with no virtual methods are a useful sugar.
Re: (Score:2)
Not if you use nothrow. Eg.:
obj *p = new(std::nothrow) obj;
Does the standard library use new(std::nothrow), or does it use regular new?
Re:Objects... not as easy as they sound (Score:3, Interesting)
I would imagine that the biggest performance hit for C++ vs C is just the fact that most objects make extensive use of memory allocations. C++ makes this 'safer' than in C, and so most C++ users use it. In C, I tend to avoid memory allocation. You end up defining arrays sized to some reasonable maximum, but there's no performance penalty for that. Occasionally, this does cause problems when that maximum was underestimated, but most of the time it's pretty effective.
Where I work, we have a transaction pr
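A quick sketch of the fixed-maximum pattern described above, next to the allocating C++ equivalent (Txn and MAX_TXNS are made up for illustration):

#include <vector>

struct Txn { int id; double amount; };

// C style: a pool sized to a "reasonable maximum"; no heap allocation,
// but the guess can turn out to be too small.
enum { MAX_TXNS = 4096 };
static Txn txn_pool[MAX_TXNS];
static int txn_count = 0;

// C++ style: grows on demand, at the cost of occasional heap
// (re)allocations and copies.
static std::vector<Txn> txns;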
Structures (Score:2)
For example, our standard apps maintain state persistence by simply writing out one or more C structures to a temp file on disk.
Of course, the C standard explicitly states that the layout in memory of structures is implementation-dependent, so doing that sets you up for serious pain when you change compiler versions or optimization options, or run on a different platform.
In my experience, a lot of programs run without crashing only through sheer luck.
Re: (Score:2)
C++ coders could continue to do this, of course, but they've assumed they needed to use objects for this purpose, leading to complex schemes for streaming those objects out to disk for persistence.
My PoV on C v C++ coding comes down to this kind of stuff. In C, you'll have a function that takes a struct parameter and writes it to file. In C++ you put that function inside the struct and remove the parameter.
so Persist(struct Data d); becomes d.Persist(); simples!
In effect, no difference - except to handily
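In code, the difference described above looks roughly like this (Data's fields are made up, and, per the sibling comment, a raw fwrite of a struct is not portable across compilers):

#include <cstdio>

struct Data {
    int id;
    double value;

    // C++ style: the function lives in the struct, so the parameter
    // becomes the implicit this pointer.
    bool Persist(std::FILE* f) const {
        return std::fwrite(this, sizeof(Data), 1, f) == 1;
    }
};

// C style: the same operation as a free function taking the struct.
bool PersistData(const Data* d, std::FILE* f) {
    return std::fwrite(d, sizeof(Data), 1, f) == 1;
}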
Re: (Score:3, Insightful)
Which would be every feature that isn't C with added syntactic sugar.
Yes, there is: it's a simple language with very predictable behaviour, compiles fast, and the resulting binary can be trivially interfaced with pretty much every other language. There's no good reason to use C++: you don't get the benefits
Re: (Score:2)
As GP rightly noted, unless you use specific C++ features (exception, virtual), you get opcode-for-opcode identical code from C++ compared to C. Unless your microcontroller uses Tarot cards to determine the original language in which that MOV was written, I don't see how it's possible.
As a side note, I've seen drivers written in C++. Worked great.
Re: (Score:2)
Why would you want to break encapsulation?
Because perhaps one is trying to work around poor design of a class where useful functionality has not been exposed to the public:. Using the class as intended would result in an abstraction inversion [wikipedia.org].
Re: (Score:3, Interesting)
Why would you want to break encapsulation?
Speed. Lazy. Debugging.
And in C you can have encapsulation, polymorphism, and all the other goodies OOP provides. C++ just makes it easier. For example, many libraries don't export the contents of structures in their public header files. zlib's gzopen(), for example, returns a "gzFile", which is a typedef for void* and doesn't expose any internals.
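The zlib-style opaque-handle idiom looks like this in miniature (a hypothetical "counter" library, not the real zlib API):

/* counter.h -- all a caller ever sees; no internals are exposed */
typedef struct counter_impl* counter_t;   /* opaque, like gzFile */
counter_t counter_open(void);
void      counter_add(counter_t c, int n);
int       counter_total(counter_t c);
void      counter_close(counter_t c);

/* counter.cpp -- only here is the struct layout known */
struct counter_impl { int total; };
counter_t counter_open(void)              { return new counter_impl(); }
void      counter_add(counter_t c, int n) { c->total += n; }
int       counter_total(counter_t c)      { return c->total; }
void      counter_close(counter_t c)      { delete c; }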
Re: (Score:2)
templates [...] don't carry any runtime speed penalty
Unless they cause the code to spill out of the instruction cache. Or unless they cause the entire working set to spill out of a handheld device's 4 MB of RAM.
Re: (Score:2)
That's not the fault of the feature itself, but of people using it incorrectly (at least in a particular environment).
It is still quite possible to retain full control over template instantiation by splitting the template into header & implementation files (with the header containing only function declarations, not definitions, and the implementation containing their definitions), using extern template [open-std.org] in the header for all specializations that you need, and using explicit instantiations in the implementation fi
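A minimal sketch of that split, assuming a compiler that already supports C++0x "extern template" (clamp_to is a made-up function used only for illustration):

// clamp.h -- declaration only, plus extern template for the
// specializations the project actually needs.
template <typename T> T clamp_to(T value, T lo, T hi);
extern template int    clamp_to<int>(int, int, int);
extern template double clamp_to<double>(double, double, double);

// clamp.cpp -- the definition and the explicit instantiations, so exactly
// one copy of each specialization ends up in the final binary.
template <typename T> T clamp_to(T value, T lo, T hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}
template int    clamp_to<int>(int, int, int);
template double clamp_to<double>(double, double, double);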
Template misuse (Score:2)
if you remove the templates by hand-instantiating them, you'd still have the same issue of code duplication.
The difference is that algorithms and containers in C or Java encourage the use of erasure to a higher type (e.g. void * or java.lang.Object). C++ templates can be used this way, but they can also be instantiated once for each T* (by pointer) or even once for each T (by value). I can think of a few things to watch out for when using templates:
Re: (Score:2, Insightful)
-1, linux zealotry bordering on FUD
Re: (Score:2, Insightful)
-1, linux zealotry bordering on FUD
Nah. He's karma whoring.
Re:favorite way (Score:5, Informative)
No, karma whoring is to post something completely obvious you know will be modded up and not add anything to the discussion. Like this comment.
Re:favorite way (Score:5, Funny)
No, karma whoring is to post something completely obvious you know will be modded up and not add anything to the discussion solely because you want to boost your karma. Unlike this comment.
Re: (Score:2)
Fewer Viruses - check
Lower TCO - check
CLI is not working on windows - wrong
Most FLOSS runs on it - check
Drivers for more hardware - check
No kernel panics (BSOD) - wrong
Not nearly as resource hungry - wrong because tests indicate that Windows 7 is less hungry than Ubuntu
Penguins - what a BS
The easiest way of making a Windows user envious = getting the hottest chick on the planet
Re: (Score:2)
Re: (Score:2)
Ha... ha... ha... ha... ha....
Okay....
1. Windows 7 has better OpenGL performance no matter what hardware and what drivers you throw at it.
2. 1.5GB? Sorry but I thought Windows 7 didn't use more than 200-300MB RAM and cached out wasted RAM space?
3. What are you running next to GNU+Linux?
Re: (Score:2)
Oh I forgot to mention less kilowatts...
Re: (Score:2)
Well, I know OpenGL performance on Linux is sucky, but Windows 7 is definitely using more RAM. That's what task manager shows. And yes, I know how to read the various dials on there.
I don't know what your 3rd question is supposed to mean.
Re:favorite way (Score:5, Interesting)
Re:favorite way (Score:5, Informative)
In fact, on old systems with a graphics card it is significantly faster than the traditional way of redrawing windows.
Why? Because:
1. the gfx card can do part of the work
2. all windows are already drawn and kept in the graphic card's memory
Re:favorite way (Score:4, Informative)
Compiz doesn't actually use that many system resources, nor does it strain your hardware.
I have a 3.2GHz tri-core Phenom II system with a GTS 240 (~400MHz, 96 stream processors) and Compiz will easily consume 5% or more if you have a window with continual graphics updates, like a game or a video player. That's a lot of CPU! You can manually disable transforms on that window but that requires a visit to the settings manager that would leave the average user dumbfounded.
Re: (Score:3, Interesting)
I run Compiz on several Atom 230 and 330 boxes. These are mini-ITX mobos that have integrated Intel GMAs (Pineview series). I am checking this as we speak over SSH.
Compiz CPU usage: 2% Ram: 34MB.
This is with all settings turned up to 11 and, since these machines are surveillance systems, 4 windows showing 352x288 video @30fps each, plus a fullscreen browser window that is constantly updated.
Total CPU usage is ~3.4%.
Re:favorite way (Score:4, Interesting)
Excuse me, what part of "since these machines are surveillance systems, 4 windows showing 352x288 video @30fps each, plus a fullscreen browser window that is constantly updated. Total CPU usage is ~3.4%" didn't you understand?
It's not compiz itself eating up that much processing power. It's the 4 threads capturing 352x288 video @30 fps, and displaying it in 4 different windows.
Also, it is IMPOSSIBLE for any operating system to be actively displaying how much CPU it is using while using 0% CPU. Answer: WINDOWS IS LYING TO YOU.
On the other hand, even if windows created some magic way to run out of thin air while using 0 processor power, it would mean nothing because it would still be completely useless.
Re: (Score:2)
I just used the middle click / cube shrinks and becomes semi-transparent and can be rotated... effect in Compiz, which immediately shot up the CPU usage for both cores of my processor from 20% to around 60% per core. Under Beryl the CPU usage changed about 2% over what the system was already running at. I would say that Compiz does not use the graphics card like Beryl did, and the Compiz devs deny there is a problem.
Re: (Score:2, Insightful)
Re:favorite way (Score:5, Interesting)
I use a variety of POSIX operating systems 95% of the time, at work through necessity, and at home through choice. And because I use them, rather than despite it, I am compelled to respond.
And drunken cheerleaders get date raped more than shut-in nerd chicks. Personally, I prefer nerd chicks, and you likely do too, but most people don't. Really, they don't, and there's no use telling them that their opinion is wrong.
If you don't value your time. For the latest of many, many examples down the years, I 'invested' 3 hours this weekend trying to get WiFi with WPA working again after upgrading my wife's box from Ubuntu 9.10 to 10.04. Verdict: the rt73usb driver has (yet again) returned to a state of porkage, so it was (yet again) ndiswrapper and Windows drivers for the eventual win.
Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell [ubuntu.com].
Can be made to run on it, given enough time.
If you limit "ever" to "older than two years or so". But sure, many of the drivers give the appearance of working tolerably well, for a surprising amount of the time! And when they don't, well, there's ndiswrapper, or we'll-fix-it-in-the-next-release, or you've-got-the-source-compile-a-previous-version-yes-we-know-it-doesn't-build-against-your-kernel-headers-or-gcc-version-fix-it-yourself-you-filthy-M$-shill.
Ain't seen one on Windows for years.
Granted. Oh, unless you've got a driver bug, which you almost certainly do if your hardware was designed this millennium. Then see above.
By that measure, that would mean...
...that.
This is not the year of Linux on the desktop (or the netbook). I thought we were there with Ubuntu 10.04, but it's actually a regression from 9.10. I'd just recommend 9.10, but that's effectively abandonware now, just like all previous versions of all Linux distros, "LTS" included.
Again: I'm writing this from Ubuntu 9.10. I've got RHEL5 in that VM over there, SUSE 11 yonder, Solaris in that shell, and even SUA on Windows (tastes a bit like POSIX). I'm happy with POSIX OSen. But I would not recommend them to a Joe Windows user, ever, since I don't want to be their Support Guy from now until there's a distro that actually Just Works.
Re: (Score:2)
Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell.
If you were using #!/bin/sh and expecting bash specific code to work, you're doing it wrong. If you want bash, call it by its proper name and it will always work.
Re: (Score:2)
Well, sure, if your definition of "actually works" depends on "if you use it right", which is a perfectly reasonable condition.
But then that means the Windows "CLI/scripting system" also "actu
Re: (Score:2)
Re: (Score:3, Insightful)
If you were using #!/bin/sh and expecting bash specific code to work, you're doing it wrong. If you want bash, call it by its proper name and it will always work.
A more likely scenario is that a script written by someone else improperly references /bin/sh despite being chock-full of bashisms.
The real problem is that many people these days just assume Unix = Linux and can't even think of /bin/sh possibly not being bash (or something "compatible enough"). This is especially true of "Linux on the desktop" crowd, as server admins typically know better
Re: (Score:2)
I'm happy with POSIX OSen. But I would not recommend them to a Joe Windows user, ever, since I don't want to be their Support Guy from now until there's a distro that actually Just Works.
Seriously? My POSIX compliant OS X is something I do recommend as it does Just Work.
Re:favorite way (Score:5, Insightful)
If you don't value your time.
Linux is only free if your time is worth nothing.
Windows is only $119.99 if your time is worth nothing.
Re: (Score:3, Informative)
And drunken cheerleaders get date raped more than shut-in nerd chicks. Personally, I prefer nerd chicks, and you likely do too, but most people don't. Really, they don't, and there's no use telling them that their opinion is wrong.
Do people prefer Windows? After actually trying Linux? Not in my experience.
If you don't value your time.
Most stuff works out of the box. Some stuff does not work out of the box on Windows or Mac either.
Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell [ubuntu.com].
Dash is supposed to be compatible with Bash for the affected scripts (those that use #!/bin/sh) if you stuck to Debian policy; if you used bash-specific features you should have used #!/bin/bash. Any examples of stuff that breaks? BTW, Bash is still the login shell.
Can be made to run on it, given enough time.
Most stuff non-geeks use is in the major distros' repos and is easier to install
Re: (Score:3, Insightful)
A) Have you actually tried to figure out how to secure a network, or even your Dad's computer, when doing so requires he have the ABSOLUTE LATEST version of Flash, Adobe Reader, and Java? Not to mention those RealPlayer and QT plugins that are sure to get exploited one of these days? Linux gets it right with centralized software updates; Windows is an absolute nightmare in this regard. There's WSUS, but oh
Re:favorite way (Score:4, Insightful)
Lower cost of ownership - Last time I went shopping for a computer, I didn't see any discounts for not having Windows installed from the get-go. Either you go with Dell/HP/Lenovo and they only offer Windows, or when they offer Linux it's the same price or only a little cheaper, but you get a lot less selection in the machines you can get. The other option is to build your own machine from off-the-shelf components. This is my favourite option, as you can get exactly what you want, but you will end up spending more.
CLI/Scripting system - Almost nobody except tech geeks cares about this. Also, Powershell on Windows isn't all that bad. It has its pluses and its minuses.
Most open source software runs on it - Almost all of the open source software that is worth running will run on Windows. Maybe not all of it, but most of the more important stuff. Conversely, almost no closed source software runs on Linux. Which might not matter to you, but if you're trying to get work done, having things like Photoshop, Outlook (hate it, but necessary for business), and many other closed source programs makes a big difference.
Drivers - Sure you get drivers for all the old stuff. But are you sure that shiny new piece of hardware that just came out last week will run to its full potential? Probably not. And there's also plenty of older hardware that I had that I couldn't run on Linux.
No Blue Screen - I haven't seen a blue screen on a Windows machine in many years. And when I do, it's usually because of bad RAM, causing something to get corrupted. Blue screens still exist, but they don't happen quite as often as they used to. I imagine most Linux systems would also crash pretty badly when they have bad memory.
I'm not some Windows zealot. I use Windows when it makes sense, and I use Linux where it makes sense. But I don't really think that any of the reasons you mentioned are valid. Especially if you're talking about home desktop use, which in the case of Compiz is exactly the kind of user we are talking about.
Re: (Score:2)
But the easiest way of making a windows user envious is to use a mac
Something that's more closed than Windows and Linux?
No, we're not envious. You might think that we should be envious, just like the guy who brags about his expensive designer clothes or Iphone, but the rest of us don't actually care.
Re: (Score:2)
Oh come on... how, exactly, is the Mac platform (no, not the iPad, not the iPhone, the Mac, ie Mac OS X) "more closed than Windows"? At best it's exactly as closed, though I'd argue somewhat less so (thanks to the existence of Darwin, their work on the ObjC gcc backend, Webkit, etc).
Re: (Score:2)
Something that's more closed than Windows and Linux?
1. OS X is not any more closed than Windows.
2. A Windows user will likely not care anyway.
Re: (Score:2)
CLI/scripting system that actually works
Very, very true. Although PowerShell is quite powerful... but quite different from most shell scripting in the UNIX world.
You really expect any CLI, no matter how awesome, will make Windows users jealous? I definitely think Compiz is one of the few ways to make your average Windows user jealous of Linux, with perhaps your favourite package manager coming next. I remember reading that MS are building an app store for Windows though, so it won't be something to be jealous of for long!
Of course, trying to make other people jealous of you is pretty pathetic.
Re:BS (Score:5, Insightful)
* Lower cost of ownership - BS, too much time is spent hacking up config files to make crap work or work right
On Windows, too much time is spent hacking up the registry to make crap work or work right. Just this last Thursday, I had to manually scan the registry to delete every reference to a printer driver that kept killing someone's spooler service... because the spooler service needed to be running to delete the printer normally. If it had been a unix system, I could have just edited a line in a file and been done.
* CLI/scripting system that actually works - BS, anything you can write and make work in Linux, I can in Windows
Using cygwin, bash compiled for Windows or DOS, or other scripting applications that are not guaranteed to be on every Windows system.
* Most open source software runs on it - Show me anything worthwhile that doesn't run in Windows or have a better alternative there
Well, Linux runs in Windows, so I'd say you've won this argument.
* Drivers for just about any piece of hardware ever built - BS, that's the primary thing most users have issues with, half baked drivers
Half-baked drivers in Windows XP, Vista, and 7. That printer driver mentioned above? It was an HP driver written for and installed in Win7 64bit.
* No blue screen of death - Agreed, but I haven't seen one yet in Win7
I haven't either, but I have seen a Win7 machine reboot constantly (the equiv of BSOD since Win7 is set to reboot on fail).
* Not nearly as resource hungry (unless of course you use Compiz :-) - Agreed, but neither was Win98 which is typically how Linux feels
I still have Win98se running on an old machine for old games. Win98se is actually snappier than modern Linux, which is in turn snappier than WinXP/7. How much window compositing did Win98se do? Firewalling? Multi-user? Even the 1998 version of Linux had multi-user support and ipchains.
Mod me down if you want to, but I've yet to have Windows drop me to a command prompt after a video card driver update
I've had it boot up to a BSOD, which looks worse than a command prompt, or a blank screen where I had to remote in or boot up in safe graphics mode.
[I've yet to have Windows drop me to a command prompt after an] OS update (Ubuntu anyone?)
I've had it boot up to a BSOD, which looks worse than a command prompt.
or had to recompile sound drivers after every OS update (Ubuntu on that one too).
I wish I could. Sometimes vendors take years to get their sound drivers working. Google realtek, imac, and Windows 64 bit.
My file manager will display in a column what date pictures were taken so I can categorize them accordingly, can yours do that? It couldn't the last time I checked.
This is the first time that I ever checked. No, it does not, but it could with a little quick editing. Right clicking and selecting properties shows that the Gnome file manager (didn't check KDE) can see the image properties, including "Date Taken", so the information is there. Linux users are probably just better mentally organized, and name their photo directories YYYY_MM_DD
Re: (Score:2)
No need to try to make Linux users smarter than they think they are though, Windows users and possibly even Mac users can be fairly mentally organized as well.
Re: (Score:2)
in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager.
ls -lt *.jpg
If you want to automatically file them into directories based on date you can use --time-style=iso and pipe it into awk or perl and write a quick script you can use every time you do this. You definitely do not have to sort them by date, create a folder for each date, and drag and drop each group of files into its directory.
You can do the same sort of thing in Powershell I'm sure. Th
Re: (Score:2)
Re: (Score:2)
Linux defaults to the command line because the command line is better. There's a reason we moved beyond pointing and grunting into symbolic language. Writing a few lines of code is in fact easier than manually copying, renaming, converting, etc dozens of files. And when you're done you get a script you can use the next time such a task comes up.
If you really really want to use the GUI though, there's no shortage of file managers that will display the date in a column. Konqueror does it by default. So d
Re: (Score:3, Insightful)
command line is better. There's a reason we moved beyond pointing and grunting into symbolic language.
Best description of why to use CLI; it allows for an explosion of thought.
Re: (Score:2)
Re: (Score:2)
in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager.
ls -lt *.jpg
This isn't what GP was talking about. That's file modification time, not the date the photo was taken (which is data inside the image file, not in the filesystem about the file). The closest you could get with ls would be to re-touch all the timestamps to match the image date data first, then use ls:

find /image/directory/ -name \*.jpg -exec touch -d `exiftime -tg {} |sed -e 's/Image Generated: //' |sed -e 's/:/-/' |sed -e 's/:/-/'` {} \;

or something similar. I can't remember if backticks work in -exec
Re: (Score:2)
This isn't what GP was talking about. That's file modification time, not the date the photo was taken (which is data inside the image file, not in the filesystem about the file).
The filesystem time and the exif time should be the same when they're on the camera. Just pass -p to cp when you copy them over.
Re: (Score:2)
This isn't what GP was talking about. That's file modification time, not the date the photo was taken (which is data inside the image file, not in the filesystem about the file).
The filesystem time and the exif time should be the same when they're on the camera. Just pass -p to cp when you copy them over.
Yes, but when you edit the file (in Photoshop, say), the date taken stays the same and the filesystem timestamp changes.
It actually annoys me that Windows defaults to showing the exif date taken when it detects a directory of images - I'd much rather see the filesystem datestamp and sort by that, so I can see which I've already edited. I already organise the directory structure by date taken anyway.
Re: (Score:2)
That's a fair point. But at this stage it seems like we've moved beyond the field where a general purpose file manager is appropriate. There's no point in having a "date taken" column in a utility that many people will never use for photos. If you really need to sort your photos based on this photo specific metadata, there are photo managers for that.
Re: (Score:2)
Re: (Score:2)
I've not had to edit any config files on Ubuntu since version 8, apart from Apache - which needs exactly the same setup on Windows.
Evolution doesn't have a decent Windows port (there is a port available, but it crashed on installation and I couldn't be assed trying to diagnose it, just left the user with Outlook).
Windows "feels" worse than any OS I've ever used, with maybe the exception of Amiga Workbench 1.3.
I always file my pictures in folders with the date that they were taken in YYYY-MM-DD format, so ye
Re: (Score:3, Insightful)
But then I guess you have never tried to use cmake; else you would not have made the ignorant statement about its incomprehensibility. If you had never used autoconf, automake, make, libtool, m4, and friends, they would be just as incomprehensible.
Re: (Score:2)
I have used autotools, and they're still incomprehensible.
Re: (Score:2)
You actually go through the trouble to reimplement build systems in autotools? That's a lot of work, dude. I'm calling foul here.
Re: (Score:2)
So you are saying if you were to compile kde-4.4.x (they use cmake) you would convert it all to autotools? ... I don't believe you at all, not for a second.
Re: (Score:2)
Good lord, I did not expect so many to come to the defense of that grand old dame, X.
Rather than get into an argument on the Internet about Computers, I'll just say that the Linux desktop remains a beloved canard to me. I do not doubt that others will disagree with me.