
Linux Mint Developer Forks Gnome 3

An anonymous reader writes "Clement Lefebvre, the Linux Mint founder, has forked Gnome 3 and named it Cinnamon. Mint has experimented with extensions to Gnome in the latest release of their operating system, but in order to make the experience they are aiming for really work, they needed an actual fork. The goal of this fork is to use the improved Gnome 3 internals and put a more familiar Gnome 2 interface on it."
  • by Anonymous Coward on Wednesday December 21, 2011 @07:42PM (#38454784)

    GNOME has always had the shittiest developer and user community of all of the major Linux desktop projects. This is because politics, rather than the development of practical software, has been its driving force.

    It was initially created to "fight" against KDE, solely because KDE was using Qt and Qt had a proprietary license at the time. There wasn't any technical need for GNOME. Most people were quite pleased with KDE and its abilities. So GNOME wasn't even addressing a real technological deficiency in the first place.

    Their architectural approach has been rather fucked up, too. Instead of using a true object-oriented language like C++, Objective-C, Python, Java, Smalltalk, or one of the many other OO languages out there at the time, they chose to create GObject. For those who don't know, GObject is a horrible kludge to add pseudo-object-oriented capabilities to C. It's an unholy mess of macros and other stupidity, and the result is completely shitty. Don't take my word for it. Go use it yourself! See how horrible an experience it is compared to using a real UI toolkit like Qt, or Cocoa, or wxWidgets, or even MFC or Swing; a taste of the boilerplate involved is sketched at the end of this comment.

    Then there was the decision to implement it as 50+ separate libraries. Compiling GNOME 2, for instance, is a massive burden.

    Recent releases have seen some of the most stupid UI design decisions ever made. It's unbelievable that some of these ideas were proposed, never mind actually implemented!

    This is the kind of crap that drives away good software developers, and attracts the lousy ones. Good developers don't care for unnecessary licensing politics. They don't create software when there are perfectly fine alternatives they could use instead. They don't try to craft their own bullshit OO extensions to C, when they can just use C++, or Java, or Objective-C, or Python. They don't create projects that consist of over fifty small libraries that are distributed separately. They don't make stupid UI decisions. Since GNOME isn't developed by good developers like that, the GNOME project has apparently decided to make every mistake possible. That's why the project and its software are in such a sorry state today.
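
    For readers who haven't used it, here is a rough sketch of the pattern being complained about: a minimal, hypothetical "MyCounter" class (not taken from any GNOME code), showing the macro and registration boilerplate GObject requires for what a class keyword would give you in the languages listed above.

        #include <glib-object.h>

        /* Hypothetical type, used only to illustrate the boilerplate. */
        typedef struct _MyCounter      MyCounter;
        typedef struct _MyCounterClass MyCounterClass;

        struct _MyCounter      { GObject parent_instance; int count; };
        struct _MyCounterClass { GObjectClass parent_class; };

        /* Every GObject type is expected to provide cast/check macros like these. */
        #define MY_TYPE_COUNTER  (my_counter_get_type ())
        #define MY_COUNTER(obj)  (G_TYPE_CHECK_INSTANCE_CAST ((obj), MY_TYPE_COUNTER, MyCounter))

        /* Registers the type with the GType system and generates my_counter_get_type(). */
        G_DEFINE_TYPE (MyCounter, my_counter, G_TYPE_OBJECT)

        static void my_counter_class_init (MyCounterClass *klass) { /* vtable, properties, signals */ }
        static void my_counter_init       (MyCounter *self)       { self->count = 0; }

        int main (void)
        {
            g_type_init ();  /* required on the GLib of the GNOME 2/3 era; a deprecated no-op since GLib 2.36 */
            MyCounter *c = g_object_new (MY_TYPE_COUNTER, NULL);
            c->count++;
            g_object_unref (c);
            return 0;
        }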

  • by rrohbeck ( 944847 ) on Wednesday December 21, 2011 @08:10PM (#38455016)

    I have converted all of my systems to XFCE. It feels like an older, simpler and leaner Gnome to me and some of the applets even have better functionality.

  • Re:You're... (Score:5, Interesting)

    by jejones ( 115979 ) on Wednesday December 21, 2011 @08:15PM (#38455044) Journal

    Actually, Jon McCann, in an interview, seemed to say that user configurability is a bug, detracting from GNOME presenting a single face to people who might consider switching to GNOME. "And I think there is a lot of value to have that experience you show the world to be consistent. In GNOME2 we didn't do that particularly well because everyone's desktop was different."

  • by jbolden ( 176878 ) on Wednesday December 21, 2011 @09:20PM (#38455462) Homepage

    At the time, Objective-C developers interested in desktop development were working on Mac, OpenSTEP or GNUstep. Java was seriously considered, but it was too slow for a desktop and had poor Linux support. Qt was amazingly good for C++; GNOME couldn't compete with it there. So they created a system for C programmers who didn't know C++.

    They had to reinvent the wheel because they had to recruit people, and there were a lot of C programmers who were willing to work on Linux desktop apps.

  • by laffer1 ( 701823 ) <luke&foolishgames,com> on Wednesday December 21, 2011 @09:33PM (#38455544) Homepage Journal

    I disagree with you. As a vendor, I find shipping GNOME to be a nightmare. It has a ridiculous number of dependencies and is rather unpleasant to build. I haven't looked much at the Gnome 3 stuff yet, so perhaps they've improved it, but Gnome 2 had dependencies on WebKit and Firefox. What kind of idiot thought that up? Epiphany rocks with WebKit, but using libxul to get help is stupid. It should be ported to WebKit.

    Further, the GNOME community only cares about Linux. If you're not a Linux distro, they don't take upstream patches and they don't like you. Considering what Ubuntu went through with them (not that I agree with all the Ubuntu changes), I'm not shocked to see yet another fork of GNOME. I think this fork will fail under its own weight: too many things depend on parts of GNOME, and you'll end up trying to track updated libraries while trying to keep old code running. It gets ugly.

  • by Anonymous Coward on Wednesday December 21, 2011 @10:31PM (#38455818)

    > Modularity is a good thing. It's not cutting up things into a lot of small modules (aka "libraries") that's the problem.
    > It's doing it wrong.

    > Look at the typical bash shell and GNU utilities we all use every day. They are hundreds of small executables, libraries, etc. But they are not a mess. They all do one thing, and do it well. That's part of the UNIX philosophy, and for a reason.

    Are you joking? They absolutely are a mess, and I say that as someone who uses them every day. I'm just not fooling myself into thinking they're not a mess. UNIX shells and utils have had a chaotic development history and are chock full of bad design. Most of the good utils do a lot more than one thing, and they are usually far less than excellent, just good enough to suffice if you fight them long enough to get them to do what you want. And don't get me started on the gaping abyss of existential Lovecraftian horror that is shell code, or (shudder) Perl. (and I even like perl! But it's also an eldritch tool of the Many-Angled Ones.)

    The only reason the entire lot hasn't been incinerated and replaced by saner tools with better and more consistent design is that there's far too much legacy code out there which depends on the behavior of existing UNIX shells, tools, and scripting languages. Just look at Plan 9. Even though it was very much a UNIX-philosophy OS, only more so, and better designed than the original, by the same people who designed UNIX in the first place, it failed to gain any traction because it came far too late. UNIX already had unstoppable critical mass.

    You're falling into the trap of believing that the ideal is the practice. UNIX started out as a very hacky OS because squeezing advanced features into a PDP-7 was Hard. The subsequent 40 years of continuous and divergent development, little of it done by people primarily concerned with "do-one-thing-and-do-it-well", have left that ethos in tatters.

    You're also falling into the trap of believing, without rational reason, that one philosophy of software design is best for everything. Do-one-thing-and-do-it-well is a fine idea for a software environment intended to filter text through independently written programs, but it might not work so well for easy-to-learn, easy-to-use GUIs.

    > In fact, I think every user should have his own fork by default. Where "fork" can mean anything from an empty patch set to fundamental major changes. And everybody should just be able to "subscribe" to whoever else's personal fork, implicitly making that someone else a "distributor" without having to do anything special. So that natural leader/follower structures can arise, and nobody can force anything on anyone.

    Okay, so you're a crazy guy.

    > Also, there is one additional thing you missed: The moment "desktop environments" for Linux started to forget the UNIX philosophies, abandoned the concept of "everything is a file", and chased Windows and OS X, they were full of FAIL and lost anyway. (There's no file system for your GUI, is there? You can't cat /proc/pid-6939/window-2/grid-3-2/textarea-2. It's all monolithic Windows-like "applications". You can't use a GIMP brush in OpenOffice, you can't use the same text layout engine for OpenOffice, Firefox and GIMP, etc, etc, etc.) It's all just deeply, deeply anti-UNIX, harming code re-use, customizability, modularity, and most of all usage efficiency.

    This is not even on the same planet as right and wrong.

  • C is good. (Score:4, Interesting)

    by MrEricSir ( 398214 ) on Thursday December 22, 2011 @03:47AM (#38457186) Homepage

    GObject has features C++ doesn't include natively, like type introspection.

    Besides, what's wrong with C for a low-level API? You can connect just about any C-based API to a higher level language.
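
    To make the introspection point concrete, here is a small sketch of what the GType runtime gives you in plain C; this is the machinery the higher-level language bindings are built on (plain GObject is used here, nothing GNOME-specific is assumed).

        #include <glib-object.h>

        int main (void)
        {
            g_type_init ();  /* needed on the GLib versions of that era; a no-op on modern GLib */

            /* Every instance carries its runtime type information. */
            GObject *obj = g_object_new (G_TYPE_OBJECT, NULL);
            g_print ("instance is a %s\n", G_OBJECT_TYPE_NAME (obj));  /* -> "GObject" */

            /* Types can also be looked up by name, which is how bindings
               construct objects without compile-time knowledge of them. */
            GType t = g_type_from_name ("GObject");
            g_print ("lookup by name: %s\n", g_type_name (t));

            g_object_unref (obj);
            return 0;
        }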

  • by mangobrain ( 877223 ) on Thursday December 22, 2011 @05:39AM (#38457574) Homepage

    What would the API for this "single persistent, networked, yet not completely shared-global" object concept look like? If nothing is typed, how - for example - does an image manipulation program know which objects represent data, which objects represent manipulations, and how to actually fit the two together? What if I come up with a new and useful way of manipulating data which doesn't fit into the existing API? If what used to be a large database in an optimised binary format is now millions of individual objects inside collections, all untyped and limited to whatever metadata satisfies the lowest common denominator of code written for this object concept, how do I access and manipulate that data with anything approaching decent performance? How do I create an attractive UI for my application when it has no prior knowledge of what settings, properties or operations may be exposed by the components it uses, or even what components exist? If I invent a new or improved type of formatting for word processors, and there isn't already a property or set of properties in the object format which can express the data easily, do I just scrap the idea, or implement an object where the "real" data lives in an opaque blob in a non-standard property, defeating the whole point of generic objects?

    Specialisation is a necessary evil when it comes to actually getting things done. Frameworks and APIs for implementing pluggable, re-usable components exist, but in the interests of being realistic, useful and performant, they are always specialised. Some examples:

    * GStreamer implements a wonderful pluggable framework for playing media, be it audio, video, streaming, stored, compressed, whatever. There are plug-ins for input types (file/network), codecs, multiplexing, de-multiplexing, filtering, rendering (e.g. displaying on screen/sending to sound card), and they can be plumbed together in any way that makes sense. To this end, plug-ins use a domain-specific method for describing what they can consume, and what they can produce. It even manages to export UI information and accept user input, for things like DVD menus, though I have no idea how they managed to fit that into the model. It can't be used to edit a Word document or interact with a web page, but web browsers can and do make use of it for displaying video data embedded in HTML5 documents. For performance, all operations beyond data loading/retrieval are performed on the local machine, in the address space of a single process, unless you specifically create a plug-in chain with the disk or the network as its output. (A minimal pipeline is sketched after these examples.)

    * D-Bus implements a simple, persistent message bus, where anything connected to the bus can send messages either to specific numbered entities, or to named interfaces. Anything which implements a given named interface can respond to these latter messages, and as long as the data is returned in the correct format for that interface, the sender will understand it. However, it's only as useful as the interfaces themselves; it can't magically connect anything to anything else, applications have to be written in advance to expect/implement specific, designed interfaces. I'm not sure how it would hold up performance-wise if you tried to connect up something like GStreamer plug-ins over it - that would be a lot of messages, and an awful lot of data copying. (A minimal call is sketched at the end of this comment.)
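
    As a rough illustration of the plumbing described in the GStreamer example above, here is a minimal pipeline built from standard test/auto elements; it is only a sketch under those assumptions, not code from any real application.

        #include <gst/gst.h>

        int main (int argc, char *argv[])
        {
            gst_init (&argc, &argv);

            /* Describe a chain of plug-ins; GStreamer instantiates and links them. */
            GError *error = NULL;
            GstElement *pipeline = gst_parse_launch ("videotestsrc ! autovideosink", &error);
            if (pipeline == NULL) {
                g_printerr ("failed to build pipeline: %s\n", error->message);
                return 1;
            }

            gst_element_set_state (pipeline, GST_STATE_PLAYING);

            /* Block until an error or end-of-stream message arrives on the pipeline's bus. */
            GstBus *bus = gst_element_get_bus (pipeline);
            GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
                                                          GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
            if (msg != NULL)
                gst_message_unref (msg);

            gst_object_unref (bus);
            gst_element_set_state (pipeline, GST_STATE_NULL);
            gst_object_unref (pipeline);
            return 0;
        }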

    I would love for what you propose to be possible, but something tells me it's not grounded in reality.
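
    And for the D-Bus side of the comparison above, here is a minimal synchronous call over the session bus using GLib's GDBus binding; ListNames is a standard method exposed by the bus daemon itself, and the rest is only a sketch, not code from any GNOME component.

        #include <gio/gio.h>

        int main (void)
        {
            GError *error = NULL;
            g_type_init ();  /* required on the GLib of that era; a deprecated no-op since GLib 2.36 */

            /* Connect to the per-user session bus. */
            GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &error);
            if (bus == NULL) {
                g_printerr ("no session bus: %s\n", error->message);
                return 1;
            }

            /* Ask the bus daemon (a well-known name + interface) who is connected. */
            GVariant *reply = g_dbus_connection_call_sync (bus,
                "org.freedesktop.DBus", "/org/freedesktop/DBus", "org.freedesktop.DBus",
                "ListNames", NULL, G_VARIANT_TYPE ("(as)"),
                G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);

            if (reply != NULL) {
                gchar *text = g_variant_print (reply, TRUE);
                g_print ("%s\n", text);
                g_free (text);
                g_variant_unref (reply);
            }

            g_object_unref (bus);
            return 0;
        }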

  • by luxifr ( 1194789 ) on Thursday December 22, 2011 @06:44AM (#38457780)
    I know and support a lot of people who use their computers as a tool and not as a hobby. The more tech-savvy (and even those only somewhat tech-savvy) have a good idea of how they want to interact with the computer, and they get pissed off by unnecessary graphical effects and/or limited configuration possibilities in newer versions of whatever they are using (be that Windows, Gnome, KDE, all the way down to specific applications - doesn't matter!).

    The not-so-tech-savvy people (mostly older people, frankly) learned the desktop metaphor, and when confronted with something different they feel and act insecure, because they don't have the mindset to adapt to such huge changes quickly enough not to be completely thrown off by them - they never had to develop it. Those are the people who started working with computers in this century and use them as tools to get specific tasks done and nothing else. Frankly, this means that the target audience for the intended usability improvements in these new interfaces consists mostly of people who will have a hard time adapting to something so radically different. For people who never really used a computer it makes no difference, of course, as they start from scratch anyway.

    So I'm wondering whether those UI designers actually conducted any meaningful usability studies with REAL people before going to the drawing board trying to reach nerdgasm by ripping off broken UI concepts from Apple et al. Heck, the reduced usability doesn't even have to do with people exclusively... basic issues arise even from ignoring what kind of input and output devices will be used. Metro Launcher on a 27" screen + mouse and keyboard? WTF? Unity is the same, as is Gnome 3! Touch for regular office work? Not so much! More mouse then? Yeah... your wrists will thank those braindead UI designers... and I'm not even arguing the point that keyboard navigation is usually faster - everyone knows that.

    KDE shines here - I haven't seen such good use of the activities concept (which easily accommodates different form factors) anywhere else, and while it comes with sane default configurations for big and small screens where you wouldn't have to change anything, you're still empowered to make it completely your own through mostly easy and well-structured configuration options. Heck, even Windows shines here, as there are a gazillion programs and more or less documented tweaks with which you can tune it to your needs.
