Walter Bright Ports D To the Mac
jonniee writes "D is a programming language created by Walter Bright of C++ fame. D's focus is on combining the power and high performance of C/C++ with the programmer productivity of modern languages like Ruby and Python. And now he's ported it to the Macintosh. Quoting: '[Building a runtime library] exposed a lot of conditional compilation issues that had no case for OS X. I found that Linux has a bunch of API functions that are missing in OS X, like getline and getdelim, so some of the library functionality had to revert to more generic code for OS X. I had to be careful, because although many system macros had the same functionality and spelling, they had different expansions. Getting these wrong would cause some mysterious behavior, indeed.'"
High performance of C++ equal to D??? (Score:4, Interesting)
I don't think D will ever have the high performance of C++, because D objects are all allocated on the heap. The D 'auto' keyword is just a compiler hint (last time I checked) to help in escape analysis. D has structs, but one has to decide up front whether a type has value or reference semantics, and that creates a major design headache. Avoiding the headache by making everything have reference semantics negates the advantages of structs.
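A minimal sketch of the up-front choice being described (type names invented for illustration):

// Value vs. reference semantics are fixed at the type definition.
struct PointS { int x, y; }    // struct: value semantics, copied on assignment
class  PointC { int x, y; }    // class: reference semantics, heap-allocated

void main()
{
    PointS a;
    PointS b = a;              // b is an independent copy
    b.x = 1;
    assert(a.x == 0);

    PointC c = new PointC();
    PointC d = c;              // d is another reference to the same object
    d.x = 1;
    assert(c.x == 1);
}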
D is a mix of C and Java, with the C++ template system bolted on top. It is in no way C++. D is not what a veteran C++ programmer expects as the next generation of C++.
Re:High performance of C++ equal to D??? (Score:5, Informative)
http://www.digitalmars.com/d/1.0/memory.html#stackclass - Objects in D are not always allocated on the heap. Also, you've clearly never used templating in D if you think it is the C++ template system bolted on top ;)
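For reference, the linked page covers stack allocation of class objects via the scope storage class. A hypothetical sketch (class name made up):

class Widget
{
    this()  { /* acquire something */ }
    ~this() { /* release it */ }
}

void f()
{
    scope Widget w = new Widget();  // allocated on the stack, per the docs;
                                    // destructor runs at the closing brace
    // ... use w ...
}                                   // w destroyed here, deterministically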
Re:High performance of C++ equal to D??? (Score:5, Informative)
The GC is the way to go for complex applications. The reason is simple: the GC has a global overview of all memory usage in the application (minus special stuff like OpenGL textures). This means that the GC can reuse previously allocated memory blocks, defragment memory transparently, automatically detect and eliminate leaks, etc.
Somewhat less obvious is that a GC allows by-reference assignment to be the default. In C++, by-value is the default: a = b will always copy the contents of b to a. While this is OK for primitive stuff, it is certainly not OK for complex types such as classes. In 99.999% of all cases, you actually want a reference to an object, not a copy of it. But, as said, the default behavior of assignment is "copy value".
This is a big problem in practice; the existence of shared_ptr and other reference-counting pointers is a consequence. We could redefine the default behavior as "pass a reference of b to a", but who will then take care of the lifetime of b? With a GC, this last question is handled trivially: when the GC detects zero references, b is discarded.
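A minimal sketch of that in D (names invented; nothing special here beyond classes being references):

class Node { Node next; }

void g()
{
    auto a = new Node();
    auto b = a;     // just another reference; nothing is copied,
                    // and nobody maintains a reference count by hand
    a = null;       // the object is still reachable through b
    b = null;       // now unreachable; the GC reclaims it eventually
}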
Now, once you have by-reference as the default, things like closures get much easier to introduce. Neither D nor C++ has them at the moment, but C++0x requires considerably more effort to introduce them. Functional languages all have a GC for a reason.
D did another thing right: it did not remove destructors, like Java did. Instead, when there are zero references to an object, the GC calls the destructor *immediately*, but deallocates the memory previously occupied by that object whenever it wishes (or it reuses that memory). This way RAII is possible in D, which is very useful for things like scoped thread locks.
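A hedged sketch of the scoped-lock pattern being described; Mutex and Lock here are hypothetical types, not a specific Phobos or Tango API:

interface Mutex { void acquire(); void release(); }

class Lock
{
    private Mutex m;
    this(Mutex m) { this.m = m; m.acquire(); }
    ~this()       { m.release(); }
}

void critical(Mutex m)
{
    scope Lock lock = new Lock(m);  // scope forces deterministic destruction
    // ... work under the lock ...
}                                   // destructor runs here; lock released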
They also simplified the syntax; syntax is one of the major problems of C++. Creating a D parser is not hard. Try creating a C++ parser.
Now, what they got wrong:
- operator overloading
- const correctness
- lvalue return values (which would solve most of the operator overload problems)
- no multiple inheritance (which does make sense when using generic programming and metaprogramming; just see policy-based design and the CRTP C++ technique for examples)
- crappy standard library called Phobos (getting better though)
- and ANOTHER de facto standard library called Tango, which looks a lot like a Java API and makes little use of D's more interesting features like metaprogramming, functional programming, and generic programming
Re:High performance of C++ equal to D??? (Score:4, Interesting)
>> D did another thing right: it did not remove destructors, like Java did. Instead, when there are zero references to an object, the GC calls the destructor *immediately*, but deallocates the memory previously occupied by that object whenever it wishes (or it reuses that memory). This way RAII is possible in D, which is very useful for things like scoped thread locks.
First of all, Java does have destructors. It's called finalize().
Second of all, calling destructors on a modern GC is extremely costly. Sure, your example implementation of destructors seems simple, but it is only possible with a reference-counted garbage collector, which is so primitive as to be nearly useless.
Modern Java GCs are generational copying collectors. They have a young heap, where objects are allocated, and an old heap, where objects are moved when they survive a collection. Object retention is determined by tracing from the root set.
This means you can do fun things like allocate a linked list, then drop the reference to its head node. When a collection happens, anything living is rescued from the young heap, and then the young heap is simply wiped clean. No computation is performed regardless of how large the list is or how many links it has, because there's no such thing as deallocation on the young heap. Once you drop that first link, it's as if the VM doesn't even know the list is there anymore; the whole thing just gets overwritten on the next pass over the heap.
If, however, you write a destructor for your links (in Java, finalize()), the collector then needs to independently keep track of all of your destructible objects. It needs to remember that they're there so it can call their destructors when they do not survive a collection. Furthermore, if you impose your hard requirement of calling the destructor immediately, then the implementation of such a collector is impossible for your language. Even a primitive mark-sweep collector, or anything else that is not reference counted, is impossible.
This example is discussed in detail here:
http://www.ibm.com/developerworks/java/library/j-jtp01274.html [ibm.com]
You should familiarize yourself with modern garbage collectors. I don't know much about D, but if D really is tied to a reference-counting collector by its destructor requirements, that makes it extremely unattractive as a language. Here is more information on various collector implementations:
http://www.ibm.com/developerworks/java/library/j-jtp10283/ [ibm.com]
Re: (Score:3, Informative)
> First of all, Java does have destructors. It's called finalize().
I'm sure you know their differences. Java's finalize does NOT make RAII possible, because it's not run deterministically.
RAII requires controlling the order of execution of destructors.
Not being able to perform RAII with it makes finalize() almost totally useless.
Re: (Score:2)
I'm sure you're thinking of Java or C.
Wrong. I am thinking of C++. Count the times you pass a reference or a pointer vs. the times you actually want to copy large, complex objects. The outcome is pretty clear. Hell, boost even contains a "noncopyable" class as a tool to disallow deep copies.
Deep copies are the exception. Shallow copies are the rule. This is totally obvious, and virtually all other languages do it this way. The only two reasons why C++ doesn't have it that way are the C legacy and the owners
Re:High performance of C++ equal to D??? (Score:5, Interesting)
Maybe your [sic] right, I can't say.
I can. The gp is wrong on just about every count.
As you already say, if you are very concerned about performance in a situation with lots of small objects you can use structs. Simply ignoring structs because you are too lazy to use them does not make D slow. With a bit of experience and a few rules of thumb it's not hard to choose.
I think maybe you're talking about what is called "scope" now. It allocates the memory for the class on the stack. Yeh, it doesn't cover every possible use case of by-value classes, but it can be a nice optimization.
Yes, you use structs when you want an efficient aggregate value type. Classes and structs have different semantics in D. It's pretty easy to choose once you get the hang of it. If you are likely to want to put virtual functions on the thing, make it a class. If you want to pass lots of them around by value, make it a struct. If you can count on your fingers how many instances you will have, make it a class -- the hit from allocation won't matter. There is some gray area in between, granted, but in practice it's usually pretty clear, and the benefit is that you completely eliminate the slicing problem by going this route. If you really can't decide, you can also write the guts as a mixin and then mix them into both a class and a struct wrapper. Or just write it primarily as a struct, and aggregate that struct inside a class. The use cases that are left after all that are pretty insignificant.
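A quick hypothetical sketch of that mixin trick (names invented):

// Write the guts once as a template...
template VecGuts()
{
    int x, y;
    int lengthSquared() { return x*x + y*y; }
}

// ...then stamp them into both a value type and a reference type.
struct VecValue { mixin VecGuts; }
class  VecRef   { mixin VecGuts; }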
Yeh, well don't try to avoid it, then. It's not as much of a headache as you make it out to be.
D's template system has gone far beyond C++'s. It's even far beyond what C++0x is going to be. Alias template parameters, template mixins, static if, a host of compile-time introspection facilities, the ability to run regular functions at compile-time, type tuples, variadic template parameters. Of these, I believe C++0x is only getting the last one. D metaprogramming is to C++ metaprogramming as C++ OOP is to OOP in C. It takes a technique that the previous language begrudgingly permitted and turns it into one that is actually supported by the language.
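A small, hedged taste of two of those features, static if and compile-time function execution (D1-era syntax; names are illustrative):

// An ordinary function, runnable at compile time when given constant args.
int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

// static if selects a type at compile time (eponymous template trick).
template Storage(int bits)
{
    static if (bits <= 16)      alias short Storage;
    else static if (bits <= 32) alias int   Storage;
    else                        alias long  Storage;
}

const int f5 = factorial(5);   // evaluated by the compiler, not at runtime
Storage!(24) counter;          // static if picks int
static assert(f5 == 120);      // checked during compilation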
D, E, F,... where will it all end? (Score:2)
Now he's done it. Kicked off the alphabetical arms race. He himself averted it once with the Operator Gambit of + and ++. The crisis was defused.
It was stemmed again this decade by what came to be called the Jungle of Earthly Delights Diversions (Python, Ruby, et al). But now he's broken the dam to the last redoubt, the alphabet. There are only so many letters! It is a limited supply! We've feared this since the mid-nineties. Who will save us now?
ZZ99++ (Score:5, Funny)
...or there's always Greek, Hebrew, Klingon* and, hey, Chinese (that should keep us going for a while)!
Why do you think they invented Unicode?
(*Fatal Error at line 16349: statement has no honour)
Where we go from here (Score:2)
D -- wha? (Score:5, Insightful)
I think the fact that this post has been up for almost an hour and has only 33 follow-ons shows what the software community thinks of D.
One has to acknowledge that, Back in The Day, Walter Bright did all of us a great service in producing the first PC-based C++ compiler (Zortech), which effectively forced Borland and Microsoft to take the language seriously.
Unfortunately, for all of us, he seems to be better at invention than collaboration, but that doesn't devalue the contribution he made (structurally) to get us to where we are.
cheers...ank
Re: (Score:3, Funny)
I choose to think it shows what the software community thinks of the Mac.
why all the hate? (Score:5, Insightful)
The griping and misinformation here is so atrocious that I'm simply embarrassed to be reading Slashdot today.
Digital Mars D is a wonderfully designed language and I'm in the process of giving up a lifetime of C++ for it.
I'm not here to defend D or enumerate its growing pains or evangelize it, but if you don't take it upon yourselves to be well informed, please don't repeat your biased gibberish to the rest of us.
No one cares about D (Score:5, Insightful)
Re:No one cares about D (Score:5, Informative)
Re: (Score:3, Informative)
I can't think of something with significant market share, but there is now an indie game on Steam written in D: Mayhem Intergalactic [steampowered.com]
Additionally, D is link-compatible with C. Using C libraries from D is as easy as porting the header files to D. There are a couple of tools for (mostly) automating this process, and quite a lot of the major C libraries have D bindings available.
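A hedged sketch of what "porting the header files" amounts to; puts and sqrt are standard C library functions, everything else here is illustrative (D1-era code, where string literals are \0-terminated char[]):

// Declare the C functions you need with extern(C); the linker does the rest.
extern (C)
{
    int    puts(char* s);       // from <stdio.h>
    double sqrt(double x);      // from <math.h>
}

void main()
{
    puts("calling C from D".ptr);   // literal is \0-terminated, .ptr is char*
    double r = sqrt(2.0);
}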
So fucking what! (Score:2)
Fabulous, he ported a language to the Mac. OK, groovy...
The bit about having to make adjustments to the library code is news why?
Not all OSes support everything in the world; that is just the way it is. Implementing a certain function or macro one way on one platform does not in any way mean that you will be able to implement it exactly the same way on another platform.
Qt bindings (Score:3, Interesting)
I really like D, and would give up C++ for it, but the one thing I feel is really missing is bindings for Qt. :(
Re:Qt bindings (Score:4, Informative)
D is nice, but... (Score:4, Interesting)
Programming in D is nice, but the situation is a bit annoying.
1. Tango vs. Phobos. Phobos is the official standard library, but it seems most people use Tango. Phobos is also pretty low-level compared to Tango.
2. The reference compiler dmd is 32-bit only, gdc is outdated and abandoned, and ldc is still alpha status with missing features. ldc is quite promising, though.
3. D2 is maybe the biggest issue. It has very useful features, such as partial C++ compatibility, but D2 is a moving target and practically no library supports it. It's also unknown when, if ever, D2 will become stable. I haven't seen much discussion about it in the newsgroup either.
Re:What? (Score:5, Funny)
As little as possible. From the article:
I then figured out how to remotely connect to the Mac over the LAN, and (the Mac people will hate me for this) put the Mac in the basement and operate it remotely with a text window.
Re:What? (Score:5, Insightful)
Why would Mac people hate somebody for that? I ssh into my macs all the time. I pretty much always have terminal windows open. A lot of the molecular biology software I use (the open EMBOSS set of programs ROCK) are command line only, take files as input & write files as output. It's a BSD box with pretty paint. Sure, it's nice to have the pretty screens & be able to run things like iphoto & etc, but at the end of the day the most useful stuff still runs from the > prompt.
depends on the Mac people (Score:5, Insightful)
People who have been Mac people for a long time generally don't have that workflow, as the importing of the BSD backend is a fairly recent addition to the Mac world, whereas many of the GUI conventions have been around much longer.
Re:depends on the Mac people (Score:5, Insightful)
"fairly recent!?" Dude, that was a decade ago. I became a Mac user when Rhapsody first came out (it was the NeXT lineage that brought be onboard) and a lot of time has passed since. This reminds me of growing up in Podunk, Nebraska, in that after living their for 10 years the old ladies at the Methodist church were still referring to my mother as "the new girl in town".
Re:depends on the Mac people (Score:4, Informative)
OS X displaced the classic Mac OS in majority desktop usage sometime around 2002, so about 7 years out of a 25-year history.
Re: (Score:2)
I would say that most Mac users don't do that, though. A higher proportion of developers do, of course, but I know a couple of pretty heavy developers who use Macs and are outright hostile to the idea of using a command line. Which is funny, because one of them has been writing code on the Mac since System 7, using MPW (which had a command line!).
Mac^H^H^H people are weird.
Re: (Score:3, Informative)
MPW had an interface called Commando. You could highlight an MPW text command and bring up Commando on it. It would present a dialog box formatted for just that command; you could check boxes, radio buttons, pull-down menus, etc., and as you did, it would compose the resultant text command in a window. You could run the command from the dialog, or you could copy and paste the command into (what amounted to) a terminal window and run it there. That meant you could save commonly used commands in the text of that
Re:What? (Score:5, Insightful)
A Mac is a genuine Unix workstation that is much easier to administer, and has much better software and hardware support than Linux.
I can run basically every Linux/Unix application on my Mac, both command-line and GUI, while not having to worry about wireless networking drivers, printer support, power management / sleep support on my laptop, getting accelerated 3D drivers working, or any of the other minor hassles that are involved with setting up and maintaining a Linux install.
If you walk into the computer science department at MIT, basically all the faculty have a Mac, and fully half the students do. These people are not buying Macs because they saw a cool ad on the bus - they're buying them because a Mac is the best tool available.
The argument that Macs are just expensive, "designer" PCs that look pretty and sell well because Apple has marketed them well doesn't hold water. Yes, they have nice hardware, and a clean, polished, slick UI, and that does make them more pleasant to work with than some blob of Dell plastic running Vista - but they have the functionality to back up their appearance, as well.
Yeah, they're more expensive. If you value your time at all, you should realize that spending an extra $100 on a Mac is well worth it if it improves your productivity. Hell, if you ever spend two hours fighting with some weird issue on your Linux box, it hasn't saved you any money. You know how long I've spent fighting with the OS to get my wireless working, or hibernate working, or whatever, in Mac OS X, in the five years I've been using a Mac? Zero. I'm not exaggerating. It lives up to the hype. It "just works". It gets out of my way and lets me get things done.
Re:What? (Score:5, Funny)
So basically, Mac IS Linux on the desktop?
I think I've just given Linux fans nightmares for months.
Mac is UNIX on the desktop (Score:5, Insightful)
Linux is also UNIX on the desktop. It's just an oddball version of UNIX, with a whole bunch of extra APIs that people using Linux get used to and come to depend on, so they think writing portable code means "it runs on Red Hat and Suse" (or Debian and Ubuntu, if you're on the Left Hand path), and then when they go to port to a more standard version of UNIX, they write stuff like this:
If you're writing code that depends on the expansion of system macros, or if you're depending on obscure Linux-only functions, you're writing unportable code. What really bothers me is the idea that someone writing a Linux-only program would already have run into situations where they had to conditionally compile code. Has Linux really fragmented that much?
Re:Mac is UNIX on the desktop (Score:5, Interesting)
Re: (Score:2, Interesting)
True. Linux is not UNIX'03 certified.
Someone still has to have it certified and it still wouldn't pass the certification because there are still missing features.
A list of the missing features, please.
I work on a bunch of *nix systems, and frankly Linux has always struck me (a software developer) as the most compatible *nix clone. Essentially, a bunch of stuff (written after SUSv3) simply works on Linux, while on e.g. Solaris some of it is buggy, simply missing, or in a different "UNIX traditional" header.
In short, reading SUSv3 and programming against it works on Linux. It doesn't on true-UNIX Solaris and *BSD.
I'm kind
Re:Mac is UNIX on the desktop (Score:5, Informative)
Actually you are dead wrong here. I have seen so many developers starting to use Macs, and most of them do bother to use the command line seriously.
I think one of the biggest user groups the Mac has nowadays is developers, thanks to the NeXTStep underneath!
Re:Mac is UNIX on the desktop (Score:4, Interesting)
Compared to *NIX development on Solaris, even *NIX development on Windows is more pleasant, so I wouldn't take that as a benchmark 'how *NIX' Linux actually is from a developer viewpoint.
I've been doing *NIX development on a lot of different OSes and versions for the past few years, including FreeBSD, OpenBSD, SunOS, Solaris, Linux (from RH 7/RHEL 4 to Ubuntu 8.10), OS X and HPUX. About any flavour of *NIX you will encounter as a software engineer nowadays.
My experience was that BSD is indeed the best base-line platform for *NIX development. If it compiles on BSD, it will most likely also compile on Linux, OS X and Solaris (unless you have an older Sun Studio release or didn't install one of the gazillion optional packages for proper userland tools and libraries). This is not true the other way around: stuff that works great on any Linux system might not work at all on BSD or OS X. Initially it frustrated me, in an "always those damn BSD boxes" kind of way, but eventually I started to appreciate it more and more. It turned out I wasn't that much of a *NIX expert after all, having only worked with Linux. The code I write now is much better, and although I still use Linux as my primary development platform, my code generally works out of the box on all other *NIX systems.
On a side note: Solaris is simply terrible for software development. Every Solaris system is different; some have this or that library, others don't. Some have a GNU userland; some have crippled, incomplete userland tools with totally idiotic command-line interfaces. Some have compiler versions that kind of work; some can't even compile boost::shared_ptr. Some have GCC, some have Sun CC. Some have only the STLport4 standard C++ library, some have only libCstd, and some others have both, but if you mix them your program might or might not link, and it definitely won't work. If you want to link in 3rd-party binary stuff that's only available linked against libCstd you're basically screwed: forget about using Boost or any other development libraries that rely heavily on templates, because libCstd is nonstandard, incomplete garbage that breaks perfectly valid C++ code.
It's a complete nightmare, a complete disaster, and if you ask me Sun should just kill Solaris altogether and release their own Linux distro (which they're more or less doing with OpenSolaris already, except it's not Linux).
Who cares what the open group says? (Score:3, Insightful)
Bunch of kids with a trademark in their pocket.
UNIX is a family of operating systems with a native API and system call interface based on the UNIX programmer's manual, published by Bell Labs (usually the 7th edition).
That's a *useful* definition for UNIX.
Re: (Score:3, Informative)
Linux on its own couldn't pass UNIX '03, because UNIX '03 is for the whole OS distribution, not the kernel.
GNU/Linux might pass with the GNU toolchain, the Bourne shell and a vi attached. Linux on its own can't pass, because it's not meant to.
Re:Mac is UNIX on the desktop (Score:4, Insightful)
OS X 10.5 is certified.
Re: (Score:2, Insightful)
What really bothers me is the idea that someone writing a Linux-only program would already have run into situations where they had to conditionally compile code. Has Linux really fragmented that much?
It's not Linux-only: it's Linux and Win32. Hence the conditional compilation.
The author was hoping that his Linux-specific code would work on OSX, but it doesn't.
Re:Mac is UNIX on the desktop (Score:4, Informative)
Re:Mac is UNIX on the desktop (Score:5, Informative)
Last time I looked at IRIX, it looked like System V to me.
Once you get used to real, traditional BSD, going into the OS X terminal is weird. Where's /etc/init.d and /hw? Why can't I boot -f dksc(1,5,8)sash64? No /usr/people?
Of course, this is slashdot. (Score:3, Informative)
Right... because apart from Linux, all Unices are exactly the same
That's why writing portable code requires experience.
Linux is a fragmented mess of incompatibility
Good thing I still have this in my paste buffer:
If
Re: (Score:3, Funny)
Okay. I'll amend my previous statement.
So basically, Mac IS FreeBSD on the desktop?
I think I've just given FreeBSD fans nightmares for months.
UNIX is UNIX is UNIX (Score:3, Insightful)
I think I've just given FreeBSD fans nightmares for months.
You don't understand FreeBSD fans, then. Most of the FreeBSD users I know have Mac desktops. Jordan Hubbard works at Apple now.
Mac on the desktop, FreeBSD in the back office, it's a sweet environment and everything "just works".
Re:UNIX is UNIX is UNIX (Score:5, Informative)
Who the hell is Jordan Hubbard?
One of the founders of the FreeBSD project [freebsd.org].
BTW, OS X uses a Mach kernel, not *BSD. OS X has much more in common with NEXTStep than FreeBSD.
Mac OS X uses a kernel that combines Mach code and *BSD code. It also has some userland "core OS" libraries (e.g., libSystem) that combine *BSD code with code developed at NeXT and/or Apple.
So, no, it's not as much like 386BSD as 386BSD's more direct *BSD descendants, but it's still closer to *BSD than to other flavors of UN*X.
Re:UNIX is UNIX is UNIX (Score:4, Interesting)
OS X has largely eliminated Mach messaging, for example.
O RLY? Fire up Activity Monitor, select some busy process, and watch the count of Mach messages sent and received. Perhaps the NeXTSTEP developer documentation touted Mach messaging more than the Apple developer documentation [apple.com] does, but at least some higher-level APIs use Mach messaging.
Re: (Score:2)
Yes, eeeeverything that can be run on Linux/Unix can be run on a Mac. A lot of things are already there, or downloadable in binary form. For most everything else, there's MacPorts or Fink. If it's not covered there, there's always downloading source code and compiling. This sometimes is a lot of extra work.
It all depends on how much extra work you want to put into it, and how familiar you are with *nix environments.
Re:What? (Score:4, Informative)
Sadly, MacPorts and Fink are pretty poor :( They don't have enough people, and most of the packages are broken or out of date. I have simple patches for projects I run that have been sitting in the MacPorts tracker for more than six months and still have not been approved.
Debian/Ubuntu/etc. still have by far the best package repository and that's enough to make my mac almost useless and my linux laptop the place where I do most of my work. Plus OS X is rather slow, argh.
Re: (Score:3, Informative)
He actually said that academics in a computer science department use it. Don't you think they count as developers? His anecdote isn't that uncommon; on the academic research projects that I've worked on, Macs are used much more heavily than other systems. My current project may be a bit extreme, but more than 90% of the participants use a Mac. They are all developers - across a wide range, from people who are optimising low-level assembly routines, to people developing in C, through to some higher level doma
Re: (Score:2)
Except if you want to run the most popular scripting language [slashdot.org].
Re: (Score:2)
I can run basically every Linux/Unix application on my Mac, both command-line and GUI, while not having to worry about wireless networking drivers, printer support, power management / sleep support on my laptop, getting accelerated 3D drivers working, or any of the other minor hassles that are involved with setting up and maintaining a Linux install.
I partly agree, but in defense of Linux, you seem to be comparing a shrink-wrapped Mac (hand built by dusky maidens from only the finest OS X-compatible components) with slapping a standard Ubuntu CD on your old Dell. If you bought a purpose-built Linux workstation you'd expect it to, likewise, have been made out of Linux-supported components and be properly configured out of the box.
You'd probably have similar driver nightmares with OS X if, rather than buy a nice shrink-wrapped system from Apple, you tr
Re: (Score:2, Flamebait)
ROTFLMAO! I just learned something new! I was unaware that OS X runs on ALPHA, ARM, and the other 19 processor platforms that Linux supports.
Re: (Score:2)
I was unaware that OS X runs on ALPHA, ARM, and the other 19 processor platforms that Linux supports.
You spelled that wrong. It's "BSD runs on Alpha, ARM, ...". Mac OS X just happens to be the best BSD version for the desktop... and it's a much better desktop than any Linux desktop.
THIS IS SLASHDOT! (Score:5, Interesting)
I was at Berkeley when 4.2BSD was being pulled together, and did some work for the 4.1C release. I was one of the guys who got 386BSD to compile clean in the first place. I had a NeXTstation on my desk for several years. I was the FreeBSD handbook guy for a while. I worked with Tru64 back when it was the only fully 64-bit UNIX. I know what "BSD" and "Mach" are, better than you and better than most of the people who contributed to that Wikipedia page.
you should be aware that this is Slashdot
Yeh, I'm keeping that in mind. That's why I'm not going to even TRY to explain just how badly you're misreading that Wikipedia page.
Re:THIS IS SLASHDOT! (Score:5, Insightful)
Oh yeh, this is slashdot alright. "When someone suggests you might be wrong, tell them they're a troll. Everyone hates trolls and accusing someone else of being a troll is the best possible way to divert attention from your own trolling".
No, I'm not playing, sorry.
Re: (Score:3, Interesting)
One would assume you were trolling from the blatant dishonesty of your post. OS X isn't a particularly good BSD for the desktop; the only thing that makes it decent is the proprietary non-BSD stuff running on top of Darwin. As a BSD, Darwin is pretty damn poor, in almost all respects. There's a reason why no one uses it except as part of OS X, you know.
And insofar as other BSDs support a bunch of other platforms, that has nothing to do with the fact that Linux has far superior hardware support compared to O
Re: (Score:3, Interesting)
I didn't suggest that Darwin by itself was a particularly good BSD for the desktop. I said that OS X is. Yes, that involves a bunch of stuff that isn't part of BSD, but that's true regardless of whether you're running Cocoa, NeXTstep, Gnome, KDE, or code written for raw Xlib:
Re:THIS IS SLASHDOT! (Score:4, Insightful)
I am well aware that your initial post was a strawman attack.
Let's see. Someone claimed that OS X supports the hardware it runs on better than Linux does. You responded by talking about the variety of platforms that Linux runs on. That's a perfect example of a straw man. The guy you were responding to doesn't care if OS X runs on an Alpha or Integrity server, or that old Indy you have in the back room to show how cool you are.
My response was that it doesn't matter: if you want to run on oddball hardware, you could run pretty much the same OS on all the same oddball hardware. For someone who actually uses BSD variants on a regular basis, it doesn't matter whether one is running FreeBSD, OS X, OpenBSD, Tru64, NetBSD, and so on... they are largely fungible, just as Ubuntu, YDL, and Gentoo are. Which is why I run FreeBSD servers and Mac desktops, and develop software on both.
I didn't complain that you can't boot a Ubuntu install CD on a Powermac, and need to get a Yellow Dog image instead. Now that WOULD be a straw man.
OSX is BSD, just as much as OpenBSD is. Yes, you have to use Apple's hardware to run OS X. That's the cost you pay to get the best UNIX desktop. I wish I could run OS X on a Thinkpad instead of my Macbook, but OS X is enough better than any Linux desktop that I've found that I'm willing to put up with it.
But that doesn't make OSX "not BSD" any more than the fact that YDL won't boot on your Thinkpad makes YDL "Not Linux".
Re:THIS IS SLASHDOT! (Score:4, Insightful)
Indeed, he said "a Mac", not "OS X".
OS X has better support for Mac hardware than Linux does for any hardware I've tried it on.
Re:THIS IS SLASHDOT! (Score:4, Insightful)
This is a ridiculous assertion, based on an out-of-context interpretation of an overly abstract (if not inaccurate) sentence; the context around the statement clarifies its original meaning.
Let's look at the sentence being argued:
And it doesn't take much to find, one sentence later, this clarification:
So having some silly pseudo-philosophical argument about the meaning of "hardware support" in the original post, and calling people liars if their argument doesn't conform to your viewpoint, is not productive, nor does it take into account the original post.
Re: (Score:3, Insightful)
A Mac is a genuine Unix workstation that is much easier to administer, and has much better software and hardware support than Linux.
It has *better* software support from major ISVs, I will grant you that, but it does not have better software support generally, and Linux supports far more hardware than MacOS. Not all Linux software runs on the macOS either.
My wife and son have macs, and I tell you, I'll take Linux every time.
I can run basically every Linux/Unix application on my Mac, both command-line and GUI
All the world's a VAX. (Score:5, Informative)
Not all Linux software runs on the macOS either.
Yeh, there's a lot of Linux programmers who wouldn't know how to write portable code if the portable code fairy shat clue down their throats. Last decade it was SunOS programmers, the decade before that it was people who thought all the world was a VAX. The world is full of people like that.
For techies, there is no substitute for Linux or FreeBSD. (I prefer Linux, but I have friends who prefer FreeBSD.)
Ask your friends about porting Linux code from people who think portable means "it compiles on Red Hat and Suse... ship it!"
Oh, while we're on the subject, you do know that Jordan Hubbard works at Apple now, don't you?
Re: (Score:3, Informative)
let's see a Mac do this:
ssh -X hostaddr application
And have the GUI application pop up on a remote screen, without shipping the WHOLE screen the way VNC does.
You can't seriously be suggesting that X11 is unavailable [apple.com] for OSX. If you have an X11 application that you would like to run, you can certainly do what you're suggesting. No serious UNIX weenie should have any trouble building it [apple.com]. Your only possible room for complaint here is 1) that not every application is an X11 application (something for which most users are thankful) or 2) that you want your mommy to have compiled the app for you already.
Re: (Score:3, Insightful)
Abandoning X11 was a mistake.
"Abandoning" implies that they used to use it and stopped. NeXTStEP never used X11 as its underlying window system (its window system was Display Postscript-based), and Mac OS X never did, either.
Whether it was a mistake or not depends on your goals. It was a mistake if being able to run individual GUI applications "over the wire" is important. It wasn't a mistake if it allowed them to get a given level of graphics performance and capabilities faster.
Re: (Score:3, Insightful)
It is certain your iTunes application will not run in this way.
Have you tried tunneling Amarok in the way you suggest? Unfortunately for you, sound isn't part of the X11 specification, so unless you're using something like a sound daemon (a la esound) and have forwarded that also, the result might not be what you're expecting. I don't see why not being able to remotely display iTunes' GUI would be a problem.
Re:What? (Score:4, Interesting)
And what would that "good reason" be?
Because of the experience and features that their applications provide. What they do not know, and cannot be expected to know, is that these things stem from deliberate tradeoffs made by the developers of the underlying frameworks.
As any programmer worth his salt knows, any design decision comes with a set of tradeoffs. This is an inescapable fact, and only goes unrecognized by the ignorant (whether their ignorance be innocent or willful.) The fine art of balancing a set of tradeoffs is very difficult, and an inherent aspect of it is that you can't please everybody 100% of the time.
In this case, you are one of the unfortunate few that Apple deliberately chose to devalue in their design priorities, since one of the items high on your wish-list is ubiquitous remote displayability via the X11 protocol.
But, bringing our minds back to the subject of tradeoffs, what did they win by giving you the finger? (*) This is an easy exercise for those skilled in software architecture. The first thing one needs to ask is what sort of restrictions conformance to X11 brings with it. X11 is a set of abstractions that end up leaking into many different layers of your software stack. While I love X11, a lot of those abstractions were invented a very long time ago, before anyone thought they might like different abstractions, like a hardware-accelerated Quartz display server - or CoreImage, CoreVideo, CoreAnimation.
This choice has given them the freedom they need to make architectural advancements faster, and now they're in a leadership position. If you are a programmer and you still think they could have delivered their current product in the same timeframe after having volunteered to be hamstrung by obedience to X11, then you might want to consider a career change.
Nothing comes without a cost. There is a long history of UNIX vendors who tried for years to bring a good GUI environment to X11 and the best they could come up with was CDE. (WTF.)
(*) one footnote here: it wasn't Apple that gave you the finger. This decision was made in the late '80s at NeXT, when they opted against X11 so that they could get the WYSIWYG properties afforded by the Display PostScript system. After Apple acquired them, they kept the imaging model but replaced the PostScript interpreter with Display PDF. (PDF is, more or less, the PostScript imaging model without the full force of the PostScript programming language.)
Re: (Score:3, Insightful)
Let's be clear. The limited capability here is the fact that you have to use a kludge like VNC or Apple Remote Desktop to access your computer remotely. Apple (and Microsoft) have ditched this "important feature" in favour of improving the UI experience for the user when they are sitting in front of the computer. And guess what, only a few people ever complain about the lack of this "important feature", the reason being that most of us do not lock our computers in server rooms and access them with X term
Re: (Score:3, Insightful)
you should realize that spending an extra $100
I think you forgot a zero. Prices may be different in my neck of the woods vs. yours, though.
and has much better software and hardware support than Linux.
I think Linux has much wider hardware support (it works on non-apple hardware too), whereas OS X has full support for a much smaller set of hardware. What's better depends on personal preferences.
I can run basically every Linux/Unix application on my Mac
Really? Didn't the summary just say that some of the system calls are missing on OS X and some macros are different? Or did you mean running the binaries (in which case, there's still the system calls)? Or do you cheat
Re: (Score:2)
Didn't the summary just say that some of the system calls are missing on OS X and some macros are different?
No, the summary said that the code was written with unnecessary dependencies on obscure libraries that didn't happen to be shipped on OS X, and that it was so buggy it depended on the expansion of macros being identical on different operating systems.
I can pretty much guarantee that any program with those kinds of dependencies would have just as much of a problem porting to any other UNIX system. It
I like Macs, but they're not easy to administer (Score:2)
OS X is easier to operate, and is set up to work well with Apple hardware. Those are nice features, and the reason I have an OS X laptop. But it surely isn't easy to administer. It's indeed quite terrible, since there's no package management to speak of.
Ever tried to walk a classroom of Mac-using high-school students through getting pygame installed and working on their OS? Including the proper version of Python if their OS X has an old version, PyObjC, and so on? On Debian or Ubuntu, this requires doubl
Re: (Score:2)
Ever tried to walk through a classroom of mac-using high-school students how to get [some non-portable piece of Linux-only code that was written with the assumption that it would only ever be run on Linux]
That's not a "package management" problem. FreeBSD has had better package management than Linux since before Linux HAD any package management system, and getting random "all the world's Red Hat" (or "all the world's Ubuntu") code running can be a pain and a half. That's a problem caused by writing code wit
Re: (Score:2)
Hmm, excuse me, but: better hardware support?
Since when? Can I just take, let's say, some non-apple graphic adapter I've got lying around, plug it in, and it will work? Or go out and buy some random wireless adapter?
Better software support is arguable, but better hardware support? I think not.
Re: (Score:3, Funny)
iChat: your router or ISP sucks. It works ok for everybody else.
e-mail: Clean your caches. This also works ok for everybody else.
Mouse cursor: either don't let direct daylight shine on the Mighty Mouse, or throw that junk away and get a real mouse with an opaque body.
Fax:
What "being UNIX" means... (Score:3, Interesting)
OK, what's with this unix love? Yes, it is a unix workstation, but so is HPUX. HPUX is ...different. You will understand if you have ever used it.
Don't talk to me about HPUX, I'm still bitter about Alpha.
Being Unix compliant does not mean an OS is a good, reliable, or stable OS.
If it weren't UNIX, it wouldn't matter how good, reliable, or stable it was: I wouldn't be using it. I've done my time in the trenches dealing with VMS, TOPS, RSX, RTE-IV, MS-DOS, CP/M, Windows, AmigaDOS, Ex
Re:What? (Score:5, Informative)
but Mac hardware is crap
Have you ever used Mac hardware? Their laptops have been amazing forever. Apple has long been a major innovator on the laptop front, and many of the things you expect in a laptop were made a standard feature first on the Mac: things like target mode, gig ethernet, auto-crossover, built-in wifi, built-in bluetooth, AC adapter standardization, integrated mic, integrated camera, external battery indicator, backlit keys (or any way to view the keys in the dark), DVD burners, and there are probably more that I just can't think of. Macs have great hardware.
Yes, they may not have every possible feature, but they have lots of good ones and are really versatile. Computer snobs who turn up their noses at Macs remind me of car snobs, except that a lot of the cars those people like aren't that useful and break down a lot. I don't get that mentality and I probably never will.
Re: (Score:3, Interesting)
Real developers actually use the Mac?
Of course. The MacBook and MacBook Pro are nice laptops for on the move, and they run ssh, gcc, vi, emacs and X11 perfectly.
Re: (Score:2)
Yes. At work, they've given me a kick-ass Dell - serious high-end piece of machinery - and I almost never touch it. Instead, aside from e-mail, all my efforts are through my personal MacBook Pro. Even if I'm VNC'ing over to my Solaris session, I still use the Mac.
Re: (Score:3, Informative)
Because the compiler ignores whitespace, it's probably not the best design decision to let a non-visible character be the end-of-line terminator.
Re: (Score:2)
I agree. I can't get my head around the insanity of Python's system of indentation delimiting block structure...
What's wrong with lines terminating with semicolons anyway? It encourages much tidier and more readable code, I reckon.
Re: (Score:2)
I use tabs, and the indentation-equals-block makes a lot of sense to me.
Re: (Score:2)
I use tabs for indentation. But I don't use Python.
Maybe I'm wrong, but using an invisible character (which could be a tab or tabs, or a space or spaces) seems like it could make the code very hard to read and prone to errors.
Re: (Score:2)
There's never a mismatch between what you intuitively see and what it actually executes. Which is probably why they let you mix tabs and spaces as long as it's consistent visually. Although personally I think it should be a syntax error to mix them (or better yet, an error to use spaces for indentation in general
Re: (Score:2)
Yeah, maybe so. It seems a bit ideologically driven to me - a language that forces a particular coding style on you.
I invariably indent blocks - but I do it because it's good practice, and I don't think I'd like it if the languages I wrote in forced it on me.
Now if a language forced you to intersperse code blocks with verbose comments, I might think it was a good idea! ;-)
Re: (Score:2)
Maybe I'm wrong
You're wrong. In the last 6 years of our company using Python for business logic development, we've never had a single bug that was traced back to indentation or an end-of-line problem.
I really don't see the problem with this (Score:2)
Re:Shouldn't it be called P? (Score:5, Funny)
Re: (Score:3, Insightful)
Depends what you like... Perl's a favourite of mine, probably because of the things you hate about it. It has a very natural feel to it; because of the way it's evolved, like natural spoken languages, there are all kinds of often-hidden subtleties, and there's nearly always more than one way to do anything, with different ways efficient in different respects and so good for different purposes. The downside, of course, is its multithreading support (or rather, lack of it).
I like it for server stuff, as it's real easy to get i
Re: (Score:2)
Visual Basic sucks. BASIC rules. Long live GOTO!