
Walter Bright Ports D To the Mac

jonniee writes "D is a programming language created by Walter Bright of C++ fame. D's focus is on combining the power and high performance of C/C++ with the programmer productivity of modern languages like Ruby and Python. And now he's ported it to the Macintosh. Quoting: '[Building a runtime library] exposed a lot of conditional compilation issues that had no case for OS X. I found that Linux has a bunch of API functions that are missing in OS X, like getline and getdelim, so some of the library functionality had to revert to more generic code for OS X. I had to be careful, because although many system macros had the same functionality and spelling, they had different expansions. Getting these wrong would cause some mysterious behavior, indeed.'"
  • by master_p ( 608214 ) on Sunday February 22, 2009 @11:18AM (#26948589)

    I don't think D will ever have the high performance of C++, because D objects are all allocated on the heap. The D 'auto' keyword is just a compiler hint (last time I checked) to help in escape analysis. D has structs, but one has to design upfront if a class has value or reference semantics, and that creates a major design headache. Avoiding the headache by making everything have reference semantics negates the advantages of struct.

    D is a mix of C and Java, with the C++ template system bolted on top. It is in no way C++. D is not what a veteran C++ programmer expects as the next generation of C++.

  • Re:What? (Score:3, Interesting)

    by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Sunday February 22, 2009 @11:28AM (#26948645) Homepage

    Real developers actually use the Mac?

    Of course. The MacBook and MacBook Pro are nice laptops for on the move, and they run ssh, gcc, vi, emacs and X11 perfectly.

  • by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Sunday February 22, 2009 @12:24PM (#26948999) Homepage Journal

    OK, what's with this Unix love? Yes, it is a Unix workstation, but so is HP-UX. HP-UX is... different. You will understand if you have ever used it.

    Don't talk to me about HPUX, I'm still bitter about Alpha.

    Being Unix-compliant does not mean an OS is a good, reliable, or stable OS.

    Not being UNIX would mean that it doesn't matter how good, reliable, or stable it is... it wouldn't matter, I wouldn't be using it. I've done my time in the trenches dealing with VMS, TOPS, RSX, RTE-IV, MS-DOS, CP/M, Windows, AmigaDOS, Exec/1100, many of which were by all kinds of measures good, reliable, or stable. But dealing with different operating systems sucks rotting frog innards through used oil filters, and I'm too old for that kind of manure.

    Being UNIX means that, if it also happens to be good, reliable, or stable (which it is), it's worth using. If OS X were based on Copland or even BeOS I'd still be running free UNIX on my desktop.

  • by RedK ( 112790 ) on Sunday February 22, 2009 @12:30PM (#26949053)
    The Open Group would disagree about Linux being Unix. Someone still has to have it certified, and it still wouldn't pass the certification because there are still missing features. Linux is compatible with most of the Unix specification; it is not Unix.
  • THIS IS SLASHDOT! (Score:5, Interesting)

    by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Sunday February 22, 2009 @12:58PM (#26949281) Homepage Journal

    I was at Berkeley when 4.2BSD was being pulled together, and did some work for the 4.1C release. I was one of the guys who got 386BSD to compile clean in the first place. I had a NeXTstation on my desk for several years. I was the FreeBSD handbook guy for a while. I worked with Tru64 back when it was the only fully 64-bit UNIX. I know what "BSD" and "Mach" are, better than you and better than most of the people who contributed to that Wikipedia page.

    you should be aware that this is Slashdot

    Yeah, I'm keeping that in mind. That's why I'm not even going to TRY to explain just how badly you're misreading that Wikipedia page.

  • by ThePhilips ( 752041 ) on Sunday February 22, 2009 @01:02PM (#26949313) Homepage Journal

    True. Linux is not UNIX'03 certified.

    Someone still has to have it certified and it still wouldn't pass the certification because there are still missing features.

    A list of the missing features, please.

    I'm working on a bunch of *nix systems, and frankly Linux has always struck me, a software developer, as the most compatible *nix clone. Essentially, a bunch of stuff (written against SUSv3) simply works on Linux, while on e.g. Solaris it is either buggy, simply missing, or in a different "UNIX traditional" header.

    In short, coding straight from SUSv3 works on Linux. It doesn't on true-UNIX Solaris and *BSD.

    I'm also kind of happy that Linux is not *nix. Having worked on FreeBSD for a short time, it's simply impossible to be as efficient with its antique true-UNIX text/file tools as one can be with the GNU text/file tools. Needless to say, on *BSD /bin/bash isn't the default shell, line editing isn't always there, and no usable pager or decent text editor is installed by default. On a fresh Linux install one can start working right away. On a fresh *BSD setup, you have to start by compiling ports. Feel the difference.

    If one treats Mac OS X as a *BSD, then it makes the *BSD misery even more apparent: only a minority of Mac OS X users ever bother to use the command line. Had it been something useful, I bet more people would have used it. On Linux, e.g. on Ubuntu, you rarely (if ever, with recent versions) have to go to the command line. Yet people use it - because it is usable and useful.

  • Qt bindings (Score:3, Interesting)

    by SwedishPenguin ( 1035756 ) on Sunday February 22, 2009 @01:12PM (#26949411)

    I really like D, and would give up C++ for it, but the one thing I feel is really missing is bindings for Qt. :(

  • by baxissimo ( 135512 ) on Sunday February 22, 2009 @01:16PM (#26949445)

    Maybe your [sic] right, I can't say.

    I can. The gp is wrong on just about every count.

    • I don't think D will ever have the high performance of C++, because D objects are all allocated on the heap.

    As you already say, if you are very concerned about performance in a situation with lots of small objects you can use structs. Simply ignoring structs because you are too lazy to use them does not make D slow. With a bit of experience and a few rules of thumb it's not hard to choose.

    • The D 'auto' keyword is just a compiler hint (last time I checked) to help in escape analysis.

    I think maybe you're talking about what is called "scope" now. It allocates the memory for the class on the stack. Yeah, it doesn't cover every possible use case of by-value classes, but it can be a nice optimization.

    • D has structs, but one has to design upfront if a class has value or reference semantics, and that creates a major design headache.

    Yes, you use structs when you want an efficient aggregate value type. Classes and structs have different semantics in D. It's pretty easy to choose once you get the hang of it. If you are likely to want to put virtual functions on the thing, make it a class. If you want to pass lots of them around by value, make it a struct. If you can count on your fingers how many instances you will have, make it a class -- the hit from allocation won't matter. There is some gray area in between, granted, but in practice it's usually pretty clear, and the benefit is that you completely eliminate the slicing problem by going this route. If you really can't decide, you can also write the guts as a mixin and then mix it into both a class and a struct wrapper. Or just write it primarily as a struct, and aggregate that struct inside a class. The use cases that are left after all that are pretty insignificant.
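The slicing problem referred to above is easy to demonstrate in C++; a minimal sketch (the Animal/Dog names are invented for illustration):

```cpp
#include <string>

struct Animal {
    virtual ~Animal() {}
    virtual std::string speak() const { return "..."; }
};

struct Dog : Animal {
    std::string speak() const { return "woof"; }  // overrides Animal::speak
};

// Pass-by-value slices: only the Animal subobject is copied,
// so the Dog override is lost.
std::string greet_by_value(Animal a)      { return a.speak(); }

// Pass-by-reference preserves the dynamic type.
std::string greet_by_ref(const Animal &a) { return a.speak(); }
```

greet_by_value on a Dog yields the base-class behaviour because the derived part is sliced away; greet_by_ref gives the override. D sidesteps this entirely by making classes always reference types and keeping virtual dispatch out of structs.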

    • Avoiding the headache by making everything have reference semantics negates the advantages of struct.

    Yeh, well don't try to avoid it, then. It's not as much of a headache as you make it out to be.

    • D is a mix of C and Java, with the C++ template system bolted on top. It is in no way C++. D is not what a veteran C++ programmer expects as the next generation of C++.

    D's template system has gone far beyond C++'s. It's even far beyond what C++0x is going to be. Alias template parameters, template mixins, static if, a host of compile-time introspection facilities, the ability to run regular functions at compile-time, type tuples, variadic template parameters. Of these, I believe C++0x is only getting the last one. D metaprogramming is to C++ metaprogramming as C++ OOP is to OOP in C. It takes a technique that the previous language begrudgingly permitted and turns it into one that is actually supported by the language.
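For contrast, here is what "running a function at compile time" has to look like in C++03, where the computation is re-encoded as template recursion (a standard textbook sketch, not D code):

```cpp
// C++03 encodes compile-time computation in templates; D (and later
// C++11 constexpr) lets an ordinary function run at compile time.
// A factorial done the template-metaprogramming way:
template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {                 // recursion base case
    static const unsigned long value = 1;
};

// In D, the same thing is just a normal function evaluated in a
// static context (CTFE) -- no separate template "language" needed.
```

This is the gap the parent describes: the C++03 version is a second, structurally different program, while D's compile-time function evaluation reuses ordinary code.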

  • by Anonymous Coward on Sunday February 22, 2009 @01:40PM (#26949651)

    I'm an IRIX user, and I can tell you that we have all kinds of problems porting FOSS stuff. Little problems usually, but getting big things like Mono, Firefox 3, or OOo is a Sisyphean task.

    Also, as I type this on my Mac, no, OS X is _not_ UNIX. Neither is Linux; they are both UNIX-like. Linux much more so — it's gone from being a pretender to the throne to the future of UNIX. But OS X is Mach with BSD extensions, and is really a NeXTSTEP-like OS, which was in turn UNIX-like.

    Once you get used to real, traditional BSD, going into the OS X terminal is weird. Where's /etc/init.d and /hw? Why can't I boot -f dksc(1,5,8)sash64? No /usr/people? Whyyyy?

    Of course, I am an anonymous coward who posts, like, once a month, so this'll stay modded 0 and no one will ever care...

  • by physicsnick ( 1031656 ) on Sunday February 22, 2009 @02:26PM (#26950059)

    >> D did another thing right: it did not remove destructors, like Java did. Instead, when there are zero references to an object, the GC calls the destructor *immediately*, but deallocates the memory previously occupied by that object whenever it wishes (or it reuses that memory). This way RAII is possible in D, which is very useful for things like scoped thread locks.

    First of all, Java does have destructors: finalize().

    Second of all, calling destructors in a modern GC is extremely costly. Sure, your example implementation of destructors seems simple, but it is only possible with a reference-counting garbage collector, which is so primitive as to be nearly useless.

    Modern Java GCs are generational copying collectors. They have a young heap, where objects are allocated, and an old heap, where objects are moved when they survive a collection. Object retention is determined by a tree search from a root node.

    This means you can do fun things like allocate a linked list, then drop the reference to it. When a collection happens, anything living is rescued from the young heap, and then it's simply wiped clean. No computation is performed regardless of how large the list is or how many links it has, because there's no such thing as deallocation on the young heap. When you drop that first link, it's like the VM doesn't even know it's there anymore; the whole list just gets overwritten on the next pass over the heap.

    If, however, you write a destructor for your links (or in Java, finalize()), the collector then needs to independently keep track of all of your destructible objects. It needs to remember that they're there so it can call their destructors when they do not survive a collection. Furthermore, if you impose your hard requirement of calling the destructor immediately, then the implementation of such a collector is impossible for your language. Even a primitive mark-sweep collector — anything that is not reference-counted — is impossible.
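The immediate-destruction behaviour under dispute is exactly what reference counting provides; a C++ sketch using shared_ptr (std::shared_ptr is the C++11 spelling of what was std::tr1::shared_ptr in 2009; the Resource type is invented for illustration):

```cpp
#include <memory>

// A type that records its own destruction, so we can observe
// exactly when the destructor fires.
struct Resource {
    bool *destroyed;
    explicit Resource(bool *flag) : destroyed(flag) { *destroyed = false; }
    ~Resource() { *destroyed = true; }
};

// With reference counting, destruction is deterministic: the moment
// the last reference goes away, ~Resource runs -- no collector pass
// needed. A tracing/copying GC, as the parent notes, gives no such
// guarantee about *when* (or whether) a finalizer runs.
bool destroyed_on_last_release() {
    bool flag = false;
    {
        std::shared_ptr<Resource> a(new Resource(&flag));
        std::shared_ptr<Resource> b = a;   // refcount is now 2
        a.reset();                         // refcount 1: still alive
        if (flag) return false;            // must NOT be destroyed yet
    }                                      // b leaves scope: refcount 0
    return flag;                           // destructor has already run
}
```

This is the trade-off in one picture: the deterministic RAII-style cleanup that makes scoped locks pleasant is precisely the property a generational copying collector gives up.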

    This example is discussed in detail here:

    http://www.ibm.com/developerworks/java/library/j-jtp01274.html [ibm.com]

    You should familiarize yourself with modern garbage collectors. I don't know much about D, but if D really is tied down to a reference-counting collector due to its destructor requirements, that makes it extremely unattractive as a language. Here is more information on various collector implementations:

    http://www.ibm.com/developerworks/java/library/j-jtp10283/ [ibm.com]

  • Re:What? (Score:4, Interesting)

    by pohl ( 872 ) on Sunday February 22, 2009 @02:50PM (#26950253) Homepage

    And what would that "good reason" be?

    Because of the experience and features that their applications provide. What they do not know, and cannot be expected to know, is that these things stem from deliberate tradeoffs made by the developers of the underlying frameworks.

    As any programmer worth his salt knows, any design decision comes with a set of tradeoffs. This is an inescapable fact, and only goes unrecognized by the ignorant (whether their ignorance be innocent or willful.) The fine art of balancing a set of tradeoffs is very difficult, and an inherent aspect of it is that you can't please everybody 100% of the time.

    In this case, you are one of the unfortunate few that Apple deliberately chose to devalue in their design priorities, since one of the items high on your wish-list is ubiquitous remote displayability via the X11 protocol.

    But, bringing our minds back to the subject of tradeoffs, what did they win by giving you the finger? (*) This is an easy exercise for those skilled in software architecture. The first thing one needs to ask is what sort of restrictions conformance to X11 brings with it. X11 is a set of abstractions that end up leaking into many different layers of your software stack. While I love X11, a lot of those abstractions were invented a very long time ago, before anyone thought they might like different abstractions, like a hardware-accelerated Quartz display server - or CoreImage, CoreVideo, CoreAnimation.

    This choice has given them the freedom they need to make architectural advancements faster, and now they're in a leadership position. If you are a programmer and you still think they could have delivered their current product in the same timeframe after having volunteered to be hamstrung by obedience to X11, then you might want to consider a career change.

    Nothing comes without a cost. There is a long history of UNIX vendors who tried for years to bring a good GUI environment to X11 and the best they could come up with was CDE. (WTF.)

    (*) One footnote here: it wasn't Apple that gave you the finger. This decision was made in the late 80s at NeXT, when they opted against X11 so that they could get the WYSIWYG properties afforded by the Display PostScript system. After Apple acquired them, they kept the imaging model but replaced the PostScript interpreter with Display PDF. (PDF is, more or less, the PostScript imaging model without the full force of the PostScript programming language.)

  • Re:THIS IS SLASHDOT! (Score:3, Interesting)

    by MrHanky ( 141717 ) on Sunday February 22, 2009 @03:19PM (#26950483) Homepage Journal

    One would assume you were trolling from the blatant dishonesty of your post. OS X isn't a particularly good BSD for the desktop; the only thing that makes it decent is the proprietary non-BSD stuff running on top of Darwin. As a BSD, Darwin is pretty damn poor, in almost all respects. There's a reason why no one uses it except as part of OS X, you know.

    And insofar as other BSDs support a bunch of other platforms, that has nothing to do with the fact that Linux has far superior hardware support compared to OS X. Basically, you argue like a delusional fanboy, and when that doesn't work you try an appeal to authority. Well, you may be an authority, but you're also a liar.

  • D is nice, but... (Score:4, Interesting)

    by Hangeron ( 314487 ) on Sunday February 22, 2009 @03:30PM (#26950559)

    Programming in D is nice, but the situation is a bit annoying.

    1. Tango vs Phobos. Phobos is the official standard library, but it seems most use Tango. Phobos is also pretty low level compared to Tango.
    2. The reference compiler dmd is 32-bit only, gdc is outdated and abandoned, and ldc is still in alpha and has missing features. Ldc is quite promising, though.
    3. D2 is maybe the biggest issue. It has very useful features, such as partial C++ compatibility, but D2 is a moving target and practically no library supports it. It's also unknown when, if ever, D2 will become stable. I haven't seen much discussion about it in the newsgroup either.

  • by John Betonschaar ( 178617 ) on Sunday February 22, 2009 @04:27PM (#26951023)

    Compared to *NIX development on Solaris, even *NIX development on Windows is more pleasant, so I wouldn't take that as a benchmark 'how *NIX' Linux actually is from a developer viewpoint.

    I've been doing *NIX development on a lot of different OS's and versions for the past few years, including FreeBSD, OpenBSD, SunOS, Solaris, Linux (from RH 7/RHEL 4 to Ubuntu 8.10), OS X and HP-UX. Just about any flavour of *NIX you will encounter as a software engineer nowadays.

    My experience was that BSD is indeed the best baseline platform for *NIX development. If it compiles on BSD, it will most likely also compile on Linux, OS X and Solaris (unless you have an older Sun Studio release or didn't install one of the gazillion optional packages for proper userland tools and libraries). This is not true the other way around: stuff that works great on any Linux system might not work at all on BSD or OS X. Initially it frustrated me, in an 'always those damn BSD boxes' kind of way, but eventually I started to appreciate it more and more. Turned out I wasn't that much of a *NIX expert after all, having only worked with Linux. The code I write now is much better, and although I still use Linux as my primary development platform, my code generally works out of the box on all other *NIX systems.

    On a side note: Solaris is simply terrible for software development. Every Solaris system is different, some do have this and that libraries, others don't. Some have GNU userland, some have crippled, incomplete userland tools with totally idiotic command-line interfaces. Some have compiler versions that kind of work, some can't even compile boost::shared_ptr. Some have GCC, some have Sun CC. Some have only STLPORT4 standard C++ libraries, some have only libCstd, some others have both but if you mix them your program might or might not link, but it definitely won't work. If you want to link in 3rd party binary stuff that's only available linked with libCstd you're basically screwed: forget about using Boost or any other development libraries that rely heavily on templates because libCstd is nonstandard, incomplete garbage that breaks perfectly valid C++ code.

    It's a complete nightmare, a complete disaster, and if you ask me, Sun should just kill Solaris altogether and release their own Linux distro (which they're more or less doing with OpenSolaris already, except it's not Linux).

  • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Sunday February 22, 2009 @04:48PM (#26951161)

    OS X has largely eliminated Mach messaging, for example.

    O RLY? Fire up Activity Monitor, select some busy process, and watch the count of Mach messages sent and received. Perhaps the NeXTSTEP developer documentation touted Mach messaging more than the Apple developer documentation [apple.com] does, but at least some higher-level APIs use Mach messaging.

  • by mehemiah ( 971799 ) on Sunday February 22, 2009 @05:12PM (#26951313) Homepage Journal
    Does anyone else remember this already being on the Mac? I specifically remember downloading a D compiler plugin for Xcode. It had a package and everything; I just never did anything with it. Also, it is what many doujin shooters are written in, especially the ones written with BulletML. Many of these are cross-platform http://www.asahi-net.or.jp/~cs8k-cyu/ [asahi-net.or.jp]
  • by loufoque ( 1400831 ) on Sunday February 22, 2009 @06:54PM (#26952157)

    D's template system has gone far beyond C++'s. It's even far beyond what C++0x is going to be

    C++0x is mostly syntactic sugar; nothing is really new.
    The most important new feature is rvalue references, to which D has no equivalent, AFAIK.

    Alias template parameters

    You mean taking symbols as template parameters? You can do that in C++.
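A minimal sketch of a symbol as a C++ template parameter, for comparison (the twice/apply3 names are invented; D's alias parameters are broader, also accepting types, variables, and templates, which C++03 cannot):

```cpp
// A function symbol passed as a non-type template parameter --
// the closest C++03 analogue of a D alias parameter.
int twice(int x) { return 2 * x; }

template <int (*F)(int)>
int apply3() { return F(3); }   // calls whatever symbol was passed in
```

Instantiating apply3 with the twice symbol bakes the call target in at compile time; no function pointer is passed at runtime.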

    template mixins

    You do them in C++ with templated inheritance.
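That templated-inheritance idiom (CRTP) can be sketched briefly; the Printable/Celsius names are invented for illustration:

```cpp
#include <string>

// A "mixin" approximated in C++ by inheriting from a class template:
// the mixin injects a describe() method into whatever class it is
// mixed into, calling back into the derived class via CRTP.
template <class Derived>
struct Printable {
    std::string describe() const {
        return "value=" + static_cast<const Derived *>(this)->repr();
    }
};

struct Celsius : Printable<Celsius> {
    int degrees;
    explicit Celsius(int d) : degrees(d) {}
    std::string repr() const { return std::to_string(degrees); }
};
```

The D template mixin pastes declarations directly into the receiving scope, whereas the C++ version works through inheritance; the effect is similar, the mechanism is not.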

    a host of compile-time introspection facilities

    C++0x concepts, which are really syntactic sugar for things that were already available in C++03, allow compile-time reflection, albeit in a limited way (you can check whether something exists, but you cannot list everything that does exist).
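The "check whether something exists" part is the classic SFINAE member-detection idiom, which already worked in C++03; a minimal sketch (has_size_type and WithSize are invented names):

```cpp
// Detects at compile time whether T has a nested size_type typedef.
// If U::size_type is valid, the first test() overload is chosen
// (returning yes); otherwise SFINAE discards it and the ellipsis
// overload (returning no) wins. sizeof on the return type tells us
// which one was picked -- no instantiation of T required.
template <class T>
struct has_size_type {
    typedef char yes[1];
    typedef char no[2];
    template <class U> static yes &test(typename U::size_type *);
    template <class U> static no  &test(...);
    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

struct WithSize { typedef unsigned size_type; };  // sample "true" case
```

This is exactly the limited reflection described above: you can ask "does X exist?" but there is no way to enumerate members.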

    type tuples

    Unlike D, C++ does not add that kind of facility in the language, since it is doable as a library.

    static if,

    the ability to run regular functions at compile-time

    D metaprogramming is to C++ metaprogramming as C++ OOP is to OOP in C. It takes a technique that the previous language begrudgingly permitted and turns it into one that is actually supported by the language.

    That is overstating it. In C++, metaprogramming is done in a functional programming style, manipulating types, which are fully immutable, as values. In D, it's done just like regular programming, blurring the line between the two, except you have to make sure you add the keywords "static" and "template" and the "!" operator in the right places.
    I personally prefer the C++ way.
    All in all, the D way of doing things at compile time really looks like you write it as if you were doing it at runtime and rely on the optimizer to work it out at compile time. Look at variable arguments, for example: they look more like C varargs than C++0x variadic templates.
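The varargs contrast can be made concrete; a sketch showing both styles side by side (function names invented; the second version needs a C++0x/C++11 compiler):

```cpp
#include <cstdarg>

// C-style varargs: no type checking, and the caller must supply the
// argument count out of band.
int sum_c(int count, ...) {
    va_list ap;
    va_start(ap, count);
    int total = 0;
    for (int i = 0; i < count; ++i) total += va_arg(ap, int);
    va_end(ap);
    return total;
}

// C++0x variadic templates: fully type-checked, count inferred,
// expanded by recursion at compile time.
int sum_t() { return 0; }                      // base case
template <class... Rest>
int sum_t(int first, Rest... rest) { return first + sum_t(rest...); }
```

The template version trades the runtime va_list walk for compile-time expansion, which is the distinction being drawn between the two approaches.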

    On an unrelated note, any good C++ programmer I know finds that D has little to do with C++. It is closer to C, and closer to Java as well. Which are ironically the two worst ways to code in C++.

  • by mdwh2 ( 535323 ) on Sunday February 22, 2009 @06:55PM (#26952167) Journal

    It doesn't matter about time. The point is that classic MacOS was ditched, and they shifted to a new platform OS X. Trying to say the workflow of classic MacOS applies to a new platform 7 years later makes no more sense than saying that the AmigaOS workflow applies. Sure, some people might have been classic Mac users and then became OS X users, but plenty of OS X users were previously using other platforms. In fact, given how OS X seems to be a lot more popular than classic Mac OS was, I'd say this is true for most of them.

  • Re:THIS IS SLASHDOT! (Score:3, Interesting)

    by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Monday February 23, 2009 @09:28AM (#26956495) Homepage Journal

    I didn't suggest that Darwin by itself was a particularly good BSD for the desktop. I said that OS X is. Yes, that involves a bunch of stuff that isn't part of BSD, but that's true regardless of whether you're running Cocoa, NeXTstep, Gnome, KDE, or code written for raw Xlib:

    peter@enclave 112 % uname -a
    FreeBSD enclave.in.taronga.com 6.2-RELEASE FreeBSD 6.2-RELEASE #0: Fri Jan 12 10:40:27 UTC 2007 root@dessler.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386
    peter@enclave 113 % cd /usr/src
    peter@enclave 114 % find . -name '*xterm*'
    peter@enclave 115 %

    And "broad hardware support" isn't the same as "superior hardware support". Linux is a jack of all trades: it provides acceptable performance and consistent behavior on a variety of platforms, but a jack of all trades is master of none. For the hardware that OS X runs on, it provides better hardware support than Linux. That's true for just about any UNIX workstation vendor... they are simply able to focus their resources better. Which is the whole point of saying, as the OP did, that the Mac is a UNIX workstation.
