OpenBSD 4.9 Released 137
An anonymous reader writes "The release of OpenBSD 4.9 has been announced. New highlights since 4.8 include: NTFS enabled by default (read-only), the vmt(4) driver by default for VMware tools, SMP kernels that can now boot on machines with up to 64 cores, support for the AES-NI instructions found in recent Intel processors, improvements in suspend and resume, OpenSSH 5.8, MySQL 5.1.54, LibreOffice 3.3.0.4, and bug fixes."
Also in BSD news, an anonymous reader writes "DragonFly BSD 2.10 has been released! The latest release brings data deduplication (online and at garbage-collection time) to the HAMMER file system. Capping off years of work, the MP lock is no longer the main point of contention in multiprocessor systems. It also brings a new version of the pf packet filter, support for 63 CPUs and 512 GB of RAM and switches the system compiler to gcc 4.4."
Why is NTFS read only. (Score:5, Funny)
Re: (Score:2, Insightful)
If it's so easy, and you seem to care, can we expect your diff on misc@ in the next few days?
Re: (Score:3, Insightful)
They can post, but TDR will never accept it. NEVER!!11 (insert maniac laughter)
OpenBSD is known for things like throwing away wireless drivers, for example.
Re: (Score:1)
Throwing out closed source binary blobs you mean. Yes.
Re: (Score:1)
Where can I buy a 63-core CPU?
Re:Why is NTFS read only. (Score:4, Informative)
Eh? The last wireless fiasco I remember was one of the Linux wireless guys stealing OpenBSD's reverse-engineered code and re-releasing it as their own. I guess you can say they threw away encumbered code as they reverse engineered and rewrote it.
Re: (Score:1)
Re: (Score:3)
Wrong.
http://www.undeadly.org/cgi?action=article&sid=20070829001634 [undeadly.org]
Re: (Score:2)
No, what happened was the Linux guys took the BSD-licensed code (not a problem; the BSD license was designed for this), then locked it up with the GPL (they even had the gall to replace the BSD license with the GPL in the header files).
The end result is because of this, all changes to the Linux drive
Re: (Score:2)
*WRONG* on so many fronts. You were provided a URL below, so you can read it. But, and here is the important part: *YOU CANNOT TAKE SOMEONE ELSE'S CODE AND RE-LICENSE IT AS YOUR OWN. YOU DO NOT HAVE THAT RIGHT.*
*YOUR RIGHT* is to USE the code under a BSD, GPL or another license, as licensed to you by the copyright owner.
Re:Why is NTFS read only. (Score:4, Informative)
You do realize that NTFS is completely closed source, right? All the work on it has been done through reverse engineering.
Re:Why is NTFS read only. (Score:4, Informative)
NTFS is only writable on Linux through NTFS-3G; the write support in the kernel only works if the file size doesn't change.
Re: (Score:3)
And even still, writing with NTFS-3G isn't 100% perfect. It is progressing very nicely but it's far from being bulletproof.
Re: (Score:3)
Re: (Score:1)
Re: (Score:1)
Re:Why is NTFS read only. (Score:5, Informative)
Add to that a few other fun things
1. Multiple versions of NTFS with subtle changes.
2. It's a complex file system with lots of features, some of which aren't even used by Windows, but you still have to handle the on-disk data correctly.
3. The security scheme does not cleanly map onto UNIX-style rules, even with ACL support and such.
NTFS is by no means avant guard, but it's also by no means simple, and without documentation, figuring out its internals completely and correctly is a BIG job. Now, why they can't glean a lot of that from the Linux source I don't know. I know they can't use the Linux source because the GPL is incompatible with the BSD license; maybe there is a contamination concern.
Re:Why is NTFS read only. (Score:4, Insightful)
Contamination isn't normally an issue for kernel code; they can always cram it in its own corner of the code and not include it in binaries by default.
Without being involved in the discussions it's hard to say, but I've personally found Linux filesystem code to be less than reliable. There's also the issue that it would have to pass their auditing to be included in the base install; there's a reason why they have so few base exploits.
Re:Why is NTFS read only. (Score:5, Funny)
Just like your knowledge of French, it would seem.
Re: (Score:2)
wha? he was exactly right. NTFS isn't innovative, it's deliberately not cross-compatible.
Re: (Score:1)
I think he's a foreign-language spelling troll.
Are you free, Mrs Slocombe? (Score:2)
I'm just someone who gets pissed off add nauseum at people whom try to sound fancy and get it wrong.
Do it right or don't do it at all.
Re: (Score:2)
what in the world are you smoking? People who've looked at disk structures say that NTFS is just like VMS fs - you know, the OS that Dave Cutler wrote at |D|I|G|I|T|A|L|
Re: (Score:2)
You, sir, have just been whooshed.
Re: (Score:1)
http://en.wikipedia.org/wiki/Avant-garde [wikipedia.org]
Re: (Score:3)
Those crafty French persons not only provide clichéd phrases that we're expected to adopt as binary blobs, they deliberately obfuscate them by using letters that aren't supported in normal open-source ASCII.
Re: (Score:1)
And football, since Jason Avant [wikipedia.org] plays wide receiver, not guard.
Re:Why is NTFS read only. (Score:5, Informative)
Those are important items, especially #1. There are a lot more which make life hell for someone trying to get NTFS to work fully as a supported filesystem for a UNIX based OS. A few more:
4: Alternate data streams. It is common for malware to add an ADS onto a file, a directory, a junction point, or even the C: drive object itself. Without a dedicated utility that sniffs these out, they are essentially invisible.
5: Like #1 above, NTFS changes in undocumented [1] ways. For example, EFS changed to add different encryption algorithms between Windows XP and Windows XP Service Pack 3. So, not knowing that may bring someone a world of hurt.
6: Similar to #3, NTFS's ACLs are hard to reimplement in the UNIX world. U/G/O permissions can be mapped (Cygwin does this), but full NTFS ACL semantics don't translate cleanly.
7: For a filesystem to be usable as a production one, it needs a filesystem checking utility that can go through the whole filesystem and check/repair integrity on every part of it, whether mostly unimplemented/unused items (transactional NTFS) or features of the filesystem (NTFS compressed files, EFS), among many other things. Yes, there are ways to run Windows's chkdsk.exe utility, but that is a hack at best.
One of the biggest problems with operating systems today is that there are no compatible filesystems beyond FAT and FAT32 (perhaps UFS). Every other candidate is ruled out either by patent encumbrance or by its license.
I wonder how easy life would be if we had a standard filesystem that could replace the LVM (similar to ZFS) and offer modern features (deduplication, encryption, 64-bit checksumming [2], compression at various levels, snapshotting [3]). On an LVM level, it would be nice to have mountable disk images similar to OS X's sparse bundles: if something changes on the encrypted drive, only a few bands change, as opposed to having to back up the whole file.
Life would be easier if every OS out there had a common filesystem with modern features. A good example of how useful this would be is antivirus scanning: unpresent a LUN from a Windows server, scan it on a Solaris box for malware, then re-present it.
[1]: Undocumented unless you are elite enough to have the MS source code handed to you; all work on the filesystem is reverse engineering.
[2]: Backup programs would have it easy and not rely on dates or archive bits... just look for files where the checksum has changed and back those up just like the -c option in rsync.
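Footnote [2]'s idea can be sketched in a few lines of shell (a toy illustration only: the scratch filenames are made up, and sha256sum stands in for whatever checksum the filesystem would maintain per-file):

```shell
# Back up by content checksum rather than by mtime or archive bit
# (the same idea as rsync's -c option). Scratch files for illustration.
dir=$(mktemp -d)
cd "$dir"
printf 'alpha\n' > a.txt
printf 'beta\n'  > b.txt
sha256sum a.txt b.txt > manifest.old     # baseline checksum manifest

printf 'beta-changed\n' > b.txt          # only b.txt's content actually changes

sha256sum a.txt b.txt > manifest.new
# the files worth backing up are exactly those whose checksum moved
changed=$(diff manifest.old manifest.new | awk '/^>/ { print $3 }')
echo "changed: $changed"
```

Touching a.txt's timestamp without changing its bytes would leave the manifest, and therefore the backup set, untouched; that's the whole advantage over date- or archive-bit-based schemes.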
Stuff NTFS hides in Alternate Data Streams (Score:2)
I briefly used Kaspersky anti-virus, and now lots of my files have :KAVICHS: or something like that tacked onto them as alternate data streams. When I copy those files to devices that don't support them (e.g. memory sticks using FATxx), Windows has to pop up dialog boxes to warn me that it'll be unable to copy the extra baggage. [Insert snarky comments here...]
Re: (Score:2)
if ReactOS would adopt a BSD filesystem-to-rule-them-all then ntfs goes the way of the dodo. :-)
Some clever soul then slipstreams that code into your Win8 installer as a root fs driver and world domination ensues.
Re: (Score:2)
Why is it read only by default? To frustrate users is the only reason I can come up with.
If a driver is known to be potentially flaky and may put data at risk, I think having the user knowingly enable RW with that caveat is safe and decent.
Re: (Score:3)
Re: (Score:3)
a) yes, it is hard to make a proper (*) file system "driver"
b) it's not made any easier by the file system being closed source
(*) proper here means: will under no circumstances behave in a way that makes you loose data through silent corruption, as opposed to: will not normally loose data obviously after using it for a few hours.
Re: (Score:2)
You know the source for that OS was leaked. I'm not saying developers should just outright copy it. Just look at the source off the record and make your own implementation of it.
Re: (Score:2)
Can't you just run a script to tighten up that loose data? It's not like you would *lose* data, would you?
Re: (Score:2)
In FreeBSD you can enable write support and recompile your kernel; not sure about OpenBSD. I always wondered why default kernels in BSD and Linux don't just enable all well-supported filesystems to be available rw by default. Let the burden be on people who want the two-second advantage in booting or something, rather than on those of us who are trying to do something as basic as access our data.
Re: (Score:2)
There's no need to recompile the kernel; it's a loadable module. What's more, mount_ntfs will autoload the module automatically if necessary from an fstab entry.
Additionally, there's no need to "enable write support". The limited writing ntfs.ko supports is always on provided you didn't do a read-only mount.
http://www.freebsd.org/cgi/man.cgi?query=mount_ntfs&sektion=8 [freebsd.org]
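To make that concrete, a sketch of what such an fstab entry might look like (the device name /dev/ada1s1 is an assumption; substitute your own NTFS slice):

```
# Hypothetical /etc/fstab line on FreeBSD: mount_ntfs autoloads ntfs.ko.
# "ro" forces read-only; dropping it leaves the limited write support on.
/dev/ada1s1   /mnt/windows   ntfs   ro   0   0
```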
Re: (Score:2)
Re:Why is NTFS read only. (Score:5, Insightful)
In the specific case of OpenBSD, I suspect that the read-only support is because the OpenBSD team has very low tolerance for what they see as crap. If they can't support something the way that they want to, they can and will just toss it (see the Adaptec RAID driver case, or some wireless chipsets). They don't do binaries, they don't do NDAs, they don't do blobs. They also don't like software they consider to be of inadequate quality. Thus, since the state of full NTFS support is a bit dodgy, it is entirely in character for them to drop it.
More broadly, NTFS read/write isn't really something that there is a strong incentive in the OSS world to polish to a high sheen. NTFS-3G is pretty much good enough for dual booters and rescue disks. NTFS doesn't have any points of superiority strong enough that building top-notch reverse-engineered support would be competitive with spending the same effort implementing a non-secret design. Also, for the sorts of purposes that pay the bills for a lot of Linux development, NTFS support is largely irrelevant. You don't dual-boot servers, and any halfway serious network setup is going to either use SMB/NFS (which makes the local filesystem irrelevant to all other hosts), or some filesystem with concurrent access support or other esoteric features that isn't NTFS.
NTFS R/W is really just a convenience feature for sneakernet and dual-boot scenarios. Neither of those really pay for enough development to get a fully baked reverse engineering of a (quite complex) filesystem.
Re: (Score:1)
This post makes me ashamed to troll slashdot. I hope you get run over by a bus full of Jamaican tourists before you can post another... blind justice and all that...
Re:missing some key features... (Score:5, Informative)
wake me when they have:
1) start/stop scripts, so I don't have to ps|grep|kill|...crap, what were those flags for the daemon again... to manage running processes or daemons
Well, for this one:
New rc.d(8) [openbsd.org] for starting, stopping and reconfiguring package daemons:
The rc.subr(8) framework allows for easy creation of rc scripts. This framework is still evolving.
Only a handful of packages have migrated for now.
rc.local can still be used instead of or in addition to rc.d(8).
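For the curious, scripts are tiny under this framework. A sketch of the shape (the daemon name and path are made-up examples, and this only runs on OpenBSD where rc.subr(8) is present):

```
#!/bin/ksh
#
# Hypothetical rc.d(8) script for an imaginary "exampled" daemon.
daemon="/usr/local/sbin/exampled"

. /etc/rc.d/rc.subr

rc_cmd $1
```

With that in place, `/etc/rc.d/exampled start|stop|restart|check` replaces the ps|grep|kill dance.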
Re: (Score:2)
Cripes, they could have just copied from NetBSD ten years ago.
Re:missing some key features... (Score:4, Insightful)
Re: (Score:2)
But every application has its own interpretation of signals. For some, a HUP may reload the configuration or force an update. For others this is not so. For some there may be a safe way to request a shutdown. For others, no. I have written start/stop scripts for daemons I wrote. I think it is a lot more consistent that way, and I bet many OpenBSD users have improvised their own rc.d mechanisms.
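That ambiguity is easy to demonstrate: a signal means whatever the receiving process decides it means. A toy sketch in plain POSIX shell (the "daemon" here is just the script itself, and the convention it picks is arbitrary):

```shell
#!/bin/sh
# This stand-in daemon chooses to treat SIGHUP as "reload your config".
# Another daemon could just as legitimately treat the same signal as
# "terminate", which is exactly why generic start/stop logic is hard.
reloads=0
trap 'reloads=$((reloads + 1))' HUP

kill -HUP $$               # what a naive control script might send
echo "reloads=$reloads"    # still running; the HUP was taken as a reload
```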
Re: (Score:3)
OpenBSD primarily gets used on boxes with a very focused purpose, so there are just a few daemons to manage, and I'd rather have a single file to control them than runlevels and rc.d.
Re: (Score:1)
Anyway, if someone wants all that rc.d/ directory shit, why don't they just use NetBSD or Debian or whatever...
OpenBSD is one of the few (maybe the only one left?) that tries to keep /etc simple and uncluttered.
Fuck all that init.d bullshit! I'll use pkill and read the daemon manpage if I have to (usually just checking rc.conf is enough).
Re: (Score:3)
Re: (Score:3)
BSD.ru confirms Netcraft is dead!!!!!
Re: (Score:2)
Some minor bugfixes get their own news article here, but two major releases of BSD based OSes are bundled together in the same news article?! WTF, dude, what's next, the /. BSD news digest posted once a year?
Because that never happens on Linux [slashdot.org].
Re: (Score:2)
Re: (Score:3)
Couldn't it be said that OpenBSD and DragonflyBSD are just different distributions of BSD?
It couldn't, because they have very different kernels and base systems (source-code-wise). They descended from the same codebase, yes, but that was a very long time ago.
Slack and Ubuntu use the same Linux kernel, albeit with a certain combination of patches in the case of Ubuntu.
Re: (Score:1)
Why? Just give me a simple OS that can make it into the mainstream. Something I can program, and script, and alter to my taste. This has become too unwieldy and way too over done.
OpenBSD is about as simple as you can get.
Stop me if you've heard this one (Score:1)
Re: (Score:1)
Stop, HAMMER time!
Re: (Score:2)
63 CPUs? (Score:2)
Re: (Score:1)
63 CPUs is enough.
also 640GB would be enough
Re: (Score:2)
Re: (Score:2)
AMD is releasing a 20-core CPU next year; let's hope they don't sell a quad-socket mobo, because this OS won't support it.
Re: (Score:2)
I have CentOS 5.5 on it because that's what the commercial software that it has to run likes and they won't support it on anything newer. That's a pity because I can't get the most recent version of blender to build on the thing to play with out of hours.
Re:63 CPUs? (Score:4, Interesting)
The basic mobo support for large N-way configurations has gotten cheap. Power management still has a long ways to go on these beasts, though. Our monster.dragonflybsd.org box is using the quad-socket supermicro mobo with four 12-core opterons (48 cores) and 64G of ram, and I think all told cost around $8000 or so.
The limitation for these sorts of boxes is basically just power now. The 12-core opterons are effectively limited to 2GHz due to power issues, and these big beasts are really only high performers in environments where all the cores can be used concurrently with very little conflict.
By comparison, a PhenomII x 6 or an Intel I7 runs 6 cores for the PhenomII and 4 x 2 cores for the I7 but automatically boosts the base ~3.2GHz clock to almost 4 GHz when some of the cores are idle. These single chip solutions also have a MUCH faster path to memory than multi-chip solutions, particularly the Intel Sandybridge cpus, and much faster bus locked instructions. So if your application is only effectively using ~4-6 cores concurrently it will tend to run at least twice as fast as it would on a high-core-count monster.
That means that for most general server use a single-chip multi-core solution is what you want. The latest single-chip mobos for Intel and AMD support 16G-32G of ram and 5 or more SATA-III (6Gb/s) ports. Throw in a small SSD and you are suddenly able to push 400MBytes/sec+ in random-accessed file bandwidth out your network using just ONE of those SATA-III ports. That's in a desktop configuration! So today's modern desktop mobos are equivalent to last year's server mobos at 30-50% the power cost.
A modern high-end configuration as above eats ~60W idle, whereas the absolute minimum power draw on a 48-core Supermicro box w/ 64G of ram (the ram eating most of the power) is ~250-300W. Big difference.
So lots of cores is not necessarily going to be the best solution. In fact, probably the only really good fit for a 48+ core box is going to be for virtualization purposes.
-Matt
Re: (Score:2)
Not necessarily, there are plenty of other roles.
There is a lot of existing software for geophysical applications that could scale to a cluster of many 48-core machines and get all those cores running at 100% to get the job done quickly, let alone a single machine. Many tasks working on seismic data are embarrassingly parallel since
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
The two sides almost came to blows over whether to allocate four bits or eight bits in the descriptor for the CPU ID. Finally, after weeks of infighting they compromised at six bits, with one reserved value.
from "Difficult: the Inside Story of OpenBSD" (unpublished)
Re: (Score:3)
Atomic ops are limited to 64-bits for the most part (though maybe 128 bits w/fp insns we can't really depend on that). There are several subsystems in the kernel which rely on atomic ops to test and manipulate cpu masks which would have to be reformulated.
The main issue there is one of performance. We don't want to have to use a spinlock for cases where cmpxchg solves the problem because spinlock collisions can get VERY expensive once you have more than 8 cpus in contention.
Similarly the stolen bit for th
Using 1000 watts of power for DRAM? (Score:2)
1000 watts is about what you need for a toaster. And the usual operating system for various toasters was always BSD, so what's not to like there?
Re: (Score:3)
Well, you don't run your toaster 24x7. In fact, most residental homes use less than 1000W of power averaged 24x7 for the entire home.
Running 1000W 24x7 is ~$180-$240/month in electricty depending on where you live. Commercial power isn't much cheaper (and due to improvements in density most colo facilities now also charge for power or include only a few amps in the lowest-tier of service).
It adds up fairly quickly. The DragonFly project has 7 core production machines. Six of those in my machine room tog
Re: (Score:1)
Possibly Obligatory (Score:2)
Re: (Score:1)
OpenBSD 4.9 released... (Score:2)
At the risk of being modded flamebait, etc (Score:2)
Back in the day - or rather the last time I was paying a lot of attention to /. - all BSD articles were flooded by that "BSD is dying... blah, blah confirms it" story (I believe the kids call it a "meme" but I am too old for that).
Now they are not here: is this because they are blocked before they get posted or because it was one obsessive who died/finally had a beer/discovered masturbation or because the idea just, errr, died?
Really interested to know what the answer to this one is.
Re:At the risk of being modded flamebait, etc (Score:5, Funny)
Re: (Score:2)
Re: (Score:1)
Absolutely! And Solaris's ZFS is a superb filesystem. Definitely superior to HAMMER in a number of ways. Miles ahead of anything when its integrated volume management is considered.
But HAMMER offers a few interesting design points to contrast with ZFS. HAMMER was born with support for what would be called 'bprewrite' in ZFS -- dedup for example can be carried out just fine in the garbage collection phase in hammer, something which has no analogue in ZFS. HAMMER's performance under write streams is solid, si
Re:At the risk of being modded flamebait, etc (Score:5, Informative)
ZFS has a large team of people behind it and resources that I don't have. That said, HAMMER wasn't really designed to try to compete against it. HAMMER was designed to solve similar problems, but it wasn't designed to replace RAID as ZFS was. But ZFS is no panacea, and anyone who uses it can tell you that. The IP is now owned by Oracle, and the license isn't truly open-source. ZFS itself is an extremely heavy-weight filesystem and essentially requires its ARC cache and relatively compatible workloads to work efficiently... and a veritable ton of memory.
HAMMER has a tiny footprint by comparison, gives you fine-grained automatic snapshots, and most importantly gives you near real-time queueless mirroring streams that makes creating backup topologies painless. Among many other features. Frankly ZFS might be the filesystem of choice if you are running dozens of disks but HAMMER is a much better fit otherwise.
People scream the RAID mantra all the time but the vast majority of people in the open-source world don't actually need massive RAID arrays to put together a reliable service. Often it takes just one 2TB HD and one 80G SSD x a few servers and in DragonFly HAMMER + swapcache fits that bill extremely well.
Our ultimate goal is real-time multi-master clustering. HAMMER doesn't get us quite there, primarily owing to the topology mismatch between HAMMER's B-Tree and OS filesystem cache topologies (mostly the namecache), but as the work progresses it will eventually achieve that.
In any case, there's a huge difference between the people who do the actual design and implementation of these filesystems and the people who merely use them. Our goals as designers and programmers are not necessarily going to match the goals of the typical end-user, who wants a magical black box that does everything under the sun with maximal performance in all respects and works without having to lift a finger. ZFS can't even achieve that!
-Matt
Re: (Score:2)
I still just want dm-cache [fiu.edu] so that I can have block-level caching of network filesystems to local, or from slow storage like optical to fast scratch. Why does Linux still not have this? :(
Re: (Score:2)
Maybe if you put it into dragonfly then it will shame someone into putting it in Linux. It allegedly worked very well last time it was updated :)
Re: (Score:1)
Re: (Score:2)
Only 64 cores? (Score:1)
Re: (Score:1)
I don't want to be a troll, but seriously, what is this, 1998? At least they'll have cloud computing, I guess...
October 3, 2002: NetBSD: i386 -current Adds SMP Support [kerneltrap.org]
Not that long ago...
Dragonfly is looking interesting (Score:1)
What's the point of dragonfly again? (Score:1)
Yes I'm a FreeBSD person, and no, I'm asking an honest question. I've read the info on DragonFly, but really, where is the result? It split at about the same point that I started on FreeBSD with 5.0; he split it using the older 4.x base. The guy that forked DragonFly wanted to take a different approach to multiprocessing. I have to assume he knew something of what he was talking about. The project is mature enough; it's had years. Multicore processors are common now. So where are the benchmarks proving hi
Re: (Score:3)
vmt(4) and VMWare official support (Score:2)
Hopefully VMware will take notice of the good work being put into vmt(4) and add official support for OpenBSD in ESX.
been watching DragonFly for five years (Score:2)
Cheap replica watch site worth buying? (Score:1)
Re: (Score:1)
IPsec stack audit was performed, resulting in:
* Several potential security problems have been identified and fixed.
* ARC4 based PRNG code was audited and revamped.
* New explicit_bzero kernel function was introduced to prevent a compiler from optimizing bzero calls away.
-- http://www.openbsd.org/49.html