OpenSolaris Indiana Released 359
Lally Singh writes "The Linux-friendly OpenSolaris Indiana has been released! A new, modern package manager and all the goodies of Solaris: ZFS, DTrace, SMF, and Xen on a LiveCD designed for Linux users. 'Why use the OpenSolaris OS, you ask? It's pretty simple: you'll find it full of unique features like the new Image Packaging System (IPS), ZFS as the default filesystem, DTrace-enabled packages for extreme observability and performance tuning, and many, many more. We think you'll be quite happy to come by and take a look!'"
Re:ZFS simply rocks (Score:3, Informative)
Relegated to VMWare on x86 (Score:3, Informative)
Re:Still not sold (Score:5, Informative)
Don't want easy raid/storage expansion on your desktop? You don't want efficient storage?
DTrace doesn't offer me anything as I'm not a developer
You don't want to know how your system is performing [opensolaris.org] like never before? I'm not a developer but a sysadmin, and I use DTrace every day to tell those pesky developers that yes, it's actually THEIR CODE that's at fault and not the server I set up for them. It's also neat to be able to easily see, in real time, which process is using how much network bandwidth. That was difficult before.
SMF doesn't offer me anything I can't do with startup scripts
I don't like the complexity of SMF, but its self-healing for services already built for it is cool, as is its dependency checking.
IPS doesn't seem any better than deb or rpm
It's better than just RPM, but it's about the same as deb or yum. It's a big step forward for what was a commercial OS.
I can tell you haven't even tried Solaris 10, but give it a swig. Before Solaris 10 I (often rightly) wrote off Sun: why would I pay a premium for something FreeBSD can do for free while outperforming it? The hardware is cool (see CoolThreads processors... it's hyperthreading done right), it's affordable, and it's innovative. It may not be compelling enough to switch from Linux or whatever if all you use on a desktop is Firefox and Thunderbird, but there is actually some VERY cool stuff in there. Don't write it off. There's a reason FreeBSD is adopting a lot of these features.
Re:Relegated to VMWare on x86 (Score:3, Informative)
A bunch of free network drivers for Solaris can also be found here. [nifty.com]
Re:Difference between Indiana and Nexenta? (Score:4, Informative)
OpenSolaris is not necessarily a "distribution". Nexenta, SchilliX, etc. are "distributions" built on OpenSolaris. Project Indiana, as I understand it, is a distribution coming directly from the OpenSolaris project.
At first OpenSolaris wasn't supposed to have its own distribution, and now that it does, some people don't like it. Or they didn't like that it would be called OpenSolaris instead of Indiana, or something like that. I'm not clear on all the details.
Since Solaris will be built using OpenSolaris, Project Indiana is also kind of like an early access release of Solaris 11, without JDS.
Re:ZFS simply rocks (Score:2, Informative)
"Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, the Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license
Re:Image Packaging System? (Score:5, Informative)
Apart from that, you can also create partial images, which are a space you, as a normal user, can install packages into. These link back to the libraries already installed.
I'm sure some of these features are available in existing Linux packaging systems, but these are things the OpenSolaris community has wanted for a long time.
Apart from these features, IPS also does automatic snapshotting (using ZFS in the background), so you can revert your system to earlier snapshots.
All in all, a very effective packaging system.
Re:IP Issues with OpenSolaris? (Score:3, Informative)
Re:Image Packaging System? (Score:2, Informative)
Re:Still not sold (Score:5, Informative)
It's a common misconception that raid "prevents" data corruption.
RAID only protects you against (complete) hardware failures and "noisy" IO errors.
Consider:
You have bad data on disk, but the hard drive reads the bad data without error.
With parity (even assuming the parity is read on each read request, which would be a faulty assumption), RAID 5 has no way of telling which disk is bad, or whether the parity itself is bad.
Unlike RAID, ZFS has end-to-end checksumming, so it knows when the data on disk is bad, and it knows which copy is bad, too.
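To make the difference concrete, here's a toy sketch (plain Python, not real ZFS or RAID code; all names are made up for illustration) of why XOR parity alone can detect a mismatch but not locate it, while a per-block checksum can:

```python
import hashlib

# Toy model: three data blocks plus XOR parity (as in RAID 5), alongside a
# per-block checksum kept separately from the data (ZFS-style).

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor(xor(data[0], data[1]), data[2])
checksums = [hashlib.sha256(d).digest() for d in data]

# Silent corruption: disk 1 returns bad data without reporting an I/O error.
data[1] = b"BxBB"

# RAID-5 view: parity no longer matches, but every disk looks equally suspect.
parity_ok = xor(xor(data[0], data[1]), data[2]) == parity
print("parity consistent:", parity_ok)  # False -- but which disk is bad?

# Checksum view: the per-block checksum pinpoints the corrupt block...
bad = [i for i, d in enumerate(data) if hashlib.sha256(d).digest() != checksums[i]]
print("corrupt block(s):", bad)  # [1]

# ...so it can be reconstructed from parity and the known-good blocks.
data[1] = xor(xor(data[0], data[2]), parity)
print("repaired:", data[1])  # b'BBBB'
```

The point is that the checksum travels with the block pointer, so a bad read is detected and the redundant copy is used for repair, rather than the error going unnoticed.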
Unfortunately though, from what I've heard, ZFS isn't stable enough for production environments yet:
http://www.datacenterknowledge.com/archives/2008/Jan/15/joyent_backup_services_down_for_three_days.html [datacenterknowledge.com]
read these comments [prefetch.net]
Re:ZFS simply rocks (Score:3, Informative)
The source code [opensolaris.org] (which remaps Linux system calls to OpenSolaris and fudges inconsistencies)
Info on installing Debian [sun.com] (it's designed for Red Hat-based Linux, so it's slightly painful ... though possibly out of date).
Brand Z info [opensolaris.org]
Overview of linux support [opensolaris.org]
I haven't tried it, but there shouldn't be much overhead/performance loss.
Re:Still not sold (Score:5, Informative)
Re:Image Packaging System? (Score:3, Informative)
As it happens, it's actually not a bad tool. From SMC you can manage users, track workloads, install patches, and do dozens of other day-to-day functions for all the servers on your network. The "slowness" you're talking about is just Solaris, not the tool. It takes just shy of forever to get a fresh Solaris system up to date with the latest patches. (I swear, Sun releases WAY too many patches.) A secret for you: you don't actually have to install all of those patches. Pick the patches that apply to you and ignore the rest. (E.g., if you don't have a Sun Elite graphics card, why are you bothering to install patches for it? On occasion, some of the patches can even be mutually exclusive, depending on your configuration!)
Of course, all of this has absolutely NOTHING to do with the new IPS system. Standard Solaris 10 installs include the traditional Solaris packaging system, not the updated IPS system. So you should really give back that mod point that was so kindly provided to your rant.
Re:zfs (Score:2, Informative)
Re:Still not sold (Score:5, Informative)
These days we see a lot of performance-related calls being logged by customers
DTrace is a massive leap forward
I would really not write off Solaris; it's far from dead
Re:Finally... (Score:3, Informative)
Re:Difference between Indiana and Nexenta? (Score:4, Informative)
"Project Indiana" was just the codename for founding OpenSolaris
OpenSolaris = Bleeding-Edge Test Version of Solaris 11 (Think "Alpha")
Solaris Express = Snapshot of OpenSolaris found to be "relatively stable". (Think "Beta")
Solaris 10 = The full "retail" version, often updated with features seeping up from OpenSolaris, that needs to run fine and be perfectly stable on Big Iron.
Re:the true shame... (Score:3, Informative)
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
Re:Still not sold (Score:3, Informative)
http://www.datacenterknowledge.com/archives/2008/Jan/15/joyent_backup_services_down_for_three_days.html [datacenterknowledge.com]
read these comments [prefetch.net]
Re:the true shame... (Score:3, Informative)
I haven't played around with ZFS so I'm surprised to hear that it can't grow like you described. Hopefully that changes soon.
Do you know if you can add a mirrored vdev to a pool that contains a RAID-Z vdev?
Re:Image Packaging System? (Score:4, Informative)
You may believe what you're saying, but you're probably just confused. Don't worry about it. It happens to the best of us.
Re:installing now (Score:3, Informative)
More on it: http://en.wikipedia.org/wiki/Nexenta_OS [wikipedia.org]
Re:the true shame... (Score:3, Informative)
This is not exactly true. No matter what your pool config is, you can always grow it by adding any sort of top-level vdev to it. For example, if you have an N-drive raidz, you can add to it a 1-drive "mirror" (no redundancy, not recommended), or a 2-drive mirror, or a 3-drive raidz, or a 4-drive raidz2, etc.
I think what you tried to say is that it is not possible to convert an N-drive raidz/raidz2 array into an (N+1)-drive array. The reason Sun hasn't implemented this feature yet is that (1) ZFS provides more features than your average RAID layer, so they face implementation problems no one else has ever had to solve when dynamically reconfiguring the layout of a RAID array, and (2) ZFS mostly targets machines with lots of drives, where storage expansion is usually done by adding a bunch of disks at a time (which is possible, see above) rather than occasionally adding single drives.
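As a sketch of the first point, a pool's usable space is roughly the sum over its top-level vdevs. The model below is a hypothetical illustration (the function, vdev mix, and sizes are mine, not ZFS code):

```python
# Toy capacity model (illustration only, not ZFS): usable space is the sum
# over top-level vdevs of (drives - parity_drives) * drive_size.

def vdev_capacity(kind, drives, size_tb):
    parity = {"mirror": drives - 1, "raidz": 1, "raidz2": 2}[kind]
    return (drives - parity) * size_tb

# Start with a 4-drive raidz of 500 GB (0.5 TB) drives...
pool = [("raidz", 4, 0.5)]
# ...then grow the pool by adding a 2-drive mirror and a 4-drive raidz2,
# as the comment above describes.
pool += [("mirror", 2, 0.5), ("raidz2", 4, 0.5)]

total = sum(vdev_capacity(*v) for v in pool)
print(f"usable: {total} TB")  # 1.5 + 0.5 + 1.0 = 3.0 TB
```

Each added vdev contributes its own capacity immediately; what the model can't express is exactly the missing feature: turning the existing 4-drive raidz into a 5-drive one.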
Re:Hey! It's Debian! (Score:2, Informative)
Re:Still not sold - OpenSolaris in Peril (Score:3, Informative)
Re:Still not sold (Score:4, Informative)
Secondly, with RAIDZ (or RAID5) and 4x500GB, you wouldn't end up with 2TB of disk space -- you'd end up with 1.5TB due to the overhead of the parity data.
Thirdly, you don't have to replace all of the disk drives with RAIDZ to increase the amount of disk space dramatically. You seem to be thinking of RAID5, not RAIDZ. With RAIDZ replacing one of your 500GB disk drives with a new 2TB disk drive would indeed still leave you with only 1.5TB of disk space, due to the requirement for redundancy, but if you bought a pair of 2TB disk drives to replace two of your 500GB disk drives, you would increase your disk capacity from 1.5TB to 3TB, and if you just added the pair of 2TB disk drives to the pool as a mirror, as opposed to replacing existing drives, then you'd increase your disk capacity to 3.5TB.
Fourth, no one is forcing you to use redundancy with ZFS if you don't want to suffer the redundancy/reliability overhead. You can add non-redundant disk drives to a ZFS pool.
If you want extra reliability, you have to pay for it somehow.
|>oug
Re:What is the news ? (Score:3, Informative)
Actually, this is incorrect. Ext3 can support up to 16TB (there were some bugs for kernels older than 2.6.18 with really big filesystems, but even back then 8TB was no problem). The file size limit is 2TB, and with ext4 the limits for the filesystem and individual files will be 1024 petabytes, or 1 exabyte.
As far as requiring an fsck every X mounts, that's basically paranoia, because PC-class hardware is, well, PC-class hardware. You can disable that if you wish, and if you have an enterprise-grade RAID array with a robust controller and disk-level checksums, it would probably make sense to turn it off. On a typical whatever-is-the-cheapest-parts-that-fell-off-the-boat-from-Taiwan white box, where companies compete on price much more than on reliability or quality, a periodic fsck is a good idea. But it's always something you can turn off if you wish.
Re:Still not sold (Score:5, Informative)
most raid environments don't do checksumming at every step of the data write/read process.
most raid environments cannot detect silent corruption (bad cache, bad sector, flipped bit, etc.) once the data has been read or written.
most raid environments don't offer double parity.
most raid environments require that the entire raid array be initialized at once, wasting potentially hours waiting for the formatting/initializing to complete.
most raid environments using off-the-shelf SATA/PATA drives can potentially go bad, even with parity... if you were running a RAID 5 array with TB-size drives, there's a potential that the MTBE (mean time between errors) is reached while regenerating data on a replaced volume from parity, causing the entire array to be toasted.
All of these things are not issues with ZFS....
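The rebuild risk in that last point can be sketched numerically. The drive size, array width, and error rate below are assumed figures for illustration, not numbers from the comment:

```python
# Back-of-the-envelope (assumed numbers): probability of hitting at least one
# unrecoverable read error (URE) while rebuilding a RAID 5 array, where every
# remaining sector must be read to regenerate the replaced drive from parity.

ure_per_bit = 1e-14      # a common consumer SATA spec: one URE per 1e14 bits
drive_tb = 1.0           # 1 TB drives, as in the scenario above
surviving = 3            # a 4-drive RAID 5 with one drive being rebuilt

bits_read = surviving * drive_tb * 1e12 * 8
p_hit = 1 - (1 - ure_per_bit) ** bits_read
print(f"P(URE during rebuild) ~ {p_hit:.0%}")  # roughly 21% with these figures
```

Even one such error during a rebuild can fail the whole array on a plain RAID 5 controller, which is why double parity and end-to-end checksums matter at these capacities.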
ZFS is easily expandable, automatically realigns the data as you expand the pool, and can have multiple sub-mount points (mounted anywhere) with different attributes - like compression/sharing/extended permissions/iSCSI - and more on the way, like encryption, multiple compression algorithms, etc....
I've played/worked with ZFS now for over 2 years and have never lost a single bit of data - even though I've tried...
Build your RAIDZ pool on 20 drives, in 2 disk expansion units attached to 2 channels of a single SCSI card (10 drives per channel)... now shut the box down, remove all the drives, move them around between units, add an additional SCSI card to the box, split the disks up between the SCSI cards so they are now 5 per channel, take one drive back out and erase it... hold onto it for later...
Bring the box back up... the pool will come back online without problems, running degraded as one drive is missing.
Now put the erased drive back in and issue a resilver command, wait a while (not as long as a standard RAID controller would take) and voila - all the data that was stored on that erased drive is back in place, and the pool is no longer running in degraded mode.
Try any of that with a standard raid controller and your data is f0rked!
Re:Unix is dead (Score:3, Informative)
You get the Blackberry answer because I'm remote.
stupid enough? I don't know about the characterization.
They had the rights to the SVR4 code that Solaris is based on: to use it, to develop their own OS based on it, and to sell it under trade secret and copyright protection, but not to make it open. They then bought that right for a song from SCO, because at that moment the latter needed a cash infusion to continue their jihad against Linux.
The judge in the SCO v. Novell case ruled last August that SCO does not own the copyrights to SVR4, and it follows that it had no right to sell the right to make it open. Novell owns that right and reserved it in a document called the APA. You can read about it on groklaw [groklaw.net].
The reason Novell reserved that right, and why the rights to unix were split so badly as to leave the OS irretrievably dead, had to do with Ransom Love's hubris. Hubris is the arrogance of pride.
Mr. Love was president and co-founder of one of the earliest commercial Linux companies, Caldera (not to be confused with the old Santa Cruz Operation). When his IPO went bizarrely huge for no good reason, its treasury stock was worth several more digits than he was used to dealing with. Paid-in equity grew to half a billion, and they never sold the majority of the treasury stock. Market cap was several times that early on. Of course he bought all the toys the bubble millionaires like and threw huge company parties, but he was also an old-school geek like you find here on Slashdot now and then.
It happened that at that precise moment, the company that had purchased the rights, code, and business of unix in the Bell Labs breakup (but not the trademark, oddly enough), Novell, was finding no success with unix and needed a cash infusion to retake ownership of the network, which had been their genesis. (Some will get the irony of this!) Being an old-school geek, Ransom wanted to "buy unix" as a trophy for having built a successful Linux business.
Unfortunately he didn't have enough to buy the whole thing, and since he was still wet behind the ears in corporate governance and his company had never turned a quarterly profit, Novell insisted on terms. Here's where the hubris comes in. He bought the right to market unix for 5% of the gross, with the rest going to Novell, plus the right to develop a new and better unix he could keep all the profits from. For this he paid in company stock that I hope Novell sold right away, because within a year it was nearly worthless. I think he really believed he could mix in GNU/Linux, come out with something like this OpenSolaris, buy up the rest of the rights, and take over the server market. It didn't work out because he didn't have the rights to open it fully enough to attract open developers, his IPO money bled out too fast, and eventually his company was bought out by an investment group (this one is the SCO we know and loathe).
By making the attempt he fractured the rights to unix in such a way that the OS languished for over a decade, much to the glee of Microsoft, which spent that decade taking ownership of the desktop network client and nearly half of servers. In software a decade is a very long time.
I hope this explains it well enough. My thumbs hurt now.
Re:Still not sold (Score:2, Informative)
Linux 2.6 Kernel AIO [sourceforge.net] (and its flaws)
Google is your friend!
(BTW, it was 2005 when I needed it; it killed the performance of my code because it was executed synchronously instead of asynchronously)
Re:Want to smash a harddrive like this guy (Score:1, Informative)
a series of disks
a series of disks in a stripe
a series of disks in a stripe with single parity
a series of disks in a stripe with double parity.
please read the man page:
uname -a
SunOS opensolaris 5.11 snv_86 i86pc i386 i86pc
cat
OpenSolaris 2008.05 snv_86 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 26 April 2008
Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:
disk
A block device, typically located under "/dev/dsk". ZFS can use individual slices or partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under "/dev/dsk"). A whole disk can be specified by omitting the slice or partition designation. For example, "c0t0d0" is equivalent to "/dev/dsk/c0t0d0s2". When given a whole disk, ZFS automatically labels the disk, if necessary.
file
A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path.
mirror
A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices failing before data integrity is compromised.
raidz
raidz1
raidz2
A variation on RAID-5 that allows for better distribution of parity and eliminates the "RAID-5 write hole" (in which data and parity become inconsistent after a power loss). Data and parity is striped across all disks within a raidz group.
A raidz group can have either single- or double-parity, meaning that the raidz group can sustain one or two failures, respectively, without losing any data. The raidz1 vdev type specifies a single-parity raidz group and the raidz2 vdev type specifies a double-parity raidz group. The raidz vdev type is an alias for raidz1.
A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.
spare
A special pseudo-vdev which keeps track of available hot spares for a pool. For more information, see the "Hot Spares" section.
log A separate intent log device. If more than one log device is specified, then writes are load-balanced between devices. Log devices can be mirrored. However, raidz and raidz2 are not supported for the intent log. For more information, see the "Intent Log" section.
cache
A device used to cache storage pool data. A cache device cannot be mirrored or part of a raidz or raidz2 configuration. For more information, see the "Cache Devices" section.
Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.
Meaning no RAID 10.
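The capacity rules quoted from the man page can be turned into a tiny calculator. This is an illustrative sketch in Python; the function and example pool are mine, not part of the man page:

```python
# Capacity per the man page quoted above: a mirror of N disks of size X holds
# X bytes; a raidz group with N disks and P parity disks holds about
# (N - P) * X bytes.

def vdev_capacity_gb(kind, n_disks, disk_gb):
    if kind == "mirror":
        return disk_gb                    # holds X, survives N-1 failures
    if kind in ("raidz", "raidz1"):
        return (n_disks - 1) * disk_gb    # single parity, survives 1 failure
    if kind == "raidz2":
        return (n_disks - 2) * disk_gb    # double parity, survives 2 failures
    raise ValueError(f"unknown vdev type: {kind}")

# A pool of two 2-way mirrors of 500 GB drives striped together
# (the RAID 10 shape, built from top-level mirror vdevs):
tank = [("mirror", 2, 500), ("mirror", 2, 500)]
print(sum(vdev_capacity_gb(*v) for v in tank), "GB usable")  # 1000 GB usable
```

Note that striping across multiple top-level mirror vdevs is how ZFS gets the RAID 10 behavior, even though a mirror vdev itself can't contain another mirror.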
Re:Still not sold (Score:2, Informative)
I checked
I dropped the drive off the array, and reassembled it, and all my files were fine afterward. I am currently mirroring everything to a 1 TB external drive.
Re:Want to smash a harddrive like this guy (Score:3, Informative)
zpool create tank mirror c0t0d0 c0t1d0 mirror c2t0d0 c2t1d0 :-)
Re:Still not sold (Score:4, Informative)
|>oug