The Downsides of Software as a Service
JustinBrock writes "Dvorak's article yesterday, entitled Don't Trust the Servers, argues that the danger of software as a service was highlighted when 'the WGA [Windows Genuine Advantage] server outage hit on Friday evening and was finally repaired on Saturday. It was down for 19 long hours.' The whole fiasco raises an interesting perspective on the software-as-a-service 'fetish'. Dvorak highlights it with a hypothetical: what if the timeline were reversed, and we were moving from online apps to the desktop? Hear his prophecy of the marketing: 'You can imagine the advertising push. "Now control your own data!" "Faster processing power now." "Cheaper!" "Everything at your fingertips." "No need to worry about network outages." "Faster, cheaper, more reliable." On and on. I can almost hear the marketing types brag about how much better "shrink wrap" software is than the flaky online apps. The best line for the emergence of the desktop computer in a reverse timeline would be "It's about time!"'"
Re:Here's a few more - readable this time... (Score:5, Informative)
4) IT maintenance - while not a big issue for most of us that post here, for all those mere mortals, keeping the software up to date or upgrading to a new version can be a major headache. With software as a service, it's done for you.
5) Accessibility - what if you're outside the firewall and can't get through the VPN? Again, a bigger deal for mere mortals that
6) Less start-up risk. If I can start with a couple of seats at $50/seat a month versus having to shell out hundreds or thousands of dollars per desktop copy, it's a better deal (well, legally, anyway).
7) Generally, software-as-a-service providers have better backup/recovery processes than the average SMB (think law firm, not software house).
There are lots more reasons of varying importance. I think the parent's point #1 is probably the most relevant of all, though.
Re:When is the last time Dvorak... (Score:3, Informative)
It all comes down to what your needs are and if you can live with the possible negatives of such a hosted application.
Re:When is the last time Dvorak... (Score:2, Informative)
Performance can be tricky in such a scenario, as you're abusing the system bus a bit harder, but I'd rather have a slightly slower array than a sudden-death array.
One thing is certain: RAID controller manufacturers are well aware that their devices are the point of failure, and it suits them, because hardcore sysadmins will set up redundant controllers, which means more money for the vendor. It's not uncommon to keep a few spare RAID cards in a drawer, just in case, because you know damn well that when one of them fries, you won't be able to buy that model from the vendor anymore and your data will be trapped in limbo.
Re:When is the last time Dvorak... (Score:1, Informative)
No... it cannot, as running something locally is NOT software as a service! Software AS A SERVICE means it is being provided AS A SERVICE and not as a physical piece of equipment inside your office! If you have the server in-house, it is no longer software as a service, regardless of what the maintenance contract or server-ownership arrangements are. If the server is in-house, it is just software running locally, even if you rent the software and get free updates, and even if other people own and operate the server for you...
Re:This time it's extra stupid (Score:2, Informative)
I have to agree (Score:3, Informative)
Right now, dealing with a company's oversubscribed servers and undersubscribed bandwidth makes response time as bad as it used to be when green-screen terminals were attached to mainframes.
The rule used to be that response time should be no longer than two to four seconds. How often do you wait considerably more than four seconds for a Web server to respond?
Granted, the four second rule was more or less intended for more "interactive" activities (like data entry) than mere Web browsing. But the whole SaaS and Web 2.0 stuff is intended for exactly that - interaction with applications over the Web.
And right now, Web response time just doesn't cut it.
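The two-to-four-second rule above is easy to check mechanically. Here's a minimal sketch, with the handlers simulated by `time.sleep` standing in for real HTTP calls (the function names and budget constant are my own, not from any real benchmark suite):

```python
import time

RESPONSE_BUDGET_S = 4.0  # the old "two to four seconds" rule of thumb


def timed(fn, budget=RESPONSE_BUDGET_S):
    """Run fn and report (result, elapsed_seconds, within_budget)."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= budget


# Simulated "servers" standing in for real network requests.
def fast_handler():
    time.sleep(0.05)   # a responsive backend
    return "ok"


def slow_handler():
    time.sleep(0.2)    # imagine this were 30 s on an oversubscribed server
    return "ok"


_, _, ok_fast = timed(fast_handler)              # well within four seconds
_, _, ok_slow = timed(slow_handler, budget=0.1)  # fails a tighter budget
```

In practice you would wrap an actual request (and its timeout) in `timed` and log every response that blows the budget, which makes the "doesn't cut it" complaint measurable instead of anecdotal.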
When the telcos get their heads out of their butts - or someone does it for them - and we get 100Mbps or more to the desktop, AND the people who offer SaaS learn what the words "load balancing" mean, maybe then it will be viable.
Right now, every time I go to Superiorpics.com for my babe picture downloads and click on a link to Shareavenue, I'm lucky if they respond in under thirty seconds to a minute. And twice this week they've been completely down. Not to mention the WGA outage which started this discussion.
It's ridiculous.
Add to that the mysterious ability of data transmitted over the Net to literally CRASH an application such as a browser. I've never understood that. Most desktop applications read files and other data and have mechanisms in place to treat that data AS data, no matter how malformed it may be. If it's wrong, they complain without crashing (usually - there are numerous exceptions, of course.) But when we go to network apps, somehow all that goes out the window - and crashes are regular. Maybe it's because network protocols have states and when data is lost, the states get corrupted and the network apps aren't coded to deal with that because of the rigidity of the protocol. There's the simple issue of knowing when the next network data packet just isn't coming and how to recover from that. But most network apps seem as fragile as glass to bad data. Firefox just grinds to a halt or bombs immediately when multimedia data coming in isn't as expected.
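The "treat incoming bytes AS data" point can be made concrete. Here's a minimal sketch of a defensive parser for a hypothetical length-prefixed message format (the format and limits are my own invention, not any real protocol): every failure mode - short header, truncated body, implausible length - is handled as ordinary data rather than crashing the caller.

```python
MAX_LEN = 1_000_000  # arbitrary sanity limit for this sketch


def parse_message(buf: bytes):
    """Parse one 4-byte-length-prefixed message from buf.

    Returns (payload, remaining_bytes) on success, or (None, buf) when
    the data hasn't fully arrived yet. Raises ValueError only for input
    that can never become valid, so the caller can log and resync.
    """
    HEADER = 4
    if len(buf) < HEADER:
        return None, buf                       # header not fully arrived
    length = int.from_bytes(buf[:HEADER], "big")
    if length > MAX_LEN:
        raise ValueError("declared length is implausible")
    if len(buf) < HEADER + length:
        return None, buf                       # body truncated: wait or time out
    return buf[HEADER:HEADER + length], buf[HEADER + length:]


# A well-formed message followed by leftover bytes from the next one.
wire = (5).to_bytes(4, "big") + b"hello" + b"XX"
payload, rest = parse_message(wire)
# A truncated read: no crash, just "not yet".
partial, _ = parse_message(b"\x00\x00")
```

The point isn't this particular format; it's that a network app which budgets for "the next packet may never come" degrades gracefully instead of bombing like the browsers described above.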
The reliability just isn't there.
The problem is not technical (Score:3, Informative)