Ask Slashdot: Low Cost Way To Maximize SQL Server Uptime?

jdray writes "My wife and I own a mid-sized restaurant with a couple of Point of Sale (POS) terminals. The software, which runs on Windows and .NET, uses SQL Server on the back end. With an upgrade to the next major release of the software imminent, I'm considering upgrading the infrastructure it runs on to better ensure uptime (we're open seven days a week). We can't afford several thousand dollars' worth of server infrastructure (two cluster nodes and some shared storage, or some such), so I thought I'd ask Slashdot for some suggestions on enabling maximum uptime. I considered a single server node running VMWare with a limp-mode failover to a VMWare instance on a desktop, but I'm not sure how to set up a monitoring infrastructure to automate that, and manual failover isn't much of an option with non-tech staff. What suggestions do you have?"
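
A bare-bones version of the monitoring piece the submitter asks about could be as small as a script that probes the database port and calls an alert or failover hook after a few consecutive failures. The sketch below is Python and only a rough outline: the hostname, log file, and the action taken when the primary is considered down are placeholders, not part of any product, and the default SQL Server port (1433) is assumed.

    #!/usr/bin/env python3
    """Minimal SQL Server availability watchdog (sketch, not production code)."""

    import logging
    import socket
    import time

    PRIMARY_HOST = "pos-sql-primary"   # hypothetical hostname of the POS database box
    SQL_PORT = 1433                    # default SQL Server TCP port (assumption)
    CHECK_INTERVAL = 30                # seconds between probes
    TIMEOUT = 5                        # seconds to wait for a connection attempt

    logging.basicConfig(filename="sql_watchdog.log",
                        format="%(asctime)s %(levelname)s %(message)s",
                        level=logging.INFO)

    def port_is_open(host, port, timeout):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def main():
        failures = 0
        while True:
            if port_is_open(PRIMARY_HOST, SQL_PORT, TIMEOUT):
                if failures:
                    logging.info("primary reachable again after %d failed checks", failures)
                failures = 0
            else:
                failures += 1
                logging.warning("primary unreachable (%d consecutive failures)", failures)
                if failures == 3:
                    # Alert/failover hook goes here: send an email, flip a DNS alias,
                    # or start the standby VM. Deliberately left as a stub.
                    logging.error("ALERT: primary considered down, action needed")
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        main()

Running something like this as a scheduled task or service on the standby machine, rather than on the server it is watching, keeps the monitor alive when the primary dies.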
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday June 25, 2012 @02:48PM (#40442093)

    Seriously, put the SQL server up in a cloud service and let them worry about it. If it's a Microsoft SQL server, then Azure is the place to be. Hell, put a full instance of your service up in the cloud.
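
For the approach in the parent comment, the POS back end would point at a hosted instance instead of a local box, which mostly comes down to a connection string plus retry logic for transient cloud hiccups. A Python sketch under those assumptions; the pyodbc package and a SQL Server ODBC driver are assumed to be installed, and the server name, database, and credentials are made up:

    import time
    import pyodbc  # assumes pyodbc and a SQL Server ODBC driver are installed

    # Hypothetical cloud-hosted instance; real credentials would come from config.
    CONN_STR = (
        "DRIVER={SQL Server};"
        "SERVER=example-pos.database.windows.net;"
        "DATABASE=posdb;UID=posuser;PWD=secret;"
    )

    def connect_with_retry(conn_str, attempts=5, delay=2):
        """Hosted connections see transient failures; retry with simple backoff."""
        for attempt in range(1, attempts + 1):
            try:
                return pyodbc.connect(conn_str, timeout=5)
            except pyodbc.Error:
                if attempt == attempts:
                    raise
                time.sleep(delay * attempt)

    conn = connect_with_retry(CONN_STR)
    print(conn.cursor().execute("SELECT 1").fetchone())

Note that this moves the single point of failure from the server to the restaurant's internet link, which later comments in the thread touch on.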

  • MICROSOFT SQL Server (Score:1, Interesting)

    by Anonymous Coward on Monday June 25, 2012 @02:54PM (#40442177)

    Could we please stop ceding generic terms like "SQL Server" to Microsoft? Oracle produces an SQL server, as does IBM (DB2), as do several other companies and open source organizations. Why does "Microsoft SQL Server" get to be "SQL Server"? Isn't it bad enough that we've already given Microsoft the "Windows" name (how old is X Windows?), and that "PC" has morphed from meaning "personal computer" to meaning something that runs Microsoft Windows?

  • by markus_baertschi ( 259069 ) <markus@@@markus...org> on Monday June 25, 2012 @03:10PM (#40442401)
    Good uptime is great, but unfortunately very expensive in terms of hardware, software and manpower. Questions you should ask yourself:

    - What is the maximum allowable downtime duration?
    - How many outages can you tolerate per year?
    - What is the actual cost to you of a one-day or one-evening outage?
    - How many such outages have you had with your current infrastructure?

    I think the best option in your case is to have two identical servers/PCs of good quality, each with two mirrored hard drives in hot-swap slots. If a hard drive fails, you can carry on for the evening and replace it the next day. If something else fails, you swap the SQL server drives into the second server/PC and fix the problem later. This is simple enough that you can talk someone through it by phone when you are not there yourself.
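
The failed-drive case in the parent comment only works if somebody notices the failure before the second half of the mirror goes too. A rough Python sketch of an automated check, assuming a Windows host where the wmic utility is available (deprecated on recent releases but present in this era); the log file name is illustrative:

    import logging
    import subprocess

    logging.basicConfig(filename="disk_health.log",
                        format="%(asctime)s %(message)s", level=logging.INFO)

    def check_mirror_health():
        """Query basic drive status via wmic and flag anything not reporting OK,
        so a degraded mirror gets a drive swap before the second disk fails."""
        out = subprocess.run(
            ["wmic", "diskdrive", "get", "Model,Status"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines()[1:]:      # skip the header row
            line = line.strip()
            if line and not line.endswith("OK"):
                logging.warning("drive needs attention: %s", line)

    if __name__ == "__main__":
        check_mirror_health()

Running it once a day from the task scheduler is plenty for the swap-it-tomorrow scenario described above.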
  • by gnetwerker ( 526997 ) on Monday June 25, 2012 @03:19PM (#40442569) Journal

    If you can't measure it, you can't manage it. You haven't taken the first and most essential step in analyzing your problem: measuring it. Is your problem caused by network failure? By power? By software failure? Hardware? If hardware, by server hardware, disks, or something else?

    If software, by OS, database, or application software? All of these have different solutions. Going to the cloud won't solve a network failure; it will make things worse. Going to the cloud may help with persistent hardware failures, but the MTBF of most decent hardware is pretty good, so are you sure you have clean power and a good (cool, clean) environment?

    If your software or system is crashing, then that's its own problem.
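
In the spirit of the parent's measure-it-first advice, a small probe that records whether the network, the server host, or the database service was the thing that stopped responding makes the failure history concrete. A Python sketch, with the gateway address, hostname, and log file as stand-ins; the ping flag shown is the Windows single-probe form:

    import csv
    import socket
    import subprocess
    import time
    from datetime import datetime

    GATEWAY = "192.168.1.1"            # hypothetical LAN gateway
    DB_HOST, DB_PORT = "pos-sql", 1433 # hypothetical database host, default port

    def tcp_ok(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def ping_ok(host):
        # 'ping -n 1' is the Windows single-probe form; use '-c 1' on Linux.
        return subprocess.run(["ping", "-n", "1", host],
                              capture_output=True).returncode == 0

    def classify():
        """Record which layer actually failed so outage causes can be counted later."""
        if tcp_ok(DB_HOST, DB_PORT):
            return "ok"
        if not ping_ok(GATEWAY):
            return "network"
        if not ping_ok(DB_HOST):
            return "server-host"
        return "database-service"

    with open("outage_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            writer.writerow([datetime.now().isoformat(), classify()])
            f.flush()
            time.sleep(60)

A few weeks of that log answers the questions about outage frequency and cause before any money is spent on new infrastructure.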
