Databases Programming Software Data Storage

Improving Database Performance? (95 comments)

An anonymous reader asks: "An acquaintance of mine runs a fair-sized web community. Having nearly 280,000 users in a single MySQL database with about 12 mirrors accessing it, performance has become a big issue. I was wondering what practical methods could be used to lessen the load and decrease user query times. Funds are limited, so he'd rather not pay for a commercial database system at this time."
This discussion has been archived. No new comments can be posted.

  • Memcached (Score:5, Informative)

    by Exstatica ( 769958 ) * on Monday August 15, 2005 @05:16PM (#13325678) Homepage
    Use Memcached [danga.com]. It's used by LiveJournal, Slashdot, Wikipedia, and SourceForge. I also use it; my database has no load. It's not too difficult to implement and there are tons of APIs [danga.com]
    • Re:Memcached (Score:4, Insightful)

      by SpaceLifeForm ( 228190 ) on Monday August 15, 2005 @06:12PM (#13326107)
      And add more RAM.

      • And add more RAM.

        After that, if it's still slow, add still more RAM.

        Seriously, disk access is hideously slow. And disk seeks are much, much slower than that. RAM for the entire user database will be a couple hundred bucks. And if other things on that machine are eating up the RAM, then move the user database to its own machine as a read-only partial clone.
        • Re:Memcached (Score:2, Interesting)

          by Anonymous Coward
          "Add more RAM" sounds good, but it is not necessarily a real solution.

          For many "real" databases, there is most definitely a big IO performance issue, regardless of the amount of RAM in the system. The main reason is the requirement that data be in stable storage after a commit succeeds in order to comply with ACID semantics (though being MySQL, it wouldn't surprise me if this requirement didn't bother him).

          If the database is read-only (and being MySQL, that wouldn't surprise me), then sure, adding more RAM will help.
          • Otherwise, the best way to get really good performance out of a database, after there is a *reasonable* amount of RAM and CPU power there, is to add a good disk controller with a nice battery backed writeback cache.

            Oh, agreed. We were talking about a user database, which usually has a really heavy read bias. If you're bottlenecking on writes, then a battery-backed write cache can indeed help, possibly quite a bit.
    • I am a coding ignoramus, so please excuse me if this is a stupid question:
      does this require a complete rewrite of existing code?
      If so, is there something that will cache transparently so that a code rewrite is not necessary? Thanks!
      • Re:Memcached (Score:4, Informative)

        by stevey ( 64018 ) on Monday August 15, 2005 @07:51PM (#13326716) Homepage

        Pretty much, yes.

        Here's an introduction to memcached I wrote [debian-adm...ration.org] which might explain it for you.

        In short, you modify all the parts of your code that fetch from the database to check the memory cache first, and invalidate the cache when storing (see the sketch below). In general most sites read data far more often than they write it, so most of your accesses come from the cache - thus reducing the load upon your DB.

        If you don't want to modify your code you could look at optimizing the setup of the database server, moving it, setting up replication, etc.

        Still without more details it is hard to know what to suggest.
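
        A minimal sketch of that read-through pattern, assuming the python-memcached client and a made-up user lookup (the key format, the five-minute timeout, and fetch_user_from_db are illustrative, not anything from the poster's setup):

        import memcache

        mc = memcache.Client(['127.0.0.1:11211'])

        def fetch_user_from_db(user_id):
            # Placeholder for the real MySQL lookup.
            raise NotImplementedError

        def get_user(user_id):
            key = 'user:%d' % user_id
            user = mc.get(key)                       # 1. try the cache first
            if user is None:
                user = fetch_user_from_db(user_id)   # 2. miss: hit the database
                mc.set(key, user, time=300)          # 3. cache it for five minutes
            return user

        def save_user(user_id, data):
            # ... write to the database here, then invalidate so readers refetch ...
            mc.delete('user:%d' % user_id)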

        • Or just use the caches that are built into the hard drives and the operating systems. Add more RAM to the system. And, like someone else said, if it's still slow, add more RAM. 2-4GB.

          The problem is that the OS doesn't cache stuff that comes from another computer through a socket, so adding RAM to your DB will make the DB faster, but adding a cache to the DB client will make the DB less busy and the client faster. If it's important, you can use a distributed cache that lets you invalidate the caches on other machines. I don't know much about memcached, but I imagine it has that feature; otherwise it's just a hashmap.
    • Ramdisk
    • (and if it is, why are you even asking on /. but hey)

      But if you have lots of database updates going on, locking, and huge index searches, you might want to look at your slowest, most costly queries (sometimes they can be stupid little footer fillers) and reduce those queries. Check what is slow, and see how you can work around it, add a new index, or cut the fat.

      But seriously, if your 'performance hit' is pulling static stories out of a database, then how come you haven't looked at caching?
  • Although I am too lazy to look up any hard evidence (and when is it necessary on Slashdot?), I've heard that Postgre-SQL is much better at heavy loads than MySQL.
    • Re:Postgre-SQL (Score:2, Informative)

      by Anonymous Coward
      If the OP's queries are such that the database is mostly read-only, I don't think that switching over to Postgresql is going to really show enough improvement to justify the pain of switching to a different rdbms.

      Postgresql scales better than Mysql under heavy concurrent read/write conditions. If that is the access pattern for the OP, then I'd say yes - look into switching to Postgresql.
      • PHB: I think we should build an SQL database.
        Dilbert (thinking):Does he know what he's talking about, or did he read it in a trade magazine ad?
        Dilbert (speaking):What color would you like that database?
        PHB: I think mauve has the most RAM. -
      • Re:Postgre-SQL (Score:3, Informative)

        by dubl-u ( 51156 ) *
        If the OP's queries are such that the database is mostly read-only, I don't think that switching over to Postgresql is going to really show enough improvement to justify the pain of switching to a different rdbms.

        If the queries are indeed read-mostly, it might be worth benchmarking it against an LDAP server. LDAP servers are built specifically for serving user data, which is usually greater than 99% read. It's been a while since I did benchmarks, but at the time they could beat the pants off of general-purpose databases.
    • More importantly (Score:5, Informative)

      by Safety Cap ( 253500 ) on Monday August 15, 2005 @06:14PM (#13326123) Homepage Journal
      It actually supports ACID, whereas MySQL does not.

      So, for example, if you want to insert a string that is too big for the field, MySQL will gladly suck it up with nary a peep (meanwhile, your data is trashed: truncate hell), whereas Postgre (and other non-toy RDBMSs) will refuse to insert the record.

      Wikipedia has a nice comparison [wikipedia.org].

      • ... if you want to insert a string that is too big for the field, MySQL will gladly suck it up with nary a peep (meanwhile, your data is trashed: truncate hell)

        Starting from MySQL 5.0.2, there are configuration variables that tell the database to reject invalid field values. (At least that's what the online MySQL manual says ...)
    • Comment removed based on user account deletion
  • It Depends (Score:5, Insightful)

    by AtrN ( 87501 ) * on Monday August 15, 2005 @05:34PM (#13325826) Homepage
    Without some idea of the access patterns, schema and actual DBMS configuration, it's hard to say what can be done to improve performance. There are purely mechanical things like caching as much as possible, getting faster disks, more memory, etc., but these may not even help depending upon the more fundamental issues of DB design and deployment. Depending upon the use, MySQL may not even be appropriate given some of its limitations; PostgreSQL may be a better fit, for instance, if there's a lot of updating going on or client s/w is performing many queries to assemble views which could be better done closer to the data.
    • You know, you'd like to think that after SO many Ask Slashdot topics, either these comments would stop (which is unlikely) or people would realize that nearly every Ask Slashdot question requires much more detail for any good solution to be presented (which is equally likely).
  • MySQL replication. (Score:5, Informative)

    by billn ( 5184 ) on Monday August 15, 2005 @05:43PM (#13325891) Homepage Journal

    Works like a champ.

    Set up multiple read slaves to carry the bulk of your read traffic, especially for your mirrors. Considering MyISAM's native table locking behavior, this should reduce your master db load quite a bit, even just moving your mirror read-load to a slave replicant.

    Also, query caching is a beautiful thing.
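
    A rough sketch of how the application side of that usually looks: writes go to the master, reads are spread over the slaves. The hostnames, credentials, and the MySQLdb driver here are assumptions for illustration, and reads from slaves can be slightly stale because of replication lag.

    import random
    import MySQLdb

    master = MySQLdb.connect(host='db-master', user='app', passwd='secret', db='site')
    slaves = [MySQLdb.connect(host=h, user='app', passwd='secret', db='site')
              for h in ('db-slave1', 'db-slave2', 'db-slave3')]

    def read_cursor():
        # SELECT traffic can go to any slave.
        return random.choice(slaves).cursor()

    def write_cursor():
        # All INSERT/UPDATE/DELETE traffic must hit the master.
        return master.cursor()
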
    • Be careful with those slave replicants. Sometimes, they get cranky and want to know their expiration dates.
    • Considering MyISAM's native table locking behavior

      Correct me if I'm wrong, but IIRC from my database design classes in college, isn't table-locking a horribly BAD thing to do? Row-locking is much better, since it allows other items in the table to be accessed and updated while this record is altered.

  • If you do a lot of Reading from the database, add an index on the columns you query for.

    If you do a lot of Writing to the database, you _may_ want to do batch updating, or buy/build a RAID array (I still recommend SCSI).

    Finally: databases are easy to use but hard to master. There are reasons why people get paid hundreds of dollars an hour to fix people's database problems.
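
    For concreteness, a small sketch of both suggestions through the Python DB-API (the table, column names, and connection details are invented for illustration):

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='site')
    cur = conn.cursor()

    # Index the columns that show up in your WHERE clauses.
    cur.execute("ALTER TABLE users ADD INDEX idx_username (username)")

    # Batch writes instead of paying one round trip per row.
    rows = [(1, 'alice'), (2, 'bob'), (3, 'carol')]
    cur.executemany("INSERT INTO login_log (user_id, username) VALUES (%s, %s)", rows)
    conn.commit()
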
    • Adding keys to your tables will really speed up read queries. Also, look at your most frequently used SQL queries and time them with different data sets. Adjust the queries, and add keys on the columns used in WHERE clauses. Also, you may want to look into completely revamping queries.

      Multi-part queries involving lots of roundtrips between the script and the database are a no-no. Consider replacing these queries with SQL joins. Joins are relatively fast if done properly. Next, you may want to tr
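
      To illustrate the round-trip point, a sketch of the slow pattern versus a single join (the posts/users schema and connection details are made up):

      import MySQLdb

      conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='site')
      cur = conn.cursor()

      # Slow: one extra query per row returned by the first query.
      cur.execute("SELECT id, author_id FROM posts WHERE forum_id = 7")
      for post_id, author_id in cur.fetchall():
          cur.execute("SELECT username FROM users WHERE id = %s", (author_id,))
          username = cur.fetchone()[0]    # assumes a matching user row exists

      # Better: a single join lets the server resolve the lookup in one pass.
      cur.execute("""SELECT p.id, u.username
                     FROM posts p JOIN users u ON u.id = p.author_id
                     WHERE p.forum_id = 7""")
      rows = cur.fetchall()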

      • Totally agree with the parent on keys etc. Keys and their associated indexes, or just separate indexes on columns or groups of columns, provide a major performance boost. Analyse your selects and joins to find the columns that need indexing. And remember that indexes need periodic maintenance - schedule some scripts to rebuild them every few days. Also remember that a query can normally be written in several ways and still provide the same result. Play around till you find the optimal one that makes best use
        • Rebuilding indexes every few days is not necessarily a good idea. A very tightly packed index will require extra IO to do inserts, while one that has 'spread out' a bit due to use will be faster. Ask Tom [oracle.com] had an article about this for Oracle. A nice quote is 'No, there are no times when a scheduled rebuild of all indexes is "good".'
  • by ubiquitin ( 28396 ) * on Monday August 15, 2005 @05:49PM (#13325936) Homepage Journal
    You can push MySQL way beyond 280,000 customer records. I know because I've done it.

    With properly normalized data, on fast, current, commodity hardware (10krpm drives, for instance), using InnoDB, you can pretty easily push MySQL into the 5 to 10 million records-per-table range before you start really needing a bigger relational database engine. This assumes no more than 1% of your data needs to be updated per day. Staying with GPL database software is a really smart thing to do: you don't know how much time and money gets spent just negotiating with Oracle over their licenses; it is anything but simple. Small business web sites cease to be "small business" when they grow beyond .01% of 10 million records per day.

    A non-trivial part of my business is in advising companies in how to get the most out of MySQL. Replication is one part of that, but having the right data structure for scalability is really key. Want more? Ask around at: www.phpconsulting.com
    • Half the time the problem is actually not the database software itself. It's the hardware and storage.

      - At the least stripe the volumes across multiple disks.

      - The system should have the virtual memory/swap partition on a disk of its own, if not its own controller.

      - SCSI drives are notorious for utilizing fewer resources.

      - Run the database on a raw device, not on some filesystem.

      • - At the least stripe the volumes across multiple disks.

        Err, do you mean stripe sets or do you mean RAID?

        - The system should have the virtual memory/swap partition on a disk of its own, if not its own controller.

        Add enough RAM and throw out the swap space. RAM is (still) cheap these days. With the advance of serial ATA, having the swap on a different controller should not be too difficult.

        - SCSI drives are notorious for utilizing fewer resources.

        Notorious? Don't you mean "famous", maybe?

        - Run the database on raw

    • I'm not a DBA, but wouldn't normalization slow the queries down due to increasing the number of joins that queries would need to make? I thought that the purpose of normalization is to eliminate data anomalies by getting rid of as much redundancy as possible.
      • It can also save quite a bit of IO. Assume all fields are 1 byte; now say you have a table called person which has many-to-one relationships to 10 other tables. Each of those 10 tables has ten fields. Fully normalized, the person table is 11 bytes per row. Fully denormalized it is 101 bytes per row. For any query that needs to read more than a small subset of the data (so that an index is not the most efficient path), this means roughly 10 times the IO cost. Even for queries that are more efficient with an index, yo
  • by MosesJones ( 55544 ) on Monday August 15, 2005 @06:07PM (#13326074) Homepage

    280,000 records, even for MySQL, isn't that much and indicates that performance is being driven down either by tiny hardware or more likely...

    1) Badly optimised queries
    2) Poor index selection and maintenance
    3) Generally poor schema design

    It might also be that queries should be cross-table with sub-queries (not a MySQL strong point).

    9 times out of 10, poor database performance is due to bad database design.
    • Users, not records :) I don't know what he means by 'user', though: simultaneous users, or users that log in once a week, causing just a few queries to be run?
      • For a busy site, I've often seen a ratio of 1 concurrently logged-on user for every five or six "users". YMMV. In this case, 35,000 (approx.) concurrent users would kill a very hefty database server in no time flat, without
        1) impressive tuning
        2) considerable one-machine hardware
        3) consistent design choices
        Again, your mileage may vary
    • Does he have proper hashing is what the above is asking...

      Yes I just finished Data Structures, no I don't know mySql
      • Does he have proper hashing is what the above is asking...

        Hashed indexes (especially when the data and index key are co-located on the same page) are great for OLTP systems, but are otherwise a cast iron bitch to maintain.

        Yes I just finished Data Structures, no I don't know mySql

        If you wind up a "DP" developer writing lots of SQL, you'll never have to implement things like b-trees and hashes.

        You will, though, need to grok the positive & negative consequences of each kind of data structure, and how those
  • A plea. (Score:5, Insightful)

    by linuxwrangler ( 582055 ) on Monday August 15, 2005 @06:17PM (#13326146)
    To the O.P.: Provide some info - we're not mind-readers. Today's User Friendly [userfriendly.org] is somehow appropriate.

    How well normalized is the schema? Mostly reads? Writes? Both? 280,000 users? So what. Do you mean simultaneous users, or are only 2 on at a time? Are they accessing a single 100-record table or lots of large tables? Are they indexed properly? What is the OS, memory, disk, processor...? How much processing is required of the DB vs. the front-end? Have you run into any specific problems [sql-info.de] that might indicate that a different db might be more appropriate? What have you tried, and what was the result?

    To the editors: Please reject Ask Slashdot questions from posters who can't be bothered to provide the most basic background info.

    This is Slashdot. I would like to believe that the typical reader could be rather more technically erudite.
  • "it depends" (Score:5, Insightful)

    by Anonymous Coward on Monday August 15, 2005 @06:22PM (#13326179)
    Like a poster above mentioned, it really depends on your access patterns.

    The ABSOLUTELY MOST IMPORTANT THING is to set up some benchmarks that reflect your usage patterns (a minimal timing harness is sketched at the end of this comment). Once you have some solid benchmarks, you can set up a farm of test machines and start benchmarking, adjusting, benchmarking, adjusting, over and over until you've got the performance you need. I can't stress this enough. You need a good, automated benchmark system to get you the first 85-90% of the way. The last 10% has to be done "in the field" unless your benchmarks are REALLY good.

    Generally, you want to minimize disk usage time on your databases. Everything else is just gravy. Make sure you've got some FAST disks for your MySQL boxes, and make sure they are striped RAID (or whatever your benchmarks show as being the fastest). Choose your filesystem (ext3, reiser, etc) the same way: use the one that benchmarks the fastest.

    Next, there are lots of things you can tune in MySQL. For instance, did you know there's a "query cache" that will save the results of queries, indexed by the SQL query text? In *some* situations, it can be very useful. In others, it actually degrades performance. First learn about MySQL's various knobs and get them tuned.

    Next, you might need to distribute reads to more mirrored DBs and/or to application-level caching like memcached. Depending on your app, this can give you a 100x speed increase.

    Next, you might want to partition your database, if your data is suited for it. For instance, all queries for customers in Idaho go to a separate machine just for the Idaho customers. All your application goes through a DB access layer that selects the right machine.

    Basically, you need to get the "main loop" down: benchmark, adjust, benchmark, adjust, etc., etc, and then start trying things out!

    The same goes for PostgreSQL.

    But whatever you do, the LAST thing you want to do is mess with your database integrity. If anybody tells you to "turn off constraints" or "denormalize for performance", they are idiots. Your primary goal should always be data integrity! If you've got a real app, with real paying customers, and real valuable data (i.e., not a blog or something), you can't afford to throw 30 years of database best practices out the window to get a 5% speed increase. Today's SQL databases unfortunately don't even begin to implement the most basic relational features, but that doesn't mean you shouldn't try. Just a tip... I've made plenty of consulting dollars fixing the mess people left when they started valuing performance over data integrity.
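
    As promised above, a minimal benchmarking harness in that spirit: replay a handful of representative queries against a test box and time them. The queries, host, and credentials are placeholders, and the MySQLdb driver is just one way to do it.

    import time
    import MySQLdb

    QUERIES = [
        "SELECT * FROM users WHERE username = 'alice'",
        "SELECT COUNT(*) FROM posts WHERE forum_id = 7",
    ]

    conn = MySQLdb.connect(host='db-test', user='app', passwd='secret', db='site')
    cur = conn.cursor()

    for sql in QUERIES:
        start = time.time()
        for _ in range(100):           # repeat to smooth out noise
            cur.execute(sql)
            cur.fetchall()
        avg_ms = (time.time() - start) * 1000.0 / 100
        print("%8.2f ms avg   %s" % (avg_ms, sql))
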
    • Re:"it depends" (Score:2, Informative)

      by UnckleSam ( 605767 )
      Real workload performance testing is easy with MySQL: dump your database (mysqldump, or a real filesystem snapshot if you have the HW) in a clean state, turn on the query log (don't use binary logging - it only contains statements modifying your db's contents, such as UPDATEs and INSERTs) and wait some hours or days. Then deploy the dumped database to a testing system, select an appropriate starting point in the query logs (choose the first statement that arrived after you dumped your db) and feed all statements from that point to the test system.
    • *cough*

      make sure they are striped RAID ...

      Your primary goal should always be data integrity!

      I'm confused... you want the OP to use striped disks, yet you want the OP to have data integrity as a primary goal. That's like asking for a raw boiled egg.
      • I'm confused... you want the OP to use striped disks, yet you want the OP to have data integrity as a primary goal. That's like asking for a raw boiled egg.

        Striping with data integrity is called RAID 10 or RAID 5, both of which provide better data integrity than a single disk. RAID 10 if you need good write performance, and RAID 5 if you are on a budget and only need good read performance. Heck, even mirroring will increase read speed.

        Very different than a raw boiled egg...

  • The trick is to keep your hits from getting to the DB as much as possible. The techniques for this are varied, but mostly this means caching your pages as static content. Depending on the dynamic nature of your site's content, you might be able to run a cron job daily that renders much of your site's content into static HTML files.
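
    One way to do that, sketched roughly: a script run from cron that snapshots a few heavy dynamic pages into static files the web server can serve directly. The URLs and file paths here are made up.

    from urllib.request import urlopen

    PAGES = {
        'http://localhost/index.php': '/var/www/static/index.html',
        'http://localhost/stats.php': '/var/www/static/stats.html',
    }

    for url, target in PAGES.items():
        html = urlopen(url).read()          # render the dynamic page once
        with open(target, 'wb') as out:     # serve this file until the next cron run
            out.write(html)
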
  • I discovered something very interesting when I was running a large mySQL installation. We had only about 50 users but they were telemarketers continuously beating on the database all day.

    Certain reports would kill the system - make it stop entirely for minutes at a time. What I discovered was that this kind of query

    select (fields) from calendar where date like '2005-08-15%'

    was horribly slow. Instead, use

    select (fields) from calendar where date >= '2005-08-15' and date < date_add('2005-08-15', interval 1 day)
    • Explain syntax [mysql.com] is an interesting and informative tool that can help optimize your indexes and queries.
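
      For example, running the range query from the parent comment through EXPLAIN shows whether the index on the date column is actually used (connection details are placeholders; the output columns vary a bit by MySQL version):

      import MySQLdb

      conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='site')
      cur = conn.cursor()

      cur.execute("EXPLAIN SELECT * FROM calendar "
                  "WHERE date >= '2005-08-15' AND date < '2005-08-16'")
      for row in cur.fetchall():
          print(row)    # look at the 'key' and 'rows' columns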

    • Marginally off topic.

      I use a 4GL Database called Progress. (Not to be confused with postgresql).

      Recently in code I was writing I needed to check the status of a flag, basically to ignore all previously processed records in a queue.

      Easy enough...

      FOR EACH tablename WHERE NOT tablename.Processed

      After a hundred thousand records were created, the query was taking ages to run, in spite of tablename.Processed being an indexed logical field. I didn't realize it when I wrote the code, but a NOT statement disables index use
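
      A guess at the same lesson in MySQL terms (the queue table is hypothetical, and whether the optimizer actually uses the index still depends on selectivity, as the reply below notes):

      SLOW = "SELECT id FROM queue WHERE NOT processed"   # negation may force a table scan
      FAST = "SELECT id FROM queue WHERE processed = 0"   # explicit value can use the index
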
      • You might not be getting much performance out of that index anyway. I can't speak to your specific database, but in general, if you have a large table and the indexed column has a small number of possible values, you won't be buying yourself a whole lot. Let's say that 30-plus percent of your 100,000-plus record table at any one time has not been processed. IO-wise, that probably won't be much better than a table scan. Additionally, if a table has a lot of writes done to it, a lot of indexes will hamper performance
        • In this instance there are other fields in the index, and the records are processed sequentially.

          As I mentioned, it is a queue of records. Once the records are processed the flag is toggled.

          As for the hundred thousand records, it wasn't the rule for this table; it was an exception created by a combination of factors outside of my control. It was, however, a good stress test.

  • sqlrelay (Score:3, Interesting)

    by Vlad_Drak ( 20809 ) on Monday August 15, 2005 @07:38PM (#13326663)
    SQLRelay http://sqlrelay.sourceforge.net/ [sourceforge.net] might be a good option here. If you do end up switching the backend from MySQL to PostGres or whatever, it's supported there too.

  • by ajayrockrock ( 110281 ) on Monday August 15, 2005 @07:58PM (#13326765) Homepage

    Check out the High Performance MySQL book for info on how to speed it up. Most of it's probably obvious for the hardcore DBA guy, but I found it useful:

    http://www.oreilly.com/catalog/hpmysql/ [oreilly.com]
  • by eyeball ( 17206 ) on Monday August 15, 2005 @10:02PM (#13327442) Journal
    I have to agree with some other posters -- without knowing some of the dynamics of the db usage, it's difficult to make suggestions. 280,000 users can be a piece of cake if all you're doing is user auths for each (that's about 3/second if everyone logs in once/day). Worst case,

    A few things to look at:

    - If there is excessive or improper locking being done (i.e.: do you really need to lock a table to update a record that could only possibly be updated one at a time?)
    - If queries can be made less complex
    - Indexing. You should become intimate with how indexing works and the various ways of setting it up
    - Caching infrequently changed content on the front-end (i.e. generate static versions of pages that don't change often, rather than dynamically creating them on every request).
    - De-normalize your tables if it improves performance. Don't worry, nobody's looking :)

    Also, look into some lighter-weight DB & DB-related technologies: HSqlDB, SQLite, C-JDBC, BerkeleyDB, SQLRelay, to name a few. Granted some aren't distributed, but again, not knowing the architecture, some data may be lazily replicated out from the master.

    Also, I can't find it now, but I read a while back that MySQL was adopting an open-sourced in-memory DB from a company (Ericsson?) that may be available separately. You also may want to look into something like LDAP (OpenLDAP) if the app is very read-heavy.
  • Here's a very simple solution to your problem. Take n servers (where n is the number of DB machines you have) and evenly split your user accounts across them. Then, use a simple hash table in your application to determine the server from which to query. Example:


    Account Names    Server
    a-c              db01
    d-g              db02
    h-j              db03
    ...


    The key is to choose your boundaries so that each DB server holds a roughly equal number of accounts.

    If you have a really, really busy dat
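
    A tiny sketch of that lookup in application code (server names and bucket boundaries are illustrative):

    import bisect

    # Upper bound of each bucket (sorted) and the server that owns it.
    BOUNDARIES = ['c', 'g', 'j', 'z']
    SERVERS    = ['db01', 'db02', 'db03', 'db04']

    def server_for(account_name):
        first = account_name[0].lower()
        return SERVERS[bisect.bisect_left(BOUNDARIES, first)]

    assert server_for('alice') == 'db01'
    assert server_for('dave')  == 'db02'
    assert server_for('zoe')   == 'db04'
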
    • ... which works great until you have a hardware failure. Or until you have to add yet more servers to cope with even heavier use, and you then have to redistribute the m users that were distributed over n servers across the now (n+p) servers.

      A distributed approach may well be a good idea, but once you start to distribute, you have to consider what should happen if one of the machines falls down.
      • Not to mention any kind of query that needs to aggregate all of those users, or join the user table twice (friends and foes in /.). Or what do you gain if you segment your database by user name onto 4 servers and the users in db #4 are the most plentiful and most active, and also hit the parts of the site that take the most computing power? I don't think that the segmenting solution would work very well at all. It could possibly be a benefit if the users could be isolated into separate independent groups
    • A better idea would be to genuinely figure out how to parallel-process your database queries. If this is just a quick-n-dirty fix you want, you could use some Perl scripts to split your database, or you could get more technical and accurate and use MPI/PVM to actually parallel-process your database access. Since MySQL is open source, you could look into it. Just an option.
  • Upgrade from the 486, it's about time.

    (provide more statistics than just how many records are in the user database, and we can probably help you a little bit. Otherwise, we're pretty crippled. With 280,000 user records, it doesn't sound like there's a whole hell of a lot happening on it to choke off a powerful machine, not if the software that's accessing things is done right. So, need more info.)
  • ask the source (Score:4, Interesting)

    by hankaholic ( 32239 ) on Tuesday August 16, 2005 @06:44AM (#13329116)
    You might suggest that your friend consider asking MySQL for a quote:

    http://www.mysql.com/company/contact/ [mysql.com]

    Their Enterprise contracts are probably a bit much for your friend's needs, but they may offer single-incident support for optimization and tuning assistance.

    If he doesn't mind delving into DBA-land, he may want to buy a book. If he values the time it would take him to get up to speed and would rather spend it on other pursuits, it may well be worth the money to get some help.

    Either way, he'll have to spend something (time or money) -- it's a question of how much his time is worth to him.
  • general advice only takes you so far.

    Optimizing any system involves two steps, one analytical and one creative. The analytical step is determining exactly where all that time is going. Sometimes it isn't the 80/20 rule, it's the 99/1 rule. The creative step is figuring out how to avoid spending so much time there, either by avoiding unnecessary trips (caching or just cleaner programming), or by speeding up the process in question.

    If you're lucky, the bottleneck may be in a single tier. It could be as si
  • It's technically about Oracle, but it's a good introduction to DBMS performance and how to use good science instead of urban legends to tune a database:
    Optimizing Oracle Performance [oreilly.com] by Cary Millsap with Jeff Holt
  • Not much to offer without knowing more about how the system is designed. If the DBMS is replicated 12 times, I suspect some kind of _serious_ design problem, and the best fix would be to re-think the application. The best advice is to tell the person to get onto some DBMS forums and talk about his application and how best to design it. Notice I say "DBMS forums", not MySQL forums. You need to step way back and look at the bigger picture and not ask about low-level MySQL-related nuts-and-bolts issues. The best to
  • by cr0sh ( 43134 ) on Tuesday August 16, 2005 @04:20PM (#13333835) Homepage
    In a bone-headed move, I might add - but it is something worth checking out to see if this is part of the issue:

    I am in the process of re-doing my website using PHP and MySQL. My new site will be completely DB-driven, to the point that the page content is driven and built from PHP stored in the DB. My goal is to be able to update the website from anywhere in the world with a web connection. I am custom-writing this system, rather than using a pre-existing CMS or other blogging system (and there were quite a few that were tempting), because I wanted to learn PHP and MySQL by doing, rather than by observing.

    Anyhow, one of my editors was slowing down on an update - that is, when I clicked "update" to update the site, it was taking a long time to update the database. Various tests indicated that PHP wasn't the issue, but running 'top' on my dev box indicated that the apache process was thrashing on these updates. I checked the code, and here is what I discovered:

    In my update code, I was issuing a SQL INSERT for each field in a record, where I was updating multiple fields on the same record, rather than doing one statement with all the fields to be updated. If I had 10 fields to update, that was 10 INSERTs, instead of the single one I should have been doing (see the sketch at the end of this comment). As I said, this was a bone-headed move I won't be making again in the future. Once I corrected the issue, my performance shot up immediately on the update.

    I would imagine that the same could be true of any simple SELECT - select out all the fields (and only those fields) you need at one shot, then loop thru the records building your output (whatever it is). Optimize the queries well, too (a misplaced pair of parentheses can make a WORLD of difference in some cases).

    In short, keep the number of queries to the backend as small as possible, and the queries themselves short and sweet, reducing the load (and thrashing) on the backend. This should be common-sense design, but sometimes in the thrill and rush to build something, programmers forget this, and it can easily cause issues down the line (I was lucky in that I caught it very early in my design of the system).

    Good luck, and I hope this helps...
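
    The sketch promised above, showing the shape of the fix: one statement that sets every changed field instead of one per field. The poster describes INSERTs; an UPDATE is shown here to match the behaviour described, and the table, columns, and connection details are invented. Column names come from trusted code rather than user input, so interpolating them is acceptable here; the values still go through placeholders.

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='site')
    cur = conn.cursor()

    fields = {'title': 'Hello', 'body': '...', 'updated': '2005-08-15'}
    page_id = 42

    # Bone-headed version: one round trip (and one row rewrite) per field.
    # for col, val in fields.items():
    #     cur.execute("UPDATE pages SET %s = %%s WHERE id = %%s" % col, (val, page_id))

    # Better: a single UPDATE that touches all the fields at once.
    assignments = ', '.join('%s = %%s' % col for col in fields)
    cur.execute("UPDATE pages SET %s WHERE id = %%s" % assignments,
                list(fields.values()) + [page_id])
    conn.commit()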

  • I don't know why people always want to get new hardware or software to improve performance. If you review your application code, I bet you will find many performance problems. Make better use of caching, limit database connections, write more efficient SQL, etc. These are all ways to improve performance, and they can be applied incrementally. I've worked on many applications where I was told the hardware simply won't support fast operations and then found I just changed some looping structure or poorly w
  • One (obvious) way to improve database performance is to cache some database results so you don't need to query the database all the time; you can easily go 1000× faster. There are tools that take care of the caching for you, so try to find one for your language/framework.
