PostgreSQL Reconsiders Its Process-Based Model

Jonathan Corbet, writing at LWN: In the fast-moving open-source world, programs can come and go quickly; a tool that has many users today can easily be eclipsed by something better next week. Even in this environment, though, some programs endure for a long time. As an example, consider the PostgreSQL database system, which traces its history back to 1986. Making fundamental changes to a large code base with that much history is never an easy task. As fundamental changes go, moving PostgreSQL away from its process-oriented model is not a small one, but it is one that the project is considering seriously.

A PostgreSQL instance runs as a large set of cooperating processes, including one for each connected client. These processes communicate through a number of shared-memory regions using an elaborate library that enables the creation of complex data structures in a setting where not all processes have the same memory mapped at the same address. This model has served the project well for many years, but the world has changed a lot over the history of this project. As a result, PostgreSQL developers are increasingly thinking that it may be time to make a change.
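
Where a conventional in-memory structure would store absolute pointers, data in such a region has to store offsets from the region's base, which each process resolves against its own mapping. Here is a minimal C sketch of that offset-pointer idea (an illustration of the general technique; the names relptr, rel_encode, and rel_decode are mine, not PostgreSQL's API):

    /* Offset-based "pointers" for a shared-memory region that may be
     * mapped at different addresses in different processes. Offset 0
     * is assumed reserved for a region header, so it can double as NULL. */
    #include <stddef.h>

    typedef struct { size_t off; } relptr;      /* 0 means "null" */

    /* Encode: absolute pointer -> offset from this process's base. */
    static inline relptr rel_encode(void *base, void *p)
    {
        relptr r = { p ? (size_t)((char *)p - (char *)base) : 0 };
        return r;
    }

    /* Decode: each process applies its *own* base address. */
    static inline void *rel_decode(void *base, relptr r)
    {
        return r.off ? (void *)((char *)base + r.off) : NULL;
    }

    /* A list node that is safe to place in shared memory: "next" is
     * an offset, valid in every process that maps the region. */
    struct node {
        int    value;
        relptr next;
    };
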
  • by rabbirta ( 10188987 ) on Monday June 19, 2023 @12:31PM (#63615752) Homepage
    Ugh. From the article, here's why they're considering the change:

    The overhead of cross-process context switches is inherently higher than switching between threads in the same process - and my suspicion is that that overhead will continue to increase.

    • by OneOfMany07 ( 4921667 ) on Monday June 19, 2023 @02:08PM (#63616026)

      Yes, that is a general reason why separate processes might be a bad design choice when you have a lot of data being passed around. But separation also keeps some things isolated from each other. It makes it impossible to cheat in certain ways.

      Threads share an address space. Meaning all the data in RAM that one thread can access is also physically accessible to the other threads. Separate processes should prevent that, even by mistake.

      I assume data would move from untrusted (where you'd want separate processes and higher security) to usable (reviewed, or created from a trusted source). There should be a good design pattern for that situation; I don't happen to know it, but this isn't a rare situation in software.
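
      As a tiny C illustration of the isolation being described (my sketch, nothing Postgres-specific): after fork(), a child's writes to ordinary memory stay invisible to the parent - exactly the accidental sharing that a threaded design gives up.

          /* Each process gets its own (copy-on-write) copy of the page,
           * so the child cannot scribble on the parent's data. */
          #include <stdio.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              int secret = 42;
              pid_t pid = fork();

              if (pid == 0) {        /* child: writes its own copy */
                  secret = 1337;
                  _exit(0);
              }
              waitpid(pid, NULL, 0);
              printf("parent still sees %d\n", secret);   /* prints 42 */
              return 0;
          }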

    • Right. The original article summary just laments the problem, but does not mention the proposed solution and the /. TFS just copies it.

      • The proposed solution isn't really a solution, though. The problem is that they've got close to four decades of code built around the process-based model, along with huge amounts of third-party addons and modules, all of which would need to be extensively rewritten. We have a code base that's about ten years newer than PostgreSQL, and it started in the Windows world, so it was thread-based rather than process-based, meaning we were lucky and backed the right horse by accident, and I can't even begin to imagine what...
  • by Anonymous Coward

    I like PostgreSQL, but there are a couple of things they're stubborn about because they're so incredibly conservative when it comes to database design. No index hinting is annoyingly paternalistic.

    At least some of those annoying things can be circumvented by using modules (for example, the single-byte int type that core Postgres lacks is provided by pguint).

    Glad to see they're catching up with these newfangled "threads"

    • I can see why they'd be nervous about adding that. Index hinting is a giant footgun: in almost all cases the query optimiser does a much better job than you can at index selection, and if you do provide hints and then anything at all in the schema or indexing changes, you're pessimising the query without even being aware of it. It's like C's 'register' keyword: it made sense with primitive compilers in the 1970s but hasn't really been needed for decades, which is why pretty much all compilers have ignored it for years.
  • by Waffle Iron ( 339739 ) on Monday June 19, 2023 @12:49PM (#63615796)

    For each individual SQL query, a new Docker container should be launched with a fresh web server to handle the request. Javascript on the server will convert the request to REST calls to a pool of backend VMs. These will then convert all of the data to YAML, which will also be the new on-disk format. This all needs to run exclusively in the cloud.

  • Not a good idea (Score:5, Interesting)

    by rlwinm ( 6158720 ) on Monday June 19, 2023 @12:50PM (#63615800)
    Tom Lane made a few arguments in the past as to why the process model worked for them. While there is some additional overhead in terms of TLB thrashing on a process context switch, it's not nearly as big a pain point in Postgres as you might expect. Plus, the PG code base is not reentrant in many places.

    If you want to fix some warts in Postgres, efforts would be better spent on:

    (1) Using io_uring or direct I/O rather than double buffering everything in the page cache (a sketch follows this list).
    (2) Handling correlated columns better so the planner is smarter about certain kinds of joins.
    (3) Fixing the terrible plans that come out of queries with OR-clauses and some star-queries.
    (4) Reducing write amplification in indexes if possible, a la HOT updates.
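
    For (1), here's a minimal C sketch of what bypassing the page cache with O_DIRECT looks like (assumptions: 4096-byte alignment suffices for the filesystem at hand, and "datafile" is a placeholder path):

        /* Read with O_DIRECT so the data is not also staged in the
         * kernel page cache (the double buffering mentioned in (1)).
         * Buffer, offset, and length must be aligned; 4096 is a
         * common safe alignment. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t blk = 4096;
            void *buf;
            if (posix_memalign(&buf, blk, blk) != 0)
                return 1;

            int fd = open("datafile", O_RDONLY | O_DIRECT);  /* placeholder path */
            if (fd < 0) { perror("open"); return 1; }

            ssize_t n = pread(fd, buf, blk, 0);  /* aligned, cache-bypassing read */
            if (n < 0)
                perror("pread");
            else
                printf("read %zd bytes directly\n", n);

            close(fd);
            free(buf);
            return 0;
        }
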
    • by nester ( 14407 )

      Doesn't x86 have address-space #s now? (Like other arches have had for decades.) If the working set for the TLB is close to its total size, then it wouldn't help much, but if it's a problem of TLB-invalidate on context switch, address space numbers allow the TLB and other virtually addressed caches to avoid flushing entries that might hit again when context is switched back.

      • by rlwinm ( 6158720 )
        A typical Postgres process shares a large number of page tables with other Postgres processes (the buffer cache and code pages). While I was quite a CPU architecture buff at one time, I haven't kept up with the changes to x86 for a long while - so this may very well help once it filters down to production equipment.
    • I know a lot of features in modern CPUs exist to fix issues like this. And that ARM-compatible CPUs are not as high-performing.

      Would this situation change on a simpler CPU? Say, with a smaller TLB and smaller caches in general.

      There are companies making hundreds of ARM-compatible cores (per socket? per board?) for cloud and server companies. Having software that works well on a range of devices is useful.

      And maybe there are other issues with the processes, like making debugging harder? Or scaring away new open source...

      • by rlwinm ( 6158720 )
        TLB churn is something that is also system dependent. A modern server is quite complex and modeling performance is difficult. The reality is that if you wanted to make HW improvements for workloads like Postgres then do what you can to streamline I/O.

        Almost no real-world Postgres databases fit in RAM - and because of ACID, even with synchronous_commit off, you will be doing writes unless you have a read-only database. So worrying about RAM (i.e. TLB) isn't as likely to have a good payoff as optimizing the I/O path.
      • by edwdig ( 47888 )

        I know a lot of features in modern CPUs exist to fix issues like this. And that ARM-compatible CPUs are not as high-performing.

        Would this situation change on a simpler CPU? Say, with a smaller TLB and smaller caches in general.

        On any database where the performance really matters, you're going to have far more data than RAM. You'll be churning through tons of data on a big query, so you'll be thrashing the TLB and cache regardless.

        I think the bigger issue is that Postgres will try to share data between processes when possible. Doing that adds a lot of overhead. You can't use direct pointers to the data, as the memory will be mapped to different addresses in each process. All your data accesses will have a layer of indirection added.

    • Ohhh... they read/write through the standard OS filesystem calls, and thereby, as you write, hit that cache as well as their own? Sounds like something they should fix a.s.a.p., and it shouldn't be that hard to implement. Libraries for direct I/O or async I/O, bypassing the OS file cache, exist almost everywhere nowadays, right?
      I was thinking of switching to PostgreSQL, but I understand I need to investigate this a bit further.

      • by rlwinm ( 6158720 )
        So they have been talking about moving to direct I/O for a while. It's a bit trickier than it sounds once you take concurrency into account. I have been using Postgres for real-world systems and am quite happy with it.

        PostgreSQL gets so many things right. Even updating the system catalog is ACID, which is nice. PostgreSQL also handles dates well. In fact it handles many data types really, really well - far better than MySQL.

        I also have found that when you start piling on the load (real-world OLTP) Postgres holds up...
        • If your queries are dominated by primary key lookups MySQL will probably win.

          Why would you think that? My tables have hundreds of millions of rows, and I can construct a query that does direct primary key lookups for multiple columns (they usually have nine or ten columns) using subqueries, and pull in targeted results in three to five milliseconds.

          • by rlwinm ( 6158720 )
            I meant "queries" in the more general sense. MySQL doesn't have quite the write overhead that Postgres does for updates. Then again, MySQL doesn't have good compliance with the SQL standard either. At least from the last time I checked its secondary indexes don't use a ctid type structure but rather point to the primary key of the row.

            This means for updates on tables with lots of secondary indexes (sometimes you just need those) you get far more writes in Postgres than MySQL.

            In case my posts here didn't...
  • by CAIMLAS ( 41445 ) on Monday June 19, 2023 @01:06PM (#63615866)

    The crux of the issue is that postgres uses a more expensive multi-process model instead of kernel threading, duplicating the memory resources used by each process/connection and also requiring a non-trivial amount of IPC which would not otherwise happen with lighter threads.

    Some OSes have a concept called "lightweight threads" which sort of helps with this condition. I'm somewhat surprised postgres still uses this model, honestly - it's exceedingly long in the tooth. What I've typically seen is that even applications with this kind of model will use multiple worker threads within each process, at least, to spread resource use across cores more effectively.

    For reference, the Linux kernel treats both threads and processes roughly the same (as "tasks"), and the fundamental problem is simply in how postgres communicates.

    From kernel docs -

    "Linux uses a different implementation for threads. The basic unit is called a task (hence the struct task_struct) and it is used for both tasks and processes. Instead of embedding resources in the task structure it has pointers to these resources.

    Thus, if two threads are in the same process, they will point to the same resource structure instance. If two threads are in different processes, they will point to different resource structure instances."

    https://linux-kernel-labs.github.io/refs/heads/master/lectures/processes.html

    Effectively, this has interesting implications for administration, as each thread has the same kinds of limits a process normally would (open file limits, for instance). You can get a better accounting of what's actually running on your system by using `ps -eLf` - and of course, /proc/ contains all the appropriate info as well.
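
    To see the thread-equals-task behavior directly, here's a small C sketch (mine, not from the kernel docs): spawn a thread, then count entries under /proc/self/task - the same information `ps -eLf` reports.

        /* Count this process's tasks before and after creating a
         * thread; on Linux each thread appears under /proc/self/task. */
        #include <dirent.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static void *idle_thread(void *arg)
        {
            (void)arg;
            pause();                /* just exist so it can be counted */
            return NULL;
        }

        static int count_tasks(void)
        {
            DIR *d = opendir("/proc/self/task");
            struct dirent *e;
            int n = 0;
            if (!d)
                return -1;
            while ((e = readdir(d)) != NULL)
                if (e->d_name[0] != '.')
                    n++;
            closedir(d);
            return n;
        }

        int main(void)                   /* build with -lpthread */
        {
            printf("tasks before: %d\n", count_tasks());   /* 1 */
            pthread_t t;
            pthread_create(&t, NULL, idle_thread, NULL);
            printf("tasks after:  %d\n", count_tasks());   /* 2 */
            return 0;
        }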

    • by rlwinm ( 6158720 )
      Actually, the memory overhead is pretty much equal. Postgres processes fork from a common parent which has already preloaded lots of data. Everything else is done in a shared buffer pool. Since fork() has copy-on-write semantics, once a server process has run for a while the number of COW page faults flatlines.

      And the typical use case of Postgres involves some sort of connection pooling - either external, like PgBouncer, or within your application code (by maintaining a pool of connections at the ready).
      • by CAIMLAS ( 41445 )

        All those things might be true in the specific context of Postgres, which would be curious, because it's definitely not the norm for that to be the case with process vs. thread models.

        The IPC overhead, though... that's definitely an added expense, and it's going to get more expensive/less performant disproportionately to the work being done as load increases, particularly with our ever-expanding many-core NUMA systems.

        • by rlwinm ( 6158720 )
          So the IPC overhead is pretty low, as most of it involves data in shared memory. The way Postgres works is that at startup the postmaster process sets up some core data structures and then allocates (via mmap, typically) the shared buffers. The interesting bit is that the shared buffers live in a shared VMA. When the postmaster forks to handle a connection, those mappings to the shared memory are kept in the child process. The child process can read/write the buffer cache without much IPC. It also has locks, obviously.
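
          A minimal C sketch of that mechanism (my illustration, not Postgres source): a MAP_SHARED mapping created before fork() is inherited by the child, so both processes touch the very same pages with no data copied between them.

              /* Stand-in for the postmaster's shared buffers: a shared
               * anonymous mapping survives fork() and stays shared. */
              #include <stdio.h>
              #include <sys/mman.h>
              #include <sys/wait.h>
              #include <unistd.h>

              int main(void)
              {
                  int *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                  if (buf == MAP_FAILED)
                      return 1;
                  buf[0] = 0;

                  pid_t pid = fork();
                  if (pid == 0) {          /* "backend" child writes a buffer */
                      buf[0] = 42;
                      _exit(0);
                  }
                  waitpid(pid, NULL, 0);
                  printf("parent sees %d\n", buf[0]);  /* 42: same memory */
                  return 0;
              }
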
    • I'm somewhat surprised postgres still uses this model, honestly - it's exceedingly long in the tooth.

      After reading the whole article, the issue seems to me fundamentally a project-management and staffing one. Sure, you can posit rewriting the process/memory management using threads in shared memory, like you probably would if writing a new server from scratch, but how do you avoid breaking stuff right and left (2,000 global variables, etc.) while still supporting all old installs and shipping regular releases with new features?

      A fork of the current system, which continues to be maintained, while the new version is developed and...

      • by rlwinm ( 6158720 )
        I am not sure what problem moving to threads will solve. The only case I can think of is parallel query - you would shuffle a little less data between workers. To be honest, I turn parallel query off to get more predictable utilization. Most of my queries are short-lived and don't have enough tuples to make parallel query all that useful.

        The performance gains are likely to be quite modest. I say this as someone pretty familiar with the Postgres code base. I'm certainly a fan of threading in many...
  • This is way over my head
  • I would like to see them go the other way instead. Instead of ditching the process model in favor of threads, I would like to see this process model expanded to work over the network. Several hosts glued together and behaving as one giant database.
    Maybe this has to happen at the Linux kernel level, but I am not quite sure.
  • That's good and all but the important question is:

    Does it have web scale? [youtube.com] =P

  • As it is not about getting rich on some hype before the victims/customers notice what something actually can and cannot do, FOSS typically moves at a rather sedate pace, and that is how it should be for all solid engineering.
