Removing the Big Kernel Lock
Corrado writes "There is a big discussion going on over removing a bit of non-preemptable code from the Linux kernel. 'As some of the latency junkies on lkml already know, commit 8e3e076 in v2.6.26-rc2 removed the preemptable BKL feature and made the Big Kernel Lock a spinlock and thus turned it into non-preemptable code again. "This commit returned the BKL code to the 2.6.7 state of affairs in essence," began Ingo Molnar. He noted that this had a very negative effect on the real time kernel efforts, adding that Linux creator Linus Torvalds indicated the only acceptable way forward was to completely remove the BKL.'"
I don't understand (Score:2)
RTFAing says that temporarily forking the kernel with a branch dedicated to experimenting with the BKL is being considered. Maybe they can call it 2.7...
Re:I don't understand (Score:5, Interesting)
Why did they remove the preemptable BKL?
I'm not a kernel developer, but I'd say it's because there's widespread belief that the preemptable BKL is "the wrong way forward". Statements like these lead me to believe this:
In any large software project there's always a path to get from where you are, to where you want to be. It sounds like any version of BKL is considered ugly and causes problems, and patching it just won't work. In other words, fixing this part of the kernel isn't really possible, so they need to start over and change any code that relies on it to rely on something different entirely.
Bad interaction with the generic semaphores. (Score:5, Informative)
The consensus was to not revert the generic semaphore patch, but to fix it another way. Linus decided on a path that will make people focus on removing the BKL rather than a workaround in the generic semaphore code. Also, Linus doesn't think that the latency of the non-preemptable BKL is too bad [2].
[1] http://linux.derkeiler.com/Mailing-Lists/Kernel/2008-05/msg03526.html
[2] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=8e3e076c5a78519a9f64cd384e8f18bc21882ce0
Re:I don't understand (Score:5, Funny)
We need MACRO kernels! (Score:2, Funny)
The best course of action would be to redesign the Linux kernel from scratch and this time integrate all possible drivers. Hardware support would be a lot easier!
I would even go so far as to suggest integrating the most important server tools into the kernel to decrease latency. Why not integrate Apache? You could even integrate the shell for added responsiveness!
Linus has demonstrated that micro k
Re:We need MACRO kernels! (Score:4, Funny)
Re:I don't understand (Score:5, Interesting)
The monolithic design is slowly forming a focal point in performance: something has to do a lot of locked switching - if SMP machines could do what they do best and handle IRQs and threads concurrently without waiting for a lock (they're better spent sending/receiving lockless messages), life would be easier on the scalability gurus.
Re:I don't understand (Score:5, Informative)
Rather than merge the old locking code back in and reintroduce the many different locking primitives they had, someone decided to simply re-enable the BKL - the downside of which is they have to either fix the regression caused by the simpler semaphore code (not likely - it's very simple and clean, everyone's favourite pet/child) or remove instances where the semaphore code is likely to be called (the BKL).
Matt
LWN has a summary of the issue (Score:4, Informative)
Re: (Score:3, Interesting)
New semaphore code was introduced that simplified locking. Unfortunately in many kernel situations it's proven to affect performance by around 40% - which isn't just considerable, it's disastrous. Rather than merge the old locking code back in and reintroduce the many different locking primitives they had, someone decided to simply re-enable the BKL - the downside of which is they have to either fix the regression caused by the simpler semaphore code (not likely - it's very simple and clean, everyone's favourite pet/child) or remove instances where the semaphore code is likely to be called (the BKL). Matt
Couldn't they just ask the real-time developers to kindly find a real-time kernel to work on? Why try to make a non-preemptible kernel preemptible for the sake of real-time, if it affects non-real-time performance?
(Performance != Speed) // in an RT system (Score:5, Insightful)
Re: (Score:2, Interesting)
Re:I don't understand (Score:5, Informative)
For example, you will *know* your PC will never become utterly unusable to the point it's unsafe with your data. ie: while it's handling many IO operations (say you're being DDoSed whilst transcoding a DVD and flossing with a SATA cable), unless you run completely out of system memory. Nothing should run away that wasn't designed to. This stems from the predictability of code execution times that pre-empting offers.
The predictability allows devices to make guarantees. For example, if your mouse is aware it's going to get a time slice, at worst, every 100ms, at least it'll be doing something every 100ms and you gain a visually responsive mouse (aye, it's not as great as it could be). The non-preempt side of life has your CPU tied up doing work that sits inside a BKL - ie: dealing with a character device or ioctl - so your mouse could be waiting 500ms or 1000ms before updating its position: giving you the impression your PC is dying.
Code that is stuck inside the BKL isn't pre-emptable (you *must* wait for it to finish) - and there's a lot of it that does a lot of regular stuff. Often this will hold up other cores if you've got a cooperative multi-threaded program. The net effect is a slow PC.
RT systems have a different use: they want a guarantee that something is unable to ever delay the system, in particular interrupts. The BKL allows code to take a time slice and run away with it, because you thought it was very important and wrapped it in [un]lock_kernel. This then delays IRQs (IRQs can't run until the lock is released - at least, I'd like to believe second-level IRQs can't run, I'm unsure of the specifics) which will delay data coming in and out of your PC - file, disk and display buffers suffer: they fill and you start to lose data because there's nothing dealing with it.
Preempt kernels are good
(viva the Desktop)
Matt
Re: (Score:2)
Couldn't they just ask the real-time developers to kindly find a real-time kernel to work on? Why try to make a non-preemptible kernel preemptible for the sake of real-time, if it affects non-real-time performance?
Running a non-real-time kernel is like asking Eddie the shipboard computer for tea.
If you ever want to put the Linux kernel into anything other than your standard IT server - an airplane, a missile or any weapons system, for example - it is going to HAVE to be a real-time kernel.
Re: (Score:2)
Re: (Score:2)
IM's patch, amongst a host of other things, adds down(&kernel_sem) to lock_kernel (and up() to unlock_kernel), making it a true lock.
It's interesting to note that compare-and-swap (CAS/CAS2) instructions have been around in many architectures for nearly a decade (longer still for some). It's surprising to se
Re: (Score:3, Interesting)
The new semaphore implementation gives way in a very fair manner, proving that sometimes you can be too fair and Kon was right from the get-go (a desktop user's opinion, I don't mean to bait). If something didn't explicitly ask for a time slice, ie: a kernel section was preempted by an interrupt and then queued back in normally, then it doesn't get one until the p
Re:I don't understand (Score:5, Informative)
Re:I don't understand (Score:4, Informative)
Duh, I meant mutex/semaphore. And Linux semaphores have become slower, while mutexes are still as fast as the old semaphores were, as #23446368 says. The options were to move from a semaphore to mutexes or spinlocks, but Linus chose spinlocks because the RT/low-latency crowd will notice it and will try to remove the remaining BKL users.
Re: (Score:2)
Comment removed (Score:5, Interesting)
Comment removed (Score:5, Interesting)
Re: (Score:3)
Re: (Score:2, Informative)
Re:I don't understand (Score:4, Insightful)
If the BKL code is rarely used then the general usage performance impact is minimal and the efficiency of a spinlock vs mutex is irrelevant. If this is not true then saying it is rarely used is misleading.
However for real-time use you either do or don't meet a given worst case latency spec - the fact that a glitch only rarely happens is of little comfort.
It seems like it should have been a no-brainer to leave the pre-emptable code in for the time being. If there's a clean way to redesign the lock out altogether then great, but that should be a separate issue.
BKL is again a big source of latency (Score:5, Informative)
Unfortunately this caused a 40% regression in the AIM7 benchmark [google.com]. The BKL was now a (slower) semaphore and the high lock contention on it was made worse by its ability to be preempted. As the ability to build a kernel without BKL preemption had been removed [google.com] Linus decided that the BKL preemption would go. Ingo suggested semaphore gymnastics to try and recover performance but Linus didn't like this idea.
As the BKL is no longer preemptible [google.com] it is now a big source of latency (since it can no longer be interrupted). People still want low latencies (that's why they made the BKL preemptible in the first place) so they took the only option left and started work to get rid of the BKL.
(Bah, half a dozen other people have replied in the time it's taken me to edit and re-edit this. Oh well...)
Looks like "Worse is Better" all over (Score:5, Insightful)
That's fine, but once you reach maturity you should be trying to do the "right thing" (the exact opposite.) And the Linux kernel has reached maturity for quite a while now.
I think Linus is right on this.
Re: (Score:2, Insightful)
RTOS users need a non-interruptible lock.
Nope, they need a guaranteed number of time slices for their RT application(s) to run within constraints. Which they can't get if the kernel, not the applications, holds a lock for arbitrary amounts of time.
Desktop and server users' need conflict with the RTOS users' needs.
No they are orthogonal. In fact no BKL means more robust systems under heavy loads.
Unfortunately, the "right thing" is undesirable, because the added indirection will substantially reduce performance for everybody.
In fact none of the people in the discussion seem to think that. They just think it is a huge amount of work (and they should know far better than me, but for what it's worth I agree)
The problem? Changing legacy code use
Punchline (Score:5, Informative)
Assuming everything is stable and correct, the next step is to break the BKL into locks with finer granularity so that the BKL can go the way of the dodo.
What does this mean for us mere mortals? (Score:2)
I know (or think I know) that low latency is important for audio work, and I know people who do a lot of audio work under Linux. Should I be giving them a heads-up to avoid upgrading their kernel until this gets fixed, or should I start looking for unofficial, special low-latency versions of the kernel to recommend to them?
It means pain for long term gain (Score:2)
This work won't go into 2.6.26 (it's too late). It may not even go into 2.6.27 (it's been done outside of the mainline tree). This may mean that until it is
Re: (Score:2)
I'm sure all the major distros will be watching what's happening here and making sure it doesn't affect their end users. From what I've read the changes mean better performance at the expense of latency, but they're working to eliminate the BKL altogether.
By the time your distro's next kernel upgrade comes around I'm sure none of this will matter.
Re: (Score:2)
Re: (Score:2)
If they DO run real-time audio processing applications, they most likely have their kernel specifically configured for it, and won't get updates until developers make sure that there is no additional latency introduced.
Keep these on Front Page... (Score:5, Insightful)
This is the type of stuff that needs to be kept in the news, as the people who post here often have no understanding of it, and the ones that do have the opportunity to explain this stuff, bringing everyone up to a better level of understanding.
Maybe if we did this, real discussions about the designs and benefits of all technologies could be debated and referenced accurately. Or even, dare I say, NT won't have people go ape when someone refers to a good aspect of its kernel design.
Sounds like the Linux kernel needs some tests... (Score:3, Informative)
Wow. It sounds like it's about time someone on the kernel team reads Working Effectively With Legacy Code [amazon.com] by Michael Feathers.
I'm a software developer myself on a very large project, and this book has absolutely revolutionized what I do. Having things break silently in the kernel is a sure sign that dependency problems exist in the code, and most of this book is about ways to break dependencies effectively and get code under test. And that's the other thing... if they aren't writing tests for everything they do, then even the code they write today is legacy code. Code without tests can't be easily checked for correctness when a change is made, can fail silently, and can't be understood as easily.
That's what this book is about, and if things in the kernel have deteriorated to such a state then they need to swallow their pride and take a look at resources designed to cope with this. I know they are all uber-coders in many respects, but everyone has something they can improve on, and from the description they give of their own code, this is their area for improving.
Re:Sounds like the Linux kernel needs some tests.. (Score:5, Interesting)
In other words:
TESTS DON'T VERIFY THAT YOUR CODE IS NOT BUGGY. YOU VERIFY THAT YOUR CODE ISN'T BUGGY.
Re:Sounds like the Linux kernel needs some tests.. (Score:2)
Also, I wonder if the Linux kernel can carry on expanding or if it's time for the form of the kernel to change.
I know people like the monolithic kernel, but lack of change does not promote new techniques. Doesn't have to be a microkernel or have to fit in any existing box.
Re:Sounds like the Linux kernel needs some tests.. (Score:5, Insightful)
You talk about unit testing, but how exactly are you going to unit test multi-threading issues? This is not some simple problem that you can run a pass/fail test against. These kinds of things can really only be tested by analysis to prove it can't fail, or extensive fuzz testing to get it "good enough".
Re: (Score:2)
As for your comment about them "knowing better", I've worked on a multi-million line project. When your lines of code reaches that sort of size the issues faced for someone on ten million are pretty much the same for people on twenty million loc projects. If you RTFA you'll see it was a series of system tests which demonstrated the problem in the first place. Although the fact the kernel doesn't seem to have a standardised set of sys
Re: (Score:2)
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
As for your comment about them "knowing better", I've worked on a multi-million line project. When your lines of code reaches that sort of size the issues faced for someone on ten million are pretty much the same for people on twenty million loc projects.
All lines of code are not equal.
There's a huge difference between typical application code and system code. Very little application code is as performance-sensitive as system code, because the goal of the system code is to use as little time as possible (consistent with some other goals), to make as much time as possible available to run application code.
OS code is performance-tuned to a degree that application code almost never is, and that focus on performance results in complexity that isn't well-
Tough to test drivers for hardware you don't have (Score:5, Informative)
This is not to say that there need to be tests for things that can be caught at compile time or run time regardless of hardware but there is only so far you can take it.
It's not like the kernel doesn't have any testing done on it, though. There's the Linux Test Project [sourceforge.net] which seems to test new kernels nightly. If you ever look in the kernel hacking menu of the kernel configuration you will see tests ranging from Ingo Molnar's lock dependency tester [mjmwired.net] (which checks at run time that locks are taken in the right order), memory poisoning, spurious IRQs at un/registration time, RCU torture testing, softlockup testing, stack overflow checking, marking parts of the kernel read-only, changing page attributes every 30 seconds... Couple that with people like Coverity [coverity.com] reporting static analysis checks on the code. Tools like sparse [kernel.org] have been developed to do some of the static checks on kernel developer machines while they are building the code.
But this is not enough. Bugs STILL get through and there are still no go areas of code. If you've got the skills to write tests for the Linux kernel PLEASE do! Even having more people testing and reporting issues with the latest releases of the kernel would also help. It's only going to get more buggy without help...
This is why monolithic kernels do real-time badly (Score:5, Insightful)
This task is not easy at all. 12 years after Linux has been converted to an SMP OS we still have 1300+ legacy BKL using sites. There are 400+ lock_kernel() critical sections and 800+ ioctls. They are spread out across rather difficult areas of often legacy code that few people understand and few people dare to touch.
This is where microkernels win. When almost everything is in a user process, you don't have this problem.
Within QNX, which really is a microkernel, almost everything is preemptable. All the kernel does is pass messages, manage memory, and dispatch the CPUs. All these operations either have a hard upper bound in how long they can take (a few microseconds), or are preemptable. Real time engineers run tests where interrupts are triggered at some huge rate from an external oscillator, and when the high priority process handling the interrupt gets control, it sends a signal to an output port. The time delay between the events is recorded with a logic analyzer. You can do this with QNX while running a background load, and you won't see unexpected delays. Preemption really works. I've seen complaints because one in a billion interrupts was delayed 12 microseconds, and that problem was quickly fixed.
As the number of CPUs increases, microkernels may win out. Locking contention becomes more of a problem for spinlock-based systems as the number of CPUs increases. You have to work really hard to fix this in monolithic kernels, and any badly coded driver can make overall system latency worse.
So when can we run QNX-Ubntu? (Score:2, Funny)
Just asking . . .
Re: (Score:3, Interesting)
You can actually run X-windows on QNX, although nobody does. QNX has its own GUI system called Photon, which is oriented towards things one might want in an industrial control panel or an entertainment system, like meters, dials, and graphs.
QNX the OS is fine; it's QNX the company that's difficult to deal with. On the other hand, who else has gone up against Microsoft on x86 for 20 years and is still alive?
This same type Big Giant Lock DragonFlyBSD .... (Score:2)
If so, then perhaps what DragonFlyBSD is doing to remove it can be helpful in removing it in Linux (the "BSD" could be dropped at this point, as it's only relevant to the history line).
Re: (Score:2)
I think DragonFly's links to the BSD world are slightly stronger than you seem to think!
Re: (Score:2)
In regards to the Big Giant Lock...
DragonFly Projects
http://wiki.dragonflybsd.org/index.cgi/GoogleSoC2008 [dragonflybsd.org]
Extend Multi-Processing (MP) support
* Robert Luciani, mentored by Simon Schubert
* Back in 2003 when DragonFly was born, the first subsystem to be implemented was the LWKT. The reduction in complexity achieved
Re: (Score:3, Informative)
The break from BSD is in regards to code resemblance. The code of DragonFly has changed enough that it doesn't resemble BSD so much anymore.
You might have been able to say something was "BSD" even 5 years ago, but I think you would have a lot more trouble saying that now. The family is more philosophical than architectural now. How close are the kernels between OpenBSD and FreeBSD for instance? My guess is if you looked back to FreeBSD4, they would be far more similar than now. Likewise, if you just look at the Dragonfly change logs, they frequently import code directly from the other BSDs--I believe the ATA code is one recent example.
Dragonfl
Layman's translation? (Score:2)
Layman's summary in terms of user experience (Score:3, Informative)
For those who enable the option this should reduce the chance of a hardware buffer not being filled in time (so audio is less likely to skip in demanding environments). If you are an audio recording person or need VERY fast responses (less than hundredths of a second) all the time, your experience should improve. If you run VERY big workloads that have lots of pieces t
Re: (Score:2)
They're trying to make your mouse pointer move smoothly, your audio never skip, your FPS game never miss a beat and your Linux-driven coffee machine never burn a bean, even when there is some process opening a random device in the background.
Why testing isn't enough (Score:5, Informative)
Race conditions are mostly eliminated by design, not by testing. Testing will find the most egregious ones but the rest cause bizarre and hard-to-trace symptoms that usually end up with someone fixing them by reasoning about them. "Hmm" you think to yourself "that sounds like a race problem. Wonder where it might be?" and thinking about it, looking at the code, inventing scenarios that might trigger a race; that's how you find them.
Re:Fascinating. (Score:5, Funny)
Re: (Score:3, Insightful)
Hey, I consider myself a code junky (and yes, even consider the issue of the BKL somewhat interesting), but I realize that this topic has about as much appeal to the average Slashdotter as mowing the lawn.
Re: (Score:3, Funny)
Keep off my lawn too, you pesky kernel hackers^WWkids
Re:Fascinating. (Score:5, Insightful)
Re: (Score:2)
s/somewhere else/Faux News/gi;
Fixed it for ya ;-)
Re: (Score:3, Insightful)
Hey, I consider myself a code junky (and yes, even consider the issue of the BKL somewhat interesting),
but I realize that this topic has about as much appeal to the average Slashdotter as mowing the lawn.
This topic is probably mainly of historical interest. (BKL used to be one of those bread-n-butter slashdot stories in the early days)
The funny thing is that the reply quality here is quite high for technical topics, but over time slashdot management has found that retarded political threads are much more popular.
Re:Translation? (Score:5, Funny)
Re:Translation? (Score:5, Informative)
Now, a group is trying to see if it is possible to weed out all the remaining uses of the BKL and replace them with localized locking for specific sections of the kernel. This is tricky, as there are side-effects of the BKL that are not always obvious.
Re: (Score:2)
I don't believe the -rt patch is expected to perform well as a drop-in replacement everywhere, or on systems that have management interrupts, but I think for people interested in real-time programming, the rt patch is a good starting point.
Re:Translation? (Score:5, Informative)
There would be thousands of such, and you'll probably never succeed in debugging it.
The approach suggested in the article is to replace the BKL by a true lock, then "pushing it down", which means understanding WHY that code want the BKL, and get smaller locks instead in subroutines.
For instance, one piece of code could take the BKL because it will change 3 data structures. You could then remove the BKL and, in the 3 parts of the code that change those 3 structures, use a finer-grained lock for each of them.
By iterating this way, you should always get a somewhat working kernel, and slowly kill the BKL.
Re:Translation? (Score:5, Informative)
No.
You know, like, remove that part of the code and code it all over again, see what is broken, and continue this way?
It's not that simple. When it comes to locking, there is no "part of the code" that can be replaced. Locking governs interaction between two pieces of code, sometimes two pieces that are very different but have some small thing in common.
Besides, the kernel is too big to just start throwing parts of it out and redoing them from scratch. It's much better to make incremental improvements, because then the people working on them will actual learn how to solve the problem. The BKL is not just a coding problem, but also a people and project management problem.
Re: (Score:3, Insightful)
Managed languages have their place (and in fact all my work now is w
Re:Translation? (Score:4, Insightful)
Keep dreaming ...
With all those processors, you'll want to be saving energy, so you'll be aiming to turn off individual processors until needed, and run the remaining processors at full load, so you'll still need a scheduler, locks, etc.
And yes, it's possible even today to use up more than 4 gig of ram and have to hit swap.
Re: (Score:2)
In 10 years? (the timespan you posited). Not going to happen.
Re: (Score:3, Interesting)
The answer is to allow sections of code to "lock" access for a brief duration -- "I'm working with this right now, don't anyone else touch it." Simple in theory, very difficult in concept.
Note that I'm speaking generically; I'm not an expert on the Linux kernel. Ideally, though, you want
Re:Translation? (Score:5, Informative)
The granularity is important because you want other threads (jobs) to be able to get something done. At some point there is this thing called a scheduler that assigns your thread to execute, because every job needs a CPU. You get to work until your time allotment is expired or you have to stop because something you need to continue is not available, because it's, say, "locked".
Think of this like working in a shop alongside someone else. You have one set of tools; you need a little screwdriver and a big one to do your work. The other guy needs the little screwdriver and a pair of pliers. You want him to put the screwdriver down while he is using the pliers so that you can use it if you need to. If he instead puts it in his breast pocket, you are going to have to wait to finish your job until he finishes his. Even though it's your turn at the workbench (CPU) you can't do anything with it because you don't have what you need. So all you can do is yield the rest of the time to the other task, and hope he finishes up soon.
In the kernel world this really short-circuits the work of the scheduler. It might want to give time to other threads, and it will, but they are going to just turn around and give that up, because whichever thread is holding the BKL is likely the only one who can actually do any work. As an end user this means something like: data gets read from your network card OK, but your sound keeps skipping.
The tricky part with more granular locks though is avoiding circular conditions, these can crash the system. Imagine: Job One needs resource A and B and has A locked, its waiting for B. Job Two needs B and C and has B locked and is waiting on C. Job Three needs A and C and has C locked and is waiting on A. Unless the system can detect this condition which is hard to do in many cases none of these threads will ever be able to run. The kernel contributors likely have some work ahead to eliminate the BKL and not cause these types of problems.
Re: (Score:2, Interesting)
Re:Translation? (Score:4, Interesting)
This is one approach to deadlocks; it would fall generally under "avoidance".
The problem is that if you have any serious contention over the resources, it is entirely possible that the process will _never_ get the resources (because one of them always gets snatched up before another gets released, so all n resources are never available at once to the requesting process). This leads to starvation and general sadness.
If the system has minimal contention (so the normal case is that all three resources are unclaimed) and resources are held very briefly (so if a resource is taken it is likely to be released before another is taken, anyway) then it may work. In real systems these are hard properties to guarantee.
Also, the scheme requires a process to know in advance which locks it will need. A lot of algorithms may discover this on the fly (e.g. if you are traversing a data structure), which becomes a problem. The best you could hope to do is to lock aggressively--taking everything you might need--but this is ugly, and would tend to violate the conditions above (locking everything will lead to contention, locking everything in advance will lead to holding locks for a long time). Alternatively, if you discover that you need a new lock that you don't yet have, you could give up all the locks you do have and then try to lock again (with the new lock added to the set). This is also ugly and increases the chance of starvation (since now you need to lock a bunch of resources several times). Additionally, since you have to unlock in the middle, the algorithm becomes much more complicated. For example, when you discover you need a new lock, you must put the world in a consistent state before unlocking. And when you re-lock, you must check to make sure that the world hasn't been modified under your feet (which is entirely possible, and may very well cause you to need a still-different lock).
Basically it doesn't work that well.
Re: (Score:3, Insightful)
I'll respond because it is fantastic to see new people thinking about these issues. But I must agree with twizmer on this - grabbing multiple resources might solve the problem, but it is very clumsy. Some resources (eg storage) may take milliseconds to complete, whereas others (eg graphics) might take only microseconds. Holding up the fast ones while the slow ones complete is very undesirable (for all the reasons twizmer gives).
There are techniques that are used for problems like deadlocks and starvation
Re:Translation? (Score:5, Informative)
Linux is a preemptive multi-tasking kernel. What this means is that a hardware interrupt like a keyboard click or the system timer will interrupt whatever is currently running on the CPU, and an interrupt handler in the kernel starts running code. In order to make sure that all the states of the kernel are consistent (ie: not corrupt), the different parts of the kernel are supposed to lock the data that they are using or modifying (ie: readlock or writelock) in case another code path gets run at the same time trying to modify the same data. It becomes even more important in a multi-CPU environment where locks have to be atomic (happen at the same time on all CPUs). So what you are supposed to do is only lock the resources you currently need (a file system driver would only lock parts of the filesystem, not a character device). Because some programmers are lazy, or not sure what they are doing, they just use the big kernel lock, which locks pretty much everything in the kernel. This is bad for multi-tasking and multi-processing because it means you can only have one codepath using the lock at a time.
Note: it's been a while since I've done kernel work, so I'm sure this is not 100% true, but hope it helps you understand.
Re: (Score:2)
There should be very few things which lock up an entire kernel, even for a few nanoseconds.
Re:Translation? (Score:4, Funny)
Linux is a preemptive multi-tasking kernel. What this means is that a hardware interrupt like a keyboard click or the system timer will interrupt whatever is currently running on the CPU, and an interrupt handler in the kernel starts running code. In order to make sure that all the states of the kernel are consistent (ie: not corrupt), the different parts of the kernel are supposed to lock the data that they are using or modifying (ie, readlock or writelock) in case another code path gets run at the same time trying to modify the same data. It becomes even more important in a multi-cpu environment where locks have to be atomic (happen at the same time on all CPUs). So what you are supposed to do is only lock the resources you currently need (a file system drivers would only lock parts of the filesystem, not a character device). Because some programmers are lazy, or not sure what they are doing, they just use the big kernel lock which locks pretty much everything in the kernel. This is bad for multi-tasking and multi-processing because it means you can only have one codepath using the lock at a time.
Re: (Score:2)
Wish I had some mod points today. Hilarious.
Re: (Score:3, Funny)
And you learn even more when you write your own (virtual) device driver which crashes your kernel and renders it unbootable :)
Not that I know anyone who has done so... or at least I won't admit it!
Re: (Score:2)
And the reasons for each CPU wishing access to the kernel might be for a completely different reason. One CPU migh
Re: (Score:2)
When another process releases the lock on that resource, the first process in that queue is restarted.
For a single atomic multi-resource lock to work, the process would have to be placed on multiple queues. Whenever one resource was released, the process would still be unable to continue because it did not hold the other resources. It would be permanently asleep.
Re:Linux? (Score:4, Informative)
The future.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
1%? (Score:2, Informative)
Re:Linux? (Score:4, Funny)
Re: (Score:2, Funny)
Re:Linux? (Score:4, Funny)
On second thought, disregard that... I just remembered that Emacs has a Vi emulation mode!
Re: (Score:2)
In fact, I've got the code right here.
What, you want to see it?
Oh, look over there, a flying car.
Re: (Score:2)
I appreciate the work the kernel devs do (I'm using their work right now), but the mere act of working on the kernel doesn't make them inherently more intelligent or smarter than anybody else.
The OP has a point. This is a pretty big design issue (witness all the things it's screwing up), and should've been addressed a long time ago.
Re: (Score:2)
It is a design issue, but it's not that big. Most of the kernel has granular locks; this really only affects the goal of having an entirely realtime kernel, so while it is important, it's not that important. And I'm not sure what you mean by "a long time ago", as having an RT kernel is a fairly recent goal.
Re: (Score:2)
Re: (Score:2)
It was ALL LOCKED.
Now, we're trying to UNLOCK it. See? Locking semantics are tricky.
Re: (Score:3, Interesting)
Time passes, technology changes, and now it's become urgent to deal with this, so now it's being dealt with.
One should, perhaps, wonder what currently unreasonable problem should actually start being addressed RIGHT NOW!! The things I ca
ext4 (Score:2)
A while ago I read this article: LWN article about ext4 fs [lwn.net]
and your comment reminded me of it. I think it's very impressive that some people can think far ahead.
Re: (Score:2)
He has a point. All of this stuff in Solaris, for example, was sorted out in Solaris 2.7 which came out well over a decade ago.
Linux is great, but its development is weird. Remember all the problems in 2.4 that didn't get sorted out until about 2.4.23? Then there's 2.6, which didn't become usable until 2.6.13 or so.
In my very humble opinion, there should be a 2.7.x development branch for these sorts of experiments. But I'm not Linus, and I suppose I should write my own damned kernel instead of complaining.
Re: (Score:2, Interesting)
Re: (Score:2)
All of this stuff in Solaris, for example, was sorted out in Solaris 2.7 which came out well over a decade ago.
Which reminds me of Alan Cox's trollish remark that Sun should drop work on Solaris and support Linux instead. It might be easier to add the nice stuff from Linux to Solaris than to clean up the mess with the BKL. IIRC, Sun had been supporting SMP machines since before Linus started on Linux, and getting SMP support done right has been a much higher priority for the Solaris developers than for the Linux developers.
Re:Interesting (Score:5, Funny)
Re: (Score:2)