Protothreads and Other Wicked C Tricks
lwb writes "For those of you interested in interesting hard-core C programming
tricks: Adam Dunkels' protothreads library
implements an unusually lightweight type of threads. Protothreads are
not real threads, but rather something in between an event-driven
state machine and regular threads. But they are implemented in 100%
portable ANSI C and with an interesting but quite unintuitive use of the switch/case
construct. The same trick has previously been used by Simon Tatham to implement
coroutines
in C. The trick was originally invented by Tom Duff and dubbed Duff's
device. You either love it or you hate it!"
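For the curious, the flavor of the trick is roughly this (a hand-rolled sketch of the switch/__LINE__ "local continuation" idea, not the actual protothreads or Tatham code; all macro and function names here are made up):

    #include <stdio.h>

    /* the switch/__LINE__ "local continuation" trick, hand-rolled */
    #define CR_BEGIN(s)  switch (s) { case 0:
    #define CR_YIELD(s)  do { (s) = __LINE__; return 0; case __LINE__:; } while (0)
    #define CR_END(s)    } return 1

    /* a task that "runs" across several calls; note that its working variable
     * lives outside the function, since automatic locals would not survive */
    static int tick_task(int *state, int *i)
    {
        CR_BEGIN(*state);
        for (*i = 0; *i < 3; (*i)++) {
            printf("tick %d\n", *i);
            CR_YIELD(*state);        /* return now, resume right here next call */
        }
        CR_END(*state);
    }

    int main(void)
    {
        int state = 0, i = 0;
        while (!tick_task(&state, &i))
            ;                        /* a trivial "scheduler": run until done */
        return 0;
    }

Each call to tick_task() resumes where the previous one left off, because the switch jumps straight to the case label recorded at the last yield.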
Looks pretty cool (Score:5, Interesting)
I guess the idea is it's extremely portable. (Score:5, Informative)
If that's what folks are looking for, another option is the tasks added to LibGG a while back. Tradeoffs either way -- LibGG's tasks require at least C signals (but will use pthreads or Windows threads if detected at compile time), whereas this can be used in OS-less firmware. But on the positive side you can use switch() in LibGG tasks -- what
you can't use are a lot of non-MT-safe system calls. It's an OK abstraction but of course there are so very many ways to accidentally ruin portability that it is far from foolproof.
http://www.ggi-project.org/documentation/libgg/1.
Re:I guess the idea is it's extremely portable. (Score:5, Insightful)
I read his paper where he said "writing an event-driven system is hard". I guess he has never heard of using Finite State Automata for the design? State machines are very simple to program. An event-driven system is not at all hard to write, although you often do have to have some deep hardware and/or processor knowledge to do it well. I wrote many of them in the 1980's when I did embedded C code for DOD work, although I have not done so in quite a few years. Once Ada came along, everyone abandoned C as too obtuse for embedded work for the DOD. I once did benchmarks that showed decent C code without strong optimization outperformed Ada code, but C was dead already in their minds. I'm glad to see some folks are still interested in it on the commercial side of programming. After all, we can't write everything in Java.
Re:I guess the idea is it's extremely portable. (Score:5, Informative)
The argument that Rob Pike makes in A Concurrent Window System [swtch.com] and with Luca Cardelli in Squeak: a Language for Communicating with Mice [microsoft.com] is that many of the event systems and associated state machines that we write can be much simplified by treating input multiplexing, and thus coroutine-like structures, as language primitives.
This work follows directly from Hoare's Communicating Sequential Processes - a good summary can be found here [swtch.com]. Working with CSP even a little has convinced me of how much easier so many systems tasks are in this framework than in the massive state-machine/event-loop world.
Re:I guess the idea is it's extremely portable. (Score:2)
Wow, I know this one from first-hand experience. We use a "graphical" programming language called LabVIEW where I work, and I was tasked with the maintenance of a software program that was one giant state machine. Let me tell you, it was almost impossible to tell where the program was and where it was going half the time.
Re:I guess the idea is it's extremely portable. (Score:2)
Never ascribe to a programming paradigm that which can be adequately explained by programmer stupidity ;).
Lambda Calculus? (Score:2)
Re:I guess the idea is it's extremely portable. (Score:4, Interesting)
My experiences contradict your statement. State machines are both easy to implement, and easy to debug, if you do it the right way. I have seen many entirely wrong implementations, including one where you can go from any of about two dozen "states" to any other. I have seen some that just switch states when they feel like it, or switch states based on complex decisions, which makes debugging difficult. Put another way, you can make a "state machine" degenerate into something else, and nullify its benefits, if you refuse to follow the rules.
A well-implemented state machine has an important characteristic: it is clear to see why you are where you are. This means that state transitions are checked (against unexpected events) and traced, so debugging the machine is literally a matter of reading a log that looks like this:
in state 0, received event A so went to state 2
in state 2, received event B so went to state 1
in state 1, received event C so went to state 5
in state 5, ignoring event A
in state 5, received unexpected event C
and so on. In this particular example, the question to answer is why we're not handling event C properly in state 5, or why we went to state 5 in the first place. Either should be pretty obvious when you consult the original design. The fix is likewise obvious. Figuring out state machines, in my experience, has always been easier than figuring out multi-threaded code.
This isn't to say that all programs should be implemented as a state machine. Simple Unix-style pipe programs, for instance, are generally unsuitable. If you don't know how to design a state machine properly, it's also going to be unsuitable.
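To make the parent's point concrete, here is a minimal sketch of a checked, traced, table-driven state machine that produces exactly the kind of log shown above (states and events are made up for illustration):

    #include <stdio.h>

    enum { NSTATES = 6, NEVENTS = 3 };          /* states 0..5, events A, B, C */

    /* next_state[state][event]: next state, or -1 for "unexpected" */
    static const int next_state[NSTATES][NEVENTS] = {
        /*          A   B   C  */
        /* 0 */  {  2, -1, -1 },
        /* 1 */  { -1, -1,  5 },
        /* 2 */  { -1,  1, -1 },
        /* 3 */  { -1, -1, -1 },
        /* 4 */  { -1, -1, -1 },
        /* 5 */  {  5, -1, -1 },                /* "ignores" A by staying in 5 */
    };

    static int handle_event(int state, int event)
    {
        int next = next_state[state][event];

        if (next < 0) {
            printf("in state %d, received unexpected event %c\n", state, 'A' + event);
            return state;                       /* stay put, but leave a trace */
        }
        if (next == state) {
            printf("in state %d, ignoring event %c\n", state, 'A' + event);
            return state;
        }
        printf("in state %d, received event %c so went to state %d\n",
               state, 'A' + event, next);
        return next;
    }

    int main(void)
    {
        static const int events[] = { 0, 1, 2, 0, 2 };   /* A, B, C, A, C */
        int state = 0;
        int i;

        for (i = 0; i < 5; i++)
            state = handle_event(state, events[i]);
        return 0;
    }

Every transition goes through one function and one table, which is what makes the trace complete and the "why are we here" question answerable.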
Re:I guess the idea is it's extremely portable. (Score:2)
and just have one state variable of type state. This will:
Re:I guess the idea is it's extremely portable. (Score:3, Informative)
Re:I guess the idea is it's extremely portable. (Score:3, Informative)
Your point about only working on a particular kind of OS isn't a valid one. Why would it need to be the highest priority native thread?
I've actually used the Protothreads library in implementing the playback code of a PVR - and what it actually provides is explicit scheduling between a set of tasks. For example - playing back an MPEG2 Transport stream requires you to perform several distinct tasks:
1) Demultiplex
Re:I guess the idea is it's extremely portable. (Score:3, Insightful)
Explicit scheduling is NOT pre-emptive, it's static. It can be priority-driven, though, as pre-emptive scheduling is. Pre-emptive scheduling is dynamic, based on operating conditions such as user input, interrupts, etc. I haven't seen a lot of overhead on task/thread switches in an OS in many years.
Producer-consumer tasks have to be very tightly coupled and managed unless you
Re:I guess the idea is it's extremely portable. (Score:2, Informative)
Job security? (Score:4, Funny)
Sounds ideal!
From the source: (Score:5, Informative)
http://www.lysator.liu.se/c/duffs-device.html [lysator.liu.se]
...sane detexi hanc marginis exiguitas non caperet (Score:3, Interesting)
From the above Duff on Duff's Device: I have another revolting way to use switches to implement interrupt driven state machines but it's too horrid to go into.
Perhaps this is the Duff's Device equivalent of a proof of Fermat's Last Theorem? Or is my ignorance of the history of Evil Computing showing?
Re:...sane detexi hanc marginis exiguitas non cape (Score:2)
http://brainwagon.org/archives/2005/03/05/1060/#c
Re:From the source: (Score:5, Funny)
When I saw that code snippet, I found myself switching back and forth between thinking "this is the most beautiful thing I have ever seen" and "dear god, who ordered that monster" so rapidly my brain almost a sploded.
Recursive main() (Score:2)
Wow. And I used to think C was frightening back when I discovered the fun you can have making recursive calls to main() in a program that takes command-line arguments.
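For anyone who hasn't tried it, the kind of thing being described is legal, if silly (a trivial sketch; C, unlike C++, permits calling main()):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc <= 1)
            return 0;
        printf("%s\n", argv[argc - 1]);    /* print the last argument...      */
        return main(argc - 1, argv);       /* ...then recurse on what's left  */
    }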
And just what's wrong with that?
Re:From the source: (Score:2)
Re:From the source: (Score:2)
Re:From the source: (Score:2)
Google Groups [google.com]
Re:sobleq??? (Score:2)
Re:sobleq??? (Score:2)
Bruce
Re:sobleq??? (Score:2)
Seen this already (Score:5, Funny)
Implementation in languages? (Score:3, Insightful)
Re:Implementation in languages? (Score:2)
Re:Implementation in languages? (Score:2, Interesting)
Real threads are much better for things like interpreters. JIT compilation with native threads is even better.
Re:Implementation in languages? (Score:2)
Re:Implementation in languages? (Score:2)
Python (Score:5, Interesting)
http://www-128.ibm.com/developerworks/library/l-p
They are cooperative but far more efficient than Python's own threading model. You can easily create hundreds of thousands of concurrent threads.
Re:Python (Score:2)
Re:Python (Score:2)
Re:Python (Score:2)
Python's way ahead of ya (Score:3, Informative)
The actual Python Enhancement Proposal [python.org] gives more detail and several badass use-cases.
Re:Python's way ahead of ya (Score:2)
It's been tried. There were problems with 3rd-party C libraries. It's not like you can hand-wavey, hand-wavey, and suddenly C itself is stackless. That's what Christian Tismer ran up against, and it's why the PEP to add continuations to Python was ultimately rejected.
C//
Re:Implementation in languages? (Score:2)
Re:Implementation in languages? (Score:2)
Re:Implementation in languages? (Score:2)
Re:Implementation in languages? (Score:2, Informative)
I'm kind of surprised it hasn't been mentioned yet.
It isn't Duff's device. (Score:3, Interesting)
Duff's device is a way of forcing C to do a form of loop unrolling. It has nothing to do with coroutines.
Re:It isn't Duff's device. (Score:4, Informative)
All these C "tricks" employ the same technique (though more elegantly) for different goals. Nonetheless, Duff's device can be said to have inspired such code.
Re:It isn't Duff's device. (Score:3, Insightful)
"Actually, I have another revolting way to use switches to implement interrupt driven state machines but it's too horrid to go into."
Re:It isn't Duff's device. (Score:3, Interesting)
Re:It isn't Duff's device. (Score:2)
Re: It isn't Duff's device. (Score:5, Informative)
you misread (Score:2)
Re:It isn't Duff's device. (Score:2)
Re:It isn't Duff's device. (Score:3, Informative)
That's what Duff 'discovered', and it's the trick they're using here.
Re:It isn't Duff's device. (Score:2)
That's the idea of a switch statement. Duff's device introduces a loop inside the switch statement, spanning multiple cases, to efficiently unroll a loop into larger chunks.
Re:It isn't Duff's device. (Score:2)
Rob Pike invented this in 1985 (Score:5, Informative)
http://citeseer.ist.psu.edu/cardelli85squeak.html [psu.edu]
Doug Moen
Re:Rob Pike invented this in 1985 (Score:2, Informative)
From research!ucbvax!dagobah!td Sun Nov 13 07:35:46 1983
Received: by ucbvax.ARPA (4.16/4.13) id AA18997; Sun, 13 Nov 83 07:35:46 pst
Received: by dagobah.LFL (4.6/4.6b) id AA01034; Thu, 10 Nov 83 17:57:56 PST
Date: Thu, 10 Nov 83 17:57:56 PST
From: ucbvax!dagobah!td (Tom Duff)
Message-Id:
To: ucbvax!decvax!hcr!rrg, ucbvax!ihnp4!hcr!rrg, ucbvax!research!dmr, ucbvax!research!rob
neat way in C to express an old trick (Score:3, Informative)
I recall seeing the same trick implemented in assembler somewhat earlier; I think the technique was called "towers"?
Not new (Score:4, Informative)
Loop Abuse (Score:5, Interesting)
The PPC architecture has a special-purpose count register with specialized branch instructions relating to it; e.g., the assembly mnemonic 'bdnz' means "decrement the count register by one, and branch if it has not reached zero." I've used this in some pretty weird loops, including this one that broke the Codewarrior 9.3 compiler (fixed in 9.4.) This computes the location of the n'th trailing one in a 32-bit integer. Pardon my weak attempt at formatting this in HTML:
static uint32 nth_trailing_one(register uint32 p, register uint32 n)
{
    while (--n)                      /* clear the rightmost set bit n-1 times       */
        p &= p - 1;                  /* (loop body reconstructed from the reply below) */
    return __cntlzw(p ^ (p - 1));    /* isolate the surviving bit and locate it     */
}
The idea was that the instruction stream should stay as linear as possible; most of the time the branches are not taken, and execution falls through to the next line of code. Ironically (siliconically?), the entire function could probably be implemented in a single cycle in silicon; shoehorning bitwise functions like this into standard instructions tends to be extremely wasteful. Perhaps FPGA's will make an end run around this at some point. I've also tried this function with a dynamically-calculated jump at the beginning, similar to the case statement logic in the article.
Hmm, I had a point I was trying to make with this post, but now it's escaped my mind...
Re:Loop Abuse (Score:2)
It came out okay, but for future reference, Slashdot does allow a specialized <CODE> tag similar to <PRE>.
Re:Loop Abuse (Score:2)
Re:Loop Abuse (Score:3, Interesting)
Adam
Re:Loop Abuse (Score:4, Interesting)
It would be if I were looking for the n'th leading one, but this code is looking for the n'th trailing one. (e.g. for 0b0010011001011100, the 3rd trailing one is in the fifth-lowest bit.) The equivalent code sequence for leading ones is in fact more complicated, requiring three arithmetic instructions and a branch per iteration. (cntlzw, shift, xor, branch).
I actually use this code as part of an algorithm where I have a very large (e.g. 65k-element) packed single-bit histogram array, and need to find the position of (say) the 1000th set bit. Vector instructions can do a coarse population-count from one end fairly efficiently, but once it's narrowed down to a 32-bit region, it comes down to slicing and dicing. My code operates by clearing the rightmost set bit in each iteration (x & (x - 1)), then at the end, isolating the needed bit (x ^ (x - 1)) and using cntlzw to find its position. To clear the leftmost set bit, you need three instructions: first get its position with cntlzw, then shift 0x80000000 right by that number of bits, and finally XOR to clear the bit. (If there's a shorter sequence, I haven't found it.)
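That three-step sequence looks something like this in C, with GCC's __builtin_clz standing in for cntlzw (my assumption; the original would use the PPC intrinsic directly):

    /* clear the leftmost set bit of a 32-bit value: find its position with
     * count-leading-zeros, build a mask by shifting, then XOR it away
     * (undefined for x == 0) */
    static unsigned clear_leftmost_one(unsigned x)
    {
        return x ^ (0x80000000u >> __builtin_clz(x));
    }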
(oh, and for the troll responder-- you are quite spectacularly wrong. But thanks for the giggle.)
It was looking interesting until (Score:5, Interesting)
I got to this little gem:
My English parser thread shut down at that point . . .
Seriously, this looks like a handy little thing for low-memory systems, though I'd be a bit hesitant about pushing at the C standard like that--the last thing you need is a little compiler bug eating your program because the compiler writers never thought you'd do crazy things to switch blocks like that.
Stackless Python (Score:2, Interesting)
extremely limited applicability (Score:5, Informative)
These are bad, lame, faux cooperative threads.
It's also not even particularly new [mine-control.com] [1998]. Unless memory is at an absolute premium, just use cooperative threading instead. If you try to use protothreads, you'll quickly discover how unlike "real" programming it is. Even just a 4K stack in your cooperative threads will get you way more than protothreads does.
Re:extremely limited applicability (Score:3, Funny)
Yeah, but my brain -- Ooh! Shiny!
Re:extremely limited applicability (Score:2, Insightful)
You may think they are lame; I still think they are cool.
You want cool C stuff... (Score:5, Interesting)
Dijkstra says... (Score:2, Interesting)
Re:Dijkstra says... (Score:2)
Re:Dijkstra says... (Score:4, Funny)
Actually, since someone ran $ export DEITY=Dijkstra, he is now.
Re:Dijkstra says... (Score:2)
a fun trick only useful in very specialized cases. (Score:4, Insightful)
Even if you are writing in the purest of C, you aren't guaranteed that the optimizer isn't going to very reasonably want to introduce the equivalent of local variables. And even if you are sure there's no optimization going on, you STILL don't know for sure that the compiler isn't using space on the stack. There just is no guarantee built into the language about this. And if you were wrong, you'd get strange, highly intermittent and non-local bugs.
You could be pretty sure. You could force the compiler to use registers as much as possible. You could keep your routines really short. (Hey, if they don't preserve local variables, then how do they do parameter passing?? Parameters are passed on that same stack!)
But to be completely sure, you'd have to look at the output code. It wouldn't be too hard I suppose to write a tool to automatically do it...you'd just look for stack-relative operations and flag them. But then what would you do if something wasn't working? Yell at the compiler? Rewrite the machine language?
I guess I don't quite see the use now that I've written this up. When is memory THAT important these days? It ain't like I haven't done this; I've written significant programs that I got paid money for that fit into 4K (an error correction routine).
But that was an awfully long time ago. Now it's hard to find memory chips below 1Mbit. That two byte number is interesting but your "threads" aren't doing any work for you -- the whole point of threads is that you are preserving some context so that you can go back to them.
And since you can't use local variables, you can't use things like the C libraries or pretty well any library ever written, which is teh sux0r.
For just a few more bytes of memory and a few more cycles, you could save those local variables somewhere and restore 'em later. Suddenly your coding future is a brighter place. Tell the hardware people to give you 128K of RAM, damn the expense!
You could even put in a flag to indicate that that particular routine didn't need its local variables saved so you'd get the best of both worlds, use of external libraries as well as ultra-light switching.
Re:a fun trick only useful in very specialized cas (Score:3, Informative)
It's ugly as sin, but your compiler had better get it right, o
Re:a fun trick only useful in very specialized cas (Score:3, Interesting)
I downloaded it. But the version that is there does not, in fact
Re:a fun trick only useful in very specialized cas (Score:2, Informative)
But the compiler won't decide to do this, because it won't be able to establish that y (or pi) cannot be changed between instances of this code.
Re:a fun trick only useful in very specialized cas (Score:3, Informative)
In your second example, the compiler *cannot* remove the sub-expression because the case statement that gets you there crosses a basic block boundary; the return statement from the blocking code, and the jump in through the sw
Re:a fun trick only useful in very specialized cas (Score:3)
I was talking to a friend the other day, who had to write the code for a car door opener dealie. You know the one. A really nice, high end one with an LCD that displayed stuff (not your average 100% hardware door opener). His code had a staggering 256 bytes of RAM to work with, and even then they were potentially 7 bytes overbooked. So yes, these kinds of constraints still exist. Sadly.
Re:a fun trick only useful in very specialized cas (Score:5, Informative)
It's too clever to be really useful, unfortunately. The big issue is of course the "no local variables" rule. Trouble is, if you are writing in C, the compiler may well be creating local variables for you behind your back. In C++, for example, there are many cases where this will certainly happen, like
void DoSomething(const string&);
DoSomething("hollow, whirled");
where a local variable of type string will be temporarily created to pass to routine DoSomething.
You need to read the article.
It only says you can't use local variables across functions that block. Actually, it doesn't even say that you can't use them, it only says don't expect their value to be preserved.
In your example, even if the compiler does create a local variable to call DoSomething, and even if DoSomething does block, who cares if the value of that local variable is preserved, since it's impossible to reference it again after that statement?
But that was an awfully long time ago. Now it's hard to find memory chips below 1Mbit.
I can help you with this problem! Is 16 bytes small enough [microchip.com]?
And since you can't use local variables, you can't use things like the C libraries or pretty well any library ever written, which is teh sux0r.
But you can use the C libraries. Just don't use local variables across functions that block. Only a very few C library functions block.
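A sketch of that rule in practice, using the protothreads-style macros the article describes (pt.h, PT_BEGIN, PT_WAIT_UNTIL, PT_END); the data_available/read_data helpers are made up for illustration:

    #include "pt.h"                    /* Adam Dunkels' protothreads header */

    /* stand-ins for a real data source -- invented for this example */
    static int data_ready;
    static int data_available(void) { return data_ready; }
    static int read_data(void)      { data_ready = 0; return 42; }

    static int total;                  /* survives blocking points: not a local */

    static PT_THREAD(consumer(struct pt *pt))
    {
        PT_BEGIN(pt);
        while (1) {
            PT_WAIT_UNTIL(pt, data_available());   /* may "block" (return) here */
            total += read_data();                  /* an automatic local set    */
                                                   /* before the wait could not */
                                                   /* be trusted at this point  */
        }
        PT_END(pt);
    }

As long as anything that must live across a PT_WAIT_UNTIL is kept in a static or in a caller-provided struct, the rest of the code (including most library calls) works as usual.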
Yes, can be useful (depending on platform) (Score:3, Interesting)
Another advantage is its portability. Small embedded systems, whether they have operating systems or not, usually can't support a full-blown threading standard. Those operating systems tend to implement some kind of 'specially tuned' thread API.
Using these kinds of threads on a full-blown PC (or server) would have almost no benefit. However, from the embedded software engineer's perspective, it's great to see an ultra-lightweight thread library without any platform-dependent code.
lightweight threads = fibers? (Score:2)
wtf? (Score:3, Interesting)
send(to, from, count)
register short *to, *from;
register count;
{
    register n = (count + 7) / 8;
    switch (count % 8)
    {
    case 0: do { *to = *from++;
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    }
}
What the hell is up with that do { applying only in case zero? It's in several places on the net just like that and Visual Studio compiles this just fine, so it's not an error. I checked K&R, and they don't even hint at what could be going on there... I'm lost. Help?
Re:wtf? (Score:2)
Re:wtf? (Score:5, Informative)
You can implement a simple memcpy function like this (see the sketch just below). So far, so good. Now Duff's problem was that this was too slow for his needs. He wanted to do loop unrolling [wikipedia.org], where each iteration of the loop does more operations, so that the entire loop has to iterate less. This means the 'is count > 0? if so, go back, otherwise go on' part of the loop has to execute fewer times.
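The simple one-element-at-a-time loop being referred to is presumably something like this (a reconstruction; the original snippet didn't survive):

    void copy(register short *to, register short *from, register int count)
    {
        do {
            *to++ = *from++;    /* (Duff's original, quoted above, doesn't    */
        } while (--count > 0);  /* increment 'to': it wrote to a device port) */
    }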
Now, the obvious problem with this is that you don't know how much you can unwind this particular loop. If it has 2 elements, you can't unwind it to three elements, for instance.
This is where Duff's Device turns up: First, we check to see how much we can unroll the loop - for instance, count might be perfectly divisible by 5, but not 6, 7, or 8, in which case we can safely have 5 copies inside our loop without worrying that the copy is going to move past the end of the array. Then - and here's the magic trick - we use switch to jump into a do loop. It's a perfectly ordinary do loop; the trick is entirely in the fact that if count==6, for instance, then C considers the do-loop to begin at 'case 6:', causing 6 copies of '*to++ = *from++' to be executed before the 'while' returns the loop position to the 'case 6:' point which is where, as far as C is concerned, the do-loop began.
Thus, the loop is unwound to a level that it can handle.
I think.
Feel free to correct/amplify/mock.
cheers,
Gaurav
cool and all... (Score:2)
Besides, on any modern machine you'll get more of a speedup by copying a word at a time instead of tightening your byte-at-a-time loop. Due to the caches, you'll likely end up moving cache lines at a time over the bus anyway (assuming stuff is aligned), and so the key becomes minimizing the number of instructions needed to do it. And
Re:wtf? (Score:3, Informative)
No, it returns to the 'case 0:' point where the 'do {' is. (Otherwise the loop wouldn't be executed count times, and somehow I think this Duff guy would have thought of that...)
Re:wtf? (Score:5, Informative)
The way you describe it is that the loop is unrolled to a size that is safely divisible into the 'count' value, which is an interesting idea, but would not be as efficient (large prime-number counts would not get unrolled, for example, and a more complex computed goto would be required at the loop end).
My take is this: with loop unrolling, one always has to take care of the 'remainder'. In the above example, the loop is unrolled to be a fixed size (8 repeated copy instructions, instead of one), and any count not divisible by 8 has to handle the remainder of the count after dividing by 8. Conceptually, you could imagine handling this remainder with a separate case section after the unrolled loop. In Duff's device, the remainder is actually dealt with first, by initially jumping into the loop somewhere other than the beginning, then letting the fully unrolled loop finish up.
In answer to the previous poster's question, the 'do' could (probably) be put on its own line, before case 0:, but that wouldn't look nearly as bizarre.
Of course, maybe I'm wrong too. I hope not.
Re:wtf? (Score:4, Informative)
1) you determine how many groups of 8 you will need, rounding up to count the remainder block as well (if there is one)
2) code enters switch statement based on the remainder value, hits the correct case and falls through (note that if there was no remainder we start at the top of the cases and fall through, consuming an entire 8 block)
3) code hits the while, decrements the number of 8 blocks (as we just finished off the partial "remainder block")
4) return to do, fall through to finish this 8 group
5) loop back to 3
Took me a few minutes of staring at it (and I admit, some time looking at the above descriptions) to get over 4 years of no C in my diet, but now I have to admit that is beautiful.
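For comparison, the same computation written out the boring way, following the steps above (my own de-sugared sketch under the assumption count > 0, not Duff's code):

    void send_plain(register short *to, register short *from, register int count)
    {
        int n = (count + 7) / 8;              /* step 1: groups of 8, rounded up        */
        int i, first = count % 8 ? count % 8 : 8;

        for (i = 0; i < first; i++)           /* step 2: the partial "remainder block"  */
            *to = *from++;
        while (--n > 0)                       /* steps 3-5: the remaining full groups   */
            for (i = 0; i < 8; i++)
                *to = *from++;
    }

Duff's device gets the same effect without the second copy loop, by jumping into the middle of the unrolled body.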
Re:wtf? (Score:2)
Hmm. Doesn't make full sense to me this way.
First part of your explanation is OK - let's say we need to unroll a loop for efficiency. And we decide to unroll it so it has 8 statements inside the loop. So, assuming count>0, we would do something like this (using dots to show indentation because "ecode" eats up leading spaces):
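The block that was supposed to follow didn't survive, but presumably it was something along these lines (my reconstruction of an 8-way unrolled loop with the remainder mopped up afterwards):

    void send_unrolled(register short *to, register short *from, register int count)
    {
        while (count >= 8) {                       /* 8 copies per pass */
            *to = *from++; *to = *from++; *to = *from++; *to = *from++;
            *to = *from++; *to = *from++; *to = *from++; *to = *from++;
            count -= 8;
        }
        while (count-- > 0)                        /* 0..7 leftover copies */
            *to = *from++;
    }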
Use of this technique in Felix (Score:4, Interesting)
http://felix.sf.net/ [sf.net]
to provide user-space threading. The main difference is that all the 'C tricks' are generated automatically by the language translator. If you're using gcc then the switch is replaced by a computed jump (a gcc language extension). On my AMD64/2800, the time for creating 500,000 threads and sending each a message is 2 seconds, most of it probably consumed by calls to malloc, so the real thread-creation and context-switch rate is probably on the order of a million per second or better.
Should work quite fine (Score:3, Insightful)
1) silently modifies your 'switch' statement semantics
2) fails to continue from the right spot on the next iteration.
Re:Wait just a minute ... (Score:4, Funny)
Re:Wait just a minute ... (Score:2)
Before people could use text editors, someone had to input them directly in machine language... the same thing goes even for assemblers. The first generation of compilers and runtime environments has to be created from existing technologies, but nothing (other than motivation or justification) is preventing it from coming full circle.
Re:Wait just a minute ... (Score:2)
Other than Java being a horrible language for low-level programming. There are times when you need the close-to-the-hardware aspect of C.
Re:Stupid (Score:3, Insightful)
Hideous, but efficiency is not its problem.
Re:Stupid (Score:2)
people are all so excited currently about kernel threads and pthreads and stuff like that, but what they don't realize is that the terrible overhead actually makes your app slower than it would be if it was built on a single-process model. this "threading" api is a quite wicked implementation, but it works in simple cases and since it doesn't switch the registers back and forth all the time like your 250 pthrea
Re:Stupid (Score:2)
You seem to be working under the impression that your target machine only has 1 CPU (and that CPU only has one core). Using 2 CPUs with a 20% overhead from thread locking etc. is a lot faster than using one without the overhead. Most new CPUs have two cores. In 18 mon
Re:Stupid (Score:2)
if you count all the threads that run on a usual linux/apache/mysql box that in addition has a threaded mailserver attached to it (and some lightweight java applications running on tomcat in java threads), then the poor machine is just overkilled. even if
Re:Stupid (Score:2)
Disclaimer: I haven't read the article yet, so maybe it has nothing to do with microthreads.
Re:Stupid (Score:4, Insightful)
Re:This is mostly useless (Score:2)