Protothreads and Other Wicked C Tricks
lwb writes "For those of you interested in interesting hard-core C programming
tricks: Adam Dunkels' protothreads library
implements an unusually lightweight type of threads. Protothreads are
not real threads, but rather something in between an event-driven
state machine and regular threads. But they are implemented in 100%
portable ANSI C and with an interesting but quite unintuitive use of the switch/case
construct. The same trick has previously been used by Simon Tatham to implement
coroutines
in C. The trick was originally invented by Tom Duff and dubbed Duff's
device. You either love it or you hate it!"
Implementation in languages? (Score:3, Insightful)
Re:It isn't Duff's device. (Score:3, Insightful)
"Actually, I have another revolting way to use switches to implement interrupt driven state machines but it's too horrid to go into."
Re:I guess the idea is it's extremely portable. (Score:5, Insightful)
I read his paper where he said "writing an event-driven system is hard". I guess he has never heard of using finite state automata for the design? State machines are very simple to program. An event-driven system is not at all hard to write, although you often do have to have some deep hardware and/or processor knowledge to do it well.

I wrote many of them in the 1980s when I did embedded C code for DOD work, although I have not done so in quite a few years. Once Ada came along, everyone abandoned C as too obtuse for embedded work for the DOD. I once did benchmarks that showed decent C code without strong optimization outperformed Ada code, but C was already dead in their minds. I'm glad to see some folks are still interested in it on the commercial side of programming. After all, we can't write everything in Java.
Re:Stupid (Score:3, Insightful)
Hideous, but efficiency is not its problem.
Re:Stupid (Score:4, Insightful)
a fun trick only useful in very specialized cases. (Score:4, Insightful)
Even if you are writing in the purest of C, you aren't guaranteed that the optimizer isn't going to very reasonably want to introduce the equivalent of local variables. And even if you are sure there's no optimization going on, you STILL don't know for sure that the compiler isn't using space on the stack. There just is no guarantee built into the language about this. And if you were wrong, you'd get strange, highly intermittent and non-local bugs.
You could be pretty sure. You could force the compiler to use registers as much as possible. You could keep your routines really short. (Hey, if they don't preserve local variables, then how do they do parameter passing?? Parameters are passed on that same stack!)
But to be completely sure, you'd have to look at the output code. It wouldn't be too hard I suppose to write a tool to automatically do it...you'd just look for stack-relative operations and flag them. But then what would you do if something wasn't working? Yell at the compiler? Rewrite the machine language?
I guess I don't quite see the use now that I've written this up. When is memory THAT important these days? It ain't like I haven't done this; I've written significant programs that I got paid money to do that fit into 4K (an error correction routine).
But that was an awfully long time ago. Now it's hard to find memory chips below 1Mbit. That two-byte number is interesting, but your "threads" aren't doing any work for you -- the whole point of threads is that you are preserving some context so that you can go back to them.
And since you can't use local variables, you can't use things like the C libraries or pretty well any library ever written, which is teh sux0r.
For just a few more bytes of memory and a few more cycles, you could save those local variables somewhere and restore 'em later. Suddenly your coding future is a brighter place. Tell the hardware people to give you 128K of RAM, damn the expense!
You could even put in a flag to indicate that that particular routine didn't need its local variables saved so you'd get the best of both worlds, use of external libraries as well as ultra-light switching.
Re:extremely limited applicability (Score:2, Insightful)
You may think they are lame; I still think they are cool.
Should work quite fine (Score:3, Insightful)
1) silently modifies your 'switch' statement semantics
2) fails to continue from the right spot on next iteration.
Re:extremely limited applicability (Score:1, Insightful)
That's probably why local variables aren't preserved....
Re:I guess the idea is it's extremely portable. (Score:3, Insightful)
Explicit scheduling is NOT pre-emptive; it's static. It can be priority-driven, though, as pre-emptive scheduling is. Pre-emptive scheduling is dynamic, based on operating conditions such as user input, an interrupt, etc. I haven't seen a lot of overhead on task/thread changes in an OS in many years.
Producer-consumer tasks have to be very tightly coupled and managed unless you use some type of semaphore (i.e. primitives, as you call them) to manage who has control of the resource. Personally, I prefer semaphores over co-routines; they're a heck of a lot easier to debug. Tightly coupled routines are not considered good software practice. If you are running this under the priority of a system thread, are you really gaining anything? I'd prefer to keep things simple and just make it a small thread of its own instead of a proto-thread.
Memory these days is cheap, small, and fast, as are CPUs. Why waste a lot of labor on this "really cool" implementation, which IMO will be damn hard to maintain and debug? Back when I had 2K of RAM for all data AND stack I had to be concerned, but then hardware was expensive, slow, and tricky, and my time was cheap.