Have Sockets Run Their Course?
ChelleChelle writes "This article examines the limitations of the sockets API. The Internet and the networking world in general have changed in very significant ways since the sockets API was first developed in 1982, but the API has had the effect of narrowing the ways in which developers think about and write networked applications. This article discusses the history as well as the future of the sockets API, focusing on how 'high bandwidth, low latency, and multihoming are driving the development of new alternatives.'"
Really... (Score:3, Funny)
Re:Really... (Score:5, Funny)
Really, for networking, all they need to do is ask slashdot's elite technical team. Years before Gmail automatically saved my drafts, /. consistently preempted everyone with the above example (or Homeland_Security/FBI/Police knocking on the door, or person getting a hard attack) and snatched the post from the jaws of defeat when the user wouldn't otherwise be able to hit submit. Moreover, unlike anyone else to this day, even Gmail, there is also a nice little hint as to the cause of the interruption.
Re:Really... (Score:5, Funny)
or person getting a hard attack
Viagra overdose?
Re:Really... (Score:5, Funny)
Re: (Score:3, Funny)
Re: (Score:3, Funny)
I live in a tropical country, you insensitive clod!
Re: (Score:2)
I get it.
Re:Really... (Score:5, Insightful)
Dealing with network failures isn't actually a trivial issue from the POV of an application, let alone an OS supporting it.
http://en.wikipedia.org/wiki/Two_Generals'_Problem [wikipedia.org]
whats really needed... (Score:2, Insightful)
is no sockets. some way to seamlessly connect LOCAL processes to each other without socket overhead by using the familiar socket interface. something simpler than shared memory.
and a better protocol method of opening sockets with the hard stuff taken care of by the OS. and with transparent buffer protection etc.
Re:whats really needed... (Score:5, Informative)
You mean like this? http://en.wikipedia.org/wiki/Unix_domain_sockets
Re: (Score:2)
Re:whats really needed... (Score:5, Funny)
Unix always had it (Score:5, Informative)
You mean, like pipes [wikipedia.org]?
Re:Unix always had it (Score:5, Funny)
You mean, like pipes?
Pipes for local communication and tubes for global communication. Seems like a winner.
Re: (Score:2)
Pipes are good, but they were designed for a specific paradigm, not the kind of thing you'd use sockets for. Bidirectional pipe communication is clunky, to say the least.
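For bidirectional local IPC, the usual answer is socketpair(), which hands back two already-connected AF_UNIX endpoints with plain read/write semantics. A rough sketch (POSIX C, error handling mostly omitted):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    char buf[64];

    /* Two connected, bidirectional AF_UNIX endpoints. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {                    /* child: uses sv[1] */
        close(sv[0]);
        write(sv[1], "ping", 4);
        ssize_t n = read(sv[1], buf, sizeof buf);
        printf("child got %.*s\n", (int)n, buf);
        _exit(0);
    }

    close(sv[1]);                         /* parent: uses sv[0] */
    ssize_t n = read(sv[0], buf, sizeof buf);
    printf("parent got %.*s\n", (int)n, buf);
    write(sv[0], "pong", 4);
    wait(NULL);
    return 0;
}
```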
Re: (Score:2)
Open Transport, Part II (Score:5, Informative)
Been there, done that. Apple (once again) had a great implementation of an alternative technology, that it finally abandoned when it didn't feel like fighting any more.
Open Transport [wikipedia.org] (the PPC stack used in the Classic Mac OS) was fast, efficient, and cool. And based on the STREAMS [wikipedia.org] methodology, the only real competition to Berkeley Sockets.
Choice is good, mmmkay?
Re: (Score:2, Funny)
Next you will probably claim Apple invented MAC addresses too....
Re:Open Transport, Part II (Score:5, Funny)
Well, I did hear it was a Xerox standard so it must have been copied from someone. I guess it could have been Apple.
Re:Open Transport, Part II (Score:5, Funny)
"Well, I did hear it was a Xerox standard so it must have been copied from someone."
I hope you meant to make that joke.
Re:Open Transport, Part II (Score:5, Informative)
Re:Open Transport, Part II (Score:4, Interesting)
Re: (Score:3, Insightful)
Open Transport didn't come about until the mid-1990s.
So, if you were programming for the Classic Mac OS in the 128K days, still doing that 10 years later and hating it *that* much, you probably feel like you've wasted half your life.
Yes, you could have moved on to other, newer, more advanced operating systems, but you *chose* to stick with it. One really has to respect that I suppose.
Shows your more masochistic side.
Re: (Score:2)
STREAMS are overkill for simplex
Re:Open Transport, Part II (Score:4, Informative)
Sure it was cool how you could push and pop drivers (say you wanted a different line discipline) but please tell me how it prevented any copies? The AT&T implementation also had two extra context switches.
This is what was bad about STREAMS:
In early implementations there was no notion of multithreading, so a bad thing happened later. There was a time when the STREAMS drivers and demultiplexers were assumed to be single-threaded, so the kernel had to hand off everything STREAMS-related to a single worker thread in the kernel. So you had some big iron box of the time with, say, four processors, and IO performance was just balls until the STREAMS drivers were rewritten. But then you still had that worker thread around, so those were eventually broken out, which added an extra two thread switches. Then they did some stuff variously called something like Fast STREAMS where the fast paths would not switch. So all this optimization work went into making STREAMS fast and they were still slow. It turned out that the reason for that was the complexity of the STREAMS subsystem and all the layering, which caused so many extra function calls per driver. STREAMS have largely been relegated to legacy and conformance at this point.
RFC 1925 (Score:5, Insightful)
This seems to dance a bit too close to Networking Truths [faqs.org] 6a, 11, and possibly 12. I will reserve judgment until I see solid real-world evidence.
Re:RFC 1925 (Score:4, Interesting)
Re:RFC 1925 (Score:5, Insightful)
Yes, there are always pathological cases that demonstrate the weaknesses of any technique. A big point I take away from RFC1925 (and personal experience), is that you have to A) recognize that trade-offs are always going to be made, and B) adapt your implementation to fit the laws of physics, instead of trying to bend the network to fit what you think an implementation should be.
The simple fact is that Sockets have worked very well for a long time. Yes, this sometimes means you have to shape your design and implementations to fit the "socket style", and history has shown that it is not only possible, but practical. Changing to a new design will not remove the fact that if you design your protocol/app badly, or are inherently in a pathological use-case, then your network performance will suffer.
For some problems, the ssh idea of multiplexing a single socket works well. For others, multiple rsh-style (*1) connections work better. To say that Sockets need to be replaced because you chose to use rsh for your transport is an amazingly arrogant (*2) position. And yes, some of this is "tradition" and inertia, but designing a whole new library should be for significant real-world benefit, and not for corner-cases or marginal 1% gains.
Of course, if someone can actually produce some real-world benchmarks that validate the "let's ditch Sockets" claim...
[*1] As with you, this is totally ignoring the security implications, etc.
[*2] In no way is this a personal attack at you; I mean it in a purely academic sense. It's a very tall claim to say that decades of networking history, and thousands of talented engineers were wrong.
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
Of course, if someone can actually produce some real-world benchmarks that validate the "let's ditch Sockets" claim...
There are really few real-world examples where you can do something better than sockets.
BSD sockets are a quite versatile API. I have programmed them on both sides - implementing my own protocol/address family and actually using them in a program - and I hardly see how one could do it better while maintaining the level of guarantees provided by the API. And it is that level of guarantees that makes it possible to develop applications behaving reliably/predictably under ever-varying conditions - and not lose your sanity in the p
Hilarious (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
yes. but it really helps if its the right abstraction
Re:Hilarious (Score:4, Insightful)
And many of these new abstractions, as described in the article, could be built on top of an OS-supplied socket-like API.
Re: (Score:2)
Re: (Score:2)
Actually, some MMOG developers are in fact ex-MUD developers.
Re: (Score:2)
"REM Old programmers don't die. They just GOSUB without RETURN."
Or their stack overflowed.
Re: (Score:3, Funny)
> reinventing...
and USENET as Web Forums :-(
Re: (Score:3, Funny)
Re:Hilarious (Score:4, Insightful)
I'd vote for talk.bizarre.slashdot
Re:Hilarious (Score:4, Insightful)
Personally I don't use the service, but I'm not sure if I buy a lot of the ideas people have about Twitter (all about ego, vidiots, convergence wackos who want to tack myspace on to your toaster). I'll agree that it is a lot like the .plan updates of old, but deep down it seems more like a hack or set of hacks than a full reimplementation of anything.
Would you rather send out a mass text message, possibly costing your non-text-messaging friends hundreds of dollars (those $1/text charges add up pretty quick), or post something on Twitter that Bob can either look at on his PC or smart phone with unlimited data? Then TinyURL fits in as another cheap hack. Sure, it makes it easier to fit the URL in your twit (saying that just doesn't feel right), but it also allows Bob to look at that YouTube video you sent him at work via redirect. All of this isn't anything new; it is just people coping with changes in the landscape.
Which sockets API? (Score:5, Informative)
Re:Which sockets API? (Score:5, Interesting)
The Berkeley socket API has stood up very well against the tests of time, and it is fairly lean and quite versatile, but yeah, there's definitely room for newcomers.
For example, when it comes to high packet rates - say, thousands of VoIP RTP streams - the length of the typical path a packet takes through the kernel layers becomes quite prohibitive.
I've been trying to reach gigabit ethernet saturation with G711 VoIP RTP streams (that is, 172-byte UDP packets @ 50Hz per stream), which works out to a theoretical maximum of 10500 streams - 525000 packets/second. My initial speed tests, with minor tweaking, got me around 1/10th of that, thanks to all the kernel overhead, and the lack of control over how and when packets will be sent.
So I wrote my own socket->UDP->IP->ARP->Ethernet abstraction which hooks directly into the PACKET_MMAP API (as used by libpcap), with the TX Ring [ipxwarzone.com] patch, and with all the corner-cutting I managed to achieve 10000 streams (500k packets/sec), which equates to about 95% of the theoretical peak.
In short, we probably need more widespread support for different network programming APIs which address more specific needs - BSD sockets are too generalised sometimes.
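For the curious, the mainline PACKET_TX_RING setup looks roughly like this - a stripped-down sketch only (Linux-specific, needs root, interface bind and error handling omitted; not the exact patched API from the link above):

```c
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Raw packet socket: frames are built entirely in user space. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    /* Describe the memory-mapped TX ring: 64 blocks of 4 KiB, one frame each. */
    struct tpacket_req req = {
        .tp_block_size = 4096,
        .tp_block_nr   = 64,
        .tp_frame_size = 4096,
        .tp_frame_nr   = 64,
    };
    setsockopt(fd, SOL_PACKET, PACKET_TX_RING, &req, sizeof req);

    /* Map the ring; frames are written here, with no per-packet copy to the kernel. */
    size_t ring_len = (size_t)req.tp_block_size * req.tp_block_nr;
    char *ring = mmap(NULL, ring_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* (A bind() to a specific interface via struct sockaddr_ll is omitted here.) */

    /* Fill the first frame slot; payload starts after the tpacket header area. */
    struct tpacket_hdr *hdr = (struct tpacket_hdr *)ring;
    char *data = ring + TPACKET_HDRLEN - sizeof(struct sockaddr_ll);
    /* ... build Ethernet/IP/UDP headers + payload into 'data' ... */
    hdr->tp_len    = 60;                     /* placeholder frame length */
    hdr->tp_status = TP_STATUS_SEND_REQUEST;

    /* One send() flushes every frame marked TP_STATUS_SEND_REQUEST. */
    send(fd, NULL, 0, 0);

    munmap(ring, ring_len);
    close(fd);
    return 0;
}
```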
Re:Which sockets API? (Score:4, Interesting)
Stupid thing posted me anonymously despite being logged in!
Re: (Score:3, Funny)
Stupid thing posted me anonymously despite being logged in!
It was deemed you already had too much Karma. That was a test of the new Karma limitation system ;)
Re: (Score:3, Interesting)
Sounds like a new achievement "Too much karma: Enlightenment to Anonymous Cowardom"
-l
Re: (Score:3, Insightful)
Well, yeah. When I read the article, my immediate thought was "So, implement your fancy special-purpose socket replacement on top of UDP."
Re: (Score:2)
Ignore the RTFA. Quote:
I presume that the date on the article is off by 10 years or something. I base that judgment on the facts that the author calls SCTP "recently developed" and has apparently never heard of /dev/epoll or kqueues (or, e.g., libevent, which lets you use them in a portable manner).
wrong (Score:5, Interesting)
Although the addition of a single system call to a loop would not seem to add much of a burden, this is not the case
Really? For a lot of networking code that's in use these days, I don't see that the system call overhead is the bottleneck. On clients you usually have network bandwidth as the limiting step (rather than system calls). On servers, it usually seems to be disk access or HLL interpreters.
Each system call requires arguments to be marshaled and copied into the kernel, as well as causing the system to block the calling process and schedule another.
That's easy to fix without changing the socket API: just add a system call that can return multiple packets from multiple streams simultaneously, a cross between select and readv. If there's a lot of data buffered in the kernel, it can then return that with a single system call.
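Linux did eventually grow something in this spirit: recvmmsg() drains a batch of datagrams from one socket per system call. It doesn't span multiple streams the way suggested above, but it amortises the per-packet syscall cost. A rough sketch (error handling omitted):

```c
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define BATCH 32

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(5000),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    static char bufs[BATCH][1500];
    struct iovec iov[BATCH];
    struct mmsghdr msgs[BATCH];
    memset(msgs, 0, sizeof msgs);
    for (int i = 0; i < BATCH; i++) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = sizeof bufs[i];
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    /* One syscall returns up to BATCH queued datagrams. */
    int n = recvmmsg(fd, msgs, BATCH, MSG_WAITFORONE, NULL);
    for (int i = 0; i < n; i++)
        printf("datagram %d: %u bytes\n", i, msgs[i].msg_len);
    return 0;
}
```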
Solving this problem requires inverting the communication model between an application and the operating system.
Not only does it not require that, inversion of control doesn't even solve it, since you still have the context switches.
Re: (Score:3, Interesting)
Oops... left out half of it...
That's easy to fix without changing the socket API: just add a system call that can return multiple packets from multiple streams simultaneously, a cross between select and readv. If there's a lot of data buffered in the kernel, it can then return that with a single system call. The user mode socket library can use that system call internally and still present every caller with the regular select/poll/socket abstraction; when callers request data, it first returns data that's
Re:wrong (Score:5, Informative)
Windows' solution is pretty nice. You can pass a pre-created socket handle to accept_ex, which automatically accepts an incoming connection using that socket handle, so that you don't have to use two system calls (select and accept). You can also pre-accept multiple sockets, instead of having to make the system calls under load.
Sockets can also be closed with a "re-use" flag, which leaves the handle valid and saves making a system call to create another.
You then associate the sockets with an "IO completion port", which as best as I can tell is a multithreaded-safe linked list for really fast kernel to user program communication. To receive from the socket you make an async receive call, giving a pointer to a buffer to receive into.
Whenever data is received on those sockets (and has had a corresponding async request made for it already) the kernel automatically queues the socket handle to that linked list. If you associate a socket with the completion port before you accept a connection with it (i.e. you're using acceptex) it also triggers when the socket accepts a connection.
In the user code, you run multiple threads listening on the completion port (you can also use the completion port in the thread pooling API, which runs two threads to each cpu core by default). When a message arrives from the kernel, the most recently finished thread wakes and processes the received data, which will already be in the user-space buffer you provided in the original receive call.
If all threads are busy and there are messages in the completion port they will bounce right off of the completion port, picking up the next bit of completed IO they need to process without making a system call.
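For anyone who hasn't seen the pattern, the core of it looks roughly like this - a heavily abbreviated Win32 C sketch, with AcceptEx, error handling and buffer management left out:

```c
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    /* One completion port shared by all sockets and worker threads. */
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    SOCKET s = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                         NULL, 0, WSA_FLAG_OVERLAPPED);
    /* ... connect, or accept via AcceptEx, goes here ... */

    /* Associate the socket; its completions will be queued to 'iocp'. */
    CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)s, 0);

    /* Post an asynchronous receive into a user-supplied buffer. */
    static char buf[4096];
    WSABUF wbuf = { sizeof(buf), buf };
    WSAOVERLAPPED ov = { 0 };
    DWORD flags = 0;
    WSARecv(s, &wbuf, 1, NULL, &flags, &ov, NULL);

    /* Worker loop: block until some completion (from any socket) is ready. */
    for (;;) {
        DWORD bytes;
        ULONG_PTR key;
        LPOVERLAPPED pov;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE))
            break;
        printf("socket %llu: %lu bytes received\n",
               (unsigned long long)key, (unsigned long)bytes);
        /* ... process buf, then post the next WSARecv ... */
    }
    return 0;
}
```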
Re:wrong (Score:5, Interesting)
``Windows' solution is pretty nice. You can pass a pre-created socket handle to accept_ex, which automatically accepts an incoming connection using that socket handle, so that you don't have to use two system calls (select and accept). You can also pre-accept multiple sockets, instead of having to make the system calls under load.
Sockets can also be closed with a "re-use" flag, which leaves the handle valid and saves making a system call to create another.
You then associate the sockets with an "IO completion port", which as best as I can tell is a multithreaded-safe linked list for really fast kernel to user program communication.''
I don't know. To me, it all just sounds like kludges to work around the facts that system calls are slow and that the implementation of the Berkeley API causes many system calls. You are adapting the structure of your program to code around the problems, instead of fixing the problems that cause the natural style of your program to lead to slowness.
There is nothing in the Berkeley socket API that mandates system calls or context switches. At worst, some copying is necessary (because the API lets the caller specify where data are to be stored, instead of letting the callee return a pointer to where data are actually stored).
The reason we have system calls and context switches, I claim, is that we are using unsafe languages. Because of this, applications could contain code that overwrites other programs' memory. We don't want that, and we have taken to separate address spaces to avoid it. The separate address spaces are enforced by the hardware, but this has a price, especially on x86. Perhaps it is time to rethink the whole "C is fast" credo. As the number of work instructions that can be executed in the time it takes to do a context switch increases, so does the relative performance of systems that do not need context switches, but of course we can only do away with context switches if we can provide safety guarantees in another way. One way would be to have the compiler enforce them. But that is outside the scope of Berkeley sockets, of course.
Re:wrong (Score:4, Insightful)
if this were all in one domain, the most flexible and efficient thing would be to have memory for receive frames allocated at the bottom of the stack, and use callbacks all the way up.
because of the user kernel boundary we have a copy which is difficult to get around (put the next 1k bytes exactly here, although i really dont care), and some unfriendly and inefficient hacks to weasel around the 'natural' blocking semantics.
even if its completely academic, i think its interesting to look at the user kernel boundary and try to refactor things which have negative structural impacts.
Re:wrong (Score:5, Interesting)
even if its completely academic, i think its interesting to look at the user kernel boundary and try to refactor things which have negative structural impacts.
And you think that 2009 is the first time people have thought about this? System call overhead used to be a much bigger issue. UNIX and Linux have the current set of interfaces because they are a good compromise between simplicity and efficiency.
And these issues are constantly being evaluated implicitly: people who write network servers benchmark their code and find the bottlenecks. If the bottleneck is some system call, they complain to the kernel mailing list and maybe roll up their sleeves and come up with something new. If that turns out to be useful, more and more people ask for it to be put into the kernel, and eventually it becomes standard.
What motivates kernel developers is real benchmarks and the needs of important, real-world applications, not fluff pieces that express generic displeasure with the way things are done.
Re:wrong (Score:5, Interesting)
no. in fact i can remember having discussions myself about this more than 20 years ago, and those were hardly the first.
unix has these interfaces as a matter of historical accident, what was an excellent design at the time. its hardly the only good point in the space.
you might find that it helps to think about these thing..even when developing important, real-world applications. why shouldn't the kernel be able to call into userspace safely and transfer ownership of a buffer? is that really so terrible to consider?
Re:wrong (Score:5, Insightful)
unix has these interfaces as a matter of historical accident, what was an excellent design at the time.
No, UNIX has these interfaces because they get the job done. People tried all sorts of other interfaces and none of them caught on.
you might find that it helps to think about these thing..even when developing important, real-world applications.
How does it "help" me to think about solutions to problems I'm not having? I've never seen the socket interface to be rate limiting in anything I care about.
why shouldn't the kernel be able to call into userspace safely and transfer ownership of a buffer? is that really so terrible to consider?
Well, if that's your biggest itch, be my guest: implement a kernel patch, make it public, convince people to use it, and if it develops a large user community, maybe Linus will pick it up and it will become a standard part of the kernel.
If nobody is willing to put in the effort, evidently the feature isn't needed.
Re: (Score:2)
I think I'm going to have to add to my list of RFC1925 issues with this proposal...
"(1) It Has To Work."
This whole topic stinks of a really bad case of Premature Optimization.
Re:wrong (Score:5, Insightful)
the most flexible and efficient thing would be to have memory for receive frames allocated at the bottom of the stack, and use callbacks all the way up.
Sure, in the same way that the "most flexible and efficient thing" would be to write in assembly language and turn off the MMU. But UNIX is not trying to do the most flexible and efficient thing; it's trying to be a reasonable tradeoff between simplicity, safety, and efficiency. And that means that efficiency only gets optimized to the point where it stops being a limiting factor for most programs.
Re: (Score:2)
but in this case we have structural flaws, which as you point out have some workarounds..some of which have their own problems. it seems reasonable to think about other approaches. i'm not going to buy into the tablets brought down from the berkeley hills.
gnn isn't really advocating throwing out sockets, you'll have to blame chellechelle for the inflammatory headline. queue is exactly that, a forum for discussing practice, and not a very deep one at that.
go ahead and live with your select and poll variants,
Re: (Score:3, Insightful)
but in this case we have structural flaws
Not conforming to someone's pipe dream of kernel design is not a flaw. It's a flaw only if it demonstrably causes problems.
i'm not going to buy into the tablets brought down from the berkeley hills.
That's why they make all kinds. You're free to use Windows Vista; those people spend billions correcting supposed "structural flaws". Don't spoil UNIX or Linux for the rest of us. We like its "structural flaws" the way they are.
Re:wrong (Score:5, Interesting)
But socket-like interfaces exist on systems without any user/kernel boundary, i.e., embedded systems. Many of those have implementations that do a good job of avoiding extra data copying, and yet still have an API that resembles sockets. I wonder if people are confusing the general idea of "sockets" with the specific "Berkeley Sockets" implementation and specification?
Structured Stream Transport (Score:5, Informative)
BSD sockets have a limitation of only a single stream at a time (for example, if you are loading a website over HTTP and you get stuck loading a huge image, you have no choice but to open up another socket connection or else wait). They are also stuck around the paradigm of only supporting byte streams, which means that users are always forced to write the same code over and over to create packet headers or delimited messages.
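For illustration, the "same code over and over" here is usually a few lines of length-prefix framing layered on the byte stream - a minimal sketch of the send side (hypothetical helper names, error handling minimal):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

/* Write exactly 'len' bytes, retrying on short writes. */
static int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Frame a message as a 4-byte big-endian length followed by the payload. */
int send_message(int fd, const void *msg, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (send_all(fd, &hdr, sizeof hdr) < 0)
        return -1;
    return send_all(fd, msg, len);
}
```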
I would highly recommend checking out Structured Stream Transport [mit.edu]. I'm not from MIT and I wasn't entirely satisfied with their sample implementation, but the paper is really insightful and explains how you can develop basically a smarter version of TCP that is both more efficient and also more flexible. And I'm sure there are other systems being developed with similar ideas in mind.
We definitely need to keep BSD sockets, if only because I'm a regular user of netcat :-p, and also because they are what allow the creation of more advanced protocols, but I don't think most applications should still be using such low-level protocols today.
Re:Structured Stream Transport (Score:5, Informative)
No matter how much abstraction you pile on top to open multiple streams, automatically add headers, communicate a fix message size to avoid in-band delimiters, etc., you'll still have to send all those messages over linear octet streams when using TCP.
Now you could choose not to use TCP -- UDP lets you send non-linear messages of arbitrary size without delimiters. And there may be other newer, better options available as well. But you can do both TCP and UDP (as well as other comm types) using the same sockets API.
Re:Structured Stream Transport (Score:5, Informative)
BSD sockets have a limitation of only a single stream at a time (for example, if you are loading a website over HTTP and you get stuck loading a huge image, you have no choice but to open up another socket connection or else wait).
No, they don't. This is a limitation of TCP. You could just as easily use a different protocol (e.g., SCTP) with sockets.
Re:Structured Stream Transport (Score:4, Insightful)
I honestly had never heard of SCTP before, and I'm surprised that it is not used more widely since it has been around since 2000.
Firewalls don't support it. Consumer routers can't do NAT on it. New protocols on the Internet are fairly unlikely to have a chance.
Re:Structured Stream Transport (Score:5, Funny)
Firewalls don't support [SCTP]. Consumer routers can't do NAT on it. New protocols on the Internet are fairly unlikely to have a chance.
This is a good example of why NAT sucks. When IPv6 comes along and restores true end-to-end connectivity across the Internet, there will be a lot more freedom to experiment with new and interesting protocols. Except, of course...
New protocols on the Internet are fairly unlikely to have a chance.
Damn.
Re:Structured Stream Transport (Score:4, Funny)
Sorry to break it to you, but NAT is here to stay. As a security paradigm, there's no attacking a user's PC that isn't even visible.
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
Sorry to break it to you, but NAT is here to stay. As a security paradigm, there's no attacking a user's PC that isn't even visible.
If only you could devise some kind of wall between your machine and the fiery flames that didn't require NAT, but alas, such is merely dreaming.
Re:Structured Stream Transport (Score:5, Insightful)
It is said that those who do not understand history are doomed to repeat it...
They are also stuck around the paradigm of only supporting byte streams, which means that users are always forced to write the same code over and over to create packet headers or delimited messages.
Byte streams are one of the UNIX fundamentals. Before UNIX, there were many systems which provided wide varieties of structured IO. This turned out to be a real pain, and one of the UNIX innovations was simply to scrap it.
And today, most applications don't use low-level sockets: they call a library which does it for them. Moving the library into the kernel is not a good idea.
Re: (Score:2, Interesting)
I definitely agree with you. In fact byte streams being a fundamental part of POSIX is one thing I love and make use of every day, for example piping output between programs/sockets. My post was not very clear, but I was trying to say that users developing application protocols should not be using BSD sockets directly any more--people usually write or use libraries for that sort of thing.
As far as new protocols go, you can build basically anything using UDP (and UDP is far less likely to be firewalled than
Re: (Score:2)
It is said that those who do not understand history are doomed to repeat it... (...) Byte streams are one of the UNIX fundamentals. Before UNIX, there were many systems which provided wide varieties of structured IO. This turned out to be a real pain, and one of the UNIX innovations was simply to scrap it.
Ah, the "It's no longer our problem, thus the problem is solved" approach. While maybe it shouldn't be in the kernel, there are some things there should be only one of, and basic messaging/IPC is one of them - looking at the wikipedia page there are more than two dozen listed, and that probably doesn't include the ancient pre-UNIX ways. It looks like the open source world is finally starting to settle on D-Bus as the core backend (Gnome, KDE and Win/Mac support) but that it's taken 40 years to get there exactly becaus
Re: (Score:3, Interesting)
if you are loading a website over HTTP and you get stuck loading a huge image, you have no choice but to open up another socket connection or else wait
I think you're confusing the HTTP protocol with BSD sockets. Your example is an HTTP 1.0 limitation; check out HTTP pipelining. [wikipedia.org]
A socket is, at its most basic, a read/write file handle. You can implement asynchronous handling, write your own protocol and do lots of extreme goodness. If you choose to be protocol stupid about how you transport your data then you live with the consequences.
As a network protocol engineer, you must look at minimum guaranteed latency, pick an average guaranteed bandwidth and taylo
Couldn't this be like a flag, rather than new API? (Score:5, Interesting)
The recently developed SCTP (Stream Control Transport Protocol) incorporates support for multihoming at the protocol level, but it is impossible to export this support through the sockets API
The word that bugs me there, is "impossible". The question is, why? If you have to do something with sockets under the hood, then so be it, but it would seem to me that you could just add a few more fields to socket address to take into account multiple homes.
We've already had alternative APIs to sockets, and for quite some time. Sockets won. There were named pipes and IPX/SPX, and the seemingly stupid idea of treating a network resource as a file has trumped them every time.
Re:Couldn't this be like a flag, rather than new A (Score:5, Interesting)
The word that bugs me there, is "impossible". The question is, why? If you have to do something with sockets under the hood, then so be it, but it would seem to me that you could just add a few more fields to socket address to take into account multiple homes.
Especially since SCTP actually does use the sockets API. You have to use recvmsg() instead of recv() if you want to do multi-homing, but in using SCTP I was actually impressed by how flexible the BSD socket API actually is. I can't say I particularly like it, and everyone who uses it ends up writing a wrapper around most of the send and recv calls, but flexibility is definitely its strong point. If we ever do get routing by carrier pigeon, the BSD socket API will be able to adapt to it.
SCTP an interesting example (Score:5, Interesting)
I am developing SCTP applications and have contributed to the Linux implementation, and I think that one of the advantages of the socket API is that it is usable with select() and poll(), i.e. it gives you file descriptors you can pass around.
But for SCTP there are things that don't fit nicely into the socket API, especially when using one-to-many socket types. For instance for retrieving options for an association you have to piggyback data in a getsockopt() call by using the output buffer also for input. It works, but it is not nice. Also, for sending/receiving messages you have to use sendmsg/recvmsg with all the features including control data, and the ugly control data parsing.
Re:SCTP an interesting example (Score:4, Informative)
So use a wrapper, like sctp_send [die.net] from libsctp. There's no reason the kernel proper has to export these interfaces.
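Assuming lksctp-tools, the wrapped calls on a one-to-many socket look roughly like this - a sketch only, error handling omitted:

```c
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    /* One-to-many style SCTP socket: one fd serves many associations. */
    int fd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(9999),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 8);

    /* Ask the stack to deliver per-message sctp_sndrcvinfo. */
    struct sctp_event_subscribe ev = { .sctp_data_io_event = 1 };
    setsockopt(fd, IPPROTO_SCTP, SCTP_EVENTS, &ev, sizeof ev);

    /* libsctp hides the sendmsg()/recvmsg() control-data plumbing. */
    char buf[1024];
    struct sctp_sndrcvinfo sinfo;
    int flags = 0;
    int n = sctp_recvmsg(fd, buf, sizeof buf, NULL, 0, &sinfo, &flags);
    printf("got %d bytes on stream %u, assoc %u\n",
           n, (unsigned)sinfo.sinfo_stream, (unsigned)sinfo.sinfo_assoc_id);

    /* Echo it back on the same association and stream via sctp_send(). */
    sctp_send(fd, buf, (size_t)n, &sinfo, 0);
    return 0;
}
```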
Re: (Score:2)
Also, select() and poll() are both inefficient. I suggest you use epoll(). Once you get the hang of it, I think you will like the interface better as well.
Hmm... (Score:3, Interesting)
In my experience the way the socket API can slow down a processor is having to monitor many thousands of socket descriptors using select() or poll(), like in a web server. For Linux epoll() was created for this scenario.
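A bare-bones epoll() loop for comparison with the select()/poll() version - Linux-specific sketch, error handling omitted, and it assumes listen_fd is already a bound, listening TCP socket:

```c
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void serve(int listen_fd)
{
    int ep = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Only descriptors with pending activity are returned; cost does
           not grow with the total number of registered sockets. */
        int n = epoll_wait(ep, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0) { close(fd); continue; }
                /* ... handle 'r' bytes from 'fd' ... */
            }
        }
    }
}
```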
STREAMS? (Score:2, Informative)
Macs used STREAMS from system 7.5.2 onwards. Was kind of sad to see that go away with the switch to OS X.
Re: (Score:2)
Streams are how information moves to network and disk.
You can't transfer bytes without a stream unless you are opening and closing a handle for every byte. If that were the case, OSX would run like so much molasses.
Further since OSX is a UNIX operating system written in C, it *has* to support streams. Streams are a part of C and UNIX and OSX is an officially certified UNIX OS.
It really seems to me... (Score:3, Insightful)
...that most of the things that this guy is talking about would be better implemented below the sockets API. As in, how the OS handles things. Making things transparent is a good thing.
I'll also point out that having a fail over interface so that the client doesn't lose the connection has already been done in OpenBSD's pf called CARP. It is a free alternative to VRRP and HSRP. In other words, this doesn't have to be implemented in the API when another avenue already exists that does it.
Yes Mine are good (Score:3, Funny)
User level networking and the last copy (Score:5, Interesting)
This is hardly news and partly mistaken.
The statement that sockets limit throughput by copying between kernel and application processes is a bit simplistic. The copy of Rx data to an application usually primes the cache. If data isn't touched and loaded into the cache at this point, it will have to be loaded shortly, anyway. Granted, for Tx this trick does not hold.
Second, the interface is not the implementation. Just because sockets are traditionally implemented as system calls does not mean that they have to be. User level networking is a well known alternative to OS services for high-bandwidth and low-latency communication (e.g., U-net [cornell.edu], developed around '96). I know, because I myself built a network stack [netstreamline.org] with large shared buffers [netstreamline.org] that implements the socket API through local function calls (blatant plug, but on topic. The implementation is still shoddy, but good enough for UDP benchmarking).
User level networking can also offer low latency. My implementation doesn't, but U-net does.
This leaves the third point of the article, on multihoming. As sockets abstract away IP addresses and network interfaces, I don't see why they cannot support multihoming behind the socket interface. Note that IP addresses do not have to be mapped 1:1 onto NICs. Operating systems generally support load-balancing or fail-over behind the interface through virtual interfaces (in IRIX [sgi.com]) or some other means (Netfilter in Linux [tetro.net]).
No need to replace sockets just yet.
Alternatives (Score:2)
I couldn't get to the article, but if they think Berkeley sockets are obsolete, I'd like to see what alternative they offer, why they think these alternatives are better, and what the pitfalls of the alternatives are.
It's not sockets, its bind() (Score:5, Interesting)
The socket API... or rather the UNIX file descriptor API... has been extended many times. Sockets are already one such extension, and there's no reason you couldn't do something like mmap() a socket to map the buffers into user space directly. Heck, udp sockets already diverge from the read/write paradigm.
The problem with sockets is at a higher level. They're not mapped into the file system name space. You should be able to open a socket by calling open() on something like "/dev/tcp/address-or-name/port-or-name" and completely hide the details of gethostbyname(), bind(), and so on from the application layer. If they'd done that, we'd already be using IPv6 for everything, because applications wouldn't have to know about the details of addresses; they'd just be arbitrary strings, like file names already are.
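Something close to that can at least be faked in a library today. A hypothetical open_tcp() wrapper (the name and interface are made up here) that hides getaddrinfo()/socket()/connect() behind a single path-like string might look like:

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Hypothetical helper: open "host/port" (e.g. "example.org/80") and return a
 * connected TCP file descriptor, or -1.  Works for IPv4 and IPv6 alike, since
 * the caller never sees an address structure.
 */
int open_tcp(const char *path)
{
    char host[256];
    const char *slash = strchr(path, '/');
    if (!slash || (size_t)(slash - path) >= sizeof host)
        return -1;
    memcpy(host, path, slash - path);
    host[slash - path] = '\0';
    const char *port = slash + 1;

    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res, *ai;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int fd = -1;
    for (ai = res; ai; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    int fd = open_tcp("example.org/80");   /* instead of "/dev/tcp/example.org/80" */
    if (fd >= 0) {
        const char req[] = "HEAD / HTTP/1.0\r\n\r\n";
        write(fd, req, sizeof req - 1);
        close(fd);
    }
    return 0;
}
```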
Plan 9 (Score:2, Informative)
Already does that.
Re: (Score:2)
I was under the impression that getaddrinfo() [wikipedia.org] already served to easily provide support for IPv6.
Re: (Score:2)
Anyone who writes an application that needs to know the details of addresses is doing it wrong. Sockets don't require any particular knowledge of the underlying network protocols.
Low latency? (Score:2)
"...high bandwidth, low latency..."? Low latency? Is the author working on some alternative universe Internet with low latency, rather than the high, increasing, and highly variable latency of the Internet here in this universe/on this planet? Or perhaps he has a telco that isn't continuously raising the price of T1s and T3s to force him onto high-latency IP connectivity "solutions"?
sPh
Re: (Score:2)
I think it's less the Internet that the author's talking about and more things like clustering, etc., which are LAN-centric, not WAN-centric.
For those sorts of configurations and applications, high bandwidth and low latency are crucial. To be able to analyze the chaotic traffic on the backbone of the WAN, you need the same sort of ability, actually.
Real problems, or.... (Score:3, Interesting)
It seems to me that all the issues the author mentions could be solved with some library written over the top of sockets (and potentially other primitives like threads). Sockets are meant to be a low level interface, not to solve every problem.
The multi-home problem is real, but could be fixed with a relatively minor extension to the API, like IPV6 has been added in.
Plan 9? (Score:2)
How does Plan 9 do this? From memory it wasn't precisely sockets... but more interesting. gah... I'll go research
Was it the Enquirer? (Score:2, Funny)
Having RTFA, I have to ask: "What in Cthulu's name have APIs got to do with all this?".
The author broadly complains of the current status of networking at the OS level (copying bytes, connecting to/from multihomed hosts, etc.). APIs don't get into it.
The title of the article appears to be an attention-grabbing device; it could well have been titled "Does Britney Spears carry my baby?".
(The incipit would be "No. Now, in a world of low latency and high bandwidth...")
Cheers,
alf
Comment removed (Score:4, Interesting)
Horrible for multiple connections (Score:4, Interesting)
Sockets are very annoying when you have a lot of clients being served by one server. Consider, for instance, a chat server, with 25000 clients connected. You have 25000 sockets, one per client (plus a listen socket for new clients to connect to).
Whenever data arrives, the system has to somehow notify you that one of your sockets is ready to read. That generally involves some kind of polling, with select or poll, or some kind of interrupt mechanism, such as a signal. I'm leaving out some options, but regardless of how you get notified, you then read the data from the appropriate socket.
Then guess what happens? Most likely you take that data, wrap it in a data structure that tells you which client it was for, and stick it on a work queue, where the main thread or threads pull things to process.
Step back and look at what happened here:
That's just insane! The kernel demultiplexed the incoming data, and the server just remultiplexed it when it put it onto the work queue. Demultiplexing belongs in the server application, not the kernel.
What I want is a single stream between my code and the kernel that delivers all the data for all 25000 clients. Whenever any client has data, I want to be able to read from that, and get back a message, that identifies which client it is from, and gives me that data.
The kernel should just be parsing the incoming TCP stream enough to recognize what port a given packet is for, and what client it came from, and then should queue it up into a single stream for the server handling that port. (The kernel has enough information from that to keep track, on a per-client basis, of how much data is pending in the queue for the server app, so it has what it needs to manage flow control).
Re: (Score:2, Insightful)
There has been an alternative all along:
http://en.wikipedia.org/wiki/Transport_Layer_Interface [wikipedia.org]
Re:haha (Score:4, Insightful)
That explains why - fortunately - it wasn't widely adopted.
Re: (Score:3, Funny)
Ro-ro..
Let's get outta here Scooby!
Re: (Score:2)