IEEE Seeks For Ethernet To 'Go Green'
alphadogg submitted a piece at the NetworkWorld site about the IEEE's efforts to introduce energy efficiency to Ethernet. The group's Energy Efficient Ethernet task force is looking into ways the standards can be tweaked to encourage power savings. Current plans include making computers 'choosier' about how much bandwidth they use: an idle system would run at only 10Mbps, e-mail might draw 100Mbps, and large downloads or streaming video would scale up to 1000Mbps. The group is planning to discuss changes to the Ethernet link layer and higher layers. No restrictions are planned for device manufacturers, although the article suggests some companies might try to use energy efficiency as a competitive advantage. The EEE group estimates some $450 million a year could be saved through energy-efficient Ethernet technology.
Re:Saving energy now (Score:2, Informative)
I did. The problem (FTA):
"One challenge is finding a way to make a PC or laptop network interface card (NIC) change gears more quickly -- "a couple orders of magnitude faster than auto-negotiation, to make the switch as seamless as possible," Bennett says. "Auto-negotiation runs at about 1.4 seconds and we're talking about -- just to start the discussion -- a millisecond of switching time."
So, why not just set NICs to negotiate at the lowest speed first, then throttle up gradually based on end-to-end transmission intervals? They talked about using buffers and NIC electrical consumption to handle the negotiation. I say just start at 10Mbps and negotiate up to gigabit speed gradually, and have the firmware/drivers let you turn that feature on and off and fall back to the current default. My simpleton mind must be overlooking something.
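The throttle-up loop I have in mind would look something like this (a toy sketch; the speed tiers and utilization thresholds are invented for illustration, not taken from any real driver):

```python
SPEEDS = [10, 100, 1000]  # supported link speeds in Mbps, lowest first

def next_speed(current, utilization):
    """Pick the next link speed from observed utilization (0.0-1.0)
    of the current link."""
    i = SPEEDS.index(current)
    if utilization > 0.8 and i < len(SPEEDS) - 1:
        return SPEEDS[i + 1]   # link is near saturation: shift up a gear
    if utilization < 0.05 and i > 0:
        return SPEEDS[i - 1]   # link is idle: shift back down
    return current             # stay put

# Start low and let sustained traffic ramp the link up.
speed = 10
for util in (0.01, 0.95, 0.9, 0.5, 0.02):
    speed = next_speed(speed, util)
```

The whole trick would be doing the actual renegotiation in around a millisecond, which is exactly the hard part the article talks about.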
Re:Power over Ethernet Could Help (Score:5, Informative)
You're just transferring the wall-wart to another room, though, and the loss over the cable adds to the power inefficiency. Imagine the extra air conditioning the room with the new site-wide AC-DC converter will need.
PoE is a clever way to power devices in hard-to-power places (anywhere you can run a thin network cable but a power socket is far away), and it keeps devices cheap (they need nothing but DC-DC conversion from the PoE feed), but it's not any better energy-efficiency-wise.
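For a sense of how big that cable loss is, the 802.3af numbers work out like so (worst-case 20-ohm loop, 350mA maximum current, which is why a PSE sources 15.4W but the powered device is only guaranteed 12.95W):

```python
I = 0.350        # 802.3af max current, amps
R_loop = 20.0    # worst-case loop resistance, ohms (~100m of Cat5)
P_source = 15.4  # watts sourced by the PSE

P_cable = I**2 * R_loop        # power burned in the cable: ~2.45 W
P_device = P_source - P_cable  # guaranteed at the powered device: ~12.95 W
```

So roughly 16% of the sourced power can vanish into the cable before the device sees a single watt.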
Can't this IEEE stuff they're talking about simply be built into drivers? I know my laptop Ethernet (Intel) can scale down the link speed when running on battery, during standby, and so on. Would it cause too much trouble to have the driver anticipate and schedule a renegotiation on a power-source change, or based on activity? Why would Ethernet vendors need to be involved if it were simply a driver 'problem', apart from having to write drivers that do it for their hardware (which most of them already DO)?
Can't we have a sysctl or a sysfs tweak in Linux/BSD/whatever to demonstrate it and see if it even helps? Does networking hardware at the other end (for instance a 32-port Cisco switch) actually use less power if half its ports are at 10Mbit rather than 100Mbit?
Can't we do this with wireless? 802.11b etc. already has power calibration built in, but could it pull the transmit power back when the bandwidth requirement isn't so high, saving battery life and not polluting the airwaves with high-powered chatter? My card uses the same transmit power whatever state the laptop is in.
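For the wired case you can actually try this by hand today with ethtool on Linux (the interface name eth0 is just an example, and you'll need root):

```shell
# Force the link down to 10 Mbit full duplex (disables auto-negotiation)
sudo ethtool -s eth0 speed 10 duplex full autoneg off

# ...measure the power draw at the NIC and the switch port here...

# Put auto-negotiation back the way it was
sudo ethtool -s eth0 autoneg on
```

Stick a watt-meter on the switch before and after, and you've got the experiment the parent is asking for.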
Re:Power over Ethernet Could Help (Score:5, Informative)
An idea I've always thought about is converting to DC supplies indoors. AC has an advantage in terms of long-distance transmission, but in this day and age a HUGE part of our electric use is in devices that require DC power. Hell, many of the things that run on AC (like lights) can in fact run on DC with nary a problem. It's always boggled my mind why we have a bajillion power bricks sitting around, each venting heat like mad converting AC to DC, when we could have one much more efficient "main" converter installed in the house that does it on a larger scale and feeds our devices directly.
I imagine this would be even more useful for power-hungry environments like server farms: imagine if you could do away with the huge boxy PSUs in every single box and just have a unified DC power source that can be FAR more efficient than what's in the average beige boxen.
It is a good idea; in fact it's such a good idea that people have been thinking about ways to try and implement it in datacenters for a while. Unfortunately one of the bigger problems is that most motherboards don't run off of a single voltage; they have +5, -5, +3.3, +12, and so on. There has been a push by some big server-farm operators, Google in particular, to encourage board makers to produce mobos that only require a single +12V supply, because then you could do exactly what you say: have a big AC to DC converter somewhere (probably running from a medium-voltage AC main) and then distribute the 12VDC around to the racks.
It was a Slashdot article back in September:
http://hardware.slashdot.org/article.pl?sid=06/09
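The efficiency argument for the shared converter is easy to see with rough numbers; the stage efficiencies below are made up for illustration, but they show how cascaded conversion losses multiply:

```python
def chain(*etas):
    """Overall efficiency of cascaded conversion stages (they multiply)."""
    total = 1.0
    for e in etas:
        total *= e
    return total

per_box  = chain(0.80)         # a mediocre per-box AC-DC PSU
facility = chain(0.95, 0.92)   # big shared rectifier, then 12V point-of-load DC-DC
# facility comes out around 0.874: better despite the extra stage
```

The big rectifier can be built well because you only need one of it, so even with an extra DC-DC stage per board the total comes out ahead of a cheap PSU in every box.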
Re:Power over Ethernet Could Help (Score:5, Informative)
(DC doesn't travel well, that's why wall power is AC remember)
This is a very common misconception. Low voltages don't travel well, because you need more current (i.e., amps) to carry the same amount of power, and that requires bigger wires. The main reason your wall power is AC is that it's easier and cheaper to build transformers for AC that step high voltages (for distribution) down to low voltages (for usage).
DC is actually used in electrical distribution. It's known as HVDC [wikipedia.org], and it's actually more efficient than AC because it doesn't have to contend with capacitive and other reactive losses.
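The voltage/current point in numbers: for a fixed power over a fixed wire, I = P/V and loss = I^2 * R, so resistive loss falls with the square of the voltage. A toy example (figures invented for illustration):

```python
def line_loss(power_w, volts, r_ohms):
    """I^2 * R loss for delivering power_w at volts over r_ohms of wire."""
    current = power_w / volts
    return current**2 * r_ohms

R = 1.0                          # ohms of wire resistance
low  = line_loss(1200, 12, R)    # 12 V needs 100 A: 10,000 W lost (absurd)
high = line_loss(1200, 120, R)   # 120 V needs 10 A: 100 W lost
# tenfold voltage, hundredfold less resistive loss
```

Which is the whole reason distribution happens at high voltage, AC or DC.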
Re:Power over Ethernet Could Help (Score:3, Informative)
Using shunt regulation? Bleeding off what you don't use in the form of heat? That's worse than linear voltage regulators!
Re:Question? (Score:2, Informative)
It's basically P = C * V^2 * f; the derivation shows up in second- or third-year Electrical Engineering. Note the square is on the supply voltage, not the clock speed, but since the voltage usually has to rise along with the clock, power grows faster than linearly with frequency.
It has an awful lot to do with line capacitance and inductance; you've basically got to "fill up" the line before you can see the signal change at the other end. (Be it at chip-level or network-cable-level.)
Which is why narrower fab processes and low-voltage differential signaling are so important in high-speed circuits; all those watts are heat that has to be dissipated. Narrower CMOS gates take fewer electrons to charge up, and by also reducing the voltage swing needed to see the signal change, you reduce the impact of the clock-speed increase.
But that also means the old, slower speeds with modern signaling can run on nearly no power. Which we do; that's how iPods and cellphones get smaller and run longer each year. (Dropping analog support on a 'phone helps a lot, too.)
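The standard first-order CMOS dynamic-power formula the parent is gesturing at is P = C * V^2 * f; plugging in some illustrative (invented) numbers shows why a small voltage drop buys so much:

```python
def dynamic_power(c_farads, v_volts, f_hz):
    """First-order CMOS dynamic power: P = C * V^2 * f."""
    return c_farads * v_volts**2 * f_hz

base  = dynamic_power(1e-9, 1.2, 1e9)  # 1 nF switched at 1.2 V and 1 GHz: 1.44 W
low_v = dynamic_power(1e-9, 0.9, 1e9)  # drop to 0.9 V: 0.81 W, ~44% less
```

A 25% voltage cut saves ~44% of the dynamic power at the same clock, which is exactly why low-voltage signaling is such a big win.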
Re:Power over Ethernet Could Help (Score:3, Informative)
Actually, the networking industry DOES do it that way. The power supply to many routers (such as ALL the ones some major companies make) and other networking gear is redundant 48V DC, a standard for networking equipment dating from the days of relays. (Line-powered units have extra line-powered supplies to make the 48V DC.)
Not only that, but often the boxes don't have a per-box 48-to-whatever supply. Instead each blade requiring other voltages has its own switching regulators.
(This isn't just for efficiency - it's also for redundancy. A box power supply is a single point of failure for the box. Give each card its own supply running directly from the redundant power busses and if one fails all the other cards in the box keep working - meaning only the lines to that card are in trouble, not everything hooked to the box. You have to pull a card with a failing component to replace it anyhow - so if you want to cover the lines to it in case of card failure you need other redundancy anyhow. So single points of failure on a card are OK.)
Power requirements on modern ASICs, networking processors, and RAMs are getting higher while operating voltages get lower (for better speed-power products), forcing higher currents, and switching DC-DC converter/regulators are getting more efficient. These days it actually makes sense to add an extra regulator near a major load so the power can cross a few inches of the PC board at a higher voltage and lower current, avoiding heating and voltage droop in the layer of copper that carries it. You're starting to see that in PC motherboards, too.
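The droop argument in rough numbers (the plane resistance and load figures are invented for illustration):

```python
R_plane = 0.002  # ohms across a few inches of copper plane (invented figure)
P = 60.0         # watts delivered to the load

def droop(volts):
    """IR drop across the plane when delivering P at the given rail voltage."""
    current = P / volts
    return current * R_plane

at_12v = droop(12.0)  # 5 A  -> 10 mV of droop, no big deal
at_1v  = droop(1.0)   # 60 A -> 120 mV of droop: fatal at a 1 V rail
```

Same power, same copper, but at a 1V core rail that 120mV is more than a tenth of your supply, which is why the final regulator has to sit right next to the load.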