Networking Technology

IEEE Seeks For Ethernet To 'Go Green'

alphadogg submitted a NetworkWorld piece about the IEEE's efforts to bring energy efficiency to Ethernet. The IEEE's Energy Efficient Ethernet group is looking into ways the standards can be tweaked to encourage power savings. Current plans include making computers 'choosier' about how much bandwidth they use: an idle system would run at only 10Mbps, email might draw 100Mbps, and the link would scale up to 1000Mbps for large downloads and streaming video. The group is planning to discuss changes to the Ethernet link and higher layers. No restrictions are planned for device manufacturers, although the article suggests some companies might try to use energy efficiency as a competitive advantage. The EEE group estimates some $450 million a year could be saved through the use of energy-efficient Ethernet technology.
Comments Filter:
  • Re:Saving energy now (Score:2, Informative)

    by skoaldipper ( 752281 ) on Friday February 02, 2007 @03:02PM (#17862960)

    I did. The problem (FTA):

    "One challenge is finding a way to make a PC or laptop network interface card (NIC) change gears more quickly -- "a couple orders of magnitude faster than auto-negotiation, to make the switch as seamless as possible," Bennett says. "Auto-negotiation runs at about 1.4 seconds and we're talking about -- just to start the discussion -- a millisecond of switching time."

    So, why not just set NICs to negotiate at the lowest speed first, then throttle up gradually based on end-to-end transmission intervals? They talked about using buffers and NIC electrical consumption to handle the negotiation. I say just start at 10Mbps and negotiate up to gigabit speed gradually, and have the firmware drivers let you turn that feature on/off and go back to the current default. My simpleton mind must be overlooking something.
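
    A rough userspace sketch of that kind of traffic-driven throttling, assuming a Linux machine with ethtool available. The interface name and thresholds are invented for illustration, and every renegotiation still costs the ~1.4 seconds quoted above:

        # throttle_link.py -- hypothetical sketch of traffic-driven link-speed selection.
        # Assumes Linux with ethtool installed; thresholds are arbitrary illustrative values.
        import subprocess
        import time

        IFACE = "eth0"
        STATS = [f"/sys/class/net/{IFACE}/statistics/{d}_bytes" for d in ("rx", "tx")]

        def traffic_bps(interval=5.0):
            """Combined rx+tx throughput in bits/sec over `interval` seconds."""
            def total():
                return sum(int(open(p).read()) for p in STATS)
            start = total()
            time.sleep(interval)
            return (total() - start) * 8 / interval

        def set_speed(mbps):
            # Forcing a fixed speed turns autonegotiation off; a real standard would
            # need something orders of magnitude faster and less disruptive than this.
            subprocess.run(["ethtool", "-s", IFACE, "speed", str(mbps),
                            "duplex", "full", "autoneg", "off"], check=False)

        current = None
        while True:
            bps = traffic_bps()
            want = 10 if bps < 5e6 else (100 if bps < 80e6 else 1000)
            if want != current:          # only renegotiate when the target changes
                set_speed(want)
                current = want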

  • Re:I have an idea (Score:2, Informative)

    by ToxikFetus ( 925966 ) on Friday February 02, 2007 @03:24PM (#17863278)
    On an open collector [wikipedia.org] data bus, '1's would actually use less power, since that is the high-impedance state. The '0's pull the line low and sink current through the pull-ups.
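
    A back-of-envelope illustration of that point, assuming a hypothetical open-collector bus with 1k ohm pull-ups to 5V (all values invented for the example):

        # Power drawn by an open-collector bus: a line driven to '0' sinks current
        # through its pull-up resistor, while a line left at '1' (high impedance)
        # draws essentially nothing. Assumed: 5 V supply, 1 kOhm pull-ups, 8 lines.
        V_CC = 5.0       # volts
        R_PULLUP = 1e3   # ohms
        LINES = 8

        def bus_power(byte):
            zeros = LINES - bin(byte & 0xFF).count("1")
            return zeros * V_CC ** 2 / R_PULLUP    # watts (~25 mW per '0' line)

        print(f"0x00 -> {bus_power(0x00) * 1e3:.0f} mW")   # all zeros: ~200 mW
        print(f"0xFF -> {bus_power(0xFF) * 1e3:.0f} mW")   # all ones:  0 mW
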
  • by NekoXP ( 67564 ) on Friday February 02, 2007 @03:28PM (#17863352) Homepage
    Just because it doesn't generate heat doesn't mean it isn't losing power. The energy wasted relative to the power delivered over the cable is probably quite high (DC doesn't travel well; that's why wall power is AC, remember) compared to just wiring up wall sockets and using wall warts or switching PSUs.

    You're just transferring the wall wart to another room, though, and adding the loss over the cable to the overall inefficiency. Imagine the extra air conditioning the room with the new site-wide AC-DC converter will need :D

    PoE is a clever way to power devices that sit in hard-to-power places (where you can run a thin network cable but there's no power socket nearby), and it keeps devices cheap (nothing needed but DC-DC conversion from PoE to the components), but it's not any better energy-efficiency-wise.

    Can't this IEEE stuff they're talking about simply be built into drivers? I know my laptop Ethernet (Intel) can scale down the link speed when running on battery, during standby and so on. Would it cause too much trouble to have the driver anticipate and schedule a renegotiation on a power-source change or based on activity? Why would Ethernet vendors need to be involved if it were simply a driver 'problem', apart from having to write drivers that do it for their hardware (which most of them DO already)?

    Can't we have a sysctl or a sysfs tweak in Linux/BSD/whatever to demonstrate it and see if it even helps (something like the sketch at the end of this comment)? Does networking hardware at the other end (for instance a 32-port Cisco switch) actually use less power if half its ports are at 10Mbit rather than 100Mbit?

    Can't we do this with wireless? 802.11b etc. already has power calibration built in, but could it pull the transmit power back when the bandwidth requirement isn't so high, saving battery life and not polluting the airwaves with high-powered chatter? My card uses the same transmit power whatever the state of the laptop is.
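
    A minimal userspace demonstration of the power-source idea above: restrict what the NIC advertises when on battery and let autonegotiation sort out the rest. This assumes Linux, ethtool, and a sysfs AC-adapter node (whose name varies by machine); it is a sketch, not how the Intel driver actually does it:

        # Hypothetical sketch: renegotiate Ethernet speed when the power source changes.
        # Assumes Linux with ethtool; the sysfs AC node name varies (AC, AC0, ADP1, ...).
        import subprocess
        import time

        IFACE = "eth0"
        AC_ONLINE = "/sys/class/power_supply/AC/online"   # adjust for your machine

        def on_ac_power():
            with open(AC_ONLINE) as f:
                return f.read().strip() == "1"

        def advertise(speeds_mbps):
            # Limit the speeds the NIC advertises and keep autonegotiation on,
            # rather than forcing a fixed speed outright.
            # ethtool advertise bits: 0x002 = 10baseT/Full, 0x008 = 100baseT/Full,
            # 0x020 = 1000baseT/Full (see ethtool(8)).
            mask = 0
            if 10 in speeds_mbps:
                mask |= 0x002
            if 100 in speeds_mbps:
                mask |= 0x008
            if 1000 in speeds_mbps:
                mask |= 0x020
            subprocess.run(["ethtool", "-s", IFACE, "autoneg", "on",
                            "advertise", hex(mask)], check=False)

        last = None
        while True:
            ac = on_ac_power()
            if ac != last:
                # On battery, advertise 10 Mbit only; on AC, allow everything.
                advertise([10, 100, 1000] if ac else [10])
                last = ac
            time.sleep(2)
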
  • by jaredmauch ( 633928 ) <jared@puck.nether.net> on Friday February 02, 2007 @03:30PM (#17863380) Homepage
    When you're talking about larger switches and routers, and not the cheap Linksys/D-Link crap most people call a "router", there was actually a good presentation [nanog.org] at NANOG last year. You can watch it (RealVideo) from the link and view the slides. Most of the efficiency work in these larger devices has already been done (obviously excluding that whole Google + PC power supply discussion). Check it out if you are truly interested in this space.
  • by Kadin2048 ( 468275 ) <slashdot.kadin@xox y . net> on Friday February 02, 2007 @03:31PM (#17863416) Homepage Journal


    An idea I've always thought about is converting to DC supplies indoors. AC has an advantage in terms of long-distance transmission, but in this day and age a HUGE part of our electricity use is in devices that require DC power. Hell, many of the things that run on AC (like lights) can in fact run on DC with nary a problem. It's always boggled my mind why we have a bajillion power bricks sitting around, each venting heat like mad converting AC to DC, when we could have a much more efficient "main" transformer installed in the house that does it on a larger scale and feeds our devices directly.

    I imagine this would be even more useful for heavy power-using environments like server farms - imagine if you could do away with the huge boxy PSUs in every single box and just have a unified DC power source that can be FAR more efficient than what's in the average beige boxen.


    It is a good idea; in fact it's such a good idea that people have been thinking about ways to try and implement it in datacenters for a while. Unfortunately one of the bigger problems is that most motherboards don't run off of a single voltage; they have +5, -5, +3.3, +12, and so on. There has been a push by some big server-farm operators, Google in particular, to encourage board makers to produce mobos that only require a single +12V supply, because then you could do exactly what you say: have a big AC to DC converter somewhere (probably running from a medium-voltage AC main) and then distribute the 12VDC around to the racks.

    It was a Slashdot article back in September:
    http://hardware.slashdot.org/article.pl?sid=06/09/26/2039213 [slashdot.org]
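
    For a back-of-envelope sense of why centralizing the AC-DC conversion appeals, here is a toy comparison. The efficiency figures are assumptions purely for illustration, and losses in the DC distribution itself (raised elsewhere in this thread) are ignored:

        # Rough illustration of why centralizing AC-DC conversion can pay off.
        # Efficiency numbers are assumptions for illustration, not measurements,
        # and losses in the DC distribution itself are ignored here.
        SERVERS = 40                # one rack
        LOAD_PER_SERVER = 250.0     # watts of DC each box actually consumes

        def wall_power(dc_load, efficiency):
            return dc_load / efficiency

        per_box = SERVERS * wall_power(LOAD_PER_SERVER, 0.75)   # cheap per-box ATX PSU
        central = wall_power(SERVERS * LOAD_PER_SERVER, 0.92)   # one large rectifier

        print(f"per-box PSUs : {per_box:,.0f} W from the wall")
        print(f"centralized  : {central:,.0f} W from the wall")
        print(f"difference   : {per_box - central:,.0f} W per rack")
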
  • by Shakrai ( 717556 ) on Friday February 02, 2007 @03:36PM (#17863492) Journal

    (DC doesn't travel well, that's why wall power is AC remember)

    This is a very common misconception. Low voltages don't travel well, because you need more current (i.e., amps) to carry the same amount of power, and that requires bigger wires. The main reason your wall power is AC is that it's easier and cheaper to build transformers for AC that convert high voltages (for distribution) into low voltages (for use).

    DC is actually used in electrical distribution. It's known as HVDC [wikipedia.org] and it's actually more efficient than AC because it doesn't have to contend with capacitance issues.
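
    A quick worked example of the current-versus-voltage point: the same delivered power over the same copper, at different voltages. The wire resistance is a placeholder, and the calculation is first-order (it ignores the drop's effect on the required current):

        # Resistive loss for the same delivered power at different voltages.
        # I = P / V and P_loss = I^2 * R, so doubling the voltage quarters the loss.
        R_WIRE = 0.05     # ohms round-trip (placeholder)
        P_LOAD = 1000.0   # watts delivered to the load

        for volts in (12, 48, 120, 230):
            amps = P_LOAD / volts
            loss = amps ** 2 * R_WIRE
            print(f"{volts:>4} V: {amps:6.1f} A, {loss:7.1f} W lost in the wire")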

  • Re:Measurable? (Score:4, Informative)

    by Matt_Bennett ( 79107 ) on Friday February 02, 2007 @03:49PM (#17863708) Homepage Journal
    Actually, it is pretty surprising how much current gigabit takes. The output drivers usually work in a current mode and draw 40mA per pair; since gigabit uses 4 pairs, that's 160mA on each end of a gigabit link. *But* the big difference is in what happens when the link is idling: 10Mbit only puts out link test pulses, while 100Mbit and gigabit both keep up idle patterns, basically encoded strings of no information, which keep both ends of the link ready to accept data (10Mbit instead has to transmit a synchronization series of pulses to make sure both ends are clocking at the same rate before data flows). For 100 and gig, at least as far as the output drivers go, they draw the same amount idling or transmitting at line speed.
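
    For a rough sense of scale, taking the 40mA-per-pair figure above and assuming (purely illustratively) a 2.5V analog supply rail for the line drivers:

        # Back-of-envelope line-driver power for a gigabit PHY.
        I_PER_PAIR = 0.040   # amps, from the comment above
        PAIRS = 4            # 1000BASE-T drives all four pairs
        V_RAIL = 2.5         # volts -- an assumption; the real rail varies by PHY

        per_end = I_PER_PAIR * PAIRS * V_RAIL
        print(f"{per_end:.2f} W per link end, {2 * per_end:.2f} W per link")
        # ~0.4 W / ~0.8 W -- and per the comment above, the drivers burn this
        # whether the link is idle or saturated.
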
  • by dattaway ( 3088 ) on Friday February 02, 2007 @04:13PM (#17864032) Homepage Journal
    just use zener diodes to get the voltage down to whatever it is that the device requires?

    Using shunt regulation? Bleeding off what you don't use in the form of heat? That's worse than linear voltage regulators!
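
    To put a number on how wasteful shunt regulation is, a toy calculation with invented component values:

        # A zener shunt regulator's series resistor must pass the *maximum* load
        # current at all times; whatever the load doesn't take, the zener burns
        # as heat. All values below are invented for illustration.
        V_IN, V_OUT = 12.0, 5.0
        I_LOAD_MAX = 1.0                           # amps the device might draw
        R_SERIES = (V_IN - V_OUT) / I_LOAD_MAX     # sized for worst case: 7 ohms

        for i_load in (1.0, 0.5, 0.1):
            i_total = (V_IN - V_OUT) / R_SERIES    # always 1 A through the resistor
            p_in = V_IN * i_total                  # 12 W drawn regardless of load
            p_out = V_OUT * i_load                 # useful power delivered
            print(f"load {i_load:.1f} A: {p_out:.1f} W useful of {p_in:.1f} W in "
                  f"({100 * p_out / p_in:.0f}% efficient)")
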
  • Re:I have an idea (Score:2, Informative)

    by skiingyac ( 262641 ) on Friday February 02, 2007 @04:22PM (#17864210)
    If you really wanted to, you could include a sync preamble as is done in many wireless physical-layer protocols (which might negate any efficiency gains), or use an encoding like the one used on CDs to ensure you never get such a long run of all 1s or all 0s that clock drift between the two ends causes ambiguity (less efficient than no encoding, but possibly better than Manchester).
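
    For reference, the Manchester encoding mentioned above is trivial to sketch: every data bit becomes a guaranteed mid-bit transition, which solves clock recovery at the cost of doubling the signalling rate.

        # Manchester encoding: each bit becomes a two-symbol pair with a mid-bit
        # transition, so the receiver can recover the clock even from long runs
        # of identical data bits -- at the price of 2x the line rate.
        def manchester_encode(bits):
            # IEEE 802.3 convention: 0 -> high-then-low, 1 -> low-then-high.
            table = {0: (1, 0), 1: (0, 1)}
            out = []
            for b in bits:
                out.extend(table[b])
            return out

        data = [1, 1, 1, 1, 0, 0, 0, 0]   # long runs with no transitions of their own
        print(manchester_encode(data))
        # -> [0,1, 0,1, 0,1, 0,1, 1,0, 1,0, 1,0, 1,0]: a transition in every bit cell
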
  • Re:Question? (Score:2, Informative)

    by greed ( 112493 ) on Friday February 02, 2007 @04:24PM (#17864256)

    It's the square of the clock speed; it comes from some math in second- or third-year Electrical Engineering.

    It has an awful lot to do with line capacitance and inductance; you've basically got to "fill up" the line before you can see the signal change at the other end. (Be it at chip-level or network-cable-level.)

    Which is why narrower fab processes and low-voltage differential signaling are so important in high-speed circuits; all those watts are heat that has to be dissipated. Narrower CMOS gates take fewer electrons to charge up, and by also reducing the voltage needed to see the signal change, you can reduce the impact of a clock speed increase.

    But that also means the old, slower speeds with modern signaling could run on nearly no power. Which we do; that's how the iPod and cellphones get smaller and run longer each year. (Dropping analog support on a 'phone helps a lot, too.)
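
    The relation being gestured at here is the usual CMOS dynamic-power formula, P = alpha * C * V^2 * f: quadratic in voltage and linear in clock frequency, and since higher clocks generally demand higher voltage, power climbs steeply with speed in practice. A toy calculation, with all values illustrative:

        # Classic CMOS dynamic power: P = alpha * C * V^2 * f.
        # Values below are purely illustrative.
        def dynamic_power(c_farads, v_volts, f_hertz, alpha=0.2):
            """alpha = activity factor (fraction of the capacitance switched per cycle)."""
            return alpha * c_farads * v_volts ** 2 * f_hertz

        C = 1e-9   # 1 nF of total switched capacitance (illustrative)
        print(dynamic_power(C, 1.2, 2.0e9))   # 1.2 V at 2 GHz -> ~0.58 W
        print(dynamic_power(C, 0.9, 1.0e9))   # 0.9 V at 1 GHz -> ~0.16 W
        # Half the clock, but roughly 3.5x less power thanks to the V^2 term.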

  • by Ungrounded Lightning ( 62228 ) on Friday February 02, 2007 @04:45PM (#17864588) Journal
    It is a good idea; in fact it's such a good idea that people have been thinking about ways to try and implement it in datacenters for a while.

    Actually the networking industry DOES do it that way. Power to many routers (such as ALL the ones some major companies make) and other networking gear is redundant 48V DC - a standard for networking equipment dating from the days of relays. (Line-powered units have extra line-powered supplies to make the 48V DC.)

    Not only that, but often the boxes don't have a per-box 48-to-whatever supply. Instead each blade requiring other voltages has its own switching regulators.

    (This isn't just for efficiency - it's also for redundancy. A box power supply is a single point of failure for the box. Give each card its own supply running directly from the redundant power busses and if one fails all the other cards in the box keep working - meaning only the lines to that card are in trouble, not everything hooked to the box. You have to pull a card with a failing component to replace it anyhow - so if you want to cover the lines to it in case of card failure you need other redundancy anyhow. So single points of failure on a card are OK.)

    Power requirements of modern ASICs, networking processors, and RAMs are getting higher, operating voltages are getting lower (for better speed-power products), forcing higher currents, and switching DC-DC converters/regulators are getting more efficient. These days it actually makes sense to add an additional regulator near a major load, so the power can cross a few inches of the PC board at a higher voltage and lower current, avoiding heating and voltage droop in the layer of copper that carries it. You're starting to see that in PC motherboards, too.
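
    A rough illustration of that point-of-load argument: moving the same power across a few inches of board copper at different rail voltages (the copper resistance is a made-up figure):

        # Why boards distribute power at a higher voltage and convert next to the load:
        # the same watts at a lower voltage mean more amps, and droop/loss go as I*R / I^2*R.
        R_COPPER = 0.002   # ohms between regulator and load (made-up figure)
        P_LOAD = 60.0      # watts for a hungry ASIC

        for volts in (1.0, 3.3, 12.0):
            amps = P_LOAD / volts
            droop = amps * R_COPPER          # volts lost along the copper
            loss = amps ** 2 * R_COPPER      # watts turned into heat in the copper
            print(f"{volts:>4.1f} V rail: {amps:5.1f} A, droop {droop * 1e3:5.1f} mV, "
                  f"copper loss {loss:4.2f} W")
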
  • Re:Saving energy now (Score:4, Informative)

    by phayes ( 202222 ) on Friday February 02, 2007 @11:54PM (#17869656) Homepage
    Given the number of times autonegotiation has given me headaches because supposedly compliant devices couldn't agree on how to set up a connection, I wouldn't want to enable this on any of my networks. I just can't see myself explaining to the CIO that the reason the ERP is slow to the point of being unusable is that the core switches renegotiated their bandwidth down to 10Mbit/sec overnight while unused and couldn't ramp it back up correctly. There is a reason that autonegotiation is often disabled & it's called experience...
