Networking Technology

IEEE Seeks For Ethernet To 'Go Green'

Posted by Zonk
from the stop-packet-waste-now dept.
alphadogg submitted a piece at the NetworkWorld site about the IEEE's efforts to introduce energy efficiency to Ethernet use. The group's Energy Efficient Ethernet group is looking into methods by which standards can be tweaked to encourage power savings. Current plans include ways to make computers 'choosier' about what level of bandwidth they're using. Idle systems would run at only 10Mbps, while email might draw 100Mbps, scaling up to 1000Mbps for large downloads and streaming video. The group is planning to discuss changes to the Ethernet link and higher layers. No restrictions are planned for device manufacturers, although the article suggests some companies might try to use energy efficiency as a competitive advantage. The EEE group estimates some $450 million a year could be saved via the use of energy-efficient Ethernet technology.
  • by jbeaupre (752124) on Friday February 02, 2007 @02:49PM (#17862750)
    Seems they are saving energy by throttling bandwidth for the article. Anyone manage to read it?
    • Re: (Score:2, Insightful)

      by jbeaupre (752124)
      Never mind ... I just had to try connecting 3 or 4 times. Interesting idea. Let's see ... throw out millions of PCs with integrated Ethernet, replace them with new machines. Oh, guess they mean in a decade or so through normal replacement.
    • Re: (Score:2, Informative)

      by skoaldipper (752281)

      I did. The problem (FTA):

      "One challenge is finding a way to make a PC or laptop network interface card (NIC) change gears more quickly -- "a couple orders of magnitude faster than auto-negotiation, to make the switch as seamless as possible," Bennett says. "Auto-negotiation runs at about 1.4 seconds and we're talking about -- just to start the discussion -- a millisecond of switching time."

      So, why not just set NIC(s) to negotiate at the lowest speed first? Then throttle up gradually based on end to
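The "start low, throttle up" idea in this comment can be sketched as a toy rate-selection policy. Everything here (the speed ladder, one-step downshifts, instant upshifts) is an invented illustration, not anything from the IEEE draft:

```python
# Hypothetical demand-based link-speed selection: upshift immediately
# when traffic demands it (where switching latency matters most), but
# downshift only one step at a time to avoid flapping.

SPEEDS_MBPS = [10, 100, 1000]

def pick_speed(current_mbps, demand_mbps):
    """Return the next link speed given current speed and offered load."""
    # Lowest standard speed that satisfies the demand.
    for s in SPEEDS_MBPS:
        if s >= demand_mbps:
            needed = s
            break
    else:
        needed = SPEEDS_MBPS[-1]
    if needed > current_mbps:
        return needed                  # fast upshift for bursts
    idx = SPEEDS_MBPS.index(current_mbps)
    if needed < current_mbps and idx > 0:
        return SPEEDS_MBPS[idx - 1]    # gradual downshift when idle
    return current_mbps
```

With this policy an idle host steps 1000 -> 100 -> 10 over successive idle intervals, while a large download at 10 Mbps jumps straight back to 1000.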

      • by AoT (107216)
        I think the problem is that they don't want the throttling up to be gradual, they want it to change gears quick like.
      • Re:Saving energy now (Score:4, Informative)

        by phayes (202222) on Friday February 02, 2007 @11:54PM (#17869656) Homepage
        Given the number of times that autonegotiation has given me headaches because supposedly compliant devices couldn't agree on how to set up a connection, I wouldn't want to set this up on any of my networks. I just can't see myself explaining to the CIO that the reason the ERP is slow to the point of being unusable is because the core switches renegotiated their bandwidth down to 10Mbit/sec overnight when they were unused and were unable to ramp it back up again correctly. There is a reason that autonegotiation is often disabled & it's called experience...
        • by amorsen (7485)
          There is a reason that autonegotiation is often disabled & it's called experience...

          Usually it's called resistance to change. I haven't seen any trouble since around 2000. Except when the idiot at the other end of the cable locked it at 100-full, forcing my end to go 100-half. Luckily that problem is gone with gigabit, since that is autonegotiation or nothing.
          • by phayes (202222)
            No, it's experience.

            I spent part of the holidays on a transatlantic trip to debug a network where applications being used to keep track of trains in a subway were failing. Periodic misnegotiation of the Ethernet parameters was a major part of the problem that disappeared once the ports were set statically. Part of the problem was that some of the equipment was a few years old, but then we didn't have the luxury of telling the client that all he needed to do to have a functional network was to replace the o

    • Re: (Score:3, Interesting)

      by Anonymous Coward
      Broadcom Ethernet PHY chips use about 1W/port when configured as 1000BaseT (GigE). GigE requires some heavy-duty DSP filtering as well as driving 4 pairs of bidirectional transceivers. They burn less power when running at 100BaseT, which only has to drive 1 pair each of receive and transmit. Not sure if there are significant savings going down to 10BaseT, as the number of transmit pairs and the DSPs are dominant.

      While this might not seem a whole lot of power, when you are looking at Enterprise
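To put rough numbers on the enterprise-scale point: the ~1 W/port GigE figure comes from the parent comment, while the 100BaseT draw, idle fraction, and electricity price below are assumed values for illustration only.

```python
# Back-of-envelope savings estimate for stepping idle GigE ports down
# to 100BaseT. Only the 1 W GigE figure comes from the comment above;
# everything else is an assumption made up for this sketch.

PORTS = 10_000            # ports in a large enterprise network (assumed)
GIGE_W = 1.0              # per-port PHY draw at 1000BaseT (from comment)
IDLE_100_W = 0.4          # assumed per-port draw idling at 100BaseT
IDLE_FRACTION = 0.8       # assumed fraction of the day a port sits idle
PRICE_PER_KWH = 0.10      # USD per kWh, assumed

watts_saved = PORTS * (GIGE_W - IDLE_100_W) * IDLE_FRACTION
kwh_per_year = watts_saved * 24 * 365 / 1000
dollars_per_year = kwh_per_year * PRICE_PER_KWH

print(f"{watts_saved:.0f} W average saved -> ${dollars_per_year:,.0f}/year")
```

Per site the number is modest, which is why the IEEE's $450M/year estimate only emerges when summed over every deployed port.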
  • by MECC (8478) * on Friday February 02, 2007 @02:50PM (#17862770)
    Once Apple adds the ability to negotiate EEE in Macs, they'll call it iEEE.
  • by Salsaman (141471) on Friday February 02, 2007 @02:58PM (#17862892) Homepage
    Use more zeros and fewer ones.
    • Re: (Score:2, Informative)

      by ToxikFetus (925966)
      On an open collector [wikipedia.org] data bus, '1's would actually use less power, since that is the high-impedance state. The '0's pull the line down and draw current.
      • by dfn_deux (535506)
        You know what would be awesome? If people read the articles that they linked too...

        Open-collector BJTs (which are usually NPN) exhibit faster fall times and greater current-handling capabilities than FETs, but have other problems. One of them is that they consume a lot of power.

        Possible problems

        As mentioned above, open-collector devices can handle more current, but they also have higher current minimums for correct operation. Even in the "off" state, open-collectors have some few nanoamps of leakage curr

  • by Kadin2048 (468275) <[slashdot.kadin] [at] [xoxy.net]> on Friday February 02, 2007 @02:59PM (#17862910) Homepage Journal
    One of the easiest ways that the Ethernet people could encourage energy efficiency would be by promoting greater use of Power Over Ethernet. By moving networked devices away from each having an individual wall wart, which are typically inefficient (as well as inconvenient), PoE lets you concentrate the AC to DC conversion in one place, for greater efficiency. As long as you don't have terribly long cable runs, I think there would be a significant net savings overall.

    The number of networked devices people are going to have in their homes is only going to grow. I think a big segment could be in "Micro NAS" devices, basically single HD boxes that plug in to a home network and add storage that's accessible from any computer in the home. They're smaller and cheaper than RAIDed NAS solutions, but more convenient for people who have multiple computers than a FireWire or USB2.0 hard drive. And then you have routers, WiFi APs, network cameras, set-top-boxes for playing back video and audio, etc. All of those light-draw devices could be powered over the network connection instead of each having a wall wart.
    • by GigsVT (208848) on Friday February 02, 2007 @03:01PM (#17862926) Journal
      Running power over tiny 24 gauge wires is very inefficient too. Try again.
      • by p0tat03 (985078) on Friday February 02, 2007 @03:20PM (#17863218)

        An idea I've always thought about is converting to DC supplies indoors. AC has an advantage in terms of long-distance transmission, but in this day and age a HUGE part of our electric use is in devices that require DC power. Hell, many of the things that run AC (like lights) can in fact run DC with nary a problem. It's always boggled my mind why we have a bajillion power bricks sitting around, each venting heat like mad converting AC/DC, when in fact we could have a much more efficient "main" transformer installed in the house that does it on a larger scale and feeds our devices directly.

        I imagine this would be even more useful for heavy power-using environments like server farms - imagine if you could do away with the huge boxy PSUs in every single box and just have a unified DC power source that can be FAR more efficient than what's in the average beige boxen.

        • by Kadin2048 (468275) <[slashdot.kadin] [at] [xoxy.net]> on Friday February 02, 2007 @03:31PM (#17863416) Homepage Journal


          An idea I've always thought about is converting to DC supplies indoors. AC has an advantage in terms of long-distance transmission, but in this day and age a HUGE part of our electric use is in devices that require DC power. Hell, many of the things that run AC (like lights) can in fact run DC with nary a problem. It's always boggled my mind why we have a bajillion power bricks sitting around, each venting heat like mad converting AC/DC, when in fact we could have a much more efficient "main" transformer installed in the house that does it on a larger scale and feeds our devices directly.

          I imagine this would be even more useful for heavy power-using environments like server farms - imagine if you could do away with the huge boxy PSUs in every single box and just have a unified DC power source that can be FAR more efficient than what's in the average beige boxen.


          It is a good idea; in fact it's such a good idea that people have been thinking about ways to try and implement it in datacenters for a while. Unfortunately one of the bigger problems is that most motherboards don't run off of a single voltage; they have +5, -5, +3.3, +12, and so on. There has been a push by some big server-farm operators, Google in particular, to encourage board makers to produce mobos that only require a single +12V supply, because then you could do exactly what you say: have a big AC to DC converter somewhere (probably running from a medium-voltage AC main) and then distribute the 12VDC around to the racks.

          It was a Slashdot article back in September:
          http://hardware.slashdot.org/article.pl?sid=06/09/26/2039213 [slashdot.org]
          • DC-DC converters are fairly efficient and it doesn't really matter if it was on the mobo or a separate unit, you'd still need it. So you have a 12v (or maybe higher since there is less loss) supply all over and then have little DC PSUs inside your PCs.
          • Re: (Score:3, Informative)

            It is a good idea; in fact it's such a good idea that people have been thinking about ways to try and implement it in datacenters for a while.

            Actually the networking industry DOES do it that way. The power supply to many routers (such as ALL the ones some major companies make) and other networking gear is redundant 48V DC - a standard for networking equipment dating from the days of relays. (Line-powered units have extra line-powered supplies to make the 48V DC.)

            Not only that, but often the boxes don't have a
        • by profplump (309017)
          I'll just copy and paste this from above, to help fight the bad science:

          DC travels just fine; it's the low-voltage part that increases transmission losses. AC or DC, low-voltage power experiences greater transmission losses than the same power transfer at a higher voltage.

          The only reason that we have AC at the wall is because we didn't have a DC, solid-state equivalent of the transformer in 1900, and therefore it was difficult to create high-voltage DC power. It's fairly widely acknowledged that if we had acc
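The voltage claim is easy to sanity-check: for a fixed power delivered through a fixed cable resistance, the I^2*R loss falls with the square of the voltage, AC or DC alike. The resistance and power below are arbitrary example values, not figures from the thread.

```python
# I^2*R transmission loss for the same delivered power at different
# voltages. All numbers are arbitrary examples for illustration.

def line_loss_watts(power_w, volts, resistance_ohms):
    current = power_w / volts          # I = P / V
    return current ** 2 * resistance_ohms

R = 1.0       # ohms of round-trip cable resistance (example value)
P = 100.0     # watts delivered to the load (example value)

for v in (12, 48, 120):
    print(f"{v:>3} V: {line_loss_watts(P, v, R):6.2f} W lost in the cable")
```

Quadrupling the voltage (12 V to 48 V) cuts the cable loss by a factor of sixteen, which is the whole argument for high-voltage distribution regardless of AC vs DC.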
          • by mpe (36238)
            The only reason that we have AC at the wall is because we didn't have a DC, solid-state equivalent of the transformer in 1900, and therefore it was difficult to create high-voltage DC power.

            There's also the matter of AC generators (and motors) being simpler than their DC equivalents, and AC not causing an electrolysis issue where dissimilar metals are involved in connectors.
      • Running power over tiny 24 gauge wires is very inefficient too. Try again.

        At 48 volts you can push significant wattage through tens of feet of four 24-gauge conductors in two-conductor parallel and still be far ahead of wall-warts. (This is what the telephone companies do to power your POTS phone from a central office miles away - except they're going farther and only use half as many conductors.)

        What gauge do you think the wires in their coils are, and how much is wrapped around the core to form the trans
      • by Muad'Dave (255648)
        The spec calls for a maximum current of 350mA at 48V and a maximum power draw of 12.95 Watts. That means that the cable itself can dissipate up to 3.85 Watts worst case.

        24 ga wire has a resistance of 8.75 ohms/100m, making the total resistance of a 100m cable 17.5 ohms. At maximum allowed power draw (taking in account the resistance of the wire), that's a current of 303 mA, with the wire dissipating 1.6 Watts.

        I doubt you can find a more efficient wall tumor than that!
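The arithmetic above checks out. A quick sketch using the same figures (48 V at the source, a 17.5 ohm loop of 24 ga over 100 m, 12.95 W at the powered device):

```python
# Solving for the loop current given a fixed source voltage and load
# power: V*I = P_load + I^2*R, i.e. R*I^2 - V*I + P_load = 0.
# Figures match the parent comment's 802.3af-style numbers.
import math

V = 48.0          # volts at the power sourcing equipment
R = 17.5          # ohms total loop resistance (100 m of 24 ga)
P_LOAD = 12.95    # watts consumed by the powered device

# Take the smaller quadratic root (the physically sensible operating point).
i = (V - math.sqrt(V * V - 4 * R * P_LOAD)) / (2 * R)
cable_loss = i * i * R

print(f"current = {i*1000:.0f} mA, cable dissipates {cable_loss:.2f} W")
```

This reproduces the comment's 303 mA and ~1.6 W of cable dissipation.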

    The biggest problem is that PoE is limited to 13W at 48V - you'll have to have some sort of converter in there. I don't think that you would have much problem powering small-scale things like APs and cameras, but it doesn't scale up very well.
  • Green (Score:3, Funny)

    by truthsearch (249536) on Friday February 02, 2007 @03:01PM (#17862934) Homepage Journal
    IEEE Seeks For Ethernet To 'Go Green'

    That's good because I'm really tired of the white and blue.
    • by karnal (22275)
      You forgot BROWN!

      and orange.

      Shoot.

      white-orange/orange/white-green/blue/white-blue/green/white-brown/brown -> 568B
  • Well Duh!! (Score:5, Insightful)

    by eclectro (227083) on Friday February 02, 2007 @03:02PM (#17862968)
    Another suggestion - Stop all the spamming. There must be a coal-powered powerplant's worth of electricity right there.
    Whoever modded this Funny... interesting choice. Was that one of those Seinfeld-esque "it's funny 'cos it's true" moments?
    • by Tony Hoyle (11698)
      Even better.. round up the spammers and use them as fuel for the power plants.

      Doubly green energy - less spam... more efficient networks, and an infinite fuel supply (we'll never run out of spammers).
  • by russ1337 (938915) on Friday February 02, 2007 @03:05PM (#17862998)
    Sounds like someone is really starting at the wrong end. IMHO.

    I'd estimate that power supply inefficiency chews up more than this proposal will ever save. If you spent your time making the power supplies of PCs, switches, and routers more efficient you'd probably have a greater impact. How about better efficiency in the FETs, transistors, and amplifier circuitry? Last time I checked, my Ethernet looms didn't get that hot. (Isn't it all about I^2*R?) Heck, turning off the light in the switch room probably does more to save power. Plus all the heat in my server room is from the servers, not the Ethernet. If you're that worried, switch to fiber.

    I thought the transfer of data at the physical layer was through the transfer of 'holes' anyway.
    • by gmack (197796)
      Last I checked, my switches were starting to generate larger amounts of heat. The GigE switches definitely seem warmer than the 100Mbps ones were.
    • by Yartrebo (690383)
      I'm not sure if this an Ethernet issue, but my router and cable modem are substantial (probably >10 W) sources of heat. Considering that they're on 24/7, I sure would like to cut that down.
    • by zaf (5944) <slashdot@NospAm.penguinmonster.com> on Friday February 02, 2007 @03:13PM (#17863102) Homepage
      Exactly. All the hundreds of devices independently converting AC voltage to DC all day long waste far more power than what's inside the CAT5. Speaking of, whatever happened to the push for DC datacenters? As far as I can tell, there's still no widely-used DC standard as an option for most of the devices in a small-medium sized environment
      • by evilviper (135110)

        Speaking of, whatever happened to the push for DC datacenters?
        Switching (A/C) power supplies went from 60 to 80+ percent efficiency, eliminating the small power savings benefits of DC datacenters... All without requiring massive rewiring.
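A rough illustration of the efficiency jump the parent describes; the 200 W load is an arbitrary example, not a figure from the thread:

```python
# Wall-power wasted as heat by a PSU at a given efficiency, for the
# same DC load. The load value is an assumed example.

def wall_watts_wasted(dc_load_w, efficiency):
    wall = dc_load_w / efficiency      # power drawn from the wall
    return wall - dc_load_w            # difference is lost as heat

load = 200.0                           # assumed server DC load in watts
old = wall_watts_wasted(load, 0.60)    # ~60% efficient supply
new = wall_watts_wasted(load, 0.80)    # ~80%+ efficient supply

print(f"60% PSU: {old:.0f} W wasted; 80% PSU: {new:.0f} W wasted")
```

Going from 60% to 80% efficiency cuts the waste per box by well over half, which is the scale of saving the DC-datacenter push was chasing.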
    • Re: (Score:3, Informative)

      by jaredmauch (633928)
      When you're talking about larger switches and routers, and not the cheap Linksys/D-Link crap most people call a "router", there was actually a good presentation [nanog.org] at NANOG last year. You can watch it (RealVideo) from the link (and view slides). Most of the efficiency work in these larger devices has already been done. (Obviously excluding that whole Google + PC power supply discussion.) Check it out if you are truly interested in this space.
    • by vidarh (309115)
      And exactly how much influence do you think people manufacturing ethernet cards have on power supply efficiency?

      You seem to assume that if these people weren't improving the power efficiency of ethernet they'd just pick some random different field. The world doesn't work like that.

    • - it's the power required to process the packets. More or less, a GigE card should need 10X (divided by some fudge factor that probably makes the real ratio closer to 2 or 3X) the compute power of a 100Mbit card. Processing GigE at full throttle actually takes quite a bit of CPU - we don't notice it much because most GigE interfaces have a TCP Offload Engine that avoids bogging down the CPU and bus.

      So your TOE could easily have a variable speed CPU that basically goes to sleep when it can negotiate the phys
  • Question? (Score:5, Insightful)

    by SnarfQuest (469614) on Friday February 02, 2007 @03:06PM (#17863012)
    Does 100 (or 1000) really take that much power to download one "file", or is it the same amount of power used, just in a shorter time period?

    Or is it power used while idle? Does a 1000 device consume more power idling in that mode than a 10 device would?
    • Re: (Score:3, Interesting)

      by Ant P. (974313)
      I think what they're proposing is clock frequency control for Ethernet chips, like CPUs have now. I read somewhere that power consumption increases with the cube of the clock speed, dunno where that figure comes from though.
      • Re: (Score:2, Informative)

        by greed (112493)

        It's the square of the clock speed; it comes from some math in second- or third-year Electrical Engineering.

        It has an awful lot to do with line capacitance and inductance; you've basically got to "fill up" the line before you can see the signal change at the other end. (Be it at chip-level or network-cable-level.)

        Which is why narrower fab processes and low-voltage differential signaling is so important in high-speed circuits; all those watts are heat that has to be dissipated. Narrower CMOS gates tak

        • It's the square of the clock speed; it comes from some math in second- or third-year Electrical Engineering.

          Unfortunately it may be a bit late for this. Modern ICs have such small features, and electrons are such large, fuzzy objects, that leakage current has become large. In the generation being used for current designs it amounts to half the power consumption. Leakage doesn't change with speed - or even if the clocks are actually stopped! You have to turn the power completely off to to some chunk of t
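The two effects in this subthread - dynamic power that scales with clock and voltage, and leakage that doesn't - can be sketched with the usual P_dyn ~ a*C*V^2*f model. Every constant below is an invented illustrative value, not a measurement of any real PHY:

```python
# Toy CMOS power model: dynamic power scales linearly with frequency
# and with voltage squared; leakage is roughly constant in frequency.
# All constants are made up for illustration.

def chip_power_w(freq_hz, volts, cap_f=2e-8, activity=0.2, leak_w=0.5):
    dynamic = activity * cap_f * volts ** 2 * freq_hz
    return dynamic + leak_w

full = chip_power_w(125e6, 1.2)      # gigabit-class symbol clock
slow = chip_power_w(12.5e6, 1.2)     # clocked down 10x, same voltage

print(f"full speed: {full:.3f} W, clocked down 10x: {slow:.3f} W")
```

Clocking down 10x cuts only the dynamic term; the leakage floor stays, which is the reply's point about diminishing returns from frequency scaling alone. (The cube-law figure upthread comes from additionally lowering the voltage along with the frequency.)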
    • Or is it power used while idle? Does a 1000 device comsume more power idling in that mode than a 10 device would?

      Yes and Yes. 1000base-T PHYs are DSPs; IIRC they use about 500mW. Since 100base-T is so much simpler, it should be implementable with less power.
    • by joe_bruin (266648)
      The problem is that Ethernet is always signalling, not just when transferring data. Since Ethernet uses Manchester encoding [wikipedia.org], the clock signal for the bus is encoded into the data signal, so even when sending out a steady stream of zeros, the signal is constantly alternating. That is, when your workstation is idle 99% of the time, it's still sending out a 1000-Mbps idle stream (if I recall correctly it's just a stream of 10101010...) and wasting power. What they're trying to do is to be able to clock down
  • Measurable? (Score:3, Insightful)

    by zaf (5944) <slashdot@NospAm.penguinmonster.com> on Friday February 02, 2007 @03:09PM (#17863050) Homepage
    Does that much current actually go over Ethernet transmissions? It seems to me that more power could be saved by more efficient power supplies in the switches than by spending a lot of time and research figuring out a way to throttle link speeds. Does anybody have a value for the amount of electricity used for an hour's worth of data at 10 megabits as opposed to 1 gigabit?

    It just surprises me that +/-5 volts over copper really makes all that much difference compared to all the other waste in the datacenter.

    Also, what's the difference in energy usage for copper vs fiber links??
    • Re:Measurable? (Score:4, Informative)

      by Matt_Bennett (79107) on Friday February 02, 2007 @03:49PM (#17863708) Homepage Journal
      Actually, it is pretty surprising how much current Gigabit takes- The output drives usually work in a current mode, and they draw 40mA per pair- since gigabit uses 4 pairs, that's 160mA on each end of a gigabit link. *But* the big difference is in what happens when the link is idling- 10mbit only puts through link test pulses, but 100Mbit and Gigabit both keep up idle patterns that are basically encoded strings of no information- this keeps both ends of the link ready to accept data- 10Mbit has to transmit a synchronization series of pulses to make sure both ends are clocking at the same rate. For 100 and gig, at least to the output drivers, they draw the same amount idling or transmitting at line-speed.
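Working out the parent's figures: 40 mA per pair, 4 pairs, and both ends of the link driving continuously whether idle or busy. The supply voltage is an assumed value for illustration, not from the comment:

```python
# Total line-driver current for one gigabit link, per the figures in
# the parent comment. SUPPLY_V is an assumed illustrative value.

MA_PER_PAIR = 40         # current-mode driver, per pair (from comment)
PAIRS = 4                # 1000BaseT drives all four pairs
ENDS = 2                 # both ends of the link transmit continuously
SUPPLY_V = 2.5           # assumed analog supply for the line drivers

total_ma = MA_PER_PAIR * PAIRS * ENDS
watts = total_ma / 1000 * SUPPLY_V

print(f"{total_ma} mA of line-driver current (~{watts:.1f} W per link)")
```

Since the idle pattern draws the same driver current as full line rate, this is a fixed cost per up link - exactly the cost an energy-efficient idle mode would target.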
  • I see no reason to switch from 568B to 568A.
  • It seems to me that the energy savings would be more beneficial to whoever pays the bill on the huge server farms rather than individual "normal people" who have a small ethernet running at their house or small business and whatnot. I hope they will shift away from this and focus on another area where they can actually make a difference that would noticeably benefit everyone; I especially like the idea of improving power supply efficiency (which is a bigger problem that just ethernets, IMO). One way to do
  • By any chance... (Score:4, Interesting)

    by ivan256 (17499) on Friday February 02, 2007 @03:29PM (#17863370)
    ...is this group led by ethernet equipment vendors? Perhaps vendors who are unhappy with the recent decline in equipment upgrades since people aren't upgrading from gigabit or even to gigabit from 100mbit in a way that helps their stock price sufficiently?

    It seems to me that, considering the number of ports active out there, they're talking about a tiny amount of savings per port for a total investment that could have a much larger effect if spent elsewhere.

    Hell, I bet more power is wasted by the power supplies, overly conservative fan controls, uncleaned air filters, shorted out UPS batteries that should have been replaced decades ago, overpowered CPUs, and crappily written firmware of the currently deployed switches than is consumed by transmission losses.
  • Imagine fiber with green laser - how green is that!
  • by erroneus (253617) on Friday February 02, 2007 @03:36PM (#17863486) Homepage
    At an office I once worked, there were a lot of spare switches lying about after upgrading to 1000BaseT. They were considered "spare" or whatever, but there were a great many... so I sorta brought one home and mounted it into my rack and used it for a couple of months. The next two electric bills made me rethink how nice it looked to have a 24-port switch in my rack instead of that cheapy 8-port sitting on a shelf. It consumed a NOTABLE amount of power. Now, there were other things involved, I'm sure... things like the changes of the seasons, global warming and all that. But when I brought the switch back to the office and went back to my cheapy 8-port again, I saw a change in my power bill.

    If I ever decide to spend money on a nice looking switch, I'll be sure to reference the power draw of the units I review.
    • Yeah, I used to have a catalyst 5000 with two 12 port cards and two 48 port 10mbps cards. But I didn't want to pay the power bill so I sold it. Now I have three tiny 10 port 10Mbps switches around the house doing switching things. Sure, I don't get any management, or vlans, or what have you, but I'm not sucking down an amp just running cooling fans, either, let alone what it takes to run one of those bastards. And that's not even that big a switch!
  • Meanwhile, far more than $450 million would be spent on IT support services, troubleshooting problems created by computers that keep changing their link speed.
    • by operagost (62405)
      Not to mention that I didn't see anywhere in this article a statement on how much less power a 10 Mbps connection uses than a 100 or 1000 over the same time period. I'll assume that the lower speeds will actually be used when network utilization is below 10%, so determining power usage over the time to complete an operation is probably not necessary.
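The question raised here is really about energy per operation: energy = power x time, so a faster link can use less total energy per file if its extra draw is smaller than its speedup ("race to idle"). The power figures below are assumed for illustration, not measured NIC numbers:

```python
# Energy per transfer at two link speeds. Both power figures are
# assumed illustrative values, not measurements.

FILE_MBITS = 8000.0               # a 1 GB file, roughly

def joules(file_mbits, link_mbps, active_watts):
    seconds = file_mbits / link_mbps      # time on the wire
    return active_watts * seconds         # energy = power * time

slow = joules(FILE_MBITS, 100, 0.4)    # 100 Mbps at an assumed 0.4 W
fast = joules(FILE_MBITS, 1000, 1.0)   # 1 Gbps at an assumed 1.0 W

print(f"100 Mbps: {slow:.0f} J per file; 1 Gbps: {fast:.0f} J per file")
```

With these assumptions the gigabit link wins per transfer despite drawing more power, because it finishes sooner; the proposal's savings come from what the link burns while idle, not while transferring.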
  • by PainBreak (794152) on Friday February 02, 2007 @03:50PM (#17863718)
    So a straight-through is: Green white / Green / Green white / Green / Green white / Green / Green white / Green Sweet. Crossovers then would be: Green white / Green / Green white / Green white / Green / Green / Green / Green white So much easier to remember! Thanks, IEEE!
  • Two methods (Score:2, Insightful)

    > idle or underutilized Ethernet connections more energy efficient

    There are several ways to increase measured efficiency. Two of them include:

    1) Load the network with verbose transmission protocols, junk, or spam such that more network cards have higher sustained traffic (quantity means more than quality from the usage point of view).

    2) Increase the number of hardware exploits such that underused network adapters can be continually used by those who know of the hardware exploits (make the network adap
  • The group is planning to discuss changes to the Ethernet link and higher layers.
    The first time I read that, I thought it said "The group is planning to discuss changes to the Ethernet link and hire lawyers."
  • Hmmm.... (Score:3, Insightful)

    by NerveGas (168686) on Friday February 02, 2007 @07:34PM (#17867312)

        Don't get me wrong. I'm all for being green. But it would seem that instead of putting all of that effort, design time, and eventual costs in equipment in order to save a very small number of watts on the ethernet chips at each end of the link, a slightly larger effort directed into power supply losses, CPU power usage, or GPU power usage would yield 10x the benefits.

        Realistically, I know that they can't just walk over to Intel, AMD, and NVidia, and say "Alright, guys, we're here to tell you how to use less power." They're just doing what they can, and they deserve applause for it.
