
100GbE To Slash the Cost of Producing Live Television

New submitter danversj writes "I'm a Television Outside Broadcast Engineer who wants to use more IT and Computer Science-based approaches to make my job easier. Today, live-produced TV is still largely a circuit-switched system. But technologies such as 100 Gigabit Ethernet and Audio Video Bridging hold the promise of removing kilometres of cable and thousands of connectors from a typical broadcast TV installation. 100GbE is still horrendously expensive today — but broadcast TV gear has always been horrendously expensive. 100GbE only needs to come down in price just a bit — i.e. by following the same price curve as for 10GbE or 1GbE — before it becomes the cheaper way to distribute multiple uncompressed 1080p signals around a television facility. This paper was written for and presented at the SMPTE Australia conference in 2011. It was subsequently published in Content and Technology magazine in February 2012. C&T uses issuu.com to publish online so the paper has been re-published on my company's website to make it more technically accessible (not Flash-based)."
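
The summary's core claim, that a single 100GbE link has room for multiple uncompressed 1080p signals, is easy to sanity-check. Below is a quick back-of-the-envelope sketch; the format assumptions (1080p60, 10-bit 4:2:2 YCbCr, and the 2.970 Gbit/s 3G-SDI serial rate as a per-signal budget) are mine, not the paper's.

```python
# Back-of-the-envelope check: how many uncompressed 1080p signals fit on
# one 100GbE link? Assumptions: 1080p60, 10-bit 4:2:2 YCbCr (20 bits/pixel),
# and the SMPTE 3G-SDI serial rate as an alternative per-signal budget.

GBE100 = 100e9            # nominal 100GbE payload rate, bit/s

# Active-picture payload only: 1920 x 1080 pixels, 60 frames/s, 20 bits/pixel.
active_rate = 1920 * 1080 * 60 * 20          # ~2.49 Gbit/s

# Full serial-digital rate carrying the same signal (includes blanking).
sdi_3g_rate = 2.970e9

print(f"active picture per stream  : {active_rate / 1e9:.2f} Gbit/s")
print(f"3G-SDI per stream          : {sdi_3g_rate / 1e9:.2f} Gbit/s")
print(f"streams per 100GbE (active): {int(GBE100 // active_rate)}")
print(f"streams per 100GbE (3G-SDI): {int(GBE100 // sdi_3g_rate)}")
```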

  • Re:Why? (Score:5, Informative)

    by ustolemyname ( 1301665 ) on Monday September 10, 2012 @03:33AM (#41285563)
    A summary of reasons (From the fine article):
    • The dominant reason is latency: throwing compressed video around forces a latency of at least one frame, in an industry where latency is measured in fractions of a scan line (a single horizontal line of a frame). The sketch after this comment puts rough numbers on that.
    • Every endpoint would need encode/decode hardware, which would add a lot of cost.
    • Compressing, uncompressing and recompressing video increases artifacts and can smooth or blur out the footage.

    As well, not everybody viewing HD footage has a shitty provider, and giving providers the excuse "it comes that way" won't help anybody.
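
To put rough numbers on the latency point above: a frame-buffering codec adds about one frame of delay at every hop, while uncompressed gear can forward after roughly one line. A small sketch under assumed conditions (1080p60, the nominal 1125 total lines per frame, and a hypothetical 5-device chain):

```python
# Cumulative latency through a production chain: frame-buffered compression
# vs. line-level forwarding of uncompressed video. All figures are assumed
# round numbers for illustration.

FRAME_RATE  = 60.0
LINES_TOTAL = 1125     # total lines per 1080-line frame, incl. blanking
HOPS        = 5        # e.g. camera -> distribution -> mixer -> graphics -> output

frame_time = 1.0 / FRAME_RATE           # ~16.7 ms
line_time  = frame_time / LINES_TOTAL   # ~14.8 us

print(f"compressed   (1 frame per hop): {HOPS * frame_time * 1e3:.1f} ms end to end")
print(f"uncompressed (~1 line per hop): {HOPS * line_time * 1e6:.0f} us end to end")
```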

  • Re:Why? (Score:5, Informative)

    by tysonedwards ( 969693 ) on Monday September 10, 2012 @03:40AM (#41285589)
    The intent here is to replace so much of the specialized cabling for lighting controls, audio, video, camera control systems, etc. with a single, multi-purpose system that can handle uncompressed data, thereby supporting existing models of data acquisition. Each level of re-compression and transcoding results in a loss of quality.
  • Re:Why? (Score:4, Informative)

    by realkiwi ( 23584 ) on Monday September 10, 2012 @03:43AM (#41285597)

    Because before compressing the video you have to move it from the camera to the editing system. The less often you compress the better the quality of the final compressed product. Once the live broadcast has been edited it will be compressed just once before delivery to the end viewer.

  • Re:Why? (Score:5, Informative)

    by snicho99 ( 984884 ) on Monday September 10, 2012 @03:46AM (#41285605) Homepage
    Well, that's a failure of imagination. I'll admit that, technically speaking, it often is *somewhat* compressed - e.g. 4:2:2 subsampled chroma at least. But there is a massive difference between a delivery codec and a signal you're still working with. To start with, H.264 and its ilk are computationally expensive to do anything with. A single frame of 1080p is a pretty big dataset, and it's painful enough doing basic matrix transforms without adding a bunch of higher-level computations on top of that. For example, just cutting between two feeds of an inter-frame compressed codec requires that the processor decompress the GOP and recreate the missing frames - several orders of magnitude more complicated than stopping one feed and starting another. And generally speaking, with the uncompressed feeds you have in a broadcast situation you're doing *something* too: switching, mixing, adding graphics, etc. But the biggest question is one of generation loss. Even one round trip through one of those codecs results in a massive drop in quality (as you rightly point out). You don't want to compress footage coming out of the cameras any more than you have to, because you KNOW that you're going to be rescaling, retiming, wiping, fading, keying, etc.
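
The GOP point in the comment above can be made concrete with a toy model. This is only an illustration with an assumed IPPP... structure and a made-up GOP length of 30 frames; real encoders use B-frames and varying GOP sizes, which only makes the decode cost worse.

```python
# Toy illustration: to cut cleanly at an arbitrary frame of an inter-frame
# compressed stream, you must decode forward from the previous I-frame.
# GOP_LENGTH is an assumption for illustration only.

GOP_LENGTH = 30   # frames between I-frames (assumed)

def frames_to_decode(cut_frame: int) -> int:
    """Frames that must be decoded to reconstruct `cut_frame` exactly,
    counting from the most recent I-frame up to and including the cut."""
    offset_in_gop = cut_frame % GOP_LENGTH
    return offset_in_gop + 1          # the I-frame plus the P-frames after it

for cut in (0, 1, 15, 29):
    print(f"cut at frame {cut:>2}: decode {frames_to_decode(cut):>2} frames "
          f"(uncompressed switch: 0)")
```
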
  • by SimonTheSoundMan ( 1012395 ) on Monday September 10, 2012 @05:22AM (#41285905)

    I work in film; we usually scan 35mm 3-perf at 8K and 2-perf at 6K. Output after the offline edit is usually 4K or 2K. Punters are going to be flogged re-released videos that cost the studios nothing. 1080p is more than enough for most people unless you have a screen larger than 100 inches viewed from 10 feet away; most people have a 32-inch TV at 15-20 feet.

    TV does not work in 1080p anyway; it's still stuck at 1080i. Only your high-end dramas are captured at 1080p, 2K or 4K if digital (Sony F35, F65, Arri D21, Red if you don't mind downtime), or on 35mm (I haven't worked with 35mm on a drama for over 5 years now).
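
The "1080p is more than enough at typical viewing distances" claim above can be checked with the usual 1-arcminute acuity rule of thumb. A rough sketch; the 20/20 acuity figure and the 16:9 geometry are standard assumptions, not measurements from the comment.

```python
# Estimate the distance beyond which individual 1080p pixels can no longer
# be resolved, assuming 20/20 vision resolves roughly 1 arcminute.

import math

def max_useful_distance_ft(diagonal_in: float, horizontal_pixels: int) -> float:
    """Distance beyond which one pixel subtends less than 1 arcminute."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)   # screen width for a 16:9 panel
    pixel_in = width_in / horizontal_pixels           # width of one pixel, inches
    one_arcmin = math.radians(1 / 60)                 # ~0.00029 rad
    return (pixel_in / one_arcmin) / 12               # small-angle approximation, feet

for diag in (32, 100):
    print(f"{diag}-inch 1080p panel: pixel-level detail is lost beyond "
          f"{max_useful_distance_ft(diag, 1920):.1f} ft")
```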

  • by psmears ( 629712 ) on Monday September 10, 2012 @05:23AM (#41285911)

    I don't know how much data a 100GbE link can truly handle

    It's actually very close to 100 gigabits per second. (The encoding overhead is already accounted for in the 100Gb figure, and the protocol overhead is very low: if you're using jumbo packets - and you'd probably want to - then it's easily less than 1%).
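
For anyone wanting to check the "easily less than 1%" figure, here is a quick sketch of raw UDP/IPv4-over-Ethernet framing overhead. The per-frame costs (preamble+SFD, MAC header, FCS, inter-packet gap) are the standard ones; no VLAN tag or RTP header is assumed.

```python
# Fraction of 100GbE line rate left for payload, standard vs. jumbo MTU,
# assuming plain UDP/IPv4 over Ethernet with no VLAN tag.

def efficiency(mtu: int) -> float:
    eth_overhead = 8 + 14 + 4 + 12   # preamble+SFD, MAC header, FCS, inter-packet gap
    ip_udp = 20 + 8                  # IPv4 + UDP headers carried inside the MTU
    payload = mtu - ip_udp           # bytes of video actually delivered per packet
    wire = mtu + eth_overhead        # bytes consumed on the link per packet
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu) * 100:.2f}% of line rate is payload")
    # 1500-byte MTU -> ~4.3% overhead; 9000-byte jumbo -> ~0.7% overhead
```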

  • Re:Why? (Score:5, Informative)

    by psmears ( 629712 ) on Monday September 10, 2012 @05:46AM (#41285973)

    The latency problem i can understand, but that will be a problem regardless of compression or not.

    The trouble is that the more effective codecs tend to require an entire frame before they can do any compression (so that they can compress more effectively by taking the whole frame into consideration). So if you have a series of pieces of equipment processing the video (camera, distribution, control desk(s), effects etc), then each one has to wait until it's received the last lines of a frame before it can even start sending out the first lines of that frame - so each element in the chain adds a whole frame's worth of latency. Whereas if you do it uncompressed, most equipment can start sending out the first line of a frame before it's even received the second line.

    Encoding and decoding will not add that much cost compared to the network.

    That depends on a lot of factors. 100Gbps Ethernet has the potential to reach much bigger economies of scale than broadcast-quality codec hardware (though it has a long way to go before it gets there).

    Compressing/uncompressing only destroys the pic if its lossy. There are numerous lossless codecs that should do the trick and save tons of money in the process.

    The trouble with lossless codecs is that they can never guarantee to make a frame smaller - mathematically, there must be some frames that are incompressible. Over the course of a long video, the codec will win on average, but when working with live streams, if you get just one frame that doesn't compress nicely (or worse, a few in succession) then your network has to be able to handle that bandwidth - so you might as well not use the compression in the first place.
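
The "some frames are incompressible" point is just the pigeonhole principle, and it is easy to see in practice: feed a general-purpose lossless compressor data with no redundancy and the output comes out slightly larger. A small demonstration using zlib as a stand-in (an assumption; broadcast mezzanine codecs differ in detail, but the worst-case provisioning argument is the same).

```python
# Lossless compression cannot shrink every input: already-random data comes
# out slightly larger after zlib, which is exactly the worst case a live
# link must be provisioned for.

import os
import zlib

random_frame = os.urandom(1_000_000)           # stand-in for a noisy, incompressible frame
compressed = zlib.compress(random_frame, level=9)

print(f"original  : {len(random_frame):>9} bytes")
print(f"compressed: {len(compressed):>9} bytes "
      f"({len(compressed) / len(random_frame) * 100:.2f}% of original)")
```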

  • by JumboMessiah ( 316083 ) on Monday September 10, 2012 @06:39AM (#41286123)

    Insightful write-up. Getting rare here on /.

    For those who didn't RTFA, they are referring to using Ethernet in professional live broadcast situations, i.e. newsroom or outdoor sports broadcasts where cable [stagbroadcast.co.uk] bundles are still common. I believe they are imagining a world where a broadcast truck rolls up to a stadium and runs a few pairs of 100GbE fiber instead of a large coax bundle. This could save considerable time and money. Some interesting bandwidth numbers:

    SD: 270 Mbit/s
    Interlaced HD: 1485 Mbit/s
    Progressive HD: 2970 Mbit/s

  • More realistically, 4096 * 3072 * 60 Hz * 20 bits (that's 10-bit 4:2:2 YCbCr, like HD-SDI today) is roughly 15 Gbit/s. You could push 6 of those streams over 100GbE.
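
Re-running the arithmetic above (assuming active picture only, with no blanking or network overhead counted):

```python
# 4K production format assumed above: 4096 x 3072, 60 Hz, 10-bit 4:2:2
# (20 bits/pixel), active picture only.

bits_per_second = 4096 * 3072 * 60 * 20
print(f"per stream        : {bits_per_second / 1e9:.1f} Gbit/s")   # ~15.1 Gbit/s
print(f"streams per 100GbE: {int(100e9 // bits_per_second)}")      # 6
```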

  • Re:Why? (Score:5, Informative)

    by smpoole7 ( 1467717 ) on Monday September 10, 2012 @07:50AM (#41286331) Homepage

    > The intent here is to replace so much of the specialized cabling

    Yup. I'm glad I work in radio, where we've been ferrying oversampled, high-quality audio over IP for some years now.

    The digital switching and input assignments are a dream as well. Not that many years ago, if someone came into Engineering and said, "sorry, forgot! We have a paid ballgame going on at 4PM!" ... my assistants and I would literally grab a punch tool and some Belden wire and start frantically running cables. Many was the time we'd put something on air by literally throwing a pair across the floor with gaffer's tape. "Watch Yer Step!" :)

    Nowadays, any source in our facility can be assigned to any input on any mixer in any control room. Run once, use many times. Ah, it's a beautiful thing. I can move an entire radio station from one control room to another in literally a matter of minutes. It takes longer for the staff to physically grab their coffee cups and lucky charms than it does for my staff to move the signals.

    My poor brethren in TV just have entirely too much data. If we'd all go back to RADIO drama, see, this wouldn't be a problem, now woodit? :D

  • by Above ( 100351 ) on Monday September 10, 2012 @09:26AM (#41286901)

    Network Architect here, who's worked on many varied systems. I predict what the consumer will see is a drop in reliability.

    Real-time communication is just that: real time. Gear of old (5ESS switches, TDM networks, coax analog video switchers) was actually built around this notion from the ground up, and many design decisions were made to keep things operating at all costs. Of course, this added cost and complexity.

    Packet-based networks were built on the assumption that losing data was OK; packet drops are how problems are signaled. Protocols are, in some cases, only just starting to figure out how to deal with this properly in real-time situations, and the approach is still largely to throw bandwidth at the problem.

    So yes, running one 100GbE cable will be cheaper in the future, but it's going to introduce a host of new failure modes that, no offense, you probably don't understand. Heck, most "Network Architects" sadly don't understand them either, not knowing enough about the outgoing or the incoming technology. However, I've seen the studies, and it's not pretty. VoIP is not as reliable as circuit-switched voice, though it's pretty darn close, as it's got more mature codecs and low bandwidth. iSCSI is laughably unreliable compared to even Fibre Channel connections, much less some kind of direct-connection methodology, and the failure mode is horrible: a minor network blip can corrupt file systems and lock up systems so they need a reboot. Of course, it's also a straight-up redundancy thing: when you're covering the Super Bowl, having every camera feed leave the building on a single cable sounds like a great cost and time reducer, until it fails, or someone cuts it, or whatever, and you lose 100% of the feeds, not just one or two.

    With the old tech, the engineering happened in a lab, with qualified people studying the solution in detail, and with reliability as a prime concern for most real-time applications. With the new tech, folks are taking an IP switch and an IP protocol, both of which were designed to lose data as a signaling mechanism and whose #1, #2 and #3 design goals were cheap, cheap and cheap, and then multiplexing on many streams to further reduce costs. The engineering, if any, is in the hands of the person assembling the end system, which is often some moderately qualified vendor engineer who's going to walk away from it at the end. It's no wonder that when these systems fail, it's in spectacular fashion.

    I'm not saying you can't move live TV over 100GbE (and why not over 10GbE? Even 10x10GbE is cheaper than 100GbE right now), but if I owned a TV station and my revenue depended on it, I don't think that's the direction I would be going...
