100GbE To Slash the Cost of Producing Live Television

New submitter danversj writes "I'm a Television Outside Broadcast Engineer who wants to use more IT and Computer Science-based approaches to make my job easier. Today, live-produced TV is still largely a circuit-switched system. But technologies such as 100 Gigabit Ethernet and Audio Video Bridging hold the promise of removing kilometres of cable and thousands of connectors from a typical broadcast TV installation. 100GbE is still horrendously expensive today — but broadcast TV gear has always been horrendously expensive. 100GbE only needs to come down in price just a bit — i.e. by following the same price curve as for 10GbE or 1GbE — before it becomes the cheaper way to distribute multiple uncompressed 1080p signals around a television facility. This paper was written for and presented at the SMPTE Australia conference in 2011. It was subsequently published in Content and Technology magazine in February 2012. C&T uses issuu.com to publish online so the paper has been re-published on my company's website to make it more technically accessible (not Flash-based)."
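The summary's central claim is easy to sanity-check with arithmetic. A minimal Python sketch, assuming 10-bit 4:2:2 sampling (20 bits/pixel, as HD-SDI carries) and roughly 1% Ethernet overhead; note that the SDI line rates quoted later in the thread (1.485/2.97 Gb/s) also carry blanking intervals, so this active-pixel estimate runs a little lower:

```python
# Back-of-the-envelope check: how many uncompressed 1080p signals
# fit on one 100GbE link. The bit depth, frame rate, and overhead
# figures are assumptions for illustration, not from the paper.

def uncompressed_bitrate(width, height, fps, bits_per_pixel):
    """Raw video bitrate in bits per second (active pixels only)."""
    return width * height * fps * bits_per_pixel

# 1080p60 at 10-bit 4:2:2 -> 20 bits/pixel (10 for Y, 10 shared Cb/Cr)
stream = uncompressed_bitrate(1920, 1080, 60, 20)   # ~2.49 Gbit/s
link = 100e9                                        # 100GbE line rate
overhead = 0.01                                     # ~1% framing/protocol

print(f"per stream: {stream / 1e9:.2f} Gbit/s")
print(f"streams per 100GbE link: {int(link * (1 - overhead) // stream)}")
```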
  • by zbobet2012 ( 1025836 ) on Monday September 10, 2012 @01:57AM (#41285483)
    100GbE is in huge demand among core infrastructure people, because backbones everywhere are being strained by the explosion of online video usage. Tier 1 providers' demand is simply at a level that current foundries can't even come close to meeting. Thus no one has an incentive to slash prices.
    • by Meshach ( 578918 )

      100GbE is in huge demand among core infrastructure people, because backbones everywhere are being strained by the explosion of online video usage. Tier 1 providers' demand is simply at a level that current foundries can't even come close to meeting. Thus no one has an incentive to slash prices.

      That is the main notion I got from the summary: I have an idea for a cool technology but it is a long way from becoming reality. Same fate as interplanetary travel and zero-calorie beer.

    • by SmallFurryCreature ( 593017 ) on Monday September 10, 2012 @03:52AM (#41285809) Journal

      Replacement tech rarely catches up. 1080p signal? Please, that is so last year. 4k is the new norm. No TVs for it yet? Actually, they are already on sale, which means that if you are not recording your repeatable content in 4k right now, you will have a hard time selling it again in the future. That is why some smart people recorded TV shows they hoped to sell again and again on film and not videotape. Film had "wasted" resolution in the days of VHS tapes, but when DVD and now Blu-ray came out, those shows could simply be re-scanned from the original footage and voila, something new to flog to the punters.

      I don't know how much data a 100GbE link can truly handle, but the fact is that trying to catch up to current tech means that by the time you are finished, you are obsolete. The 4k standard created by the Japanese (and gosh, doesn't that say a lot about the state of the West) isn't just about putting more pixels on a screen; it is about all the infrastructure needed to create such content. And you had better be ready for it now, because if you are not, you will be left behind by everyone else.

      The future may not be now, but it sure needs to have been planned for yesterday.

      • ...the 4k standard created by the Japanese (and gosh, doesn't that say a lot about the state of the West)...

        That the West is pretty great? Same as if the United Kingdom or Canada had created the standard. I mean, you're defining "the West" based on political and economic philosophy, not on some arbitrary lines on a map, right?

        • by mwvdlee ( 775178 )

          Clearly, Japan is not a western country: http://www.justworldmap.com/maps/asia-pacific-centric-world-map-3.jpg [justworldmap.com]

          • by cdrudge ( 68377 )

            I can't tell on the map: are the longitude lines renumbered, or did they just stay with the international standard and rotate it around?

            If you turn a map upside down, that doesn't magically make the north south, and vice versa. It just means north is in a different direction than normally expected. Likewise, re-centering the map doesn't make the Far East not in the east. It's just not in the east on that map.

            • If you turn a map upside down, that doesn't magically make the north south, and vice versa. It just means north is in a different direction than normally expected. Likewise, re-centering the map doesn't make the Far East not in the east. It's just not in the east on that map.

              This comment led me to an interesting thought. Where do the concepts (and names) of East and West come from? The ideas of North and South are based on the physical properties of magnetism and the Earth's ferrous core. Did someone just decide that we needed new names for Left and Right to describe the other directions orthogonal to the magnetic field?

              • The directions are based on the spin of the Earth. The concepts were there before we discovered magnetism and named the poles of a magnet after the directions on our planet. The concept of the direction east as "toward the rising sun" is pretty basic and comes out of the mists of time from proto-languages before mankind invented writing.

                Calling China and Japan the East is a more recent European-centric terminology. Since the planet is a globe, everything is east of some other place in a relative manner. Ho

                • The directions are based on the spin of the Earth.

                  Man, I feel like an idiot. This isn't a case of, "It's obvious once you hear it." It's just plain obvious.

                  Well, thanks for the non-condescending answer. Also, ++ on the land-centric point.

      • by SimonTheSoundMan ( 1012395 ) on Monday September 10, 2012 @04:22AM (#41285905)

        I work in film; we usually scan 35mm 3-perf at 8k and 2-perf at 6k. Output after the offline edit is usually 4k or 2k. Punters are going to be flogged re-released videos that cost the studios nothing. 1080p is more than enough for most people unless you are going to have a screen larger than 100 inches viewed from 10 feet away; most people have a 32-inch TV at 15-20 feet.

        TV does not work in 1080p anyway; it's still stuck at 1080i. Only your high-end dramas are captured in 1080p, 2k or 4k if digital (Sony F35, F65, Arri D21, Red if you don't mind downtime) or on 35mm (I haven't worked with 35mm on a drama for over 5 years now).

        • by dave420 ( 699308 )
          There are many broadcasters the world over broadcasting 1080p over the air.
        • To be fair, I have a 1080p projector at 10 feet from my seating position, and while there are rare occasions (mostly in video games) where I wish I had better resolutions, 1080p is still quite good enough at this distance. At 6-8' you'd definitely notice though.

          Speaking of 1080i dramas, with the amount of compression artifacting I get from the limited bandwidth each show gets on satellite, I'd rather see compression improved (or higher bandwidth options) than higher resolutions for television.

          • At 6-8' you'd definitely notice though.

            I don't think it's ever going to matter at 6-8'. 6-8cm, is probably more likely.

            • by Zerth ( 26112 )

              If the projected screen size is greater than 45", it should be noticeable at 6-8'. If it is greater than 75", it should be obvious. You're right for 4k screens at "normal" sizes, though.

              http://s3.carltonbale.com/resolution_chart.html [carltonbale.com]

            • I seem to recall from a physics experiment I did in university that the angular resolution of the eye translates to roughly 1mm per meter of distance. 10ft = ~3m, so at 1080 lines (assuming the 10ft is the height that you are projecting to) you'd have about 3mm per pixel. Visible at about 3m, depending on how good your eyes are. Of course this is "moving pictures" too, not a static text/desktop display like typical computer use, so will you notice the pixels quickly enough before they become something else? Not sure how
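The parent's eyeball math can be checked directly. A rough sketch, assuming the textbook ~1 arcminute of visual acuity (about 0.3 mm per metre of distance, so the parent's 1 mm/m figure is conservative); the 103-inch screen size is borrowed from the sibling comment below:

```python
# Compare a screen's physical pixel pitch against the smallest detail
# a ~1-arcminute eye can resolve at a given viewing distance.
import math

def pixel_pitch_mm(diagonal_in, res_w, aspect=16/9):
    """Physical pixel pitch of a 16:9 screen, in millimetres."""
    diag_mm = diagonal_in * 25.4
    width_mm = diag_mm * aspect / math.hypot(aspect, 1)
    return width_mm / res_w

def acuity_limit_mm(distance_m, arcmin=1.0):
    """Smallest resolvable detail at this distance, in millimetres."""
    return math.tan(math.radians(arcmin / 60)) * distance_m * 1000

pitch = pixel_pitch_mm(103, 1920)       # ~1.19 mm on a 103" 1080p screen
limit = acuity_limit_mm(6 * 0.3048)     # ~0.53 mm at 6 feet
print(f"pixel pitch {pitch:.2f} mm vs acuity limit {limit:.2f} mm")
# pitch > limit, so individual pixels are resolvable at 6 feet
```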

            • To be fair, my screen is 103" diagonal -- the pixellation is visible at 6' at 1080p.

              • To be fair, my screen is 103" diagonal -- the pixellation is visible at 6' at 1080p.

                OK, I guess there's no inherent reason not to have a screen that takes up an entire wall either (an actual 16x9 screen would be 220" diagonal). I just think the trend will be towards personal displays over time due to the surge of mobile devices.

              • To be fair, my screen is 103" diagonal -- the pixellation is visible at 6' at 1080p.

                bah, spazzed on the preview button... sorry for the double reply.

                but are you actually sitting 6' from your 103" screen? That would be fairly close to immersive, no?

      • by psmears ( 629712 ) on Monday September 10, 2012 @04:23AM (#41285911)

        I don't know how much data a 100GbE link can truly handle

        It's actually very close to 100 gigabits per second. (The encoding overhead is already accounted for in the 100Gb figure, and the protocol overhead is very low: if you're using jumbo packets - and you'd probably want to - then it's easily less than 1%).
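For the curious, the overhead figure is straightforward to verify. A sketch assuming TCP over IPv4 (an IPv6 header adds 20 more bytes, which barely changes the result):

```python
# Per-packet overhead on Ethernet, supporting the "<1% with jumbos"
# figure. Fixed costs assumed per frame: 14 B Ethernet header + 4 B FCS,
# plus 8 B preamble/SFD and 12 B minimum inter-frame gap on the wire,
# plus 20 B IPv4 + 20 B TCP headers inside the payload.

def efficiency(mtu):
    wire_overhead = 14 + 4 + 8 + 12      # Ethernet framing on the wire
    ip_tcp = 20 + 20                     # network/transport headers
    payload = mtu - ip_tcp               # application data per frame
    return payload / (mtu + wire_overhead)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu) * 100:.1f}% efficient")
# MTU 1500: ~94.9% ; MTU 9000: ~99.1% -- under 1% overhead with jumbos
```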

        • Out here in the hinterlands, nobody will invest in 10GbE yet, but do any of these support larger jumbo frames? I see about 92xx as the largest supported frame size for 1 GbE with most equipment only accepting 9000.

          Do the jumbo frame sizes make a 10x leap when the data rate does? Or at least the 6x jump from standard to 9k that 100Mbit to 1Gbit was?

          I suppose there are reasons why they wouldn't (maybe 900k or even 325k frame sizes are too much, even at 100GbE), but it seems that if there's some efficiency

          • Really there are two reasons to increase maximum frame size. One is to improve efficiency on the wire; the other is to reduce the number of forwarding decisions to be made.

            With improving efficiency on the wire, you quickly get into diminishing returns. With 9000-byte frames, your header overhead (assuming a TCP/IPv6 session) is probably on the order of 1%. Reducing that overhead further just isn't going to buy you much more throughput.

            Reducing the number of forwarding decisions would be a legitimate reason

      • by kasperd ( 592156 )

        4k is the new norm.

        I tried to do the math. I don't have all the numbers, but I can still do a reasonable approximation. Assuming 8k*4k at 24 bits per pixel and 100 frames per second, you get 8*4*24*100 Mbit/s = 76.8 Gbit/s. So it should be quite feasible to push a single uncompressed stream of that size over 100Gbit/s. There may very well be other issues, such as what sort of hardware you need to process it, and maybe you need multiple streams over the same wire.

        • More realistically, 4096 * 3072 * 60 Hz * 20 bits (that's 10-bit 4:2:2 YCbCr, like HD-SDI today) ≈ 15 Gbit/s. You could push 6 of those streams over 100GbE.
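Both posters' arithmetic checks out. A sketch reproducing it; the resolutions, bit depths, and frame rates are the posters' own assumptions rather than any fixed standard:

```python
# Raw bandwidth of the two uncompressed-stream scenarios above.

def gbps(width, height, fps, bits_per_pixel):
    """Uncompressed video bitrate in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

# kasperd's worst case: 8k x 4k pixels, 24 bpp, 100 fps
print(gbps(8000, 4000, 100, 24))        # ~76.8 Gbit/s, fits in 100GbE

# the reply's case: 4096 x 3072, 10-bit 4:2:2 (20 bpp), 60 fps
per_stream = gbps(4096, 3072, 60, 20)   # ~15.1 Gbit/s
print(int(100 // per_stream))           # ~6 streams per 100GbE link
```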

          • You could push 6 of those streams over 100GbE.

            Why do people in this industry need 6 simultaneous unbuffered streams? TFS said that cost isn't really an issue, so a 4-port link aggregation of 10Gbps ought to be widely deployed by now if three of these streams were good enough. There are switches ($$$) that can handle that kind of backplane traffic.

            • They need it to backhaul multiple sources from the studio to the vision mixer. They want to use 100GbE instead of whatever super-high-def SDI-type solution they're currently using, which is probably distance-limited. If you can trunk 4/5 camera sources over one cable instead of multiple cables, you've got a simpler infrastructure.

            • by swillden ( 191260 ) <shawn-ds@willden.org> on Monday September 10, 2012 @10:24AM (#41288099) Journal

              Why do people in this industry need 6 simultaneous unbuffered streams?

              A typical broadcast studio has dozens, if not hundreds, of simultaneous streams: several editing suites running at once, a few people reviewing incoming feeds and selecting content from a variety of other sources, a couple of studios with 3-4 cameras each, plus actual output streams for each of the channels being produced, with large master control panels mixing the inputs to make them.

              I spent a couple of years working for Philips Broadcast Television Systems (BTS), which makes equipment to run these systems. I worked on the router control systems, a bunch of embedded 68Ks (this was almost 20 years ago) that control big video and audio switchers, many with hundreds of inputs and outputs (technical terms: "gazintas" and "gazaoutas"). It's unbelievable how many video and audio streams even a small studio manages, and the wiring to support it all is massive, as in foot-thick bundles routed all over under the raised floor. It makes your typical data center cable management problem look like child's play.

              Besides just cabling costs, I could see packet-switched video enormously simplifying the engineering effort required to build and maintain these facilities. And it would also eliminate the need for lots of very expensive hardware like the switches BTS sold. Even with 100GbE, I'll bet large studios will still end up with cable bundles and link aggregation, but it would be vastly better than what can be done now.

              • A typical broadcast studio has dozens, if not hundreds of simultaneous streams

                I see, so 100GbE is primarily for 'backbone' networks then, not necessarily to each station? Or does it just make sense to switch only when the prices are really compelling vs. sorta-like-the-costs-of-foot-thick-cable?

                • I'm not sure what you mean by "station".

                  • I'm not sure what you mean by "station".

                    network leaf node

                    • Ah, yes, then. You could probably do just fine with Gig-E to most individual sources/sinks, as long as you ran them back to switches which could actually switch the full aggregate bandwidth, then 100GbE to form the internal "backbone" connecting the switches (which would need some 100GbE ports).

            • by isorox ( 205688 )

              You could push 6 of those streams over 100GbE.

              Why do people in this industry need 6 simultaneous unbuffered streams? TFS said that cost isn't really an issue, so a 4-port link aggregation of 10Gbps ought to be widely deployed by now if three of these streams were good enough. There are switches ($$$) that can handle that kind of backplane traffic.

              For the last 15 years, our central video matrix had 1512 inputs. That was SD, but for 1080i 4:2:2 that would be 1.25 Tbit. 2.5Tbit for 1080p.

              As for backplane switches, I believe a 10-year-old Cisco 6500 with the SFM module will run 256 Gbit/s on the backplane.

              • For the last 15 years, our central video matrix had 1512 inputs. That was SD, but for 1080i 4:2:2 that would be 1.25 Tbit. 2.5Tbit for 1080p.

                What kind of max concurrency do you see out of those 1512?

          • by kasperd ( 592156 )

            More realistically, 4096 * 3072

            They switched to measuring the width of the image instead of the height? Did they think 3k didn't sound impressive enough, and so named it 4k instead?

            • by mr_exit ( 216086 )

              In film land and the visual effects industry, where 2k was standard long before HDTV was invented, it was always a measure of the horizontal pixel dimension.

              It makes sense because you would start with a 2048x1536 scan of the 35mm frame (4:3 aspect ratio) and cut off the top and bottom to reach 2048x853, the ~2.35:1 aspect ratio seen in the cinema. These days you also work with a mask at 2048x1152 that matches the 16:9 (1.77:1) aspect ratio used in HD TV.

              The delivery back to the editor is often the full 4/3 f

      • Having the spec is a long way from convincing fabs to manufacture it. Where is Samsung in their capital equipment depreciation cycle on their current fabs? Where are they in their current build plan? What about the other 3 panel manufacturers? Considering how FED/SED has vanished into a black hole, I think we can safely assume they're years away from running out those investments and ongoing investments.

        Consider for a moment that a bigscreen OLED TV is $10,000, if you can buy one at all. They're going

        • Vizio came out of nowhere and drove down the prices of LCD TVs by offering televisions with a low-processing mode, contrary to what it was believed the market desired. That pleased gamer nerds, who help drive purchasing decisions for friends and relatives, as well as anyone without much cash in their pocket who isn't too discriminating about what the final image looks like. If any new players get into manufacturing OLEDs, you'll see their price change rapidly too.

      • That is why some smart people recorded TV shows they hoped to sell again and again on film and not videotape. Film had "wasted" resolution in the days of VHS tapes, but when DVD and now Blu-ray came out, those shows could simply be re-scanned from the original footage and voila, something new to flog to the punters.

        Maybe some people did, but most of them didn't. Ironically, American TV dramas from the late '80s onwards moved from being entirely shot and edited on film to being shot on film but edited (and post-produced) on video. Standard-def crappy NTSC video, that is.

        This probably didn't matter at the time, because their primary audience was only going to be viewing the programme via an NTSC video transmission anyway. 20-25 years on, shows like Star Trek: The Next Generation look like fuzzy crap because they wer

      • something new to flog to the punters

        I'm honestly curious where this phrase is used. It means as much as "something new to fish to the ketchup" to me - a verb and a noun, obviously, but I can't figure out the meaning of the words in context.

      • the 4k standard created by the Japanese (and gosh, doesn't that say a lot about the state of the West)

        The Japanese are gadget freaks - they were actually at the forefront of HDTV research. They were working on TVs with >1000 lines of resolution as far back as the 1970s. But their HDTV standard was analog. The advances in CPUs and DSPs allowed real-time compression and decompression of 1080i digital video at an affordable price point by the mid-1990s (my 80386 right around 1990 took ~5 sec to decode a

      • by shmlco ( 594907 )

        " Because film has a "wasted" resolution in the days of VHS video tapes but when DVD and now Blu-ray came out, these shows can simply be re-scanned from the original footage and voila, something new to flog to the punters."

        Yes, you could rescan film for DVD. And you can (barely) rescan most film for Blu-ray (1080p). But 4K? Forget it.

        Since lenses aren't perfect, and since many elements in a scene aren't perfectly in focus, the "information" regarding a scene actually consists of a lot of blurry elements.

      • by isorox ( 205688 )

        Replacement tech rarely catches up. 1080p signal? Please, that is so last year. 4k is the new norm.

        For long form, but not for live; the glue is only just coming into realistic territory.

        This year is the first year at IBC that I've really noticed 4K. NHK are still plugging their UHDTV stuff, which looked very impressive with the footage from the Olympics; however, I was more impressed with the 120Hz demo.

        In other news, we've finally got the money to upgrade one of our overseas offices, which actually preses, from an analog matrix to a digital one. Another overseas office still has a 4:3 studio camera (with

    • by kasperd ( 592156 ) on Monday September 10, 2012 @05:03AM (#41286013) Homepage Journal

      Thus no one has an incentive to slash prices.

      But then they have incentives to ramp up production.

  • by YesIAmAScript ( 886271 ) on Monday September 10, 2012 @03:13AM (#41285687)

    You don't see that all the time on Slashdot.

    Great article.

    I think many people here are getting confused and think this article is about producing live TV on a shoestring. The figures in the article are very high, but for professional video production, existing figures are also very high.

    If you take into account that this could allow production trucks to shrink in size a bit (RG6 takes up a lot of space), the price of this new way could be even lower.

    • by Guspaz ( 556486 )

      RG6 isn't a factor, since HD-SDI can run over fibre as well. The real savings comes from running many signals over a single ethernet cable (which at 100 GbE speeds would undoubtedly be fibre). That said, this study seems to ignore all cabling costs. It looks like their conclusions can be summed up as "An equivalent ethernet-based system has the same port costs as HD-SDI systems today, and the ethernet price will come down in the future, producing cost savings."

      • Please excuse my extreme ignorance in the matter, but wouldn't it be an order of magnitude cheaper just to use MTP fiber at 10Gb and split signals rather than push everything on to a single 100Gb link?

  • by Grayhand ( 2610049 ) on Monday September 10, 2012 @03:15AM (#41285693)
    Newtek's Toaster was one of the first steps into cheap digital broadcasting. It was an all-in-one digital switching and titling system. There are affordable 1080p display cards finally. I ran into that problem years back when I had to edit a 1080p film. The display cards we had were high-end, but they still couldn't handle that much information. There are three critical elements to actually handling 2K content: your hard drive array has to be fast enough, your busses and cabling have to be able to handle that much information, and your display cards have to be powerful enough. Obviously you need fast enough processors and enough RAM as well. If any one of the elements isn't fast enough, you have a bottleneck. They might want to look into FireWire networking. It's been around a long time but hasn't been widely adopted. The speed should be adequate for what he's quoting. It blows away Ethernet.
    • The latest FireWire spec tops out at 3.2Gb/s. A single uncompressed 1080p30 video feed is 1.5Gb/s. Not going to be very good for routing multiple streams.
    • Newtek's Toaster was one of the first steps into cheap digital broadcasting. It was an all-in-one digital switching and titling system.

      Yes, and it used analog sources and had an analog output; it's not until the Flyer that you take steps into digital broadcasting. The Toaster gave you digital editing. (And, of course, there's LightWave 3D.)

      They might want to look into FireWire networking. It's been around a long time but hasn't been widely adopted. The speed should be adequate for what he's quoting. It blows away Ethernet.

      FireWire: 800Mbps. Ethernet: 1000Mbps, costs $10 per node or so, and you can now get an 8-port switch for forty bucks, or something fancier with management and many ports for only hundreds. And again, that's just cheap Ethernet; 10GbE is in relatively broad use now and, as stated, 100GbE is around th

    • It blows away Ethernet.

      Do you have a source for that claim? Because it seems to me you are remembering articles from the early 2000s that are no longer relevant.

      AFAICT both FireWire and modern (full-duplex switched) Ethernet are low-overhead, so it's reasonable to compare them on the basis of their headline data rates.

      In the early 2000s, FireWire 400 was starting to appear on desktops and laptops (Macs first, IIRC, but other vendors soon followed because of the digital video craze, which at the time was FireWire-based) while gigabit et

  • It will become affordable right around the time 1080p is obsolete and replaced by 10Kp (or whatever is next), requiring 1TbE networking to handle the bandwidth...

    • by Zuriel ( 1760072 )

      There's more to life than pixels. Specifically, bitrate and codec. Or are broadcasters in my area the only ones who broadcast HD material that looks terrible, with blockiness all over the screen, whenever the camera moves?

      There's a lot of room for improvement before we reach the limits of 1080p.

      • How many stupid subchannels are they broadcasting in addition to their primary HD feed? All OTA broadcast stations get 6 MHz of spectrum here in the US; it's just a matter of what they do with it.
  • by JumboMessiah ( 316083 ) on Monday September 10, 2012 @05:39AM (#41286123)

    Insightful write-up. Getting rare here on /.

    For those not RTFA: they are referring to using Ethernet in professional live broadcast situations, i.e. newsrooms or outdoor sporting broadcasts where cable [stagbroadcast.co.uk] bundles are still common. I believe they are imagining a world where a broadcast truck rolls up to a stadium and runs a few pairs of 100GbE fiber vs a large coax bundle. This could save considerable time and money. Some interesting bandwidth numbers:

    SD: 270 Mbit/s
    Interlaced HD: 1485 Mbit/s
    Progressive HD: 2970 Mbit/s
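Those serial rates translate directly into feeds-per-link figures. A sketch that divides link capacity by the quoted interface rates, ignoring the ~1% packetisation overhead discussed elsewhere in the thread:

```python
# How many of the quoted serial feeds fit on a given Ethernet link.

SDI_RATES_MBPS = {          # the serial interface rates quoted above
    "SD (270M SDI)": 270,
    "Interlaced HD (HD-SDI)": 1485,
    "Progressive HD (3G-SDI)": 2970,
}

for link_gbps in (10, 100):
    for name, rate in SDI_RATES_MBPS.items():
        n = (link_gbps * 1000) // rate
        print(f"{link_gbps}GbE carries {n} x {name}")
```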

    • I believe they are imagining a world where a broadcast truck rolls up to a stadium and runs a few pairs of 100GbE fiber vs a large coax bundle.

      I don't see how that's cheaper, because the cost of labor is the same regardless of what's under the cable jacket. The OP is also missing the difference between the one-time cost of the hardware and the ongoing costs of... well, pretty much everything else.

    • And just as they get their 100GbE put in, they'll be trying to upgrade equipment to handle 4k resolutions instead ...

  • by quetwo ( 1203948 ) on Monday September 10, 2012 @06:53AM (#41286341) Homepage

    In the last studio upgrade we did, we retrofitted everything with Ethernet -- 10G switches. Cameras are all ASI -> GigE (MPEG-2 Multicast), switchers, and final outs.

    Uncompressed, at full rate, an ASI feed uses 380 MB/s. An uncompressed 1080p melted feed is 38 MB/s.

    You need to do careful network planning, but remember these are switches -- you shouldn't see traffic you didn't request. Right now we usually have about 8 cameras, plus the mixer, plus the groomer, plus the ad-insert. It then goes right out via the internet (Internet2 -- FSN is also a partner so we can send right to them), with a satellite truck as a backup. Our plan next year is not to have the satellite truck on site anymore.

    This is for a live-sports studio that feeds about 300 cable / satellite providers, reaching about 73M homes.
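The "you shouldn't see traffic you didn't request" property comes from IP multicast plus IGMP snooping in the switches. A sketch of how a receiver asks the network for one feed; the group address and port are made-up placeholders, not anything from the parent's setup:

```python
# Joining a multicast group makes the switch (via IGMP snooping)
# forward that one stream to your port and to no others.
import socket
import struct

GROUP, PORT = "239.1.1.10", 5004   # hypothetical camera-feed address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IGMP join: tell the network we want this group, on any interface
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, addr = sock.recvfrom(2048)   # e.g. an RTP/MPEG-2 TS datagram
print(f"got {len(packet)} bytes from {addr}")
```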

  • IT and broadcast TV is not CS; it's more trade-like and needs lots of hands-on skills with the equipment.

  • by Above ( 100351 ) on Monday September 10, 2012 @08:26AM (#41286901)

    Network architect here, who's worked on many varied systems. I predict that what the consumer will see is a drop in reliability.

    Real-time communication is just that: real time. Gear of old (5ESS switches, TDM networks, coax analog video switchers) was actually built around this notion from the ground up, and many design decisions were made to keep things operating at all costs. Of course, this added cost and complexity.

    Packet-based networks were built on the assumption that losing data was a-ok; packet drops are how problems are signaled. Protocols are, in some cases, only just starting to figure out how to deal with this properly for real-time situations, and largely the approach is still to throw bandwidth at the problem.

    So yes, running one 100GbE cable will be cheaper in the future, but it's going to introduce a host of new failure modes that, no offense, you probably don't understand. Heck, most "Network Architects" sadly don't understand them, not knowing enough about either the outgoing or the incoming technology. However, I've seen the studies, and it's not pretty. VoIP is not as reliable as circuit-switched voice, though it's pretty darn close now that it has mature codecs and low bandwidth needs. iSCSI is laughably unreliable compared to even Fibre Channel connections, much less some kind of direct-connection methodology, and its failure mode is horrible: a minor network blip can corrupt file systems and lock up systems so they need a reboot. Of course, it's also a straight-up redundancy thing; when you're covering the Super Bowl, having every camera feed leave the building on a single cable sounds like a great cost and time reducer, until it fails, or someone cuts it, or whatever, and you lose 100% of the feeds, not just one or two.

    With the old tech, the engineering happened in a lab, with qualified people studying the solution in detail and with reliability as a prime concern for most real-time applications. With the new tech, folks are taking an IP switch and an IP protocol, both of which were designed to lose data as a signaling mechanism and whose #1, #2, and #3 design goals were cheap, cheap, and cheap, and then multiplexing on many streams to further reduce costs. The engineering, if any, is in the hands of the person assembling the end system, which is often some moderately qualified vendor engineer who's going to walk away from it at the end. It's no wonder that when they fail, it's in spectacular fashion.

    I'm not saying you can't move live TV over 100GbE (and why not over 10GbE; even 10x10GbE is cheaper than 100GbE right now), but if I owned a TV station and my revenue depended on it, I don't think that's the direction I would be going...

    • by Jeremi ( 14640 )

      Packet-based networks were built on the assumption that losing data was a-ok; packet drops are how problems are signaled.

      This is where AVB comes in. With AVB, the data sender is required to pre-reserve the necessary bandwidth across all switches from one end of the data path to the other, and the switches then make sure that the bandwidth you reserved is available for your packets (by holding off non-real-time traffic if necessary). By this method it is guaranteed that, short of hardware failure, no packets from your real-time video feed will be dropped. And if it's hardware failure you're worried about, you can set
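The core of the reservation idea is admission control at each hop. A toy sketch of that idea, not the actual 802.1Qat protocol machinery; the 75% cap is the usual default ceiling for reserved AV traffic:

```python
# Toy model of AVB/SRP-style admission control: a switch port accepts
# a new stream reservation only while the reserved total stays under a
# cap, so admitted streams keep guaranteed bandwidth. Real SRP runs
# this check per hop and per traffic class.

class PortReservations:
    def __init__(self, link_bps, cap=0.75):
        self.budget = link_bps * cap
        self.reserved = 0.0

    def reserve(self, stream_bps):
        """Admit the stream only if guaranteed bandwidth remains."""
        if self.reserved + stream_bps > self.budget:
            return False                 # talker gets a failure back
        self.reserved += stream_bps
        return True

port = PortReservations(link_bps=100e9)
print(port.reserve(2.97e9))   # True: a 3G-SDI-rate feed fits
print(port.reserve(80e9))     # False: would exceed the 75 Gbit/s cap
```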

  • by Controlio ( 78666 ) on Monday September 10, 2012 @09:46AM (#41287595)

    HD-SDI uncompressed video is 1.5Gb/s. That is the standard for moving uncompressed video around inside a TV truck, whether 720p or 1080i. It rises to 3Gb/s if you're doing multiple phases of video (3D video, super slo-mo, etc.). Within that 1.5Gb/s there is still more than enough headroom to embed multiple datastreams and channels of audio (8 stereo pairs is the norm; some streams do up to 16). So I fail to see why 100Gb/s is necessary to transmit uncompressed video.
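The embedded-audio headroom claim is easy to verify. A sketch assuming 48 kHz, 24-bit audio (the common broadcast format; actual AES3 embedding adds a little framing on top):

```python
# Embedded audio is small change inside a 1.485 Gbit/s HD-SDI feed.

def audio_mbps(stereo_pairs, sample_rate=48000, bits=24):
    """Raw bitrate of embedded audio channels, in Mbit/s."""
    channels = stereo_pairs * 2
    return channels * sample_rate * bits / 1e6

for pairs in (8, 16):
    mbps = audio_mbps(pairs)
    share = mbps / 1485 * 100     # against the HD-SDI line rate
    print(f"{pairs} pairs: {mbps:.1f} Mbit/s ({share:.1f}% of HD-SDI)")
# 8 pairs ~18.4 Mbit/s (1.2%), 16 pairs ~36.9 Mbit/s (2.5%)
```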

    It's also a chicken-and-egg scenario. I'm a broadcast engineer and audio specialist. Ma Bell contacted me about 7 years ago asking how important uncompressed video transmission was, as they were trying to gauge a timeframe for a network rebuild to allow for it. My answer hasn't changed much in 7 years: although moving uncompressed video from site to (in the case of Fox) Houston and then back to your local affiliate would be nice, it's completely unnecessary, because by the time it reaches your house your local cable or satellite operator has typically compressed your 1.5Gb/s signal down to between 4Mb/s and 10Mb/s, making the quality gains negligible.

    It will solve one problem, which is image degradation due to multiple passes of compression. Think about it... the 1.5Gb/s leaves our TV truck and gets ASI-compressed into 270Mb/s (best-case scenario; satellite transmission is significantly lower bandwidth, and most networks don't use an entire 270M circuit, they use less). It then arrives at the network hub, where it gets decompressed. If it's live, it then goes through several switchers and graphics boxes, then gets re-compressed to ASI and sent either to another hub or to your local affiliate. (If not live, it gets put into a server, which compresses the video even harder before playout.) Your local affiliate then decompresses it, it passes through more switchers and graphics boxes, and then it either gets broadcast using 8VSB, or it gets re-compressed and passed on to your cable or satellite provider, who then decompresses it, processes it into MPEG or some other flavor, and re-compresses it into its final 3-12Mb/s data stream for your receiver to decompress one final time.

    This would eliminate several compression steps and mean better final image quality, because you're not recompressing compression artifacts over and over and over again. A real 1.5Gb/s video frame looks like staring out a window compared to the nastiness you see when you hit pause on your DVR during a football game (and that's a best-case scenario; most cable/broadcast/sat providers ramp the bitrate up to the max for live sports and then set it back down shortly thereafter).

    But the 100Gb/s makes no sense to me. Are you (crazily) overcompensating for latency? Are you sending 100% redundant data for error correction? Why in the world would you need that much overhead? I can't imagine it's to send multiple video feeds; the telco companies don't want you to do that, because then you order fewer circuits from them. Plus you'd want at least two circuits anyway, in case your primary circuit goes down for some reason.

    (Side note: the one benefit to a TV truck using Ethernet as a transmission medium is that these circuits are bi-directional. Transmission circuits nowadays are all unidirectional, meaning you need to order more circuits if you need a return video feed, meaning higher transmission costs. The ability to send return video, or even confidence return signals, back down the same line would be huge for us and a big money saver.)

    • by TheSync ( 5291 )

      Your local affiliate then decompresses it, it passes through more switchers and graphics boxes, and then it either gets broadcast using 8VSB, or it gets re-compressed and passed on to your cable or satellite provider, who then decompresses it, processes it into MPEG or some other flavor, and re-compresses it into its final 3-12Mb/s data stream for your receiver to decompress one final time.

      The FOX Broadcast Network encodes in MPEG-2 once at the uplink site, and stations use stream splicers between the local MP

  • It's cheaper, faster, and available today. Check out www.mellanox.com; the newest dual-FDR InfiniBand cards are especially nice.
