100GbE To Slash the Cost of Producing Live Television
New submitter danversj writes "I'm a Television Outside Broadcast Engineer who wants to use more IT and Computer Science-based approaches to make my job easier. Today, live-produced TV is still largely a circuit-switched system. But technologies such as 100 Gigabit Ethernet and Audio Video Bridging hold the promise of removing kilometres of cable and thousands of connectors from a typical broadcast TV installation. 100GbE is still horrendously expensive today — but broadcast TV gear has always been horrendously expensive. 100GbE only needs to come down in price just a bit — i.e. by following the same price curve as for 10GbE or 1GbE — before it becomes the cheaper way to distribute multiple uncompressed 1080p signals around a television facility. This paper was written for and presented at the SMPTE Australia conference in 2011. It was subsequently published in Content and Technology magazine in February 2012. C&T uses issuu.com to publish online so the paper has been re-published on my company's website to make it more technically accessible (not Flash-based)."
It is going to be a while (Score:5, Interesting)
Re: (Score:2)
100GbE is in huge demand among core infrastructure people, due to backbones everywhere being strained by the explosion of online video usage. Tier 1 providers' demand is simply at a level that current foundries can't come close to meeting. Thus no one has an incentive to slash prices.
That is the main notion I got from the summary: I have an idea for a cool technology but it is a long way from becoming reality. Same fate as interplanetary travel and zero-calorie beer.
There is another issue and it is a constant one (Score:5, Interesting)
Replacement tech rarely catches up. 1080p signal? Please, that is so last year. 4k is the new norm. No TVs for it yet? Actually, they are already on sale, which means that if you are not recording your repeatable content in 4k right now, you will have a hard time selling it again in the future. That is why some smart people recorded TV shows they hoped to sell again and again on film and not videotape: film had "wasted" resolution in the days of VHS, but when DVD and now Blu-ray came out, those shows could simply be re-scanned from the original footage and voila, something new to flog to the punters.
I don't know how much data a 100GbE link can truly handle, but the fact is that trying to catch up to current tech means that by the time you are finished, you are obsolete. The 4k standard created by the Japanese (and gosh doesn't that say a lot about the state of the west) isn't just about putting more pixels on a screen; it is about all the infrastructure needed to create such content. And you had better be ready for it now, because if you are not, you will be left behind by everyone else.
The future may not be now, but it sure needs to have been planned for yesterday.
Re: (Score:3)
...the 4k standard created by the Japanese (and gosh doesn't that say a lot about the state of the west) ...
That the West is pretty great? Same as if United Kingdom or Canada created the standard. I mean, you're defining "The West" based on political and economic philosophy, not on some arbitrary lines on a map, right?
Re: (Score:2)
Clearly, Japan is not a western country: http://www.justworldmap.com/maps/asia-pacific-centric-world-map-3.jpg [justworldmap.com]
Re: (Score:2)
I can't tell on the map, are longitude lines renumbered? Or did they just stay with the international standard and rotate it around?
If you turn a map upside down, that doesn't magically make the north south, and vice versa. It just means north is in a different direction than normally expected. Likewise, re-centering the map doesn't make the far east not in the east. It's just not in the east on that map.
Re: (Score:2)
If you turn a map upside down, that doesn't magically make the north south, and vice versa. It just means north is in a different direction than normally expected. Likewise, re-centering the map doesn't make the far east not in the east. It's just not in the east on that map.
This comment led me to an interesting thought. Where does the concept (and names) of East and West come from? The idea of North and South are based on the physical properties of magnetism and the Earth's ferrous core. Did someone just decide that we need new names for Left and Right to describe other directions orthogonal to the magnetic field?
Re: (Score:2)
The directions are based on the spin of the Earth. The concepts were there before we discovered magnetism and named the poles of a magnet after the directions on our planet. The concept of the direction east as "toward the rising sun" is pretty basic and comes out of the mists of time from proto-languages before mankind invented writing.
Calling China and Japan the East is a more recent European centric terminology. Since the planet is a globe everything is east of some other place in a relative manner. Ho
Re: (Score:2)
The directions are based on the spin of the Earth.
Man, I feel like an idiot. This isn't a case of, "It's obvious once you hear it." It's just plain obvious.
Well, thanks for the non-condescending answer. Also, ++ on the land-centric point.
Re:There is another issue and it is a constant one (Score:5, Informative)
I work in film; we usually scan 35mm 3-perf at 8k and 2-perf at 6k. Output after offline edit is usually 4k or 2k. Punters are going to be flogged re-released videos that cost the studios nothing. 1080p is more than enough for most people unless you are going to have a screen larger than 100 inches viewed from 10 feet away; most people have a 32 inch TV at 15-20 feet.
TV does not work in 1080p anyway; it's still stuck at 1080i. Only your high-end dramas are captured in 1080p, 2k or 4k if digital (Sony F35, F65, Arri D21, Red if you don't mind downtime) or on 35mm (I haven't worked with 35mm on a drama for over 5 years now).
Re: (Score:2)
Who? Where? Not much use unless you tell us examples.
No ATSC terrestrial broadcaster does 1080p over the air in the USA. All the "small dish" satellite services technically deliver 1080p "over the air" for pay-per-view, etc.
Re: (Score:2)
Who? Where? Not much use unless you tell us examples.
The BBC for one.
Re: (Score:2)
1080p25 (30) or 1080p50 (60)?
Re: (Score:2)
Who? Where? Not much use unless you tell us examples.
The BBC for one.
Only p25, which doesn't count (and looks shit, bloody "film effect" idiots). 720p50 would have been a much nicer transmission standard than 1080i, but more pixels always win, in TV and in still cameras.
Re: (Score:2)
To be fair, I have a 1080p projector at 10 feet from my seating position, and while there are rare occasions (mostly in video games) where I wish I had better resolutions, 1080p is still quite good enough at this distance. At 6-8' you'd definitely notice though.
Speaking of 1080i dramas, with the amount of compression artifacting I get from the limited bandwidth each show gets on satellite, I'd rather see compression improved (or higher bandwidth options) than higher resolutions for television.
Re: (Score:2)
At 6-8' you'd definitely notice though.
I don't think it's ever going to matter at 6-8'. 6-8cm is probably more likely.
Re: (Score:2)
If the projected screen size is greater than 45", it should be noticeable at 6-8'. If it is greater than 75", it should be obvious. You're right for 4k screens at "normal" sizes, though.
http://s3.carltonbale.com/resolution_chart.html [carltonbale.com]
Re: (Score:2)
I seem to recall from a physics experiment I did in university that the angular resolution of the eye roughly translates to 1mm per meter of distance. 10ft = ~3m. So at 1080 lines (assuming the 10ft is the height you are projecting to) you'd have about 3mm per pixel. Visible at about 3m, depending how good your eyes are. Of course this is "moving pictures" too, not a static text/desktop display like typical computer use, so will you notice the pixels quickly enough before they become something else? Not sure how
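That rule of thumb is easy to play with. A quick sketch, taking the poster's ~1mm-per-meter acuity figure and the 3m/1080-line projection at face value (both are their assumptions, not established numbers):

```python
# Back-of-envelope: can you resolve individual pixels at a given distance?
# Uses the parent's rule of thumb that the eye resolves roughly 1 mm per
# meter of viewing distance (about 1 milliradian).

def pixel_pitch_mm(screen_height_m: float, rows: int) -> float:
    """Physical height of one pixel, in millimetres."""
    return screen_height_m * 1000 / rows

def max_resolvable_distance_m(pitch_mm: float, acuity_mm_per_m: float = 1.0) -> float:
    """Farthest distance at which a pixel of this pitch is still visible."""
    return pitch_mm / acuity_mm_per_m

# The parent's example: a projected image ~3 m (10 ft) tall, 1080 rows.
pitch = pixel_pitch_mm(3.0, 1080)   # ~2.8 mm per pixel
print(f"{pitch:.1f} mm/pixel, resolvable out to ~{max_resolvable_distance_m(pitch):.1f} m")
```

That reproduces the "about 3mm per pixel, visible at about 3m" estimate above.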
Re: (Score:2)
To be fair, my screen is 103" diagonal -- the pixellation is visible at 6' at 1080p.
Re: (Score:2)
To be fair, my screen is 103" diagonal -- the pixellation is visible at 6' at 1080p.
OK, I guess there's no inherent reason not to have a screen that takes up an entire wall either (an actual 16x9 screen would be 220" diagonal). I just think the trend will be towards personal displays over time due to the surge of mobile devices.
Re: (Score:2)
To be fair, my screen is 103" diagonal -- the pixellation is visible at 6' at 1080p.
bah, spazzed on the preview button... sorry for the double reply.
but are you actually sitting 6' from your 103" screen? That would be fairly close to immersive, no?
Re:There is another issue and it is a constant one (Score:5, Informative)
I don't know how much data a 100GbE link can truly handle
It's actually very close to 100 gigabits per second. (The encoding overhead is already accounted for in the 100Gb figure, and the protocol overhead is very low: if you're using jumbo packets - and you'd probably want to - then it's easily less than 1%).
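The sub-1% claim is easy to sanity-check. A quick sketch (the header sizes are the standard Ethernet/IPv4/UDP figures; treating the whole MTU as the payload budget is my simplification):

```python
# How much of a 100GbE link's raw rate is left for payload?
# Per-frame overhead on the wire: preamble+SFD (8) + Ethernet header (14)
# + FCS (4) + inter-frame gap (12) = 38 bytes, plus the IP/UDP headers.

ETH_WIRE_OVERHEAD = 8 + 14 + 4 + 12   # bytes per frame, outside the MTU
IP_UDP_HEADERS = 20 + 8               # IPv4 + UDP (IPv6 + TCP would be 40 + 20)

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_UDP_HEADERS
    on_wire = mtu + ETH_WIRE_OVERHEAD
    return payload / on_wire

for mtu in (1500, 9000):              # standard vs. jumbo frames
    eff = payload_efficiency(mtu)
    print(f"MTU {mtu}: {eff:.1%} payload -> {100 * eff:.1f} Gbit/s usable")
```

With 9000-byte jumbo frames that comes out around 99.3 Gbit/s usable, i.e. well under 1% overhead, versus roughly 95.7 Gbit/s at the standard 1500-byte MTU.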
Will there be superjumbo frames? (Score:2)
Out here in the hinterlands, nobody will invest in 10GbE yet, but do any of these support larger jumbo frames? I see about 92xx as the largest supported frame size for 1 GbE with most equipment only accepting 9000.
Do the jumbo frame sizes make a 10x leap when the data rate does? Or at least the 6x jump from standard to 9k that 100Mbit to 1Gbit was?
I suppose there are reasons why they wouldn't (maybe 900k or even 325k frame sizes are too much, even at 100GbE), but it seems that if there's some efficiency
Re: (Score:2)
Really there are two reasons to increase maximum frame size. One is to improve efficiency on the wire; the other is to reduce the number of forwarding decisions to be made.
With improving efficiency on the wire you quickly get into diminishing returns. With 9000 byte frames your header overhead (assuming a TCP/IPv6 session) is probably of the order of 1%. Reducing that overhead further just isn't going to buy you much more throughput.
Reducing the number of forwarding decisions would be a legitimate reason
Re: (Score:3)
I tried to do the math. I don't have all the numbers, but I can still do a reasonable approximation. Assuming 8k*4k at 24bits per pixel and 100 frames per second you get 8*4*24*100Mbit/s=76.8Gbit/s. So it should be quite feasible to push a single uncompressed 4k stream over 100Gbit/s. There may very well be other issues such as what sort of hardware you need to process it, and maybe you need multiple streams over the same wire.
Re:There is another issue and it is a constant one (Score:5, Informative)
More realistically, 4096 * 3072 * 60 Hz * 20 bits (that's 10-bit 4:2:2 YCbCr, like HD-SDI today) = ~15 Gbit/s. You could push 6 of those streams over 100GbE.
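For anyone who wants to fiddle with the assumptions, both back-of-envelope calculations above reduce to one formula. A small sketch (the resolutions, frame rates and bit depths are the ones quoted in the two parent comments):

```python
# Uncompressed video bandwidth for a single stream.
def stream_gbps(width: int, height: int, fps: int, bits_per_pixel: int) -> float:
    return width * height * fps * bits_per_pixel / 1e9

# Worst case above: literal 8k x 4k, 24 bpp RGB, 100 fps.
print(stream_gbps(8000, 4000, 100, 24))        # 76.8 Gbit/s

# More realistic: 4k at 60 Hz, 10-bit 4:2:2 YCbCr = 20 bits per pixel.
rate = stream_gbps(4096, 3072, 60, 20)         # ~15.1 Gbit/s
print(f"{rate:.1f} Gbit/s -> {int(100 // rate)} streams per 100GbE link")
```

That confirms the figure of 6 such streams per 100GbE link.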
Re: (Score:2)
You could push 6 of those streams over 100GbE.
Why do people in this industry need 6 simultaneous unbuffered streams? TFS said that cost isn't really an issue, so a 4-port link aggregation of 10Gbps ought to be widely deployed by now if three of these streams were good enough. There are switches ($$$) that can handle that kind of backplane traffic.
Re: (Score:2)
They need it to backhaul multiple sources from the studio to the vision mixer. They want to use 100GbE instead of whatever super-high-def SDI type solution they're currently using, which is probably distance limited. If you can trunk 4-5 camera sources over one cable instead of multiple cables, you've got a simpler infrastructure.
Re:There is another issue and it is a constant one (Score:4, Interesting)
Why do people in this industry need 6 simultaneous unbuffered streams?
A typical broadcast studio has dozens, if not hundreds of simultaneous streams. Several editing suites running at once, a few people reviewing incoming feeds and selecting content from a variety of other sources, a couple of studios with 3-4 cameras each, plus actual output streams for each of the channels being produced, with large master control panels mixing the inputs to make them.
I spent a couple of years working for Philips Broadcast Television Systems (BTS), which makes equipment to run these systems. I worked on the router control systems, a bunch of embedded 68K boxes (this was almost 20 years ago) that control big video and audio switchers, many with hundreds of inputs and outputs (technical terms: "gazintas" and "gazaoutas"). It's unbelievable how many video and audio streams even a small studio manages, and the wiring to support it all is massive, as in foot-thick bundles routed all over under the raised floor. It makes your typical data center cable management problem look like child's play.
Besides just cabling costs, I could see packet-switched video enormously simplifying the engineering effort required to build and maintain these facilities. And it would also eliminate the need for lots of very expensive hardware like the switches BTS sold. Even with 100GbE, I'll bet large studios will still end up with cable bundles and link aggregation, but it would be vastly better than what can be done now.
Re: (Score:2)
A typical broadcast studio has dozens, if not hundreds of simultaneous streams
I see, so 100GbE is primarily for "backbone" networks then, not necessarily to each station? Or does it just make sense to switch only when the prices are really compelling vs. sorta-like-the-costs-of-foot-thick-cable?
Re: (Score:2)
I'm not sure what you mean by "station".
network leaf node
Re: (Score:2)
Ah, yes, then. You could probably do just fine with Gig-E to most individual sources/sinks, as long as you ran them back to switches which could actually switch the full aggregate bandwidth, then 100GbE to form the internal "backbone" connecting the switches (which would need some 100GbE ports).
Re: (Score:2)
You could push 6 of those streams over 100GbE.
Why do people in this industry need 6 simultaneous unbuffered streams? TFS said that cost isn't really an issue, so a 4-port link aggregation of 10Gbps ought to be widely deployed by now if three of these streams were good enough. There are switches ($$$) that can handle that kind of backplane traffic.
For the last 15 years, our central video matrix had 1512 inputs. That was SD, but for 1080i 4:2:2 that would be 1.25 Tbit. 2.5Tbit for 1080p.
As for backplane switches, I believe a 10-year-old Cisco 6500 with an SFM module will run 256 Gbit/s on the backplane.
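For scale, here is the arithmetic that reproduces those figures; the 8-bit 4:2:2 sampling at 25 fps is my assumption, chosen because it matches the quoted totals:

```python
# Aggregate bandwidth of a 1512-input matrix, uncompressed.
# 4:2:2 sampling averages 2 samples per pixel (Y plus alternating Cb/Cr).
def feed_mbps(width, height, fps, bits_per_sample, samples_per_pixel=2):
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e6

hd_1080i25 = feed_mbps(1920, 1080, 25, 8)      # ~829 Mbit/s per input
total_tbps = 1512 * hd_1080i25 / 1e6
print(f"{total_tbps:.2f} Tbit/s for 1080i, {2 * total_tbps:.1f} Tbit/s for 1080p50")
# ~1.25 Tbit/s and ~2.5 Tbit/s -- roughly 5x and 10x that 256 Gbit backplane.
```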
Re: (Score:2)
For the last 15 years, our central video matrix had 1512 inputs. That was SD, but for 1080i 4:2:2 that would be 1.25 Tbit. 2.5Tbit for 1080p.
What kind of max concurrency do you see out of those 1512?
Re: (Score:2)
Difficult to tell, certainly over 400 though.
Re: (Score:2)
They switched to measure the width of the image instead of the height? Did they think 3k didn't sound impressive enough and then named it 4k instead?
Re: (Score:3)
In film land and the visual effects industry, where 2k was the standard long before HDTV was invented, it was always a measure of the horizontal pixel dimension.
It makes sense because you would start with a 2048x1536 scan of the 35mm frame (4/3 aspect ratio) and cut off the top and bottom to reach the 2048x853 2.35:1 aspect ratio seen in the cinema. These days you also work with a mask at 2048x1152 that matches the 16:9 or 1.77 aspect ratio used in HD TV.
The delivery back to the editor is often the full 4/3 f
Re: (Score:2)
Having the spec is a long way from convincing fabs to manufacture it. Where is Samsung in their capital equipment depreciation cycle on their current fabs? Where are they in their current build plan? What about the other 3 panel manufacturers? Considering how FED/SED has vanished into a black hole, I think we can safely assume they're years away from running out those investments and ongoing investments.
Consider for a moment that a bigscreen OLED TV is $10,000, if you can buy one at all. They're going
Re: (Score:2)
Vizio came out of nowhere and drove down the prices of LCD TVs by offering televisions with a low-processing mode, contrary to what the market was believed to desire. That pleased gamer nerds, who help drive purchasing decisions for friends and relatives, plus anyone without much cash in their pocket who isn't too discriminating about what the final image looks like. If any new players get into manufacturing OLEDs you'll see their prices change rapidly, too.
Re: (Score:3)
That is why some smart people recorded TV shows they hoped to sell again and again on film and not videotape: film had "wasted" resolution in the days of VHS, but when DVD and now Blu-ray came out, those shows could simply be re-scanned from the original footage and voila, something new to flog to the punters.
Maybe some people did, but most of them didn't. Ironically, American TV dramas from the late '80s onwards moved from being entirely shot and edited on film to being shot on film but edited (and post-produced) on video. Standard-def crappy NTSC video, that is.
This probably didn't matter at the time, because their primary audience was only going to be viewing the programme via an NTSC video transmission anyway. 20-25 years on, shows like Star Trek: The Next Generation look like fuzzy crap because they wer
Re: (Score:2)
something new to flog to the punters
I'm honestly curious where this phrase is used. It means as much as "something new to fish to the ketchup" to me - a verb and a noun, obviously, but I can't figure out the meaning of the words in context.
it's british slang (Score:2)
read it as "market to the consumers"
Re: (Score:2)
The Japanese are gadget freaks - they were actually at the forefront of HDTV research. They were working on TVs with >1000 lines of resolution as far back as the 1970s. But their HDTV standard was analog. The advances in CPUs and DSPs allowed real-time compression and decompression of 1080i digital video at an affordable price point by the mid-1990s (my 80386 right around 1990 took ~5 sec to decode a
Re: (Score:2)
" Because film has a "wasted" resolution in the days of VHS video tapes but when DVD and now Blu-ray came out, these shows can simply be re-scanned from the original footage and voila, something new to flog to the punters."
Yes, you could rescan film for DVD. And you can (barely) rescan most film for BluRay (1080p). But 4K? Forget it.
Since lenses aren't perfect, and since many elements in a scene aren't perfectly in focus, the "information" regarding a scene actually consists of a lot of blurry elements.
Re: (Score:2)
Replacement tech rarely catches up. 1080p signal? Please, that is so last year. 4k is the new norm.
For long form, yes, but not for live; the glue is only just coming into realistic territory.
This year is the first year at IBC that I've really noticed 4K. NHK are still plugging their UHDTV stuff, which looked very impressive with the footage from the Olympics; however, I was more impressed with the 120Hz demo.
In other news, we've finally got the money to upgrade one of our overseas offices, which actually does pres (presentation), from an analog matrix to a digital one. Another overseas office still has a 4:3 studio camera (with
Re:It is going to be a while (Score:4, Insightful)
But then they have incentives to ramp up production.
well written, detailed and interesting (Score:5, Interesting)
You don't see that all the time on slashdot.
Great article.
I think many are getting confused here and think that this article is about reducing the cost of producing live TV on a shoestring. The figures in this article are very high, but for professional video production, existing figures are also very high.
If you take into account that this could allow production trucks to shrink in size a bit (RG6 takes up a lot of space), the price of this new way could be even lower.
Re: (Score:2)
RG6 isn't a factor, since HD-SDI can run over fibre as well. The real savings comes from running many signals over a single ethernet cable (which at 100 GbE speeds would undoubtedly be fibre). That said, this study seems to ignore all cabling costs. It looks like their conclusions can be summed up as "An equivalent ethernet-based system has the same port costs as HD-SDI systems today, and the ethernet price will come down in the future, producing cost savings."
Re: (Score:2)
Please excuse my extreme ignorance in the matter, but wouldn't it be an order of magnitude cheaper just to use MTP fiber at 10Gb and split signals, rather than push everything onto a single 100Gb link?
A first step in affordable digital broadcasting (Score:3)
Re: (Score:3)
Newtek's Toaster was one of the first steps into cheap digital broadcasting. It was an all-in-one digital switching and titling system.
Yes, and it used analog sources and had an analog output; it's not until the Flyer that you take steps into digital broadcasting. The Toaster gave digital editing. (And, of course, there's LightWave 3D.)
They might want to look into firewire networking. It's been around a long time but hasn't been widely adopted. The speed should be adequate for what he's quoting. It blows away Ethernet.
Firewire: 800Mbps. Ethernet: 1000Mbps, costs $10 per node or so, and you can now get an 8-port switch for forty bucks, or something fancier with management and support for many ports for only hundreds. And again, that's just cheap Ethernet; 10GbE is in relatively broad use now and, as stated, 100GbE is around th
Re: (Score:2)
It blows away Ethernet.
Do you have a source for that claim? Because it seems to me you are remembering articles from the early 2000s that are no longer relevant.
AFAICT both firewire and modern (full duplex, switched) ethernet are low overhead, so it's reasonable to compare them on the basis of their headline data rates.
In the early 2000s firewire 400 was starting to appear on desktops and laptops (Macs first IIRC, but other vendors soon followed because of the digital video craze, which at the time was firewire based) while gigabit et
It will become affordable... (Score:2, Insightful)
It will become affordable right around the time 1080p is obsolete and replaced by 10Kp (or whatever is next), requiring 1TbE networking to handle the bandwidth...
Re: (Score:2)
There's more to life than pixels. Specifically, bitrate and codec. Or are broadcasters in my area the only ones who broadcast HD material that looks terrible with blockiness all over the screen whenever the camera moves?
There's a lot of room for improvement before we reach the limits of 1080p.
Professional Broadcasting (Score:5, Informative)
Insightful write-up. Getting rare here on /.
For those not RTFA, they are referring to using Ethernet in professional live broadcast situations: newsrooms or outdoor sporting broadcasts where cable [stagbroadcast.co.uk] bundles are still common. I believe they are imagining a world where a broadcast truck rolls up to a stadium and runs a few pairs of 100GbE fibre vs. a large coax bundle. This could save considerable time and money. Some interesting bandwidth numbers (a quick capacity sketch follows the list):
SD 270 Mbit/s
Interlaced HD 1485 Mbit/s
Progressive HD 2970 Mbit/s
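Taking those rates at face value, here is the capacity sketch for a single 100GbE trunk (ignoring packet overhead, which with jumbo frames is on the order of 1%):

```python
# How many SDI-rate feeds fit on one 100GbE trunk?
LINK_GBPS = 100
feeds = {"SD-SDI": 0.270, "HD-SDI (1080i)": 1.485, "3G-SDI (1080p)": 2.970}

for name, gbps in feeds.items():
    print(f"{name}: {int(LINK_GBPS // gbps)} feeds per link")
# SD-SDI: 370, HD-SDI: 67, 3G-SDI: 33 -- versus one coax run per feed today.
```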
Re: (Score:2)
I don't see how that's cheaper - because the cost of labor is the same, regardless of what's under the cable jacket. The OP is also missing the difference between the one-time cost of the hardware, and the ongoing costs of... well, pretty much everything else.
Re: (Score:2)
And just as they get their 100GbE put in, they'll be trying to upgrade equipment to handle 4k resolutions instead ...
10 GigE should be enough for most situations... (Score:5, Insightful)
In the last studio upgrade we did, we retrofitted everything with Ethernet: 10G switches, with cameras all ASI -> GigE (MPEG-2 multicast), switchers, and final outs.
Uncompressed, at full rate, a 1080p feed uses about 380 MB/s. A full-rate ASI feed is 38 MB/s.
You need to do careful network planning, but remember these are switches -- you shouldn't see traffic you didn't request. Right now we usually have about 8 cameras, plus the mixer, plus the groomer, plus the ad-insert. It then goes right out via the internet (Internet2 -- FSN is also a partner, so we can send right to them), with a satellite truck as a backup. Our plan next year is to not have the satellite truck on site anymore.
This is for a live-sports studio that feeds about 300 cable / satellite providers, reaching about 73M homes.
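The "you shouldn't see traffic you didn't request" property comes from IP multicast with IGMP snooping on the switches: a port only receives a camera's stream after joining its group. A minimal receiver sketch; the group address and port here are hypothetical, not anything this studio actually uses:

```python
import socket
import struct

# Hypothetical addressing: say camera 8's MPEG-2 transport stream is
# announced on this multicast group/port (invented for illustration).
GROUP, PORT = "239.1.1.8", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group. With IGMP snooping, the switch forwards this camera's
# traffic only to ports that have asked for it -- nobody else sees it.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    datagram, _ = sock.recvfrom(2048)   # typically 7 x 188-byte TS packets
    # ...hand the TS packets to a decoder or recorder here...
```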
IT and Broadcast TV is not CS, it's more a trade (Score:2)
IT and broadcast TV is not CS; it's more trade-like and needs lots of hands-on skills with the equipment.
I predict a drop in reliability. (Score:5, Informative)
Network Architect here, who's worked on many varied systems. I predict what the consumer will see is a drop in reliability.
Real time communication is just that: real time. Gear of old (5ESS switches, TDM networks, coax analog video switchers) was actually built around this notion from the ground up, and many design decisions were made to keep things operating at all costs. Of course, this added cost and complexity.
Packet based networks were built on the assumption that losing data was a-ok. Packet drops are how problems are signaled. Protocols are, in some cases, just barely starting to figure out how to deal with this properly in real time situations, and largely the approach is still to throw bandwidth at the problem.
So yes, running one 100GbE cable will be cheaper in the future, but it's going to introduce a host of new failure modes that, no offense, you probably don't understand. Heck, most "Network Architects" sadly don't understand them either, not knowing enough about the outgoing or incoming technology. However, I've seen the studies, and it's not pretty. VoIP is not as reliable as circuit switched voice, though it's pretty darn close, as it's now got more mature codecs and low bandwidth. iSCSI is laughably unreliable compared to even Fibre Channel connections, much less some kind of direct connection methodology. The failure mode is also horrible: a minor network blip can corrupt file systems and lock up systems so they need a reboot. And it's also a straight-up redundancy thing: when you're covering the Super Bowl, having every camera feed leave the building on a single cable sounds like a great cost and time reducer, until it fails, or someone cuts it, or whatever, and you lose 100% of the feeds, not just one or two.
With the old tech the engineering happened in a lab, with qualified people studying the solution in detail, and with reliability as a prime concern for most real time applications. With the new tech, folks are taking an IP switch and an IP protocol, both of which were designed to lose data as a signalling mechanism and whose #1, #2, and #3 design goals were cheap, cheap, and cheap, and then multiplexing on many streams to further reduce costs. The engineering, if any, is in the hands of the person assembling the end system, which is often some moderately qualified vendor engineer who's going to walk away from it at the end. It's no wonder that when these systems fail, they fail in spectacular fashion.
I'm not saying you can't move live TV over 100GbE (and why not over 10GbE? Even 10x10GbE is cheaper than 100GbE right now), but if I owned a TV station and my revenue depended on it, I don't think that's the direction I would be going...
Re: (Score:2)
Packet based networks were built on the assumption that losing data was a-ok. Packet drops are how problems are signaled.
This is where AVB comes in. With AVB the data-sender is required to pre-reserve the necessary bandwidth across all switches from one end of the data path to the other, and the switches then make sure that the bandwidth you reserved is available for your packets to use (by holding off non-real-time traffic if necessary). By this method it is guaranteed that (short of hardware failure) no packets from your real-time video feed will be dropped. And if it's hardware failure you're worried about, you can set
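To make the reservation idea concrete, here is a toy admission-control sketch in the spirit of AVB's stream reservation (MSRP/802.1Qat). It is an illustration of the concept, not the actual wire protocol; the 75% figure is the default cap on reservable bandwidth per link in AVB:

```python
# Toy admission control: a stream is admitted only if every link on its
# path has headroom, and reserved traffic may not exceed 75% of any
# link's capacity (the AVB default).

RESERVABLE_FRACTION = 0.75

class Link:
    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.reserved = 0.0

    def can_fit(self, gbps: float) -> bool:
        return self.reserved + gbps <= RESERVABLE_FRACTION * self.capacity

def reserve_path(links: list[Link], gbps: float) -> bool:
    """All-or-nothing reservation across every hop of the path."""
    if not all(link.can_fit(gbps) for link in links):
        return False                    # talker gets a refusal, not silent drops
    for link in links:
        link.reserved += gbps
    return True

path = [Link(100.0), Link(100.0), Link(10.0)]   # camera -> core -> edge
print(reserve_path(path, 1.485))   # True: one HD-SDI-rate feed fits everywhere
print(reserve_path(path, 14.0))    # False: the 10G edge link can't take it
```

In real AVB the switches do this per traffic class in hardware and signal the refusal back to the talker; the point is that a failed reservation is an explicit error rather than silent packet loss.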
Re: (Score:2)
I did, and I saw that part.
I worked on ATM networks in the past, which had resource reservation that first did not work the way anyone who used it expected, and second was turned off (well, ignored, really) in any operating network I ever saw because when push came to shove and the network had to be upgraded or oversubscribed, oversubscribed won every time.
I worked on MPLS networks, with resource reservation that had the exact same issues as the ATM networks, recreated anew with an "updated" protocol. Whil
Re: (Score:2)
I'll add that at least one major network uses IP contribution video from NFL stadiums (JPEG 2000 at 100 Mbps). Evidently it works, but they and the stadiums are on-network with the same provider.
Numbers seem VERY wrong (Score:5, Interesting)
HD-SDI uncompressed video is 1.5Gb/s. That is the standard for moving uncompressed video around inside a TV truck, whether 720p or 1080i. It rises to 3Gb/s if you're doing multiple phases of video (3D video, super slo-mo, etc). Within that 1.5Gb/s there is still more than enough headroom to embed multiple datastreams and channels of audio (8 stereo pairs is the norm; some streams do up to 16). So I fail to see why 100Gb/s is necessary to transmit uncompressed video.
It's also a chicken-and-egg scenario. I'm a broadcast engineer and audio specialist. I had Ma Bell contact me about 7 years ago asking about how important uncompressed video transmission was, as they were trying to gauge a timeframe for a network rebuild to allow for uncompressed video transmission. My answer hasn't changed much in 7 years, because although moving uncompressed video from site to (in the case of Fox) Houston and then back to your local affiliate would be nice, it's completely unnecessary because by the time it reaches your house your local cable or satellite operator has compressed your 1.5Gb/s signal down to between 4Mb/s and 10Mb/s typically, making the quality gains negligible.
It will solve one problem, which is image degradation due to multiple passes of compression. Think about it... the 1.5Gb/s leaves our TV truck and gets ASI compressed into 270Mb/s (best case scenario, satellite transmission is significantly lower bandwidth, and most networks don't use an entire 270M circuit, they use less). It then arrives at the network hub, where it gets decompressed. If it's live it then goes through several switchers and graphics boxes, then gets re-compressed to ASI and sent either to another hub or to your local affiliate. (If not live, it gets put into a server which re-compresses the video even harder before playout.) Your local affiliate then decompresses it, it passes through more switchers and graphics boxes, then it gets either broadcast using 8VSB, or it gets re-compressed and passed on to your cable or satellite provider, who then un-compresses it, processes it into MPEG or some other flavor, and re-compresses it into its final 3-12Mb/s data stream for your receiver to decompress one final time.
This would eliminate several compression steps, and mean a better final image quality because you're not recompressing compression artifacts over and over and over again. A real 1.5Gb/s video frame looks like staring out a window compared to the nastiness you see when you hit pause on your DVR during a football game (also a best-case scenario, most cable/broadcast/sat providers ramp up the bitrate to the max for live sports and then set it back down shortly thereafter).
But the 100Gb/s makes no sense to me. Are you (crazily) overcompensating for latency? Are you sending 100% redundant data for error correction? Why in the world would you need that much overhead? I can't imagine it's to send multiple video feeds; the telco companies don't want you to do that, because then you order fewer circuits from them. Plus you'd want at least two circuits anyway, in case your primary circuit goes down for some reason.
(Side note: The one benefit to a TV truck using Ethernet as a transmission medium is the fact that these circuits are bi-directional. Transmission circuits nowadays are all unidirectional, meaning you need to order more circuits if you need a return video feed, meaning higher transmission costs. The ability to send return video or even confidence return signals back down the same line would be huge for us and a big money saver.)
Re: (Score:2)
Your local affiliate then decompresses it, it passes through more switchers and graphics boxes, then it gets either broadcast using 8VSB, or it gets re-compressed and passed on to your cable or satellite provider, who then un-compresses it, processes it into MPEG or some other flavor, and re-compresses it into its final 3-12Mb/s data stream for your receiver to decompress one final time.
The FOX Broadcast Network encodes in MPEG-2 once at the uplink site, and stations use stream splicers between the local MP
Forget ethernet, get Infiniband (Score:2)
It's cheaper, faster and available today. Check out www.mellanox.com - newest dual fdr cards are especially nice.
Re: (Score:2)
I don't think this is for broadcasting to home users. This newfangled 802.1Qav protocol requires compatible hardware at every hop, and for the broadcaster to know the MAC addresses of the recipients ahead of time.
Re: (Score:2)
Unless you're broadcasting, doesn't ethernet always require you to know the MAC addresses of the recipients ahead of time?
Re: (Score:2)
and for the broadcaster to know the MAC addresses of the recipients ahead of time
Oh, jesus, are they trying to work some impossible DRM dream into an IEEE protocol?
Re:Why? (Score:5, Informative)
As well, not everybody viewing HD footage has a shitty provider, and giving providers the excuse "it comes that way" won't help anybody.
Re: (Score:2)
A summary of reasons (From the fine article):
You'd think, but BBC R&D were claiming their DiracPro boxes introduced a latency measured in lines (like 30). I never measured it myself, and they seem to have moved to Stagebox now, with AVC-i100 as the intermediate.
Re:Why? (Score:4, Insightful)
I know it isn't cool to read the headline anymore, but this is about production, not watching. Yes, a frame of latency makes a big difference when you are *inside* the studio, and need to keep things sync'd to within less than a frame so that you can do live switching without flickers or delays. If you try to take between two cameras in a live switch, and you have a few frames of latency in the encoders of the sources, and the decoder in the switcher, and the buffer in the switcher to sync the frames, etc., you can make the process of doing live television appreciably worse than it is today, which isn't something anybody would spend money on. You can only sell new gear to people if the new system isn't worse than the old.
Re: (Score:3)
You can only sell new gear to people if the new system isn't worse than the old.
Unless this new gear makes operating costs much lower. Some time ago I visited the production company that handles almost all of Dutch TV programming; these guys made the switch to an all-digital post-production system. According to them, the new (and hugely expensive) system didn't really offer any new or improved functionality, but the reduction in operational costs and time required to do post production was astounding.
Out of curiosity, what would the big deal be with such a small latency? I ca
Re: (Score:2)
You can only sell new gear to people if the new system isn't worse than the old.
the new (and hugely expensive) system didn't really offer any new or improved functionality,
which does not at all address the statement that the new system must not be worse than the old.
Out of curiosity, what would the big deal be with such a small latency?
Because it doesn't take many frames before the human eye can perceive the difference, and if you're trying to be slick you don't want any perceptible glitches. Because if you have a little latency here and a little latency there you eventually wind up with a bunch of latency.
Re: (Score:2)
Out of curiosity, what would the big deal be with such a small latency?
Because it doesn't take many frames before the human eye can perceive the difference, and if you're trying to be slick you don't want any perceptible glitches. Because if you have a little latency here and a little latency there you eventually wind up with a bunch of latency.
This didn't really answer the question. You can time stamp every frame coming in, buffer appropriately wherever you're trying to switch between viewed streams, and things will NEVER be out of sync. Never ever. Traditional A/V types get weird ideas about how things have to work. I've helped design and work with digital video storage/transmission standards, and there is absolutely no reason multiple video/audio streams should ever go out of sync.
You're never likely to be more than a frame off that you'd nee
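For what it's worth, the timestamp-and-buffer scheme described above fits in a few lines. A toy sketch (the frame objects and two-camera setup are invented for illustration):

```python
# Minimal timestamp-aligned switcher buffer: hold frames from each source,
# keyed by capture timestamp, and only release a timestamp once every
# source has delivered its frame -- so a cut can never be out of sync.
from collections import defaultdict

class AlignedBuffer:
    def __init__(self, sources: set[str]):
        self.sources = sources
        self.pending = defaultdict(dict)        # timestamp -> {source: frame}

    def push(self, source: str, timestamp: int, frame: bytes):
        self.pending[timestamp][source] = frame
        if self.sources <= self.pending[timestamp].keys():
            return self.pending.pop(timestamp)  # complete, aligned set
        return None                             # still waiting on a source

buf = AlignedBuffer({"cam1", "cam2"})
buf.push("cam1", 1001, b"...")          # None: cam2 hasn't delivered yet
print(buf.push("cam2", 1001, b"..."))   # both frames for t=1001, in sync
```

Note the trade-off the replies below seize on: the buffer guarantees sync by adding latency, since nothing is emitted until the slowest source has delivered its frame.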
Re: (Score:2)
You can time stamp every frame coming in, buffer appropriately wherever you're trying to switch between viewed streams, and things will NEVER be out of sync. Never
Right, your solution to latency is to add more latency. Their solution is to be on time.
Re: (Score:2)
You can time stamp every frame coming in, buffer appropriately wherever you're trying to switch between viewed streams, and things will NEVER be out of sync. Never
Right, your solution to latency is to add more latency. Their solution is to be on time.
The question was why does it matter, which was never answered. The GP did some hand waving about keeping in sync, which didn't answer the question. I pointed out that it didn't answer the question, and that sync isn't an issue.
Now you have managed to not answer the question, or rather say the equivalent of, "latency is important because latency". Good job.
Re: (Score:2)
Yes, a frame of latency makes a big difference when you are *inside* the studio, and need to keep things sync'd to within less than a frame so that you can do live switching without flickers or delays.
With causal differencing and a fixed Huffman, Golomb or exp-Golomb compressor, you can basically compress a pixel as soon as it arrives, at the bitrate of the source. Of course you'll almost certainly have to wait for the next pixel until the symbol for the previous one gets pushed out, but given that this is
Re:Why? (Score:5, Informative)
The latency problem I can understand, but that will be a problem regardless of compression or not.
The trouble is that the more effective codecs tend to require an entire frame before they can do any compression (so that they can compress more effectively by taking the whole frame into consideration). So if you have a series of pieces of equipment processing the video (camera, distribution, control desk(s), effects etc), then each one has to wait until it's received the last lines of a frame before it can even start sending out the first lines of that frame - so each element in the chain adds a whole frame's worth of latency. Whereas if you do it uncompressed, most equipment can start sending out the first line of a frame before it's even received the second line.
Encoding and decoding will not add that much cost compared to the network.
That's dependent on a lot of factors. 100Gbps Ethernet has the potential to reach much bigger economies of scale than broadcast-quality codec hardware (though it has a long way to go before reaching that far as yet).
Compressing/uncompressing only destroys the pic if its lossy. There are numerous lossless codecs that should do the trick and save tons of money in the process.
The trouble with lossless codecs is that they can never guarantee to make a frame smaller - mathematically there must be some frames that are uncompressible. Over the course of a long video, the codec will win on average, but when working with live streams, if you get just one frame that doesn't compress nicely (or worse, a few in succession) then your network has to be able to handle that bandwidth - so you might as well not use the compression in the first place.
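To put numbers on the frame-latency point (the four-hop chain is illustrative, not from the article):

```python
# Each frame-based codec hop must buffer a whole frame before emitting it.
def pipeline_latency_ms(fps: float, hops: int) -> float:
    return hops * 1000.0 / fps

# e.g. camera -> distribution -> control desk -> effects, at 50 fps:
print(pipeline_latency_ms(50, 4))     # 80 ms: four whole frames behind live
# A line-based (or uncompressed) chain buffers lines, not frames.
# Four hops of 8-line latency on a 1080-line, 50 fps signal:
print(4 * 8 * 1000.0 / (50 * 1080))   # ~0.6 ms
```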
Re: (Score:2)
> if you get just one frame that doesn't compress nicely ... your network has to be able to handle that bandwidth
Or, your system has to *insert* latency in the form of elastic buffers to give the stream time to "catch up." Either way, your point is valid. :)
Re: (Score:3)
Dirac Pro was made for exactly this. Latency of only 8 scanlines.
That's true, but to do that it has to sacrifice a lot on compression ratio (and to guarantee the latency you have to give up losslessness). That's great for squeezing, say, a 1080p signal into a channel designed for 1080i, but when it comes to having multiple 1080p streams from the different cameras in a studio you'll likely need the higher bandwidth Ethernet can provide anyway. And of course the potential market for fast Ethernet hardware is much bigger than for a codec that is only used within a particula
Re:Why? (Score:5, Informative)
> The intent here is to replace so much of the specialized cabling
Yup. I'm glad I work in radio, where we've been ferrying oversampled, high-quality audio over IP for some years now.
The digital switching and input assignments are a dream as well. Not that many years ago, if someone came into Engineering and said, "sorry, forgot! We have a paid ballgame going on at 4PM!" ... my assistants and I would literally grab a punch tool and some Belden wire and start frantically running cables. Many was the time we'd put something on air by literally throwing a pair across the floor with gaffer's tape. "Watch Yer Step!" :)
Nowadays, any source in our facility can be assigned to any input on any mixer in any control room. Run once, use many times. Ah, it's a beautiful thing. I can move an entire radio station from one control to another literally in a matter of minutes. It takes longer for the staff to physically grab their coffee cups and lucky charms than it does for my staff to move the signals.
My poor brethren in TV just have entirely too much data. If we'd all go back to RADIO drama, see, this wouldn't be a problem, now woodit? :D
Re: (Score:3)
Nowadays, any source in our facility can be assigned to any input on any mixer in any control room.
That's the case in well-engineered TV studios as well, but they do it a different way. They have big video/audio switchers -- think of a big panel with 100 analog inputs lined up along the left edge and 100 analog outputs along the top, with each of the 100x100 = 10,000 intersection points wired and independently activated under software control. In practice, they don't actually make switches that big; instead they use, say, 20x20 switches and then cascade them in really clever
Re: (Score:2)
Not sure where you get this. I've seen 256x256 routing switchers in the field, Grass Valley has a product with configurations up to 2048x2048.
Re: (Score:2)
Still, all of that stuff is really expensive, and the cabling required to connect every video source and every video sink to the switch is complex, expensive and just plain huge. Packet switching will make it much, much better -- when the networks can handle the data volume.
Well, huge is right, but Blackmagic do a 72-in, 144-out 1080i matrix for $15k.
Now if you're going for 1000+ sources/destinations, then yes, that's still big bucks.
Re:Why? (Score:4, Informative)
Because before compressing the video you have to move it from the camera to the editing system. The less often you compress the better the quality of the final compressed product. Once the live broadcast has been edited it will be compressed just once before delivery to the end viewer.
Re:Why? (Score:5, Informative)
Re: (Score:2)
Well, that's a failure of imagination. I'll admit that technically speaking it often is *somewhat* compressed, e.g. 4:2:2 subsampled chroma at least. But there is a massive difference between a delivery codec and a signal you're still working with. To start with, H.264 and its ilk are computationally expensive to do anything with. A single frame of 1080p is a pretty big dataset, and it's painful enough doing basic matrix transforms before adding a bunch of higher-level computations on top of that. For example, just cutting between two feeds of an inter-frame compressed codec requires that the processor decompress the GOP and recreate the missing frames. Several orders of magnitude more complicated than stopping one feed and starting another.
And generally speaking, with an uncompressed feed in a broadcast situation you're doing *something* too: switching, mixing, adding graphics, etc. But the biggest question is one of generation loss. Even one round trip through one of those codecs results in a massive drop in quality (as you rightly point out). You don't want to be compressing footage out of the cameras any more than you can help, because you KNOW that you're going to be rescaling, retiming, wiping, fading, keying etc etc etc...
H264 has vastly varying levels of compression and computational complexity. Heck, it even has lossless modes, so there is zero generational loss. And there was dedicated hardware out there years ago that could compress frames before the next frame had finished arriving. Really though, this scenario is probably better suited to one of the less complex, lower-efficiency codecs, which is what the BBC is doing with the Dirac codec. And I'd imagine that a lossy codec that retained 99.9% of detail would b
Re: (Score:2)
If you work in compressed video, then your end result will be even worse. Digital generational loss can contribute even more to the "abysmal quality of HD content", as you put it. Ideally there would be only one compression applied, and it would be before delivery to the end user. Practically, even with a fully uncompressed workflow, the best you can expect is two: one for distribution to the cable/satellite headends, and one done by the headend to fit the video signal within their bandwidth needs. Often
sour grapes (Score:5, Funny)
You're just jealous because Australia is a significant source of crappy stories, and some of them are extremely low quality.
Our crappy stories per capita ratio is truly astounding.
Hmmm. I should write an article about this. I'm sure I can get it published.
Re: (Score:3)
As an Australian author (book in sig)... Ow! (truth hurts?)
Re:Is the Network really the bottleneck? (Score:5, Interesting)
This is about live TV. Live TV is different. The infrastructure relies on point-to-point circuit switching: one video signal is sent down one coax cable. 8 cameras means 8 coax cables; with a 1km run, that's 8km of cable just for the live camera feeds to the OB truck. 100GbE means one cable. 8km of coax or fibre-optic isn't cheap, and usually requires a truck and a team of sparks to transport all those cables.
Back to caferace's conversation. It is indeed a bottleneck for content that is not live. Digitising rushes to intermediate codecs takes time: tape is usually played back at normal or double speed, output via HD-SDI from the deck, and the workstation transcodes on the fly in realtime. Tapeless workflows speed the process up, as you can import faster than 1-2x, but it still takes time to transcode. However, this slowdown is not a problem: the rushes have to be logged, and while they are converting, this logging can be done manually.
Cinematic filming has the workflow sorted to some extent. High-end cameras shoot direct to an intermediate codec, a DIT works on set and logs as the footage is shot, and the sound and continuity departments can now log electronically to the same system too. The problem at the moment is that it is not one system; many systems have to come down to one. I work in the sound department in film as an assistant. One of my responsibilities is keeping timecode correct on set: I have to go round each department three times a day* and "jam" each system (recorder, slate, camera, etc) so they are correctly in time when all the data is put together by the DIT. One day they will get unified.
Logging while shooting cannot be done for news or reality TV as everything happens too quick.
* Three times a day, because Sony can't make a $100,000 camera whose internal clock doesn't drift by +/- 2-3 frames a day.
Re: (Score:3)
This is about live TV. Live TV is different. The infrastructure relies on point-to-point circuit switching: one video signal is sent down one coax cable. 8 cameras means 8 coax cables; with a 1km run, that's 8km of cable just for the live camera feeds to the OB truck.
You run co-ax over a mile?
8 cameras is 8 cores of a single fibre cable.
One of my responsibilities is keeping timecode correct on set: I have to go round each department three times a day* and "jam" each system (recorder, slate, camera, etc) so they are correctly in time when all the data is put together by the DIT.
I guess you don't distribute B&B + VITC then?
Logging while shooting cannot be done for news or reality TV as everything happens too quick.
It certainly can, we do it all the time on important feeds using our own system wrapped around a Quantel sQ system. EVS are particularly good at the interface for logging things like sports matches too. It's essential to log feeds that come in in realtime, otherwise you may as well throw them away.
Now getting editors to log rushes in overseas offices, where there's no librarian, and always