
UK Scientists Claim 1Tbps Data Speed Via Experimental 5G Technology

Mark.JUK writes A team of scientists working at the University of Surrey in England claims to have achieved, in an experimental lab test, a performance of 1Tbps (Terabit per second) over their candidate for a future 5G mobile broadband technology. Sadly the specifics of the test are somewhat unclear, although it's claimed that the performance was delivered using 100MHz of radio spectrum over a distance of 100 metres. The team, which forms part of the UK Government's 5G Innovation Centre, is supported by most of the country's major mobile operators, as well as BT, Samsung, Fujitsu, Huawei, the BBC and various other big names in telecoms, media and mobile infrastructure. Apparently the plan is to take the technology outside the lab for testing between 2016 and 2017, followed by a public demo in early 2018. In the meantime, 5G solutions are still being developed by various teams around the world, most of them in the early experimental stages. Few anticipate a commercial deployment before 2020, and we're still a long way from even defining the necessary standard.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday February 24, 2015 @10:56AM (#49119343)
    Comment removed based on user account deletion
    • by Anonymous Coward

Or $1.5-2M in roaming fees in a second. But let's say it takes up to an hour for you to get cut off: now you have a $1B+ data bill. How are you going to pay that off?

• 1 Tbps = 1e12 bit/s
  2 GB = 2 * 2^30 bytes * 8 bits/byte = 2^34 bits
  2^34 / 1e12 ~= 0.017 sec
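A quick Python sanity check of that arithmetic:

    cap_bits = 2 * 2**30 * 8     # a 2 GB (GiB) cap expressed in bits: 2^34
    link_bps = 1e12              # 1 Tbps
    print(cap_bits / link_bps)   # ~0.017 s to blow through the cap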
    • Please. Once Verizon gets done lobbying for a redefinition of "5G", you'd be lucky to see a 10% increase in bandwidth from 4G.

• That's a funny post, but it misses the real point, which is that speeds like that over mobile networks can compete with traditional land-based ISP speeds. These are some of the first hints at a massive shift in how consumers will access the internet and how ISPs will operate in the not-so-distant future. Last month Verizon quietly announced that they weren't going to lay any more fiber optic cable and are selling some fiber networks to third parties because their wireless networks were much more profitable, which is

• Any news out there on what kind of modulation is used? TFA doesn't state much apart from 'MIMO'
    • Comment removed based on user account deletion
  • by __aaclcg7560 ( 824291 ) on Tuesday February 24, 2015 @10:59AM (#49119381)

    You can deliver more wireless bandwidth to users. Are you willing to pay big bucks to upgrade the backend equipment (i.e., routers and switches) for more bandwidth?

    *crickets*

• Hell, certain "High Speed Internet" providers aren't even willing to run a 10Gb fiber from one rack to another to help their users get content faster.

      http://qz.com/256586/the-insid... [qz.com]

I remember seeing an interview with someone at Netflix who basically said "Comcast has the bandwidth to carry all Netflix traffic, without issue. Netflix has the bandwidth to carry all the traffic requested by Comcast customers to Comcast, without issue. We have the capacity, they have the capacity, and if they need netw

    • Also, 100 MHz is *a lot* of spectrum to allocate to a single client, given the amount of spectrum that's currently available. They'd have to free up a lot of old spectrum that is used for obsolete stuff like 2G voice and 3G data, so that they could repurpose the spectrum for 5G. The only way they'd be able to pull this off, realistically, would be to increase tower density. 100 MHz is just too much to ask. Typical LTE bands have 1.4 MHz to 20 MHz allocated to a given LTE client; this increases that by a fac

• Divide that 100MHz channel into 1000 1ms time slots (or 10,000 100-microsecond time slots and allow 1000 users 10 apiece, or keep dividing into smaller slices as the technology allows, to reduce latency) and provide 1Gbps to 1000 users. Or 10,000 1ms slots, to provide 100Mbps to 10,000 users (the sketch below works through the arithmetic).

At the 1Gbps-per-user level, that's a little less capacity (without overselling) than current towers, considerably more if oversold at current rates. At the 100Mbps level, that's insanely better coverage in high-populati
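A minimal sketch of that time-slot sharing arithmetic, assuming the full 1Tbps is split into equal slots with no scheduling overhead:

    link_bps = 1e12   # aggregate air-interface rate claimed in TFA

    def per_user_bps(users):
        # pure time-division: each user gets an equal share of the air time
        return link_bps / users

    print(per_user_bps(1000))    # 1e9 -> 1Gbps each for 1000 users
    print(per_user_bps(10000))   # 1e8 -> 100Mbps each for 10,000 users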
  • by goruka ( 1721094 ) on Tuesday February 24, 2015 @11:06AM (#49119429)
    I'm not an electrical engineer or anything close, but I live in a developing country and notice that the biggest problem here is not 3G or LTE speed (which just works fine everywhere) but that when a zone gets a little crowded, even if the signal strength is high, connectivity drops to E and stops working.

Is the problem that the specification does not allow more than a certain number of frequencies per antenna, and more are needed? As in, if it's so easy to saturate an antenna, shouldn't the extra frequencies, speed and bandwidth first be used to allow more connections instead?
    • by mlts ( 1038732 )

Ideally, it should do both. One device would have an extremely large amount of bandwidth to play with if alone in range of the tower, but as more devices get handed off to the tower there is less bandwidth per device; all devices get some level of service until a threshold is reached where the tower cannot accept any more, and even EDGE or GPRS speeds cannot be maintained. This is especially important at sporting events or SXSW, where there are tens to hundreds of thousands of people in one space. A

    • by tlhIngan ( 30335 )

      I'm not an electrical engineer or anything close, but I live in a developing country and notice that the biggest problem here is not 3G or LTE speed (which just works fine everywhere) but that when a zone gets a little crowded, even if the signal strength is high, connectivity drops to E and stops working.

Is the problem that the specification does not allow more than a certain number of frequencies per antenna, and more are needed? As in, if it's so easy to saturate an antenna, shouldn't the extra

• It already is multiplexed, via multiple access schemes. You typically see 3 antenna sets arrayed on a cell phone tower. Each of those typically operates at a different frequency set so that they don't interfere. Then within each of those coverage areas you're typically multiplexed via TDMA, i.e. you're given time slices in which to communicate. You can only dice up time so finely before either your calls get choppy or not everyone in the cell can stay synchronized enough to communicate effectively.
  • A summary that uses "bandwidth" in its correct, technical meaning? Heresy!
  • This will be a breakthrough when they can get their desired 5G speeds at 15 kilometers, or greater distances. Until then it's only PR.
    • Um, why? The vast majority of cellsites deployed today cover areas with a radius of MUCH less than 15km...

• I live in a city, and can go from full bars to no coverage in about 1/2 mile (~1km). There are notorious dead zones in the middle of the city, because the city regulates cell towers, making cell service unusable in large swaths of town.

        Yeah, it is that bad.

    • by fisted ( 2295862 )
I don't think I would want to be anywhere near that transmitter...
    • Re:Only 100 meters (Score:4, Informative)

      by phoenix_rizzen ( 256998 ) on Tuesday February 24, 2015 @01:17PM (#49120459)

Only 100 metres, and using 100 MHz of spectrum. Most carriers in North America are lucky to have 10-20 MHz of contiguous spectrum, and maybe 40 MHz total usable spectrum in a specific area. Good luck finding 100 MHz of spectrum to use anywhere other than lab conditions.

      Would be nice if they worked on increasing the number of bits that can be transferred per MHz of spectrum, instead of increasing the amount of spectrum required to send the bits.

• When comparing bandwidth it's also important to note what carrier frequency we're talking about; there's vastly more absolute bandwidth available at higher frequencies (i.e. only 100MHz between 2.35 and 2.45GHz, but 100GHz between 2.35 and 2.45THz).

  • Would someone please explain how you get 1Tbps of data through just 100MHz of bandwidth?

I just found a (not very credible) reference on the Internet claiming that the amount of data you can transfer is limited by the width of your available spectrum. I.e., you'd have the same data transfer capability using 0 to 100MHz as using 1GHz to 1.1GHz.

    So how do you get more than 100Mbit through 100MHz of bandwidth?

    • I think this busts the physics, unless I misunderstand completely. Paging Dr. Shannon...

      • Re:Mod Parent Up (Score:5, Informative)

        by serviscope_minor ( 664417 ) on Tuesday February 24, 2015 @01:10PM (#49120401) Journal

        I think this busts the physics, unless I misunderstand completely. Paging Dr. Shannon...

        Nope.

        Think about baseband for a moment.

Let's say you have a bandwidth of 100MHz.

        You can basically change from 0v to 1v 100e6 times per second, giving 100Mbit/s.

But you can also introduce more symbols. If you use 1024 voltage levels between 0 and 1V, each symbol carries 10 bits and you get 1Gbit/s.

What limits the number of symbols is noise. The data rate is symbol rate * bits per symbol. In the absence of noise, you can transmit an infinite amount of data in a 1Hz channel.

        For non baseband signals, they generally use QAM to get symbols spanning the whole phase space around the centre frequency.
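A minimal sketch of that relationship (data rate = symbol rate * bits per symbol), assuming one symbol per hertz of baseband bandwidth:

    import math

    symbol_rate = 100e6   # 100MHz baseband -> 1e8 symbols/s

    def data_rate_bps(levels):
        # a symbol distinguishing `levels` voltage levels carries log2(levels) bits
        return symbol_rate * math.log2(levels)

    print(data_rate_bps(2))      # 1 bit/symbol   -> 100Mbit/s
    print(data_rate_bps(1024))   # 10 bits/symbol -> 1Gbit/s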

      • by ledow ( 319597 )

Cat5 cable is only rated for 100MHz signals, but you can run Gigabit Ethernet over it.

        The number of bits sent does not have to be less than the frequency of the carrier (or even half that).

        Phase, amplitude, frequency-modulation, plus others, all combined allow you to get a lot more out of the signal than merely the carrier frequency rate.

Otherwise your old 56Kb/s modem would never have got to that speed, your DSL modem wouldn't come close, your wifi would be nothing more than a radio modem, etc.

        Has

        • Well, if I understand GB Ethernet (with which I've wired my home, to ease passing MPEG-2 OTA TV streams around), it moves from one twisted pair to four, at the 100Mbit clock rate, and so approximates 1Gbps, though doesn't quite equal it.

          So not a like-for-like comparison. While the summary doesn't say much, the other provided explanation (multiple spatial paths) seems something like GB EN, in that there are multiple channels in which the information is transmitted.

          Hard for me to see how you cram a Terabit do

          • by ledow ( 319597 )

There are four twisted pairs. Assume they are 100MHz each. That's only 400MHz (800 if you think the other wire of each pair does anything (*)). Yet you push 1000Mbits a second over it (and, yes, that's the actual speed).

How? PAM, QAM, and a bunch of other tricks - the assumption that you need an entire cycle/wavelength in order to encode a single bit of information just isn't true. (The arithmetic is sketched below.)

            (*) it doesn't - the other half of the pair allows you to subtract interference received along the same route by an equal
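The 1000BASE-T arithmetic, sketched (four pairs at 125 Mbaud with PAM-5 coding, which nets roughly 2 data bits per symbol per pair):

    pairs = 4
    baud_per_pair = 125e6   # symbols per second on each pair
    bits_per_symbol = 2     # 4D-PAM5: 5 levels, ~2 data bits/symbol after coding
    print(pairs * baud_per_pair * bits_per_symbol)   # 1e9 -> 1Gbit/s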

    • by Anonymous Coward
      Using multiple beams formed by an array of antennas. No single user gets all that data bandwidth, but the total data bandwidth can be directed in multiple directions, using spatial structure to increase the total data rate with the same radio bandwidth.
      • by davidwr ( 791652 )

        Using multiple beams formed by an array of antennas. No single user gets all that data bandwidth, but the total data bandwidth can be directed in multiple directions, using spatial structure to increase the total data rate with the same radio bandwidth.

        This. Very much this.

        Even so, a 1:10000 ratio of bandwidth to bit-rate is noteworthy.

I don't see this being practical except in either 1-box-to-many-boxes "point-to-point" situations or in situations where reflection and other channel characteristics are either very predictable or ascertainable in real time. The former is uncommon and the latter is hard.

        A case of 1-to-many might be in a home or office where a single node is connecting to many fixed-point antennas in an "Internet of things" environment or even t
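A rough sketch of how spatial streams multiply capacity. The idealized per-stream Shannon formula, the stream counts and the 30dB SNR below are illustrative assumptions, not numbers from TFA:

    import math

    def mimo_capacity_bps(bandwidth_hz, snr_linear, streams):
        # idealized: each independent spatial stream acts as a full Shannon channel
        return streams * bandwidth_hz * math.log2(1 + snr_linear)

    for streams in (1, 8, 64):
        gbps = mimo_capacity_bps(100e6, 1000, streams) / 1e9   # 100MHz, 30dB SNR
        print(streams, "streams:", round(gbps, 1), "Gbit/s")

Even 64 ideal streams at 30dB fall well short of 1Tbps, which is why the claimed figure raises eyebrows.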

• It would be a huge leap. LTE Advanced (i.e. 4G) could, in theory, get to around 15 bit/s/Hz (current LTE is around 4), but this is more like 10,000 bit/s/Hz.
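For reference, the spectral efficiency the claim implies:

    print(1e12 / 100e6)   # 10,000 bit/s/Hz claimed, vs ~4-15 for LTE/LTE-A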

    • by Anonymous Coward

      There's no 1:1 relationship between Hz of bandwidth and the bits per second you squeeze through. The available signal to noise ratio is the other important factor. The good old analog modems managed 28.8 kbps in a 4kHz bandwidth channel. Check http://en.wikipedia.org/wiki/Quadrature_amplitude_modulation or http://en.wikipedia.org/wiki/Phase-shift_keying

    • by Anonymous Coward

      You can get as much data as you want through any bandwidth (theoretically anyway). What kills the ability to transfer data is when noise in the channel makes it impossible to determine if you are seeing data or noise. The Shannon theorem covers this. But this only specifies a limit to the amount of data you can transfer given a certain SNR.

Nothing says you can't find ways to reduce the noise (better amplifiers, antennas, etc.) or simply increase the signal power. Doing this in a practical way is the challe

• Take a look here: http://en.wikipedia.org/wiki/S... [wikipedia.org] 100MHz of bandwidth only equals 100Mbit/s of throughput if signal power = noise power. Working backwards implies the SNR from the experiment would be about 30,000 dB. I'm not sure if that is a reasonable number or not, but it sounds absurdly high to me. My work: 10 * math.log10(2**(1e12/1e8) - 1)
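The same working, written out (Shannon: C = B * log2(1 + SNR), solved for the SNR needed to push 1Tbps through 100MHz on a single channel):

    import math

    B = 100e6              # Hz of spectrum
    C = 1e12               # claimed bit/s

    bits_per_hz = C / B    # 10,000 bit/s/Hz
    # SNR = 2^(C/B) - 1, which in dB is ~ 10 * (C/B) * log10(2)
    snr_db = 10 * bits_per_hz * math.log10(2)
    print(snr_db)          # ~30,103 dB -- hopeless for a single channel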
Shannon's theorem is true for a single channel. Eventually, cramming more bits into one channel becomes power-prohibitive, because one must roughly double power for each new bit added. The benefits from adding power diminish even further when a system is widely deployed, as power from one system shows up as noise in the next, and SNR in all systems hits an interference limit.

To get around these limits, two related techniques are used:
        1) adding more antennas, to create more channels which are separated in

• You make multiple measurements, and you make them more fine-grained. Originally you had on/off keying (AM modulation): on = 1, off = 0. You had FM modulation, where +freq = 1, -freq = 0. It's easy to see how to make either of those better -- for on/off keying, simple amplitude modulation: full power = 11, 2/3 power = 10, 1/3 power = 01, off = 00. Boom, double the bit rate in the same amount of bandwidth (technically, potentially a little bit less if you do things right). You can see how yo
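A toy version of that mapping, with four evenly spaced amplitude levels carrying two bits per symbol (the exact levels are an illustrative assumption):

    # 4-level amplitude keying: two bits per symbol
    LEVELS = {'11': 1.0, '10': 2/3, '01': 1/3, '00': 0.0}

    def modulate(bits):
        # the same payload now needs half as many symbols as on/off keying
        return [LEVELS[bits[i:i+2]] for i in range(0, len(bits), 2)]

    print(modulate('11011000'))   # [1.0, 0.333..., 0.666..., 0.0]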
• With MIMO, you can have more than one set of signals without them completely interfering.

  • by Anonymous Coward

    Wired 10Gbit ethernet is still not affordable for home use.

  • With this we can blow through our 2GB wifi data limits in a fraction of the time.

• We could leech a whole year's worth of movies in 1080p, all the series, all the ebooks, in under a minute.
  Way to go.

• Watching the rest of the world move past is not a fun thing to do.
  • ... now I'll be able to burn through my allotted bandwidth in 1 second.

    That's just excellent news.

• Companies like Verizon think bandwidth is scarce and charge crazy overage fees if you even think of going over. I can't even math that well, but it seems that at 100% saturation you would use up a 4GB data plan, plus another 121GB in overages, in 1 second. At their current rate of $15 per GB, that first second would cost you a little over $1800.
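Checking that arithmetic (the 4GB plan and $15/GB rate are the poster's figures):

    gb_per_sec = 1e12 / 8 / 1e9         # 1Tbps = 125 GB/s
    plan_gb = 4
    overage_gb = gb_per_sec - plan_gb   # 121GB of overage in one second
    print(overage_gb * 15)              # 1815.0 -- a little over $1800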
