Linux

Linus Torvalds: Linux 5.8 "One of our Biggest Releases of All Time" (techrepublic.com)

This week saw the release of Linux 5.8, which Linus Torvalds called "one of our biggest releases of all time," reports TechRepublic: The new version of the Linux kernel brings a number of updates over Linux 5.7 spanning security, core components, drivers, memory management, networking, and improvements to the kernel's design, among other areas. This includes updates for Microsoft's Hyper-V virtualization platform, Intel Tiger Lake Thunderbolt support, improvements to Microsoft's exFAT file system, and support for newer Intel and ARM chips.

Torvalds said the kernel had received over 15,000 merge requests and that around 20% of all the files in the kernel source repository had been modified. "That's really a fairly big percentage, and while some of it is scripted, on the whole it's really just the same pattern: 5.8 has simply seen a lot of development," Torvalds said.

Translated into numbers, Linux 5.8 includes over 800,000 new lines and over 14,000 changed files. It also saw one of the largest numbers of commits ever during its merge window — over 14,000 non-merge commits and more than 15,000 including merges, according to Torvalds. "5.8 looks big. Really big," he added.
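For the curious, the figures Torvalds quotes can be reproduced from the kernel's own git history. A rough sketch, assuming a local clone of the mainline tree (the path `linux` is illustrative) with the v5.7 and v5.8 tags fetched:

```shell
# Non-merge commits between v5.7 and v5.8 (Torvalds quoted 14,000+)
git -C linux rev-list --no-merges --count v5.7..v5.8

# All commits including merges (15,000+)
git -C linux rev-list --count v5.7..v5.8

# Number of files touched in the release
git -C linux diff --name-only v5.7 v5.8 | wc -l

# Added/removed line counts for the whole release
git -C linux diff --shortstat v5.7 v5.8
```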

  • not bad.... (Score:5, Insightful)

    by i.r.id10t ( 595143 ) on Sunday August 09, 2020 @04:04PM (#60383617)

    for something that was "just a hobby, won't be big and professional like gnu"

    As a non-kernel coding end user, I just gotta say "THANKS!" to Linus and the hundreds and thousands of others that make this all possible.

A story was posted here only recently about 2019 seeing a drop in kernel commits.

    • by antdude ( 79039 )

Is it because of COVID-19/2019-nCoV/SARS-CoV-2, since more people are staying indoors and not going out? I saw MAME got big updates recently.

      • Is it because of COVID-19/2019-nCoV/SARS-CoV-2 since more people are staying indoor and not going out?

These are Linux geeks; how would you tell the difference between a Covid and a non-Covid year?

  • by MachineShedFred ( 621896 ) on Sunday August 09, 2020 @04:48PM (#60383765) Journal

It would be nice if the Thunderbolt changes made an eGPU work anywhere near as well as it does in Windows on the same machine. Granted, it's probably not a problem with the kernel specifically, but more a problem in the display manager; still, using an eGPU has massive performance problems in Linux, whereas it works like a dream in Windows 10.

    I'd love to just stay in Ubuntu and not boot into Windows, and this is the one thing holding me back.

Just install the latest kernel. It is pretty trivial on Linux if you have a real need; a quick Google search will show you how. You could have installed it faster than making your post.
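For what it's worth, on Ubuntu the usual route is the kernel team's mainline build archive. A sketch, with the caveat that exact package filenames vary per build, so check the directory listing first:

```shell
# Download the amd64 .deb packages for v5.8 from
#   https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.8/
# (the headers, modules, and unsigned image packages).

# Then install them together and reboot into the new kernel:
sudo dpkg -i linux-headers-*.deb linux-modules-*.deb linux-image-*.deb
sudo reboot
```

Mainline builds are unsupported test kernels, so keep the distro kernel installed as a fallback boot entry.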

  • How to feel old (Score:3, Interesting)

    by inode_buddha ( 576844 ) on Sunday August 09, 2020 @04:55PM (#60383785) Journal

When I first got into open source (around 1996) IIRC Slackware shipped with 1.7x kernels. I remember when the 2.x series started, with SMP support -- it was a huge deal. Back then, I had an entire x86 mini-distro on 2 floppies which would uncompress into ~5 MB and do everything in RAM (see the Bootdisk-HOWTO).

    • I know I'll lose this contest to some real old-timer, but when I started with Slackware, it was Slack 2.0 and the included kernel was 1.1.47. :)

I remember running 1.2.x and thinking that 1.3.x was for braver souls than me. And with 2.0.x I was like... why would I need this? We just switched to ELF in 1.2; what other nonsense will I have to endure?

    • by MagicMike ( 7992 )

I definitely remember FTP-ing Slackware 1.44 MB floppy images, and being really, really happy when, as a starving college student, I could both afford something that could use the SMP kernels and know that the feature existed. Linux working well meant you didn't have to go to the freezing cinderblock box that was the computer lab at the time in order to do Unix comp sci homework. The good ol' days

      Anyone remember hacking your own XF86Config modelines to overclock your monitor and get a few more characters in? You know, back when we

      • You mean trying to get it to run with more than 16 colours and the funky colourmap switching? Yup, remember that...
But compared to a Windows 3.1 machine, with 8 MB of RAM it would fly. Heck, even compared to a SPARC IPC with 12 it would...

    • by inflex ( 123318 )

      1.2.13 was a great kernel. Held on to that a long time, then 2.0.39 became my next stable hop, but never was a fan of 2.2 or 2.4.

    • I remember when the 2.x series started, with SMP support

      I remember thinking "Who can afford 2 processors!"

  • I want to see a "there is no step 3" simplicity in making USB to USB connections between computers. Apple figured this out, though perhaps they "cheat" by relying on Thunderbolt as the underlying protocol. Even so every Apple laptop with a USB-C port can go into "target disk mode" and play like a USB drive to another computer. That's not in the OS but in the firmware, available by booting into this mode instead of loading an OS from the drive. I suspect Apple wasn't too concerned about USB to USB networking in their OS because they "inherited" that from Thunderbolt with USB-C, prior systems with USB-A still had FireWire and Thunderbolt 2 for this (and Ethernet of course), and making this work cross platform likely doesn't rank all that high for them.

I'll see mention of people using USB "gadgets", where a computer running Linux can emulate an Ethernet or serial connection over USB. This is mostly with embedded systems using older USB 2 device-side controllers. With USB 3.x and USB-C this doesn't take any "special" hardware, since almost every laptop since 2015 that has a USB-C port uses the same Intel controller chip as Apple uses. All it should take is some GUI to make loading and configuring these "gadgets" easier, so a laptop running Linux can pretend it has an Ethernet adapter inside and a simple USB-C to USB-C cable can create a network to another computer. Since it emulates a standard USB Ethernet adapter, it should work to network with a computer running Windows or macOS just as well as it would with Linux.

I've been looking for this feature to be supported for a long time. It's been in the USB spec since 2010 or so. Apple and Intel did the work on making the hardware, and it's nearly ubiquitous now. Where's my USB to USB networking in Linux?

    Oh, and don't tell me I can "just" use two Ethernet adapters back to back, or some $50 cable with a chip in the middle. This is something that the USB hardware supports, the USB spec allows, and would be incredibly useful every day. This is a software problem that should have been solved a decade ago.
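For reference, the "gadget" side described above is driven through configfs on kernels built with libcomposite. A minimal CDC-ECM sketch follows; the paths are the standard gadget configfs layout, but it only works on a controller with device-mode (UDC) support, such as dwc3 on embedded boards — most PC-side xHCI ports are host-only, which is arguably the real gap here:

```shell
# Minimal CDC-ECM ethernet gadget via configfs (run as root).
modprobe libcomposite
cd /sys/kernel/config/usb_gadget
mkdir -p g1 && cd g1
echo 0x1d6b > idVendor          # Linux Foundation
echo 0x0104 > idProduct         # Multifunction composite gadget
mkdir -p strings/0x409
echo "usb-net" > strings/0x409/product
mkdir -p functions/ecm.usb0     # CDC-ECM ethernet function
mkdir -p configs/c.1
ln -s functions/ecm.usb0 configs/c.1/
# Bind the gadget to the first available device controller
ls /sys/class/udc | head -n1 > UDC
# A usb0 interface now exists on the gadget side; address it
ip addr add 10.0.0.1/24 dev usb0
ip link set usb0 up
```

The host side sees a standard USB Ethernet adapter and needs no special driver, which is the point the parent is making.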

If I had to hook up two computers, let alone two Linux computers, I would never have thought to use a USB cable. My first thought would be to use wireless networking if it was available on both, and failing that, a network cable with a hub. Almost always it's going to be faster than USB, unless you have cheap hardware that pairs USB-C with only 100 Mb ethernet. Most likely if a computer has USB-C it's going to have 1 Gb ethernet, or at least that's been my experience.

With USB 3 you need a cross-cable (not the one with a chip in the middle, but per the USB 3 standard), something virtually nobody knows exists and no nearby shop stocks: https://www.datapro.net/produc... [datapro.net]. So that's a no-go compared with the availability of thumbdrives and wifi (and even ethernet cables).

Now I don't know what the situation is with USB-C (presumably better in theory), but given the power situation, where except for some Android phones you have no software support to tell which device charges

    • by AmiMoJo ( 196126 )

Can't be done with USB 3 because it is still single-host, multiple-device. However USB 4, which includes Thunderbolt, will do it.

Thing is, how useful is it actually? Target disk mode is only really needed if the machine won't boot, you can't just stick a recovery USB drive in, and the disk can't be removed (soldered to the motherboard). Only Apple computers really have that problem, due to their security chips and lack of key-recovery options.

      • by MachineShedFred ( 621896 ) on Monday August 10, 2020 @09:00AM (#60385377) Journal

        It's incredibly useful if you get a new laptop and want to transfer all your crap quickly and easily, which depending on what you do and what you use the laptop for, can be hundreds of gigabytes of stuff. It takes ages over wireless to do that, but mere minutes with Thunderbolt Target Mode (or FireWire Target Mode back in the day).

        Also, back in the day I set up high availability between two servers using FireWire networking as the heartbeat link - a small cable connecting two otherwise unused ports between two rackmount servers in a rack, creating a second ~800Mbps link to allow software to tell if there is a node failure or an ethernet NIC / Switch failure.

        The other thing I've used it for, is putting a laptop in target mode, and then booting a desktop from the laptop disk - this way when you are at your desk, you have full desktop horsepower from the same disk as you move around with when on the go, without having to deal with synchronization, roaming profiles, or any other mess of that nature - just power off the desktop, unplug the laptop, and press the power button on the laptop and walk away with it. Note, there will be a bit more disk latency than if using an internal SSD, but not enough that you would notice.

        You can also use that for easily imaging that laptop - target mode, plug in, then use your favorite disk utility to create an image of the offline disk.

Also, since when can you not use a USB recovery drive with a Mac? That's been a thing since the second-gen MacBook Air shipped with an 8GB USB recovery key in the box. Apple even has documentation [apple.com] about how to make a USB recovery drive, and includes scripts within their OS installers to do the heavy lifting. I have no idea what the fuck you are referring to with "security chips and lack of key recovery options", since you can also boot and install the OS directly over the Internet. You cannot do that with literally any other system unless you set up your own TFTP host and create a PXE boot image of some kind, or use Intel vPro to remote-mount an ISO image at the BIOS level... but we all hate vPro around here, and it has to be built into the system by the OEM and provisioned properly, so that's probably not an option for many people.

        • by AmiMoJo ( 196126 )

          Those are actually some quite interesting use-cases.

The issue with Macs is that now that they have that security chip, the soldered-down storage is encrypted. The only way to recover it is if the security chip is working. On older ones you could connect to a special header on the motherboard to get the data, but not any more.

          On PCs with encrypted storage there are recovery keys, Macs don't seem to have them.

The couple of times I had to connect two Linux computers (mostly a Mint dev machine and some Debian-derivative-running Raspberry Pi clone), I plugged in some crossover ethernet cable I have lying around (since Windows XP days; a straight cable also works nowadays), set up the eth adapters, and fired up scp, ssh or whatever. Hey, Nemo even has a "Connect to server" option, supporting various protocols, to make it look and work like just another mounted volume - so you can, say, open a spreadsheet file directly i
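For anyone wanting to replicate that setup, the direct-link configuration is just a static address on each end. A sketch using iproute2; the interface name and the 192.168.50.0/24 subnet are examples, so adjust to what `ip link` shows:

```shell
# On machine A:
sudo ip addr add 192.168.50.1/24 dev eth0
sudo ip link set eth0 up

# On machine B:
sudo ip addr add 192.168.50.2/24 dev eth0
sudo ip link set eth0 up

# Then copy files directly, e.g. from A:
scp bigfile.tar user@192.168.50.2:/tmp/
```

No hub or DHCP server is needed; modern NICs auto-negotiate the crossover (Auto MDI-X), which is why a straight cable works too.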

The length of USB cables for reliable and speedy data transfer is pretty limited, while UTP runs over Cat 5e cable can reach a maximum of 100 meters.

That is a big difference. Also, ethernet is generic hardware that has been designed from the ground up to communicate fast and reliably at relatively low cost... and yet you want to use USB instead?

  • 20% of all the files in the kernel source repository had been modified.

that's a lot of testing I hope was done.

    • If that is the case, I would suggest this is a good point to bump the version up to 6.0.

Probably 18% of those file touches were regex replacements of "bad" words with the new "good" replacement words.

      Not a lot of risk to those changes, but they still create commits.

  • ...I asked to myself: why is Trump tweeting about Linux ?!?
  • I'm still waiting for an update for my version of Yggdrasil Linux. It's been rock solid since 1992, so I've been hesitant to do any kernel updates, but this might be the one I finally pull the trigger for.

  • by AmazinglySmooth ( 1668735 ) on Monday August 10, 2020 @01:06PM (#60386335)
    Finally, the year of the Linux desktop!
  • Well, maybe it's not so bad. Could be a lot of them are not functionality changes, but just changes to replace words now considered inappropriate. :-)
