OpenBSD Will Get Unique Kernels On Each Reboot (bleepingcomputer.com)
An anonymous reader quotes a report from Bleeping Computer: A new feature added in test snapshots for the upcoming OpenBSD 6.2 release will create a unique kernel every time an OpenBSD user reboots or upgrades their computer. The feature is named KARL -- Kernel Address Randomized Link -- and works by relinking internal kernel files in a random order, generating a unique kernel binary blob every time. Currently, for stable releases, the OpenBSD kernel links and loads its internal files in a predefined order, resulting in the same kernel binary for all users. Developed by Theo de Raadt, KARL generates a new kernel binary at install, upgrade, and boot time. When the user boots, upgrades, or reboots the machine, the most recently generated kernel replaces the existing kernel binary, and the OS generates a new kernel binary to be used on the next boot/upgrade/reboot, constantly rotating kernels. KARL should not be confused with ASLR -- Address Space Layout Randomization -- a technique that randomizes the memory address where application code is executed, so exploits can't target a specific area of memory where an application or the kernel is known to run. A similar technique, called KASLR, randomizes the memory location where the kernel loads. The difference between the two is that KARL loads a different kernel binary in the same place, while KASLR loads the same binary at random locations. Currently, Linux and Windows support only KASLR.
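For the curious, the core mechanism can be sketched in a few lines: shuffle the kernel's object files before handing them to the linker. Here is a minimal illustration in Python -- the object names and the linker invocation are hypothetical stand-ins, not OpenBSD's actual build script:

    import random
    import shlex

    # Hypothetical kernel object files; the real kernel links hundreds of them.
    objects = ["locore.o", "init_main.o", "kern_fork.o", "vfs_bio.o", "uipc_socket.o"]

    def karl_style_link_command(objs):
        """Build a link command with the object files in a fresh random order."""
        shuffled = objs[:]        # never mutate the canonical list
        random.shuffle(shuffled)  # a new permutation for every relink
        return "ld -o bsd.new " + " ".join(shlex.quote(o) for o in shuffled)

    # Each call yields a differently laid-out (but functionally identical) kernel.
    print(karl_style_link_command(objects))
    print(karl_style_link_command(objects))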
Effects on overall speed? (Score:5, Interesting)
Re:Effects on overall speed? (Score:5, Informative)
Re:Effects on overall speed? (Score:5, Insightful)
My concern about the technology is that it can also cause non-deterministic behavior on the platform, making it hard to capture elusive bugs. This means you would need a way to load a kernel that is mapped identically to the previous one when you do your testing and development.
Bugs that appear only under a certain constellation and load order sometimes waste weeks of work.
Re:Effects on overall speed? (Score:2)
If the bugs are already there, they are bound to appear some day. Maybe even with the next kernel version.
Re:Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:2)
Actually, with a randomized kernel that kind of bug will surface much sooner, so it will get fixed early instead of lying dormant for years, possibly being exploited by black-hat hackers in the meantime.
Re:Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:5, Interesting)
This is certainly true (and I'd assume kernel devs might run with this turned off, or keep some kind of historical log to track what state their kernel was in when a bug hit), but it's equally true that exercising things in this way could reveal bugs that were otherwise exceedingly rare, leading to better overall code quality.
In practice, though, I would think that read-write data structures are more likely to exhibit this kind of problem, not the read-only code. There are certainly edge cases (timing changes, etc.), but it's not as if an off-by-one error is going to affect you the way it might with data.
Re:Effects on overall speed? (Score:2)
I'd make the order depend on hash(filename + seed). That way, kernel #13 will always be the same, and patched versions of kernel #13 will be similar. On regular builds, the seed would be a long (non-brute-forceable) random string that is still saved with the debug info, so you can reproduce the kernel you are running.
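A minimal sketch of that scheme, assuming SHA-256 as the hash (the filenames and seeds here are made up):

    import hashlib

    def link_order(filenames, seed):
        """Deterministic 'random' order: sort by hash(filename + seed)."""
        return sorted(filenames,
                      key=lambda f: hashlib.sha256((f + seed).encode()).digest())

    objs = ["a.o", "b.o", "c.o", "d.o"]
    # Same seed -> same layout, so "kernel #13" is exactly reproducible...
    print(link_order(objs, seed="kernel-13-secret-seed"))
    # ...while a different seed gives a different layout.
    print(link_order(objs, seed="kernel-14-secret-seed"))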
Re: Effects on overall speed? (Score:1)
Re: Effects on overall speed? (Score:3)
If the attacker can read root-only files, you've already lost. And you can opt to have no debug info.
Re:Effects on overall speed? (Score:2, Interesting)
My concern about the technology is that it can also cause non-deterministic behavior on the platform, making it hard to capture elusive bugs. This means you would need a way to load a kernel that is mapped identically to the previous one when you do your testing and development.
Bugs that appear only under a certain constellation and load order sometimes waste weeks of work.
This kind of thing might also expose bugs that were not exposed before, and that, in itself, is a good thing. Of course, they would likely be very hard bugs to find...
Re:Effects on overall speed? (Score:3, Interesting)
My concern about the technology is that it can also cause non-deterministic behavior on the platform, making it hard to capture elusive bugs.
Or they could simply not fix the bugs, which takes care of that. True story here: I work for a Fortune 500 company in their IT department, and I joined them when my previous employer, a startup, was bought out by my current employer. A guy in my management chain was a huge OpenBSD fan, so he made us run some servers using it. Nothing in production used it, but we had some test systems that did. I know, I know -- running something in test that isn't like production... So anyway, he loved OpenBSD with a passion, and we couldn't get rid of it while he was there, even though it wasn't used in production at all. We ended up finding a gigantic bug that made OpenBSD's kernel panic, and we could make it panic at will. It was caused by a specific piece of hardware in our servers that ran OpenBSD. We and others told Theo, and he admitted it was a real problem, but he said he wasn't going to fix it because too few people had this specific hardware. So manager dude finally admitted we couldn't go on like this, because not only could we crash the servers at will, they crashed a lot on their own if we did nothing. So we replaced all our OpenBSD installations with Red Hat and never went back to OpenBSD.
Re:Effects on overall speed? (Score:3)
You filed a bug report, right? Where is that? (you didn't really provide enough info so that it could be found, even if it existed - like what hardware caused the bug?)
Re:Effects on overall speed? (Score:3)
Re:Effects on overall speed? (Score:2)
And Theo still cared not after that. :P
Non-production targets are good (Score:2)
There is little cost in going from one POSIX platform to another, and targeting something from the Linux camp plus something from the BSD camp can be helpful. Again, note I said "mix of test platforms". Of course the target platform should be the main test platform, but a non-target should get some attention too, especially for automated testing.
Personally, I take things a little farther and try to keep UI and core code separate, with the core code portable. Even when targeting a Windows-only environment, I'll build the core code on Linux and run its regression and fuzzing tests there. My supervisor and co-workers at the time thought it unnecessary, but they changed their opinions over time, occasionally asking to have their forked code tested under Linux before merging. Note this indicates nothing special about Linux; the same thing happens when porting from one OS to another, say Windows to Mac. It's really about using an environment different from the main developers'. It helps address the "works on my system" issues.
Re:Effects on overall speed? (Score:2)
You can say that about nearly every feature put into an operating system: changes to memory management, changes to the multi-processing algorithm. Every time you add a level of complexity, there is a chance you create a problem that is more difficult to fix.
Re:Effects on overall speed? (Score:2)
My concern about the technology is that it can also cause non-deterministic behavior on the platform, making it hard to capture elusive bugs.
Actually, one might argue that non-deterministic execution makes elusive bugs manifest.
Re:Effects on overall speed? (Score:3, Informative)
Re:Effects on overall speed? (Score:2)
The other concern is that randomising the link order has been shown (ASPLOS 2015) to have around a plus or minus 20% impact on performance. Having that variation across reboots for the kernel could be quite frustrating.
That variation is already present in every linked program anyway. This just changes the dice-roll from only once at build time to each and every boot time. Surely it would suck more to get a randomly slow link at build time and then be stuck with it?
Re:Effects on overall speed? (Score:3)
That variation is already present in every linked program anyway.
To a degree, yes, though in practice it's fairly deterministic and so you tend to only explore a smallish part of the overall space by accident.
This just changes the dice-roll from only once at build time to each and every boot time. Surely it would suck more to get a randomly slow link at build time and then be stuck with it?
Not really - predictable slowness is a lot easier to reason about and work around than unpredictable slowness. On a single machine, knowing that something will take 12 minutes is a lot easier to deal with than finding that yesterday it took 8 but today will take 12. Trying to debug a performance problem in userspace code can be very painful if the OS is unpredictable. In a networked system, if every node takes 100ms to respond, that's annoying, but if they all take 80-120ms to respond then your overall performance is typically limited by the 120ms (Twitter is a good example of this: their response time was limited by the fact that, on average, at least one of the machines that needed to respond to create a page for the user would be in the middle of a GC cycle and so respond a lot more slowly than the rest. They fixed it by forcing GC on a fixed interval so that all machines were slow for a few ms, then fast again for a while until the next GC tick).
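That tail-latency effect is easy to reproduce with a toy simulation (all numbers invented for illustration):

    import random

    def page_time(num_nodes):
        """A page needs an answer from every node, so the slowest one wins."""
        return max(random.uniform(80, 120) for _ in range(num_nodes))

    random.seed(1)
    trials = [page_time(num_nodes=50) for _ in range(10_000)]
    # With 50 nodes each taking 80-120 ms, nearly every page waits ~120 ms.
    print(f"mean page time: {sum(trials) / len(trials):.1f} ms")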
Re:Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:2)
It seems like the scope for non-deterministic behaviour should be very, very small. All it is really doing is re-linking the kernel in a random order, so basically running the last stage of the build process (linking) again but with an additional RNG thrown in.
In practice this will mean slightly different behaviour due to the way CPU caches work, but beyond that I can't really see much scope for variation. The caching will have a very small effect, so I suppose it is possible that some race condition or similar might only affect certain builds, but it's a fairly remote possibility.
In any case, something similar already happens when applications are loaded with ASLR, and presumably debug dumps involving the kernel will save a copy of it.
Re:Effects on overall speed? (Score:2)
On the other hand, it makes it more likely that such bugs manifest themselves.
Re:Effects on overall speed? (Score:2)
If you install a kernel manually, it'll be used until you re-activate the random-kernel setup.
Re:Effects on overall speed? (Score:1)
No - the actual 'kernel install' step is a simple rename (mv), which the filesystem should guarantee is atomic. (I believe OpenBSD's filesystems make harder guarantees about this than Linux, but I'm not sure.)
The upshot: you'll either boot with the new kernel or, in the worst case, boot with the previous one again.
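The pattern being described is roughly the following (a sketch, not OpenBSD's actual installer; os.replace is Python's atomic-rename call on POSIX filesystems):

    import os

    def install_kernel(new_image: bytes):
        """Write the freshly linked kernel, then swap it in atomically."""
        tmp = "/bsd.new"
        with open(tmp, "wb") as f:
            f.write(new_image)
            f.flush()
            os.fsync(f.fileno())   # ensure the bytes hit the disk first
        os.replace(tmp, "/bsd")    # atomic rename: old kernel or new, never half of each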
Re:Effects on overall speed? (Score:4, Interesting)
Great way to swap in a malicious blob for the normal one, with no ability to checksum the entire kernel to detect it.
Re:Effects on overall speed? (Score:3)
Surely if you have the capability to insert such a blob into the kernel, you can easily defeat any attempt to checksum it. Just intercept attempts to read the kernel binary and return an unmodified one.
Re:Effects on overall speed? (Score:2)
Likely a hash not a checksum (Score:2)
Make a blob that has the same checksum (the checksum routine is in the source, meh) and you're all set.
Except the "checksum" is likely not a checksum because malware could add padding to create the desired checksum just as well. Its likely the "checksum" is a hash and a collision (a match) is not easily created.
Re:Likely a hash not a checksum (Score:2)
Re:Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:2)
Re: Effects on overall speed? (Score:2)
As it is different with each boot, can it still be validly signed?
Re: Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:2)
Re: Effects on overall speed? (Score:5, Informative)
Re: Effects on overall speed? (Score:5, Informative)
well from TFS:
When the user boots, upgrades, or reboots the machine, the most recently generated kernel replaces the existing kernel binary...
So it sounds like the relinking happens while the system is running, and the new kernel is used on the next reboot.
Re: Effects on overall speed? (Score:2)
But then an attacker has all the time in the world to manipulate that next kernel.
Re: Effects on overall speed? (Score:2)
Manipulating the next kernel requires root, while manipulating the current kernel requires ring 0.
Re: Effects on overall speed? (Score:3)
This is a good clarification. Macs already do this:
https://developer.apple.com/li... [apple.com]
No, not as far as I can tell. There is a difference between linking and relinking like this. Technically Linux kernels are also linked with their drivers in initrd when loaded, but that is separate from this new randomized relinking.
Re: Effects on overall speed? (Score:1)
Re: Effects on overall speed? Only on Windows... (Score:3)
I don't see how 3-year uptime correlates with the oft-repeated "just save your session, log out, shut it down, and start it back up again" workaround for missing or broken hibernate support on particular chipsets in laptop or desktop computers.
Re:Effects on overall speed? (Score:3)
I would say the extra time spent on this process is an acceptable trade-off for enhanced security. Uniformity is the biggest security risk these systems face: if the bad guys know where all the pieces are on a lot of systems, they know how to target their attacks successfully.
That being said, a lot of effort goes into protecting the core system, but your home directory is where the valuable data normally is. I would like to see operating systems set up with application-level security alongside user-level security. Right now the main workaround is running a particular application logged in as its own user. I would like applications to have their own security permissions outside of the user account.
Re:Effects on overall speed? (Score:2)
The rebuilding process shouldn't take that long, especially if most of the modules are (mostly) precompiled. But with the random order in which things will be re-compiled, will a bad order affect the overall performance of the system?
Wouldn't the 'performance' just be an issue during boot time, or upgrade time? In other words, I'd expect slower reboots. Which then brings to question the usage. If it's being used in an always-on router, doesn't sound like a big deal. If it's on a laptop that one reboots frequently, I'd think it would
Re:Effects on overall speed? (Score:2)
It's not recompiling, it's re-linking. The code's already compiled and unchanging. Just instead of linking a.o, b.o, c.o and d.o in that order, you link a.o, c.o, d.o, b.o.
The result may not LOOK too different, but most linkers work linearly - so all the sections in the second binary will have a.o at the beginning, followed by c.o, d.o and b.o, while the first binary will have a.o, b.o, c.o and d.o in that order. The symbol addresses WILL be different even though in general the code is the same (after all, the only real thing that's changed is the jump addresses).
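To see why the addresses shift, lay those objects out back to back; the object sizes and base address below are invented for illustration:

    # Hypothetical object sizes, in bytes.
    sizes = {"a.o": 0x1000, "b.o": 0x400, "c.o": 0x2000, "d.o": 0x800}

    def layout(order, base=0x1000000):
        """Assign each object a load address for a given link order."""
        addrs, cursor = {}, base
        for obj in order:
            addrs[obj] = cursor
            cursor += sizes[obj]
        return addrs

    for order in (["a.o", "b.o", "c.o", "d.o"],   # the 'stock' link order
                  ["a.o", "c.o", "d.o", "b.o"]):  # a KARL-style relink
        print({obj: hex(addr) for obj, addr in layout(order).items()})
    # Same code either way, but b.o (and everything after it) lands elsewhere.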
About the biggest issue would be things that use linker magic to produce arrays of pointers to symbols. For example, the order of the list of initialization functions to call will be different, and if there is a subtle dependency that is not captured properly, a submodule may initialize before its parent module. (This is how you can write a driver with proper flagging and have the kernel auto-initialize it even though you didn't put in an explicit call to the init function).
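A toy model of how a shuffled init array can expose such a latent ordering bug -- everything here is invented for illustration, not kernel code:

    import random

    initialized = set()

    def init_parent():
        initialized.add("parent")

    def init_child():
        # Latent bug: silently assumes the parent module went first.
        assert "parent" in initialized, "child initialized before parent"
        initialized.add("child")

    # The linker-built array of init functions, whose order now varies per build.
    init_array = [init_parent, init_child]
    random.shuffle(init_array)   # KARL-style: a different order on each relink
    try:
        for init in init_array:
            init()
        print("booted fine -- the bug stays hidden this time")
    except AssertionError as e:
        print("boot failed:", e)  # the randomized order exposed the dependency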
Re:Effects on overall speed? (Score:2)
Re:Effects on overall speed? (Score:2)
Code pages are usually read only.
Only data pages can be overwritten.
So modifying code is most likely impossible or needs an attack vector where the access bits of a memory page can be changed.
Some more detail over at undeadly.org (Score:5, Informative)
And for that whirlwind tour of what's good in that system, take a peek at my OpenBSD and you [home.nuug.no] slides.
Correct links (Score:1)
Re:Correct links (Score:2)
Re:What are the (dis)advantages? (Score:5, Funny)
New is always better.
Re:What are the (dis)advantages? (Score:5, Informative)
Similar to the pluses and minuses of address-space randomization. Right now, the different subsystems in the static part of the kernel are linked together into one binary that is loaded into memory at boot. Given that the different subsystems can be linked together in many valid orders, that link could instead be done when the kernel boots, so the different sections of the kernel would load in different places relative to each other in memory.
The problem arises when one compilation or binary kernel becomes widely distributed, so that malware can rely on the relative positions of different subsystems in memory. That knowledge can make it easier for malware to use different subsystems at run time, on the premise that if you know where one is, you know where all the rest are. If subsystems load unpredictably at boot, it becomes harder for today's malware to use them. Instead, the malware would have to replicate the loading algorithm and try to reproduce the load order by calculation to make the same use of those subsystems -- another level of difficulty for malware wanting to use existing subsystems.
Static load order would be most problematic for embedded binaries that come off a ROM and less so for binaries distributed by a large distro. It's also more problematic if the subsystems are linked together in the same order at compile time (which, AFAIK, they are for the linux kernel).
That link order could potentially be randomized at link time which would have a large amount of the same benefit as the boot-time randomization for end-user-systems that are locally compiled (with some pluses & minuses).
I don't know that the benefit of this feature has been quantified, or is easily quantifiable. The downsides, I would think, are minimal in a production (non-development/non-debuggable) product.
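To make the "know one, know all" premise concrete, here is a toy attacker model; all symbol names, offsets, and addresses are invented:

    # Relative offsets an attacker precomputed from a widely distributed stock kernel.
    stock_offsets = {"setuid": 0x0, "copyout": 0x1a40, "sysctl": 0x3f80}

    def derive_addresses(leaked_setuid_addr, offsets):
        """One leaked symbol address reveals all the others -- IF the target's
        relative layout matches the attacker's copy of the kernel."""
        base = leaked_setuid_addr - offsets["setuid"]
        return {name: hex(base + off) for name, off in offsets.items()}

    # Works against every machine running the same stock binary...
    print(derive_addresses(0xFFFF800000421000, stock_offsets))
    # ...but under KARL each machine's relative offsets differ, so the
    # precomputed table is wrong for every target except the attacker's own.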
Re:What are the (dis)advantages? (Score:2)
--Further along this line, how does this affect UEFI/Secure Boot, if at all?
Re: What are the (dis)advantages? (Score:5, Interesting)
Re: What are the (dis)advantages? (Score:2)
but then a simple search function to locate the routine to be exploited can be added
Which is what return-oriented exploits do.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: What are the (dis)advantages? (Score:2)
Will definitely be reading more about it at home.
Re: What are the (dis)advantages? (Score:2)
Re: What are the (dis)advantages? (Score:2)
May help prevent row hammer attacks.
Just one missing component (Score:2, Funny)
All OpenBSD needs now is to adopt systemd. Then it will be totally secure. And more cromulent.
You mean systemd must assimilate OpenBSD? (Score:1)
systemd is RedHat's attempt to fracture Linux. Divide and conquer.
Because systemd is a solution in search of a problem. Something inspired by Microsoft's "registry" is nothing but thoroughly evil.
Re:You mean systemd must assimilate OpenBSD? (Score:2)
Re:You mean systemd must assimilate OpenBSD? (Score:2)
Re:You mean systemd must assimilate OpenBSD? (Score:2)
Wasn't the point of Linux to have choice? systemd was straight-up forced on us.
You have to earn your options.
You can build an OS based on the Linux kernel. Linux itself doesn't require systemd; only the pre-built distros do.
Now, all of their experts have decided that systemd should be a required component of their operating systems. You are free to disagree.
Complain to them if need be, but otherwise just shut up already. No one is going to change anything based on some random whine on Slashdot.
To paraphrase for this situation: "You're not making Linux better. You're just making Slashdot worse."
"should not be confused" with the same thing (Score:2)
The difference is merely one of granularity; one randomization is performed by the build-time linker (aka static, ld), the other by the run-time linker (aka the dynamic loader, ld.so). The latter can be done in advance (prelinking); the former has to be. The run-time linker doesn't have information on all the intra-object links, so it cannot operate at the level the build-time linker does. That granularity might make a difference: it moves from up to about three randomized pointers (code, data, rodata) per program file to the same per object file (or possibly per function, if the compiler splits them). The sort of exploit code this is supposed to mitigate doesn't need to hunt for all of those, though, but typically just a specific one, and the mitigation has been found to be fairly weak.
So what does this do (Score:2, Flamebait)
The advantages (Score:5, Informative)
Lots of people here asking about the advantages -- here is the layman's explanation.
So typically with ASLR you load a kernel blob into a randomized space and then it just sits there. An attacker (e.g. an evil hypervisor) could search the entire address space for the kernel, or hook into the kernel binary some other way, and then simply count up or down from there -- or, more likely, pass an evil payload that loads exploits against specific parts of the kernel. Since you always know which parts come first, you can craft a payload so that it gets passed along, or overflows, until it reaches the vulnerable piece of code.
What this does is randomize the layout within the kernel itself. Even though the kernel sits in the same spot, and you could still find it or hook into it, you can't simply count up and down any more to find the vulnerable piece of code, nor are you guaranteed that weak boundary checks will pass your payload, because even though the system has the vulnerable piece of code hooked in somewhere, it's not going to be in the same spot.
It's basically more fine-grained ASLR, where you break the program (the kernel) down into smaller pieces to be randomized.
Re:The advantages (Score:2)
Instead of 'evil hypervisor' think 'Intel Management Engine.'
Re:The advantagages-the root of the problem. (Score:2)
Well, an actively evil hypervisor (where someone has full access to it) is indeed more problematic. I was thinking more of automated exploits, which will become much more difficult to execute; you would need someone with a deeper understanding of the kernel to manually intervene on every exploitable machine.
And Intel SGX has been broken and will probably be broken further in the future. It is also a double-edged sword: you can hide attack code in an enclave where nobody will ever be able to find it, and from there you can launch side-channel attacks.
Doesn't uptime defeat this? (Score:5, Interesting)
Re:Doesn't uptime defeat this? (Score:5, Informative)
The idea is that when you have hundreds of machines, even though their uptime is high, they'll still all be running 'different' kernels.
To be able to find a memory location, you pretty much already have to be running as root. This is to prevent exploits before they get to that point. E.g., if you have a weak TCP/IP stack and you send an 'evil bit' that overflows a buffer, you're no longer guaranteed that filling the next n buffers will let you execute shell code.
Once you can search through the memory, you've gotten to a much farther point.
Re:Doesn't uptime defeat this? (Score:1)
How many modules are there? If, say, 20, then 20! is about 2.4x10^18, and 30! is about 2.65x10^32.
Yes, that's a finite number. So is a cryptographic key.
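Checking those figures (a two-liner; math.factorial gives the exact values):

    import math

    # The number of distinct link orders for n object files is n!.
    for n in (20, 30):
        print(f"{n}! = {float(math.factorial(n)):.2e} possible link orders")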
Re:Doesn't uptime defeat this? (Score:2)
Re:Doesn't uptime defeat this? (Score:1)
So much for mocking Windows reboots (Score:1)
Interesting that *nix advocates were always mocking Windows for not being able to hot-patch without a reboot, and yet here we are: a *nix-based OS that needs reboots to stay patched. FWIW, this is why updates get disabled (and people get pwned). We are not running silly games or word processors on BSD, but usually devices that need uptimes of months, preferably years. I don't want to intentionally reboot at all.
Are there any stable and secure *nix distros left?
Stable and secure nix distros (Score:3)
Re:So much for mocking Windows reboots (Score:2)
If your system relies on a specific machine being up at any given time it will fail. You say you don't want to intentionally reboot, but since you're guaranteed to unintentionally reboot it's better to design things so that doesn't matter. In which case, intentional ones don't matter either.
Re:So much for mocking Windows reboots (Score:2)
You are running your virtual mouth in ignorance. Updates in OpenBSD are not automatic, and the last release, 6.0, had all of 28 patches, of which 12 were for the kernel. Whether or not a particular kernel patch is needed (e.g. one was for a display driver) depends on the application.
Uptime in years on a *node* means you are running an unpatched insecure application stack with a bad architecture. Uptime in years can be had with proper clustering, load balancing, etc.
Ooold. (Score:5, Funny)
Had it on my 486 that ran Gentoo, and not just for the kernel but for most of the apps. By the time emerge world completed and I needed to reboot for the upgraded kernel to start up, a new version was already available; the emerge world that ran right after startup, on whatever updates had appeared during the previous run, would finish about the time yet another kernel was available.
Re:Ooold. (Score:3)
Who wants their processor to be idle all the time? "Couch-processors" have shorter lifespans than their well-exercised brethren and are more susceptible to diseases and viruses. Gentoo was ahead of the curve by creating an exercise program for all processors.
AGW (Score:2)
Who wants their processor to be idle all the time?
People who understand that excessive energy use contributes to making the planet's climate less hospitable.
Re:AGW (Score:2)
At the cost of a greater area of the planet that becomes inhospitable.
Sometimes life surprises you (Score:2)
Re:Sometimes life surprises you (Score:1)
Microsoft's implementation was an obvious success, given how rarely malware infects Windows boxes.
Re:Sometimes life surprises you (Score:1)
Linus put your monkeys to work! (Score:1)
Don't think I like this (Score:4, Insightful)
If I understand this correctly, the kernel is being relinked and rewritten to the boot partition. That's an instant fail in my book... at least for us, the boot partition is sacrosanct. We do *NOT* write to it except when specifically upgrading a system. We do not do ad-hoc or automated writes to it, because years of experience have shown that most corrupted boots (aka machine -> non-working) are due to unexpected events occurring while a filesystem is being written to.
The rename trick is not a solution. (There's the 'ideal' atomic rename, and then there's reality: storage devices can fail in many different ways even while writing a particular sector, in ways unrelated to that sector.)
So, honestly, I think OpenBSD is making a huge mistake here. I can see randomization at load-time, but relinking and rewriting the kernel binary on every boot? No. Bad bad bad idea.
ASLR and its equivalents are close to useless anyway. Malware has found ways around them, and they make debugging and bug reproducibility difficult (which is arguably more important -- that bugs get found and fixed, not simply detected). It also tends to fragment memory, which can cause serious problems for long-running systems. And the vast majority of systems will simply restart the service anyway; they might log the seg-fault from the malware, but maybe 0.001% of system owners actually look at those logs.
-Matt
Linker exploit (Score:2)
Time to inject something into the kernel linker. This random-order thing should make it very easy to hide all sorts of fun gadgets.
Rebooting OpenBSD? (Score:2)
Re:Secure Boot ? (Score:2)
Secure Boot is not about security; it's about control. It only verifies the signature of the kernel loader against a list of 'approved vendors'. Once the kernel is loaded, you can do pretty much anything you want with the computer.
Re:Secure Boot ? (Score:2)
Nonsense. It does not verify the signature against a list of 'approved vendors'; it verifies the signature against a list of approved signers. While some crappy consumer brands may have built-in keys that can't be changed, real computers (servers) allow you, the machine owner, to install your own keys.
As to what happens once booted, that depends on what you booted. GRUB will verify the signature of the kernel and initrd before it loads them. If Linux is configured with IMA (Integrity Measurement Architecture), it will verify the signature of files upon open, including kernel modules. And if you use remote attestation, you can verify that you are not running previously signed but vulnerable code.
The amount of FUD surrounding secure boot is astounding.
Re:Secure Boot ? (Score:2)
Yeah, but Grub or Linux using or verifying signed code has nothing to do with Secure Boot. Secure Boot ends when Grub loads.
Re:Secure Boot ? (Score:2)
If you don't have secure boot turned on, how do you know that the GRUB you are loading is not compromised to load an unsigned kernel? In fact, if secure boot is not turned on GRUB will NOT verify the kernel.
With secure boot turned off, the shim can't be trusted. If the shim can't be trusted then GRUB can't be trusted. If GRUB can't be trusted then the kernel can't be trusted. And if the kernel can't be trusted then any verification of signed code that it does can't be trusted.
Saying secure boot is not about security is nuts. It IS about security. The fact that it can ALSO be used for other purposes does not diminish its security usefulness.
Re:Ceterum Censeo, UEFI needs to die. (Score:2)
Do you even know what FUD means? It means Fear, Uncertainty, and Doubt.
Now, let's look at what you call 'no FUD':
'need someones permission...redmond' FEAR! Be very afraid!!!
'maybe be MAFIAA' UNCERTAINTY! They could be watching!!!
'MAYBE is your work really hard...' DOUBT! You'll never be able to run anything but Windows!!!
Your entire post is nothing but FUD.
Re:Secure Boot ? (Score:3)
Not that I know a lot about it but...
You sign the new kernel with the same signing key as the previous one.
But, that's not how it works. It's the kernel loader that's signed for Secure Boot, and then the kernel loader is free to use any further verification it likes of what it loads, what it checks, and how.
That's part of why Secure Boot is "no more secure" in most configurations: you're just certifying that you're booting GRUB or whatever, which can be configured to be quite open (which is what most people want - just a way to boot a free OS using a signed bootloader).
But even if the kernel were signed, that doesn't mean it can't be unique to the machine in question; it's just a matter of using a chain of certificates. That, however, is out of Secure Boot's hands and in the hands of the kernel loader configuration.