Mark Russinovich on Windows Kernel Security

An anonymous reader writes to mention that in the final part of his three-part series, Mark Russinovich wraps up his look at changes made in the Windows Vista Kernel by exploring advancements in reliability, recovery, and security. "Applications written for Windows Vista can, with very little effort, gain automatic error recovery capabilities by using the new transactional support in NTFS and the registry with the Kernel Transaction Manager. When an application wants to make a number of related changes, it can either create a Distributed Transaction Coordinator (DTC) transaction and a KTM transaction handle, or create a KTM handle directly and associate the modifications of the files and registry keys with the transaction. If all the changes succeed, the application commits the transaction and the changes are applied, but at any time up to that point the application can roll back the transaction and the changes are then discarded."
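The commit/rollback pattern the summary describes can be modeled entirely outside the kernel. Below is a minimal conceptual sketch in Python: it only models the semantics (stage changes, then commit or roll back as a unit). The real Vista API lives in ktmw32.dll (CreateTransaction, CommitTransaction, RollbackTransaction, plus CreateFileTransacted and RegCreateKeyTransacted); the class and method names here are invented for illustration.

```python
import os
import shutil
import tempfile

class FileTransaction:
    """Conceptual model of KTM-style commit/rollback over a set of files.

    Changes are staged in a private temp directory; commit() moves them
    into place, rollback() discards them. (The real KTM does this in the
    kernel, journals the work, and also covers registry keys.)
    """
    def __init__(self):
        self._staging = tempfile.mkdtemp(prefix="txn-")
        self._pending = {}   # final path -> staged path

    def write(self, path, data):
        # Stage the new contents; the real file is untouched until commit.
        staged = os.path.join(self._staging, str(len(self._pending)))
        with open(staged, "wb") as f:
            f.write(data)
        self._pending[path] = staged

    def commit(self):
        # Apply every staged change; each individual rename is atomic,
        # though unlike KTM the set as a whole is not crash-atomic.
        for path, staged in self._pending.items():
            os.replace(staged, path)
        self._cleanup()

    def rollback(self):
        # Discard all staged changes; the originals were never modified.
        self._cleanup()

    def _cleanup(self):
        shutil.rmtree(self._staging, ignore_errors=True)
```

Note the hedge in commit(): a crash midway through the loop leaves some files updated and some not, which is exactly the multi-file atomicity gap that in-kernel journaling closes.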
  • by Lally Singh ( 3427 ) on Wednesday March 21, 2007 @05:31PM (#18435013) Journal
    They also involve atomic I/O to multiple systems simultaneously. Userland can't do this. Databases work on one system, their own data files, and have full control over these files.

    Userland apps don't have that kind of control over the registry. Hell, they may not even have that kind of control over the files they're manipulating.

    Besides, I'd rather have this code once in a DLL than 10 times in 10 different apps. That's real bloat.
  • by Cyberax ( 705495 ) on Wednesday March 21, 2007 @05:41PM (#18435139)
    Because this DLL is just an interface to kernel features.

    Windows NT was initially designed to use a single kernel for multiple subsystems (the OS/2 subsystem, the POSIX subsystem, etc.). Subsystems are implemented as dynamic modules that talk to the kernel through LPC (Local Procedure Call, see http://en.wikipedia.org/wiki/Local_Procedure_Call [wikipedia.org]). So in this case ktmw32.dll just wraps LPC calls in a nice API. That's actually a rather good design.
  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Wednesday March 21, 2007 @05:46PM (#18435229)
    Windows NT was initially designed to use a single kernel for multiple subsystems (the OS/2 subsystem, the POSIX subsystem, etc.)

    Not just initially designed, it DOES use a single kernel for different subsystems. You can't get the OS/2 one any more, but the POSIX subsystem morphed into (part of) the Services for Unix which has become the Subsystem for Unix-based applications.

    On 32-bit Windows, 16-bit Windows applications are handled by the "Windows on Windows" subsystem. On 64-bit Windows, 32-bit Windows applications are also handled by a "Windows on Windows" subsystem, though a different one than WOW16.
  • by Anonymous Coward on Wednesday March 21, 2007 @05:54PM (#18435337)
    The majority of the framework is implemented in userland. See the Distributed Transaction Coordinator service.
  • by nigelo ( 30096 ) on Wednesday March 21, 2007 @06:33PM (#18435889)
    > Yes, Microsoft does innovate sometimes. This is one of those occasions.

    Well, DEC VMS had this capability decades ago, so is it really innovation?

    http://h71000.www7.hp.com/commercial/decdtm/index.html [hp.com]
  • by Doctor Crumb ( 737936 ) on Wednesday March 21, 2007 @06:57PM (#18436177) Homepage
    You are confusing a windowing system (X11) with an OS (Linux). While you may have to "screw around on the command line" to get X working again, everything else will continue to work just fine (filesystem, webserver, internet, etc), all of which you can use either from a virtual console or a remote connection. If explorer.exe won't start, how exactly do you fix that without sitting down with a recovery CD?
  • by NearlyHeadless ( 110901 ) on Wednesday March 21, 2007 @08:12PM (#18437009)
    I just noticed today that Russinovich's utilities are available in a single-file download: http://www.microsoft.com/technet/sysinternals/Utilities/SysinternalsSuite.mspx [microsoft.com]
  • by dgatwood ( 11270 ) on Wednesday March 21, 2007 @08:53PM (#18437415) Homepage Journal

    Of course it can be done in user space. If a user space app can't do it, neither can the kernel. And it isn't atomic I/O. There's no such thing as atomic I/O. I/O operations are reordered, split, combined, etc. by everything from the OS to the controller to the hard drive, and for network volumes, it is even worse. There's no practical way to guarantee atomicity, so you have two choices: have filesystems (including remote filesystems) with rollback capabilities (which still don't completely guarantee anything) or design a file structure that achieves the same thing (which still doesn't guarantee anything). The former is nice for a lot of reasons (e.g. so that every developer doesn't have to reinvent the wheel), but isn't essential by any means. It also would greatly increase the complexity of the VFS layer and filesystems written for it, so if that is the only purpose for doing transactions, it makes a LOT more sense to implement them in a user space library instead of in the kernel.

    If your sole purpose is to be able to do multi-file rollbacks, user-space transactional support is as easy as designing your file format and/or layout around it. There are two easy ways to do this: files with built-in history and swappable directories.

    Files with built-in history:

    For the initial modification pass, modify each file by appending what amounts to a diff footer. If an error occurs, you can undo all of the changes by truncating the files prior to the latest diff footer. Once these modifications are complete, you no longer need to worry about rolling anything back (except for cleaning up temp files if something fails in the second pass) because the data is safely on disk. (Note: this does require that the kernel and all devices and/or network disks reliably flush data to disk upon request. Don't get me started on buggy ATA drives.)

    In the second (optional) pass, you coalesce the diff into a new copy of the file and swap the coalesced version in place of the original file; swapping a file into place is an atomic operation on most operating systems. If anything fails during the second stage, it is a recoverable failure, so there is no need to roll anything back. Heck, Microsoft's file formats pretty much do this anyway. (Notice the 500 megabyte single-page MS Word document that results when you make lots of changes and always "save" rather than "save as".)
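The two passes described above can be sketched in a few lines of Python. This is only an illustration of the technique, not any real file format: the footer layout (a magic tag plus a length-prefixed blob) and the function names are invented, and `apply_diff` stands in for whatever merge logic the format would actually use.

```python
import os

FOOTER_MAGIC = b"DIFF"

def append_diff(path, diff):
    """Pass 1: append a diff footer and return the offset to truncate
    back to if the multi-file operation has to be rolled back."""
    undo_offset = os.path.getsize(path)
    with open(path, "ab") as f:
        f.write(FOOTER_MAGIC + len(diff).to_bytes(4, "little") + diff)
        f.flush()
        os.fsync(f.fileno())   # only as reliable as the drive's flush
    return undo_offset

def rollback(path, undo_offset):
    """Undo by truncating the file just before the footer we appended."""
    with open(path, "r+b") as f:
        f.truncate(undo_offset)

def coalesce(path, apply_diff):
    """Pass 2 (optional): rewrite into a temp file and atomically swap
    it in place of the original; a failure here is recoverable, so no
    rollback is needed."""
    tmp = path + ".tmp"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        dst.write(apply_diff(src.read()))
        dst.flush()
        os.fsync(dst.fileno())
    os.replace(tmp, path)   # atomic rename on POSIX and on NTFS
```

To roll back a failed multi-file operation, you simply call rollback() on every file you appended to, using the offsets remembered from pass 1.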

    Swappable directories:

    The easiest way (tm) to handle system configuration files in an atomic fashion is to modify config files in the same way you would perform a firmware update: you have an active configuration directory and an inactive configuration directory. You read the active one, make changes, and write to the inactive one. Then you trip a magic switch (tm) that says that the previously inactive directory is now active, and vice versa. Assuming you don't have out-of-order writes going on (which the kernel can't really guarantee any better than user space, sadly), this is a very easy, effective way to perform an atomic commit. And if you have an "exchange in place" operation in which the data for two files or directories in the same directory are swapped in a single atomic operation, that's a really lightweight way to implement an atomic commit/rollback mechanism without most of the complexity.
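The active/inactive flip above can be sketched with a small pointer file that is rewritten through an atomic rename. The directory names (`config-a`/`config-b`) and the `ACTIVE` pointer-file convention here are invented for illustration; any equivalent "magic switch" works.

```python
import os

def read_config_dir(root):
    """The 'magic switch' is a small pointer file naming the active
    directory; reading it tells us which copy is currently live."""
    with open(os.path.join(root, "ACTIVE"), "r") as f:
        return os.path.join(root, f.read().strip())

def commit_config(root):
    """Flip active/inactive by rewriting the pointer file via an atomic
    rename. Until os.replace() completes, readers see the old
    configuration; afterwards they see the new one. A crash before the
    rename leaves the old configuration fully intact."""
    active = os.path.basename(read_config_dir(root))
    new_active = "config-b" if active == "config-a" else "config-a"
    tmp = os.path.join(root, "ACTIVE.tmp")
    with open(tmp, "w") as f:
        f.write(new_active)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, os.path.join(root, "ACTIVE"))   # the atomic switch
    return os.path.join(root, new_active)
```

Usage follows the firmware-update pattern: write the complete new configuration into the inactive directory, then call commit_config() to make it live in one step.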

    Considering how easy this is to deal with in user space, the only legitimate reason I can think of to do it in the kernel is so that you can take it out of application control entirely (e.g. to make it easier to sandbox an untrusted application). Otherwise, it makes a lot more sense to do this in a library. Now if it had snapshotting where you could roll back the entire filesystem to arbitrary points in time, that might be interesting (for different reasons)... but basic transactional support in a filesystem is much less so, IMHO, unless your purpose is to be able to sandbox an application. If so, then all this other stuff basically comes for free. In that context, doing this in the filesystem layer makes sense. However, if that is not their purpose for doing this in Vista, then kernel bloat definitely strikes me as an accurate depiction.

    Just my $0.02.

  • by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Wednesday March 21, 2007 @09:24PM (#18437767)

    However - a broken byte in an unbacked up (yeah a bad idea) registry [...]

    The Registry is automatically backed up at the completion of a successful system boot. This has been true since at least Windows 2000, and probably longer.

  • by Anonymous Coward on Wednesday March 21, 2007 @11:49PM (#18439021)
    The basic mechanism it uses is that it hooks all the low level operations you can do on your system (file access, process access, etc.) and prevents you from touching anything related to the game. The end result is that you can't even so much as end-task a misbehaving game 'protected' by this driver.

    This is not true, and it's also not what a rootkit is. These games use rootkits to hide files and drivers from the Windows API, which you can do yourself just by creating a share whose name ends with '$' or a registry key with a NULL in it. The game is not monitoring your every action, it is simply hiding itself from your interference (and potential reverse engineering).

    It goes around and comes around though - the most commonly installed rootkit is Daemon Tools, which uses rootkit techniques to hide from games. :-)
  • by Foolhardy ( 664051 ) <[csmith32] [at] [gmail.com]> on Thursday March 22, 2007 @12:50AM (#18439447)
    The registry is a single root hierarchical database with registry hive files mounted at the second level (below \REGISTRY\MACHINE and \REGISTRY\USER for the computer's config and user config, respectively). The registry engine is implemented in kernel mode as an executive subsystem (inside ntoskrnl.exe), where it is known as the Configuration Manager. Registry hives use a transaction journal (like many filesystems do) to avoid corruption during a power failure or crash. Standard system hives are located in %SYSTEMROOT%\System32\Config and include SAM for local user accounts, SECURITY for various secrets held by the computer, SYSTEM for core system configuration early during boot, and SOFTWARE that stores all other config associated with the computer in the registry. Every user profile has its own registry hive for user-specific configuration. Everything above is still the same in Vista as it was in NT 3.1.

    There are two database engines that have been known as Microsoft "Jet": Jet Red and Jet Blue. Jet Red [wikipedia.org] is also known as the Access database engine. It is a fairly featureful SQL database. Jet Blue is now officially the Extensible Storage Engine [wikipedia.org] (ESE), and has been a system component since Windows 2000, backing WMI data, Active Directory, Exchange, and others. It is an ISAM database, is optimized for large sparse tables, and also supports a transaction journal. Both are 100% user-mode and were not part of the initial release of Windows NT. Microsoft has said that Jet Red is deprecated, and that future versions of the Access database engine will be integrated with Access and not have a public interface. Jet Blue's interface is well documented [microsoft.com] and will continue to see use for some time to come. Being user-mode, dependent on Win32, and the wrong type of database (relational rather than hierarchical), the Jet engines would not be suitable replacements for the registry.

    SQL Server is a high-end SQL database engine. It was rumored that WinFS would use SQL Server Express, and that Microsoft eventually plans to move some of the services that use Jet Blue (such as Active Directory) to SQL Server. In any case, SQL Server is an even less plausible replacement for the registry.

    Microsoft has not gotten rid of the Registry in Vista. In fact, the new boot manager uses a registry hive to store boot configuration, replacing the old boot.ini.
  • by weicco ( 645927 ) on Thursday March 22, 2007 @03:28AM (#18440229)
    If explorer.exe doesn't start, the user gets (at least in XP) a blue screen (not the BSOD, just a plain blue background). The user can then press CTRL+ALT+DEL, launch Task Manager, and use it to start other applications. You can test this by terminating all explorer.exe processes with Task Manager, but you must be quick, since XP will try to restart the shell automatically if it sees that it has been terminated. Btw, there's a registry value that specifies which shell should be started: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon, Shell=Explorer.exe

    Maybe you are confusing the windowing system with the NT kernel :)
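The Winlogon Shell value mentioned above would look like this in a registry export (.reg) fragment; the value shown is the standard default, but verify on your own system before relying on it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="Explorer.exe"
```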
