NX - A Revolution In Network Computing?
Anonymous Coward writes "Judging from this interview, it looks like KDE developers have
found a new toy to add to their desktop's networking capabilities. They claim to be able to cram a fullscreen KDE session -- KMail for mailing, Konqueror for file management, Mozilla for web browsing and OpenOffice for word processing -- into a 40 KBit/sec modem connection without losing responsiveness for the user experience. At aKademy, the 9 day KDE Community World Summit, a group of core developers started to work on NX/FreeNX integration to help facilitate the "re-invention of the KDE desktop environment" for KDE4. Knoppix-3.6 is the first Linux distribution to ship an integrated FreeNX server (created by Fabian Franz) with the NoMachine NX Client."
Great! (Score:4, Interesting)
Kan we say marKeting? (Score:2, Interesting)
Are they implying that corporations are all finally going to move to the thin-client style of computing that was hyped 10 years ago? I still really doubt it's going to happen, as people are so entrenched in their current mode of deploying applications. MS Office still beats KOffice and OpenOffice, and unfortunately I really don't think this is going to change that.
I read through the "interview" (which read more like a press-release hype sheet) and nothing in it impressed me beyond marketing ad-speak. I haven't seen the source code (and probably won't), but I am confused as to why it has to be Bash scripts wrapped around a bunch of libraries. Why can't it just be standard code? I was especially puzzled by the comment "it's in shell so that everyone can contribute and make our code better." That's odd; I didn't think the Bash code did all that much if the real work happens in libraries of machine code, etc...
I guess I will wait until someone reviews the actual code in use. Maybe after that they will rename it something less K-ish than aKademy, blah.
So... (Score:3, Interesting)
I run sessions over shit dial-up connections, like 16.8kbit or so, and the responsiveness is decent. If I get a full 56.6 connection, it's really good.
I know we cheerlead for OSS around here, but is this a brand new amazing wonderful thing, or just another VNC protocol? And does KDE need more stuff? The K is for "Kram it all in!"
Been there, done that (Score:5, Interesting)
thin clients revisited (Score:5, Interesting)
Re:How's that different from how things work now?? (Score:4, Interesting)
Teacher (aka non-commercial Tutor) here. (Score:2, Interesting)
Re:Not Any Time Soon (Score:3, Interesting)
Regards,
Steve
diff NX LBX? (Score:3, Interesting)
How is NX different from the Low Bandwidth X (LBX) extension for the X windowing system [uq.edu.au] that usually underlies KDE and Gnome?
Imagine an OpenMosix cluster of these. (Score:3, Interesting)
Replacing Thin Clients (Score:4, Interesting)
It's not a knock by any means. I'd love to centralize the client apps and just serve sessions over 40kbps. But even that is a little expensive over a 128/256kbps frame relay connection. It's nice. It will be useful. But doesn't sound like something to adopt in a real, low-bandwidth, network computing environment at the moment. I'd love to hear that I'm wrong and missing the point, because I'd *love* to replace MS WTS as well as local PCs in our warehouses.
Another entry in the central vs. distributed power (Score:5, Interesting)
IBM is pushing toward a new (centrally managed and provisioned) Eclipse-based rich client (aka a browser on 'roids) that talks mostly to its Workplace portal (aka WebSphere with portlets).
Microsoft is pushing the other way, with a more complex workstation model and an enhanced "user controlled" sharing place via SharePoint Portal.
Linux desktops have generally been trying to mimic Microsoft's stuff, but with secure, stable applications (e.g., OpenOffice).
Now KDE may become a player in the rich-but-centrally-managed client space.
Interesting.
Clearly there is a place for the centrally managed rich client -- think of the line-of-business stuff that was once handled through 3270 terminals. The browser has proven it sucks for that kind of thing, but this trend toward a semi-open or open rich-client replacement is going to be the thing to watch.
Line workers are going to use this stuff (IBM's or something like it -- this KDE version could be it too). The big battle will be over how corporations manage knowledge-worker desktops. Will it be a centrally controlled rich client, or a traditional powerful workstation with sharing tacked on?
It seems like Linux based machines are likely to have a horse in both races.
Re:Two words... (Score:3, Interesting)
Remote Desktop is faster than TightVNC over a modem, but TightVNC just adds better compression on top of VNC. LBX doesn't cope with high-latency links, even when bandwidth is plentiful.
I've always suspected that if someone took the time to figure out how to do X-windows with better compression and client-server model, you could compress it down to a much better experience.
Back in the good ole text days, people wanted to do RIP grafix and started using 64- or 128-bit numbers for pattern matching to speed up graphics. Then there was a time when people tried to use fractal compression to speed up photos over the web. (Or something even older: RLE graphics over 300 baud.)
There are many methods that can be expanded and combined to make a low-bandwidth, high-responsiveness session. Maybe trade color for speed: a fast monochrome experience is better than a slow true-color display.
Saying it can't be done is incorrect; there are many methods. I'm not sure today's programmers even know of these methods, or have experience with low-bandwidth situations.
It can be done, but will it be? With people getting more high-speed connections, optimizing is a thing of the past. Same goes for video games: they come on DVDs and/or multiple CDs. The only place you see true optimization is the demo parties, with 64K demos packing extreme amounts of content. 64K would stream over a 28.8 modem in a flash.
My idea was to use a client-server model: content updates with highly compressed pixel lines in one-byte patterns, cached widgets, and compressed text streams.
Not exactly sure how NX is speeding up RDP, other than widget caching; the articles appear to be slashdotted already.
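The pixel-line scheme described above is essentially run-length encoding with one-byte run counts. A minimal sketch of the idea (this is an illustration, not NX's actual wire format; all names are made up):

```python
def rle_encode(line: bytes) -> bytes:
    """Run-length encode a scanline as (count, value) byte pairs.
    Run counts are capped at 255 so each fits in one byte."""
    out = bytearray()
    i = 0
    while i < len(line):
        run = 1
        while i + run < len(line) and line[i + run] == line[i] and run < 255:
            run += 1
        out.append(run)
        out.append(line[i])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Expand (count, value) pairs back into the original scanline."""
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out.extend([value] * count)
    return bytes(out)

# A mostly-uniform scanline (desktop background plus a strip of widget):
line = bytes([0] * 600 + [255] * 40)
packed = rle_encode(line)
assert rle_decode(packed) == line   # lossless round trip
```

A mostly-blank 640-byte scanline collapses to 8 bytes here, which is why widget caching plus RLE-style deltas can make such a difference on a modem link.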
Re:How's that different from how things work now?? (Score:5, Interesting)
As an administrator, I love the centralized configuration. The ability of a person to move from their office to a conference room and pop right back into a session is fantastic. Add the ability to enter a session from home or through a VPN tunnel, and it is really useful.
In the office, that means many people can use inexpensive PCs or thin clients and get everything but 3D. Maintenance is a breeze: since I don't have to install whole bundles of software per machine or tweak up Ghost images, it saves me time.
For most of my users, it makes things easier. They can access their work and stuff from anywhere. The network is either switched 100-base or in some cases switched 1000-base, so there is NO LAG in loading/saving data or running programs. Hell, stuff loads/saves FASTER through Gig-E to our SAN than to a desktop IDE drive!
An NX Server behind the firewall that I can tunnel to gives me a gateway to every PC in the building via translated RDP (Windows) or VNC. AND it seamlessly encrypts the sessions (unlike VNC). All I need is one hole in the firewall for the NX server, instead of one per VNC box. (Yes, I could tunnel VNC over SSH or stunnel, but that is a pain in the ass and NX is so much easier.)
And if KDE 3.2.3 or 3.3 is slow on your stated config, then something is wrong. Spend an evening and compile your own version w/P4 optimizations and remove the excess items you don't use. It should scream.
Re:So... (Score:2, Interesting)
Decent my ass. Over a 1MB down / 256Kbit up cable modem connection, RDP blows massive chunks.
Type
Wait wait wait wait
Type
Wait wait wait wait
Click
Wait wait wait wait.
Horrid.
Heavily JPEG compressed VNC isn't much better.
RDP over a 10Mbit LAN? Still a bit jerky, but at least usable.
Licensing Problem (Score:1, Interesting)
I have a problem. Even though I constantly use TS on a Win2k server, it keeps issuing me a new 90-day temporary license every 30 days, thus never requiring a regular MS TS CAL. Here's how it happened:
1. Installed rdesktop
2. Created the following script to run rdesktop:
I guess I'll figure out the problem someday. Meanwhile my list of temp licenses is getting long. Please don't take advantage of the exploit.
Re:Been there, done that (Score:4, Interesting)
Apparently you weren't even reading. I said "Evas-based apps".
And while NX is still a hack that attempts to work around poor network performance for X11 apps (by wrapping/translating X calls with its own calls), Evas attacks the problem directly by making efficient use of Xlib and eliminating unneeded roundtrips. Evoak, the shared canvas, goes even further: the host canvas runs on the same machine as the X server, and clients connect to Evoak (which has a much leaner protocol and supports compression *and* encryption) rather than to X. This is the kind of technology that could make running remote X apps on your cellphone possible, if resources were available to implement it.
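The roundtrip point is worth making concrete: on a high-latency link, total wait time is dominated by the number of synchronous request/reply exchanges, not by the bytes transferred. A back-of-the-envelope sketch (the numbers are hypothetical, not measured from Evas, Evoak, or NX):

```python
def transfer_ms(payload_bytes: int, roundtrips: int, rtt_ms: float, bandwidth_bps: int) -> float:
    """Total wire time: one full RTT per synchronous exchange,
    plus serialization time for the payload itself."""
    latency = roundtrips * rtt_ms
    serialization = payload_bytes * 8 / bandwidth_bps * 1000
    return latency + serialization

# A 5 KB screen update over a 40 kbit/s modem with 200 ms RTT.
# Chatty X11-style startup: one blocking exchange per atom, property, etc.
chatty = transfer_ms(5000, roundtrips=50, rtt_ms=200, bandwidth_bps=40000)   # 11000.0 ms
# Batched protocol: the same work pipelined into a single exchange.
batched = transfer_ms(5000, roundtrips=1, rtt_ms=200, bandwidth_bps=40000)   # 1200.0 ms
```

With these numbers the payload itself costs only one second; the other nine seconds of the chatty session are pure latency, which is exactly what eliminating roundtrips buys back.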
If you want to find out more go read http://enlightenment.org/pages/systems.html [enlightenment.org].
Re:Wait isn't open source supposed to only copy (Score:4, Interesting)
The post was really a feeble attempt at humor, but since you bring it up...
What I've read about NX is that it is more like x11 than it is like a VPN. Linux does have several VPN implementations to choose from, but NX is really a lightweight windowing client / server protocol.
Microsoft might have tried something like this back in the Win 3.11 days, but all they did there was make some of the Office products client/server capable. After the shift to NT-based OSes and their move to a peer-to-peer network, I think they pretty much closed the door on something like X11 or NX (that's not a criticism; going client-server has as many issues as going peer-to-peer, just different ones).
Besides, as Microsoft has shown us innovation is for losers. Winners let the Apples and Suns of the world innovate. Then the winners merely implement the stuff when it becomes a common standard. That way you don't waste time and money on R&D.
Re:Educate me. (Score:3, Interesting)
They already have those; they're called lights-out management units, and they're available from every major server manufacturer. They sit in a PCI slot and provide network-attached, KVM-style control over the server regardless of what state it is in (even off).
Re:Microsoft had this for years :-) (Score:3, Interesting)
Indeed, the winlogon process does an RPC call to the termsrv module to check whether an additional connection is allowed. Termsrv is statically linked to the licensing module. Some beta versions of SP2 enabled multiple connections. More specifically: replace \WINDOWS\SYSTEM32\TERMSRV.DLL with version 2.1.2600.2055 (find it on the internet; replace it from the command line in safe mode) and add the registry key below to enable it (requires Windows XP SP2 (RC), fast user switching, and non-empty user passwords).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core]
"EnableConcurrentSessions"=dword:00000001
Re:thin clients revisited (Score:3, Interesting)
IBM had, what was to me, the best thin-client system on the planet. It was heavily OS/2-based, but there was a version for Windows as well! (First link above). Full fat clients running full fat operating systems, but served, managed and administered from a central server. Hardware dies? Use a different computer: get the same operating system, environment, etc. Roaming user? Get *your* OS from any computer. Need to deploy an app to 1000 users? Install once, and drag and drop an icon. Done.
To me, *that* is Network Computing. Not glorified VNC to a single computer...
Re:Kan we say marKeting? (Score:2, Interesting)
I am not sure the cross training is a factor.
I work in a perpetually cash strapped industry.
One of my clients did a 'study' (not scientific, but they spent a few weeks on it, with 12 people) on the migration costs from Office 97. Half the group migrated to Office XP and the other half migrated to OO 1.0 (pretty sure it was 1.0; this was in early 2003).
They found it was easier for the end user to migrate to OO than to Office XP. Lots of confusing stuff in XP, and many changes just for the sake of changing. They also scored exactly the same on the document migration process -- yes, XP wouldn't always get an O97 Word doc right! (The spreadsheets worked 100% in XP, not very well in OO.)
They went to XP, MSFT gave them a ~$21/seat deal because they are a non-profit. Part of the deal was they had to buy a license for every seat, whether it needed Office or not. Sound familiar?
It's not a training issue, as 97 and XP are APPARENTLY just as different as 97 and OO.
JON
P.S. small study. meaningless. I know. But think about it...