
Are Unix GUIs All Wrong?

BrightIce writes "Advogato has an interesting article about GUIs on Unix. The basic message is that it is wrong for the command line to be 'a completely separate world', and it proposes some interesting ways the CLI and GUI could cooperate." The feature actually isn't all that long - but I'm sure the discussion can get going from there.
This discussion has been archived. No new comments can be posted.

  • You couldn't be closer to the truth.

    However, your last paragraph brings up the idea that it still must be possible, and easy, for 'power users'. Nobody seems to be able to solve the problem of a GUI that is intuitive and simple for the average user, but that will scale up to the needs of a 'power user'.

    An added bonus would be a GUI that made it easy for people to develop their abilities with little effort.
  • Say UNIX didn't have a command line.

    A la MacOS, or a la NT.

    I have too often had to re-install a killed system, because it could no longer, for one reason or another, reach the complicated state of the GUI.

    Having the fallback of console mode is a UNIX strength and let's not forget it!

    The article in question doesn't address any of the strengths of UNIX GUIs; it just ponders the unaddressed feature possibilities of the standard tools (tar and rm are examples). These aren't core UNIX issues, and, with a bit of Tcl, could quite easily be implemented.

    The strengths of the UNIX GUI (I'm talking about X here) are:

    The client/server philosophy, allowing X clients to run on one architecture and X servers to display those clients on a completely different architecture. (E.g. you can run X applications on a Cray and display them on x86 or even an embedded architecture. You can even display Cray applications on an iPaq - bloody hell!!!)

    and

    Placing the "look and feel" into user space. Whatever the desktop environment, any user can change the L&F by simply choosing a different window manager at start-up. kdm has a lovely way of doing this.
  • I agree, but imagine if tar provided its status info to the OS through a more generalized interface, so it was stored in the process table. Some people would run a GUI that exploits this - you wouldn't. But if you're curious about how far that backgrounded tar got, you might type 'status --complete $PID' and get back '32%' or something. This would be a much cleaner solution than having tar attempt to drive a GUI directly.
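    For illustration only, such a status interface could be approximated today with nothing but the filesystem; the directory layout and the job id below are invented for this sketch, not an existing tool:

```shell
# Hypothetical status protocol: each job publishes its completion
# percentage under a well-known directory, and the imagined
# 'status --complete' query reduces to reading that file back.
STATUS_DIR=${TMPDIR:-/tmp}/demo-status
mkdir -p "$STATUS_DIR"

# Job side (standing in for a long-running tar): publish progress as we go.
for pct in 10 32 100; do
    echo "${pct}%" > "$STATUS_DIR/job42"
done

# Query side: what 'status --complete job42' would boil down to.
cat "$STATUS_DIR/job42"
```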
  • What you're talking about has long ago been thought of, implemented, and then cloned many times. The first incarnation, FAPP, is Norton Commander under DOS, created in the early 90's, IIRC.

    For more info, see the Softpanorama University Orthodox File Managers Site [softpanorama.org].

  • The Unix shell CLI is predicated upon the notion that every piece of code has essentially two streams: a set of bits that flows in and a set of bits that flows out. That's a fairly powerful metaphor, and, if you have the cycles to throw at it, even universal. Look at XMLTerm if you want: although it's incredibly clumsy and unbelievably slow, it really does allow the user to go back and forth between CLI and GUI.

    But there's a problem with the control structure of that model. The computer is in control; it reads from stdin when it wants to do so, and it writes to stdout when it wants to do so, and the user is the slave of the processor. For CLI-like tasks, and particularly those where the computer can itself generate a data stream, process it serially, and then spit out the result, it's exceptionally efficient.

    The GUI has an essentially user-centric control model. There is not just one stream of data: the mouse and the keyboard are independent of one another. Components can be embedded in one another in non-linear fashions: the user can enter data into any of several slots, and, when a certain conjunction of data is present, a new set of slots or actions can become available to him. In order to build an abstract data model for such a machine, you need to step from the clean and comprehensible linear machine-based Unix CLI into a Petri-net based system.

    Can it be done? Of course. But expressing a Petri net in text is very hard, and so it's quite unnatural to use a CLI to describe GUI-like processing. Imagine trying to express the idea that "when data are available at each of these two streams simultaneously, do x and send the result to component seven" for a system with twenty or thirty components. Then understand that my screen has approximately one hundred different active elements on it right now. How do you describe that as a pipe-stream?
  • You know, there was a time when shells were optional components on computers. Of course, the people who really understood computers wrote their own program loaders and didn't have to rely on a command line. The shell is just a usability tool, and has nothing to do with how the computer "really works".

    Just like the shell, the GUI is just another form of abstraction which makes certain tasks easier. Sure, cp *.foo ~/foo is easier with a command line, but for people who create filenames that aren't easily globbed, it's probably easier with a mouse (and there's no amount of teaching that can solve that problem). Both are an abstraction of how the computer moves inodes around on the disk.

    I think the Unix perception is "Because the init, login, and the startx process is a massive hairball of shellscripts, that must be How It Really Works(tm)." Well, no, that's just Unix's kludgy way of making it work by exchanging flexibility for fragility.

    We've now got a whole bunch of interesting component technologies free and widely available which offer lots of possibilities in terms of providing a user interface (CLI, GUI, or hybrid). Don't lock yourself into thinking that stdin and pipes are the be-all-and-end-all, because they're nothing more than a '60s hack -- and once you've broken that conception, the idea that everything boils down to a CLI command doesn't hold much weight.
  • Just been reading the posts and having a think, and my initial reactions go along these lines:
    • Yes, there is a problem with CLIs being a bit opaque for newbies, therefore alienating them from the power of the operating system.
    • Yes, it would be nice to be able to get graphical feedback from the tools that we all use (and love, once we get used to them) on the CLI.
    • No, it would be a really bad idea to try and drop all this 'added functionality' into the tools, because you run the risk of breaking/bloating something that really doesn't need it.
    So if we want to start to provide some added functionality to our existing CLI, then it has to be some kind of wrapper around the existing tools: i.e. no recompilation of the tools, no dependence on external graphical libraries (whichever one is your own chosen 'true path'), and no compromise of the tools for the power users who have used them for years and would be up in arms if anybody changed them.
    What can we find out about a process that might be useful in a graphical pop-up thang? We know what processes are running by seeing what spawned what, we have all the usual system resources about how much memory/cpu etc that a process is using, but this still doesn't really tell us anything about the process.
    What would be really nice would be to know what is going into the process and what is coming out the other end of it, i.e. if I type:
    tail -f /var/log/messages | grep kernel
    and I don't see anything, is that because syslogd has crashed, or just that the kernel is quite happy? What would be nice would be some way to tap into the points between the processes and look at the communications between them. Suppose we rewrote the previous command line as:
    tail -f /var/log/messages | monitorPipe | grep kernel
    where monitorPipe is a small process that is transparent as far as input --> output is concerned, but then registers itself with your favourite graphics environment and lets people query what is actually passing through the pipe. This query could be in terms of "display the output in a scrollable window", or "plot the average speed in CPS in a scrolling graph", or whatever. It is no longer the responsibility of the process that is outputting or inputting this information to guess in advance what people might like to ask it; it only has to do what it was designed to do and do it well.
    The next thing to do is to try and sort out some way of getting your pipes to be aliased into 'enhanced pipes' depending on whether you are running your shell under a dial-up, or if you have provided an enhanced pipe that communicates with your environment of choice. Off the top of my head this would probably require some kind of patch to the shell or some kind of wrapper around the shell (enhanced xterm kind of thing?).
    Let's stick with the enhanced xterm that filters your input and 'enhances' it to give 'better' functionality. We would now be in a position where you could have a drop-down list beside your xterm listing the processes that are currently running, letting you look at the input/output of each process, maybe even some kind of folded-in multi-stage diff across the various pipe plugins so that you can see what is actually changing.
    Other posts had talked about the need for pop-up lists for autocompletion. Again, just trap the "tab" key in situations where you think it might be going to auto-complete, pass the request onto the shell, get back the list of alternatives and pop up a widget. What is the problem?
    Basically what I am trying to say is that I feel it is a very bad mistake to assume that just because a certain tool (eg tail) doesn't give you the functionality that you want/need, that patching this tool is the place to insert the functionality. This way only leads to a massive fork of all the low level tools that people rely on to get their work done. Instead try and find a single higher level point to patch in your functionality and you should be able to make a much more extensible system.
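    The monitorPipe idea above can be sketched in a few lines of plain sh. This is only an illustration: the side file standing in for "registering with your favourite graphics environment" is invented, and a real version would need to cope with binary streams and partial lines.

```shell
monitorPipe() {
    # Copy stdin to stdout unchanged, while publishing a running line
    # count to a side file that a GUI front end could poll.
    side=${MONITOR_FILE:-/tmp/monitorPipe.count}
    n=0
    while IFS= read -r line; do
        printf '%s\n' "$line"
        n=$((n + 1))
        echo "$n" > "$side"
    done
}

# Transparent as far as input --> output is concerned:
printf 'one\ntwo\nthree\n' | monitorPipe
cat /tmp/monitorPipe.count
```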
  • The author seems to be arguing for adding GUI functionality to the CLI; I would say it should be the other way around; why can't we add CLI functionality to the GUI? Why can't I select from a listbox by typing a regex, instead of selecting each element one at a time?
  • ...

    Unix is the only OS remaining whose GUI (a vast suite of code called the X Windows System) is separate from the OS in the old sense of the phrase. This is to say that you can run Unix in pure command-line mode if you want to, with no windows, icons, mouses, etc. whatsoever, and it will still be Unix and capable of doing everything Unix is supposed to do. But the other OSes: MacOS, the Windows family, and BeOS, have their GUIs tangled up with the old-fashioned OS functions to the extent that they have to run in GUI mode, or else they are not really running. So it's no longer really possible to think of GUIs as being distinct from the OS; they're now an inextricable part of the OSes that they belong to--and they are by far the largest part, and by far the most expensive and difficult part to create.

    ...

    THE INTERFACE CULTURE

    A few years ago I walked into a grocery store somewhere and was presented with the following tableau vivant: near the entrance a young couple were standing in front of a large cosmetics display. The man was stolidly holding a shopping basket between his hands while his mate raked blister-packs of makeup off the display and piled them in. Since then I've always thought of that man as the personification of an interesting human tendency: not only are we not offended to be dazzled by manufactured images, but we like it. We practically insist on it. We are eager to be complicit in our own dazzlement: to pay money for a theme park ride, vote for a guy who's obviously lying to us, or stand there holding the basket as it's filled up with cosmetics. ...

    Excerpts of "In the Beginning was the Command Line" by Neal Stephenson.

    Worth a read.

    --ricardo

  • What if the Unix GUI didn't need a mouse?

    A very interesting idea, IMO, but it would not work nevertheless. Sure, you can give a GUI the exact feel of a plain command line, but even then it would not be exactly the same. I think the author is missing a very important point here. In theory he makes some very good points, IMHO, and indeed, in theory it could very well work out. But in the real world people tend to associate GUIs with completely configurable environments. IOW, I can decide how the windows get placed, how big the fonts are, and the like. That's not really new. But going back to the article:

    What if the shell popped up a window, showing you the last N commands you typed, or showed you what you would get if you hit tab at that point? Or let you mix a file selection dialog with the command line: you could type rm, select the files you want to remove

    In this example the GUI would almost feel like a command line, yet it would not work. The "idiots" would demand a more flexible and easier way to administrate their boxes, while the more advanced users would still complain about the time it takes to transfer all the used data over the Net in order to remotely administrate these servers.

    Like I said, the author is missing a point here, at least IMHO. If you really know what you are doing you don't need a GUI, period. Not even when you're administrating a Windows-based (web)server.

    My company maintains some IIS (NT-based) webservers, mainly because we need the options NT offers (ASP). Sure, there may be many other options which can provide non-ASP based solutions, but that isn't an option and not the point here. Some time ago we had a major problem on the IIS part and I asked a colleague to help me out. I know my way around in IIS, but I'm still, and probably always will be, *nix minded and therefore not interested enough in IIS to specialize. He took control of the box using a remote admin tool and, to my surprise, immediately did a "run program -> cmd". From there he typed like hell and did the rest to sort out the problems (yes, the solution worked as well).

    Noticing my surprise, he basically told me that Windows never did let go of the command line *if* you knew how to use it.

    Experiences like this made me wonder about this whole GUI stuff as well, and IMO it all boils down to acceptance. People accept that you can or cannot do certain things in a console, and get excited if something is possible. In a GUI people are more demanding, which tends to lead to more excessive environments. Personally, I've never seen anyone on a console get worked up about where a certain error message was placed, yet in a GUI environment I tend to see people even move error messages to another spot...

  • by Alan ( 347 )
    This is exactly what I liked about EFM (before the beta was removed). It combined command line and GUI filemanager perfectly (IMHO).

    For those of you who never got a chance to use it, it worked like this: it had your standard desktop with icons and folders that you could open, but at any time you could just start typing and hit enter. What you typed could be a directory name (i.e. /etc), and it would open a window with that directory. It could be a command, simple or complex, such as "rm foo" or "for i in *.foo ; do foobar $i; done", and the command would execute.

    It probably had a lot more than that (beautiful graphics, alpha blending, a background you configured via layers (a la gimp/photoshop), etc), but those were the things that made people go "oooohhhhh" when I showed them to them.

    I know there are some systems that have something like this (Konqueror has a command line option in it, I think), but it is still a separate entity. EFM had a (mostly) seamless integration of the command line into a GUI filemanager.
  • Humans are linguistic animals. We have very finely tuned linguistic skills, and we have linguistic centres in the brain with at least 100,000 years of evolution behind them. As a result I'd have thought that language was one of the best ways of communicating with a machine. I certainly find it easier to learn a lexicon and grammar (i.e. command-line commands and how to use them) than a list of operations like "the third menu, select the second item, click on the third checkbox that comes up" and so on. In addition, I find a mouse to be a hopelessly analogue tool for linguistic tasks such as programming. GUIs bear little relation to the way humans have communicated for aeons. Unfortunately, eye candy sells, and so we are doomed to using tools that are not adapted to human brains: GUIs.
    --
  • I'm really offended.

    I don't post trolls.

    It just shows how broken the moderation system is when someone disagrees with someone else who expresses a strong opinion.
  • by sheldon ( 2322 ) on Sunday February 11, 2001 @04:31PM (#439404)
    It may be one of the Mac's strengths... But it has also been one of the reasons why the Mac has been a failure.

    Those of us who are adept with computers do not appreciate the way the Mac treats everybody as an idiot. I've found the Mac incredibly frustrating to use over the years, but that's not to say I wouldn't recommend it to a friend I didn't want calling me for help. Thus the problem is that the adept people who might actually support and write software for the Mac universally despise it, which can't help its marketing.

    Granted, Windows started down the path of treating everybody as an idiot. But then, most people are. But Windows also provides you with the other tools one needs to actually dig down deep and do your work. So it's simple enough for idiots, powerful enough for programmers.

    Now Unix on the other hand has the unique distinction of being the OS for everybody but idiots. It's elitist, but it works for some people.

    So we have a different OS for everybody, and obviously the one which has taken the middle ground is the most popular.

    The problem I see is that if Unix starts going down the path of Windows, it will simply become Windows.

    And at that point, why not just save a lot of effort and use and improve Windows to begin with?

    I've never been a big fan of the Linux everywhere argument, I think it's silly.
  • Here's what's wrong:
    mv: when moving multiple files, last argument must be a directory

    If there are no .html files in the directory, the shell won't expand *.html - it will pass it as a literal to mv. mv sees that 1) it has more than 2 arguments and 2) the last arg is not a directory, so it exits with an error message.
    If there's one .html file in the directory, the shell will pass it to mv as the last arg, and again mv will error out because it's not a directory.
    If there are multiple .html files, the shell will expand *.html to the list, and the last file in the list will be seen as the target directory, with the same results as above.
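    The three cases can be seen directly by letting echo reveal what the shell actually hands to mv (assuming the default globbing behaviour, where an unmatched pattern is passed through literally):

```shell
# Work in a scratch directory so the glob results are predictable.
demo=$(mktemp -d)
cd "$demo"

echo mv *.html      # no matches: mv gets the literal string "*.html"
touch a.html
echo mv *.html      # one match: "a.html" becomes the last argument
touch b.html
echo mv *.html      # several: "b.html" would be taken as the target directory
```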
  • Go read it again. If you want to delete all files beginning with "foo", but not "foobar" or "foobaz", you have a hard time using file completion.
    -russ
  • No, you misunderstand. AtheOS has the kernel & appserver separate. However, AtheOS does not have any way of running the system at a pure command line (other than Telnet or SSH). When you boot AtheOS, the appserver is loaded and started for you, and you use the GUI. You can open a terminal though, and use bash, so you do have a command line.

    The kernel & GUI components are not integrated into a big ball of mud as you seem to think, but are fully separate. My best advice is to try AtheOS (now on VMWare, so you have little excuse ;)
  • by b0z ( 191086 ) on Sunday February 11, 2001 @02:15PM (#439408) Homepage Journal
    It's funny to see people lying about Windows just as much as Microsoft does about Linux. Honestly, you are no better than Microsoft's team of liars.

    I have many computers. I ran one with Windows 95 for about 3 years with no major problems (and no regular blue screens, except when messing with shady software). It was a P133 with 32MB of memory, and that was enough to run what I did, which included photo editing, Internet Explorer 4, Netscape 4.5, etc. Really, it wasn't that bad. I recently got another computer that is running Windows ME, with which I have had no significant problems either. I have gotten more blue screens than I did on Windows 95, but even then I'd say I've had a total of 4 in the past month. That also is from dealing with shady software (god damn you zsnes!)

    I have many other computers that have had nearly no problems as well. Maybe you are having so many problems because you fucked up your Windows installation. Just because you don't know what you are doing with Windows doesn't mean that it is completely unreliable; it just means you need to learn how to use it. I really hate to see people that claim to support open source lowering themselves to lying about the reliability of Windows. It doesn't make Linux any better if you pretend that Windows is completely useless. If you really wanted to help make Linux a better desktop, figure out how to get it to be compatible with more hardware, write drivers, write some plug-and-play detection that works. Do something so Linux as a desktop will actually have the ease of use people expect from Windows. Bitching about a product you obviously know nothing about doesn't solve anything.

    Oh, and that bit about you taking the calls for a helpdesk shows nothing about the quality of Windows. It is not an indication of quality, only of how widespread Windows is as an OS. Also, it's probably the only OS your ISP supports, so the Linux and MacOS people don't bother calling because they know they won't get support. Try to use your brain a little instead of bashing Windows all the time.

  • The strongest advantage of the Unix CLI is modularity: little utilities that do a specific thing, and can be piped together. The more bells and whistles you add onto these utilities, the more it violates this philosophy, and the more likely it becomes that side-effects will keep them from playing nicely together.

    The biggest advantage of a Mac-style GUI is consistency: every program has a File menu with similar commands, and so on. Interfacing it somehow to a CLI would just make it more complicated and less consistent.

    I like the fact that the GUI and the CLI are completely separate in MacOS X, for example. It keeps everything conceptually simple. And you'd better believe that the average Mac user is completely uninterested in the CLI -- that's why Apple has made it so clear that they don't want software developers ever to assume users will use the CLI.


    The Assayer [theassayer.org] - free-information book reviews

  • Okay, again, sorry, my fault :)

    Truth be told, I am sucking back NyQuil quite quickly, and I'm having trouble focusing :)

    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • > I wonder what we would have in the way of tools if people said "Okay, EVERYBODY has a GUI". That is, if there was no console mode per se. If the first thing that init did was throw you into X instead of the last thing.

    I can say exactly what we would have: Windows NT.

    (This is where the smart people stop reading and go write a terribly clever Microsoft dig. Us dumb people can continue.)

    Yes, NT does have a command line -- two, actually. CMD.exe is a 32-bit extended DOS shell; command.com is its 16-bit little brother. Are they as robust as most *nix shells? No, of course not, mainly because the design philosophies of the two OSes are VERY different. NT is (in design philosophy) a 32-bit VMS offshoot that uses a GUI as the main interface.

    Because NT shells are typically weak programming environments, most serious programming is done via the GUI. This lends itself well to the object-oriented design philosophy behind Windows NT. For simple tools and tool combinations the shell is still needed. When I need to manipulate multiple systems across a network for administrative purposes, I find it very easy to write a script for most simple operations.

    Getting back on topic, how does this affect the types of programs produced that interact between the two? Well, typically there are two effects:

    1. The command line arguments are not as robust.

    Normally, programs run from the command line will concentrate on interface-related functions (i.e. run minimized, non-interactive mode, run with the following configuration, location of data files to be loaded before the GUI starts). This is a generalization, but it is the exceptional NT program that takes more than 1 page in a help file to explain.

    2. Tools with GUI's evolve to meet the majority of needs.

    This is not to say that the OS manufacturer provides the tools. For example, Microsoft provides many tools for systems administration including Event Log Administrator, Server Administrator, User Manager, etc. A third party tool called Hyena combines these all into a single interface which can output data to a script generation tool. Is it as flexible as a series of command line tools which can interoperate well? No, it isn't. However it fulfills 95% of my systems administration needs.

    There is also a secondary effect: very little work is done toward new shell creation. *nix has a rich blend of shell environments developed for it (e.g. bash, csh, tcsh, etc.). NT has had very few (and rather weak) NT-specific shells created (e.g. 4NT, KiX). Many shells have been ported to NT from *nix, but they typically don't thrive. Why? The file-centric command-line shell environments aren't as effective in NT as the object-centric GUI programming environments.

    The end effect? The GUI tools created tend to be very powerful but lacking the flexibility of command line tools. (IMHO)

    Dan

  • No, you're adding a command line into a GUI. That's not what I want. I want my command-line applications to know that they've got the resources of a GUI.
    -russ
  • When tar runs quickly, that's fine. When I'm doing an untar over the network, you can bet I want a progress bar.

    The command line rules because it does what you tell it? Obviously YOU have never typed "rm -rf foo *".
    -russ
  • I'm sure there could be an easy way to implement an "I-don't-want-this-program-to-use-X" switch...

    Emacs does this already. It first checks to see if the $DISPLAY environment variable is set. If so, it behaves like an X app on that display. If not, it behaves like a console app. It's a simple trick that causes Emacs to usually do the right thing.

    You can also change the behaviour by using the -nw (no window) or -d (display) switches. If we had a couple of standard switches like these to turn off and on the X-isness of applications, this plan would be entirely workable.
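    The same trick can be sketched in a few lines of sh: decide between X and console behaviour from $DISPLAY alone. The "window" branch here is only an echo placeholder, not a real X call.

```shell
# Emacs-style display detection: if $DISPLAY is set, behave like an X app
# on that display; otherwise fall back to console behaviour.
if [ -n "$DISPLAY" ]; then
    echo "GUI mode: would open a window on $DISPLAY"
else
    echo "console mode: plain text output"
fi
```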

    Greg

  • And then you can run an abritrary command in that directory?

    Not from where I sit. I right click on a file, or in the empty window, or a directory and I get no way to execute a command.

    You seem to have missed his point - sure, you can use most file managers NOW to delete and rename. What he wanted was the next step: being able to run commands from the file manager and have them work in concert with the file manager - you traverse directories or select files graphically, then operate on them with the command line.

    My example - imagine selecting four files and then just typing tar -cf fred.tar in the command line below the window. You get a tar file in that directory named fred.tar containing the four files you just selected. Or, typing ls *.o in the command line would automatically change the filter in the window to show only .o files.

  • ...force people into a GUI environment

    But we're already IN a GUI environment. Is there *anyone* who still uses Alt-F1, Alt-F2, etc as their "windowing" environment? I used to, but netscape is an essential tool. In fact, I'm soaking in it now.
    -russ
  • I saw a suggestion on here I liked, but it wasn't modded up. So I'll try it for him with my own little spin on it...

    We already have stdin, stdout and stderr. Why not three more for GUI manipulation? gstdin, gstdout and gstderr, or whatever you want to call them. Then you could just write to them if you would like GUI output, maybe in some manner of XML-ish language, and a special program or daemon would interpret and display the widgets and windows and status bars by reading the stream.

    Then all you have to do is just not use them if you don't want a GUI. Or redirect them to a log file if you want and use the software to play back a graphical log of the actions of a program later on. Sounds reasonable.

    It wouldn't break any existing programs, 'cause they just wouldn't use the new streams, but they could easily be extended to take advantage of it.
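    A rough sketch of a "gstdout", using an ordinary extra file descriptor (fd 3, chosen arbitrarily here) rather than a new kernel facility: normal output still goes to stdout, XML-ish GUI events go to fd 3. A GUI daemon would read that stream; redirecting it to a file, as below, is also the "play back a graphical log later" case. Programs that never write to fd 3 are simply unaffected.

```shell
# Ordinary output to stdout, invented <progress/> events to fd 3,
# with fd 3 captured in a log file for later playback.
{
    for i in 1 2 3; do
        echo "processing item $i"
        echo "<progress value=\"$((i * 33))\"/>" >&3
    done
} 3> gui-events.log

cat gui-events.log
```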

    Justin Dubs
  • "switch"?? Who needs a switch. If you don't want a CLI program to use your GUI, unset your DISPLAY environment variable. With no display to display on, it doesn't display.
    -russ
  • by Tackhead ( 54550 ) on Sunday February 11, 2001 @04:47PM (#439447)
    > More flashy icons [ ... ] An OpenGL rendered boot-up screen

    D00000D! U forgot da Sk1nZ!!!!!!! If 1T duZnt half sk1nz, 1t sUx0rz!!!!

    (But apart from that, you just described the design philosophy behind Mozilla to a "T" ;-)

  • by gwonk ( 23993 ) on Sunday February 11, 2001 @04:47PM (#439449)

    Everything he is asking for can be done already (a few lines of Perl, Python or Tcl can produce the progress bar he's looking for), and most window managers allow for assigning key commands (get sawfish, learn lisp, hack away). There are lots of ways to perform the processing he is looking for. Unix (and Linux) is great at this stuff because the GUI crap isn't built in. (BTW, which toolkit would you use? Are we going to have tar for GNOME, tar for KDE, tar for Athena, tar for ...?)

    If he really wants all that flashy crap, go to town, write some code, LEARN SOMETHING. Don't make the rest of us re-alias all the commands to turn off USELESS FLUFF! Pick your tools, learn how to use them, and shape your system to your liking. That's one of the benefits of open source.

    Don't trade substance for style. Adding flash is easy, removing it nearly impossible.

    g

  • Personally, I think a term that would take

    cat photo.png page.html

    and render the image and/or HTML on the page would be a wonderful addition to the command-line toolset. However, the ability to work in text-only mode is a critical part of the appeal of Unix.

    That's the way I would like to see it - basically the opposite of the article's idea. Instead of having CLI programs also support a graphical interface, make graphical programs support interactions with CLI programs....

    It can't be too difficult; Plan 9 did it a long time ago, after all... Then again, I'm a lowly Perl/C++ programmer and not a kernel hacker/graphics hacker/windowing system hacker/whatever, so what do I know...

    I want to be able to say:

    wc -w /somepath/window/1

    And get the number of words currently in the window with ID 1 (which might be a text editor with my thesis open or something - of course there is the question of whether it counts just the text currently visible or the whole file's text)...

    cat somefile > /somepath/window/new

    Should open a window with the text of somefile in it...

    rm /somepath/window/1

    Could close the window with ID of 1.

    cp /somepath/window/1 foo

    Could save a copy of the web page I'm viewing in the web browser with window ID of 1...

    Take the 'everything is a file' concept a bit further and turn the windowing system into a file system (as an alternative way of interacting with it)...

    echo "20dd" > /somepath/window/1/cmd

    Could send 20dd to the application running in window 1, and vi would delete some stuff...

    But I'm not going to code it so I'll shut up about it now...
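    For illustration only, the mapping can be mocked up with a dict standing in for the hypothetical /somepath/window/ tree (no real windowing system involved; everything here is invented to match the commands above):

```python
# Toy model of the "windows as files" idea: a dict stands in for the
# hypothetical /somepath/window/ tree, so each shell command from the
# comment maps onto a plain operation. Purely illustrative.
windows = {}
_next_id = [1]

def open_window(text):          # cat somefile > /somepath/window/new
    wid = _next_id[0]
    _next_id[0] += 1
    windows[wid] = text
    return wid

def word_count(wid):            # wc -w /somepath/window/1
    return len(windows[wid].split())

def save_copy(wid, path):       # cp /somepath/window/1 foo
    with open(path, "w") as f:
        f.write(windows[wid])

def close_window(wid):          # rm /somepath/window/1
    del windows[wid]
```

    A real implementation would presumably expose the same operations through something like a userspace filesystem, plan9-style, but the point is just how small the vocabulary of operations is.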

  • 1) I think of myself as an adept person -- I have more code in Linux distributions than most people here -- and I have never thought the Mac treats me as an idiot. On the contrary, the ease of use allows you to take chances (because there's nothing you can screw up that can't be fixed by rebooting with extensions off) where most Windows users are terrified of the prospect of changing a single setting.

    2) However much of a "failure" the Mac itself has been, a copy of it runs 90-something percent of the world's personal computers. And those rebel Linux desktops are busy copying that copy.

    3) You can't have encountered too many Windows users if you think they're all people who found the Mac interface insufficient for their advanced computing needs.

  • Funny, I thought I left Windows to escape that horrible GUI they forced me to work with, among other things.

    Like I once saw in an article, "GUI" to me means four xterms and netscape open. This is how I work best, fast and furious. It is easier to type commands and read textual output than it is to spend time controlling windows with a mouse, and it is far easier to control windows with a mouse than with a keyboard.

    Not to mention the fact that my two-floppy firewall is now impossible, since I need to have X on there in order to run ipchains or ipfw.

    The world does not always need to be about fancy pictures shining at us; we aren't little babies who are attracted to anything that moves and shimmers. Some information, like the output of tar, is best left to characters.

    Furthermore, we'd need a standard graphics system for displaying this stuff, and a standard widget set, and all that good stuff.

    If you want to select files to delete from pictures, get a graphical filemanager. If you want to see how many lines scroll by in a given time period while running tail (not very useful, if you ask me), write a program to count this and display it in the upper line of characters. It's all open source, you can modify tail itself, if you like.

    Binding us to GUIs is a foolish idea. They are often cumbersome and resource-heavy. Let's not go the way of Windows; we can all see the shit it's become.

    A new year calls for a new signature.

  • by Russ Nelson ( 33911 ) <slashdot@russnelson.com> on Sunday February 11, 2001 @04:51PM (#439463) Homepage
    You have a good point, a very good point. If Unix is so good at pasting tools together, then there's no need for tar to be modified. Okay, how about this, then:

    xprogress gzcat files.tar.gz | tar xf -

    xprogress would stat stdin, fork the rest of its command line with stdout going through a pipe, and show a progress bar over the length of the file.

    But that's just tar. There's a bunch of other things I'm wanting.
    -russ
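    In case anyone wants to play with the idea, here is a rough Python sketch of the hypothetical xprogress (the name and interface come from the parent comment; this is not an existing tool): it stats the input file, feeds it to the rest of the command line through a pipe, and reports percent complete on stderr.

```python
"""Rough sketch of the hypothetical xprogress tool: stat the input
file, pipe it into the given command, and report percent complete on
stderr. Names and interface are the parent comment's invention."""
import os
import subprocess
import sys

def xprogress(path, cmd):
    total = os.path.getsize(path)
    child = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    done = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(65536)
            if not chunk:
                break
            child.stdin.write(chunk)
            done += len(chunk)
            # progress bar stand-in: overwrite a single stderr line
            sys.stderr.write("\r%3d%%" % (100 * done // total))
    child.stdin.close()
    sys.stderr.write("\n")
    return child.wait()

if __name__ == "__main__" and len(sys.argv) > 2:
    # e.g.: xprogress.py files.tar.gz tar xzf -
    sys.exit(xprogress(sys.argv[1], sys.argv[2:]))
```

    Because the progress goes to stderr and the data down the pipe, the downstream command never knows the wrapper is there.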
  • Are those who are not familiar with emacs compelled to re-invent it badly?

    I tend to work almost exclusively inside of emacs, using dired and sub-shells. You want a file-manager that lets you work without a mouse? Think about dired (it can also be run outside of emacs, though I can't comment on how well). Sometimes I might be typing a command in a shell window and decide to look up a file name: so I split the shell window in half (C-x 2), run dired in one of them (M-d), cursor through the directory listing, select a file name (C-x SPC, C-e), copy it to the clipboard aka 'kill-ring' using the command 'copy-region-as-kill' (which I personally have bound to C-z), skip to the shell window (C-x o) and paste the name in place (C-y).

    After awhile you get pretty fast at these kind of manipulations, and I much prefer it to dorking around with a mouse...

    (On the other hand: I have yet to find any useful method of doing things recursively inside of dired, e.g. cutting a tree. But then, if I cared enough I could write a modification to dired to do this, much in the same way that I've written commands that do things like copy selected files to the directory displayed in another dired window. It'd be nice to get some more people involved with polishing up the emacs way of life... Another thing I'd like: a way of doing an "su" inside of emacs/dired...)

    Anyway, in conclusion I want to point out that you really shouldn't get caught up in the CLI vs. GUI dichotomy. It's entirely possible to have a windowing system which is *not* graphical, e.g. emacs.

  • >Actually, I really believe if Linux were the
    >default desktop in corporations, there would be
    >far fewer help desk calls

    I disagree - there would be more calls, just different kinds of calls.

    Instead of "This is broken", you'd be seeing an awful lot of "I can't figure out how to..."

    -LjM
  • And you ssh into these machines from a real VT-100?? Or a simulated one running under X.
    -russ
  • Nope, that doesn't work, because the shell expands wildcards before "mv" gets a chance to see them. It works under dos, though.

    It is one thing I don't like to admit, but here dos actually has a better command line than unix. But for every task other than renaming files, I still prefer Unix. I just hate that renaming files is so darn difficult :-)
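    The workaround is to put the loop in a script, since the shell has already expanded the wildcard by the time mv runs. A minimal Python stand-in for a DOS-style `rename *.txt *.bak` (sketch only; error handling omitted):

```python
# Minimal stand-in for DOS-style "rename *.txt *.bak". Since the shell
# expands wildcards before mv sees them, the loop has to live in a
# script instead of on the mv command line.
import glob
import os

def rename_ext(old_ext, new_ext, directory="."):
    renamed = []
    for path in glob.glob(os.path.join(directory, "*" + old_ext)):
        new_path = path[: -len(old_ext)] + new_ext
        os.rename(path, new_path)
        renamed.append(new_path)
    return renamed
```

    The same loop is a one-liner in most shells too; the point is only that some program other than mv has to see both the pattern and the replacement.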

  • by ksheff ( 2406 ) on Sunday February 11, 2001 @08:49PM (#439477) Homepage

    Ok, so you can control KDE2 stuff via DCOP. What does that do for all the command line stuff that existed long before KDE was even conceived?

    What about a system that provides a GUI for those command line tools, such as Kaptain (http://www.hszk.bme.hu/~tz124/kaptain/ [hszk.bme.hu])? It's a Qt app that allows the user to set up scripts or grammars that define the CLI options. Kaptain builds the GUI and then fires off the CLI tool with the parameters that were set via the GUI. It seems to work ok. It currently just sends the stderr/stdout of the program to the shell where you ran the kaptain script. However, to fully take advantage of a GUI environment, it should pop up a window and display the output there (it may do that already, I've just started playing with it).

  • Think about it; anywhere in the GUI, you could pop-down the console and type a command. That would cover most of the text-based stuff, but if the program did something else (like a Linux game accessing SVGALib), then the program would start either in its own window or full-screen.

    I'm surprised that there isn't an OS GUI that does this yet; perhaps I should make the Windows version of it myself. It could be Quake's donation to the GUI world.

  • If I posted a comment to slashdot about linux not having a decent GUI or not being able to operate on a wide variety of hardware, I'd be flamed out of existence. However, I can easily post about windows being unstable and frequently BSODing, even though I haven't had a problem with BSODs on a properly configured machine, and win2k doesn't seem to need to reboot. Things that were true 5 years ago aren't true today. Linux does have some very nice desktops, runs on most hardware (although cutting edge still might be a problem), and windows has gained a lot of stability since the win95 days.

    Windows does have its faults, but if you want to start ranting about stability you sound like a bunch of linux zealots without a clue. Anecdotal evidence does not prove a point. If it did, I could "prove" that linux "sucks", since I've seen my gnome desktop crash more than a few times, and I've managed to crash and trash the hard disk of a redhat 6.2 install as an unprivileged user repeatedly. (A recent release of ADOM would always do this when I tried to save and exit.)

    Don't be afraid to admit that windows does have some good points. We don't have to promote linux by tearing down the competition; it's much better to promote linux by its own merits.

    As for your problems, there is a registry hack to unload *.dlls after you close a program; it helps to free up resources (especially mem usage). Also, if you have been getting a lot of BSODs, check your setup, something is wrong with the box somewhere. It's my humble opinion that a stable windows box should have few, if any, bsods.
  • by doom ( 14564 ) <doom@kzsu.stanford.edu> on Sunday February 11, 2001 @03:06PM (#439481) Homepage Journal
    What I would like to see is a combination of both of these. For example when I am browsing around in a file manager, sometimes I want to just rightclick and delete a file or something simple. Other times I want to type in a script name to run on the file.
    Hm... well, let's see: suppose you're using dired, perhaps inside of emacs, and you cursor down to a file and hit a "!". And then dired lets you enter a shell command that will use this file name as an argument. And as an added feature, it tries to guess what command you might want to run (e.g. if it's a "jpg", it will guess you want something like xv to view it with, or if it's a "tgz", it will guess you want to uncompress and untar it).

    Is that the sort of advanced functionality you're talking about (that has only existed for a few decades or so...)?

  • I see command line applications failing to take advantage of all the resources available to them. I'm not saying that we should abandon the CLI. I'm saying that CLI applications could give us more information than they do now. They could interact better with us than they do now. For example, try doing something on the command line to all files but one. Or try tracking the progress of a long-lived pipe command. There's a LOT of value in knowing whether you should sit and wait for a command to finish, or if you should go read slashdot for ten minutes while it's working.
    -russ
  • Well, much of the power and generality you're praising comes from the fact that every process has three file descriptors, STDIN, STDOUT and STDERR with standardized meanings. Now imagine if we add a fourth file descriptor to the Unix paradigm: STDINFO. A program could optionally print messages to STDINFO in a standardized format (we'd need an RFC) which could be parsed by numerous tools. For example tar might print to STDINFO "percent_complete=34". A GUI might use this to power a progress bar.
    However, those of us who aren't into GUIs would find lots of cool uses for this info. How cool would it be to type ps -<something> and have most of the processes show certain metrics, like how many lines of input they've processed and how many errors they've encountered? Or imagine typing "tar -xvf huge.tar 3>&1 | grep 'percent_complete=5[0-9]'". In other words, let me know when it's half done.
    I think this would rock. And adding a few printf's to tar wouldn't be much bloat.
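    For the curious, the proposal can be modeled in a few lines of Python (POSIX only, since it forks; the percent_complete= format is the parent comment's invention, not a standard): the child gets a pipe dup2()'d onto file descriptor 3 and writes progress lines there, leaving stdout and stderr untouched.

```python
# Minimal model of the STDINFO proposal: the child writes
# "percent_complete=NN" lines to a pipe on file descriptor 3 (the
# hypothetical STDINFO), while the parent reads that channel
# separately from stdout/stderr. POSIX only, since it uses fork().
import os

def run_with_stdinfo():
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: pretend to be tar
        os.close(r)
        os.dup2(w, 3)                 # fd 3 plays the role of STDINFO
        info = os.fdopen(3, "w")
        for pct in (25, 50, 75, 100):
            info.write("percent_complete=%d\n" % pct)
        info.close()
        os._exit(0)
    os.close(w)                       # parent: read the info channel
    with os.fdopen(r) as info:
        lines = info.read().splitlines()
    os.waitpid(pid, 0)
    return lines
```

    In a shell, the same channel would be what "3>&1 | grep ..." taps into; programs that never open fd 3 would be completely unaffected.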
  • Have you ever seen a new user sit in front of a Windows box? I can state without fear of being successfully contradicted that Windows is not consistent, easy, or intuitive. That's a myth which is propagated by Microsoft and long time Windows users. The truth of the matter is that Windows is easy and intuitive if you are used to using it. I drive a car with a manual transmission and I think it's very easy to do, in fact I don't even have to think about it, I just do it. Buuuuut, take someone who has never driven anything but an automatic and watch them give it a try.

    Since about 90% of the people who use a PC use Windows, it makes perfect sense that it seems "easy" or "user friendly". It's all a matter of what you know. The truth is that no OS is easy for a first-time computer user, and quite frankly it's a computer, not a toaster; maybe it shouldn't be easy. Why do we want to take an extremely powerful, amazingly sophisticated piece of machinery and dumb it down?

    Yours
    Bob
  • by MongooseCN ( 139203 ) on Sunday February 11, 2001 @11:37AM (#439487) Homepage
    What I would like to see is a combination of both of these. For example when I am browsing around in a file manager, sometimes I want to just rightclick and delete a file or something simple. Other times I want to type in a script name to run on the file. I think the best way to do this would be to have a command line space on the bottom of the file manager. The command line space's working directory would be whatever the current working directory in the file manager is. Then I can do my gui stuff in the file manager, and then type in stuff on the bottom if I needed to. Basically a bash shell whose working directory stayed in sync with the file manager's directory. It's the best of both worlds.
  • Red Hat, Inc. today announced the release of Cat 2007. Said spokesman Fred Smith, "This isn't your grandparents' Cat." Cat is no longer just a program that copies stdin to stdout, said Smith. "The new Cat 2007 comes with all the features users have been asking for, like an animated character who pops up and offers advice if you seem uncertain." Other features of the new Cat include:

    • prevents the copying of text files that carry copyright notices
    • automatically translates natural-language text files to and from English, French, German, Finnish, Serbian, Mayan hieroglyphs, and Klingon
    • interfaces to e-mail, FTP, palmtop computers, hearing aids, and coffeepots
    • For users who don't understand the philosophy of the Unix CLI, Cat now incorporates the functionality of over 120 other Unix commands, such as Grep 2006 and More 2005.
    • For users who are used to inaccurate Windows progress bars, Cat 2007 features a special nonlinear mapping that makes the bar go almost completely across, and then crawl the last 5% of the way very slowly while little animated pictures of pieces of paper fly across the screen.
    Smith also noted that Cat 2007 is not interoperable with files written by Cat 2006. "We don't think it will be a big issue," said Smith. "We think all our users are going to want to pay for the upgrade because of all the new features."

    Red Hat's stock rose 37 cents on the news of the new product.


    The Assayer [theassayer.org] - free-information book reviews

  • by Slicker ( 102588 ) on Sunday February 11, 2001 @11:39AM (#439498)

    KDE 2.0 already claims you can do anything from the keyboard, and the object model supports scripting. Also, most KDE 2 applications even allow you to use regular expressions every place they might be useful--as an option (not a necessity).

    I absolutely agree that the philosophy of small tools should be expanded to GUIs, but I think KDE is doing this. The QT toolkit's signal and slot philosophy is a near parallel, and KDE component objects nearly complete the requirement.

    The old OS-9 operating system (used on CoCos and some M68000-based computers like the Atari ST) also had an interesting philosophy for GUIs.

    You could pipe data in and out of every window, and all kinds of GUI activity could be managed by character streams all centered upon ASCII.

    It was an excellent philosophy. Last I checked, OS-9 was still marketed for embedded systems. This is the OS-9 of Microware Systems Corporation--not MacOS 9.

    --Matthew
  • You suck, bitch!

    Actually, I'm just kidding. However, I thought the ending to your post was incredibly immature. I *like* pop-up focus. Usually, I open a program in order to use its window, so when it focuses automatically, it saves me a bit of clicking. The point is that it's a personal preference, and just because *you* don't like it doesn't mean it sucks.

    Second, your whole "CLI/GUI blend" is just a visual trick. That's the problem with many of the "new" technologies that are coming out of the OSS community. They are nothing more than show. You've got great desktop environments, but few compelling apps actually use them (in the interest of cross-desktop compatibility!). You have great object technologies like gnorba and KOM, but they're simply check-mark items that few important programs use effectively. You've got nifty graphical tricks like EVAS and XRender, but you're merely using them for eye candy instead of adding something to the environment.

    Of course, when a non-Linux user points out that Linux is missing one of these technologies, you can always point to these empty shells and say "oh, we've got it too!" However, you don't got it. Not to the point where it's worth using, anyway. UNIX was a good idea (most of it anyway ;) When I see the incredible amount of talent in the OSS community, I think, "geez, these guys could come up with the next UNIX. Something that brings the UNIX idea, which is so great for a CLI system, to the next level." Instead, what I see are cool technologies being used for either a check-mark item against Windows, or simply eye candy. And that's just a shame.
  • We do not only claim this. It *IS* already possible in KDE 2, and has been for a long time, via DCOP. Fire up kwrite (make sure you don't have two kwrites there at the same time -- otherwise you have to add the pid to those commands) and type into your favourite xterm:

    --snip--

    dcop kwrite KWriteIface insertText 'Windows rocks!' true

    dcop kwrite KWriteIface setCursorPosition 0 8 true

    dcop kwrite KWriteIface insertText 'sux! KDE ' true

    dcop kwrite KWriteIface shiftHome

    dcop kwrite KWriteIface writeFile 'conquer_your_desktop.txt'

    --snip--

    or check your Mail using KMail by entering:

    dcop kmail KMailIface checkMail

    or bind the command "dcop kdesktop KScreensaverIface lock" to your "Pause"-key using kmenuedit. That way you can start kscreensaver by pressing the "Pause"-key.

    To explore the possibilities you might want to use kdcop.

    Greetings,
    Tackat
  • by SpanishInquisition ( 127269 ) on Sunday February 11, 2001 @11:40AM (#439507) Homepage Journal
    • More flashy icons
    • Transparent menus
    • Animated Cursors
    • 3d widgets
    • Blinking windows
    • Anti-Aliased Error Messages
    • An OpenGL rendered boot-up screen

    If we don't do that soon, Unix on the desktop is doomed

  • No! You got your GUI in my CLI.

    Hmm...

    JESUS CHRIST and his black bastard brother Harry! That tastes like SHIT!

    Bad idea. Nuff said.

  • There have been a number of excellent ideas proposed here, and the common theme of them seems to be: just expand the present UNIX CLI paradigm a bit further to encompass the possibility of a GUI:

    - Adding a STDINFO handle to the usual STDIN, STDOUT, STDERR

    - Adding a /proc style filesystem to keep track of windows and window information

    - etc.

    Let's face it: what we really want here is for windows and other gui objects to just fit in as extra objects that can be handled, piped, grepped, etc., in exactly the same way that processes and devices can be.

    That's true integration of the GUI with UNIX - don't layer it on top in a mishmash manner, embrace it in a UNIX way (as opposed to trying to cram the whole thing down your throat ala windows), so that it doesn't matter if it's not there.

    Jedidiah
    --
  • First off, one of the most useful (powerful) parts of the CLI is the ability to pipe/redirect output. It's not that unusual to have five commands strung along to some end result; I would not want 5 dialog boxes popping up that I then have to close.

    Secondly, linking GUI routines into CLI tools would make these programs many times larger (in disk space and memory consumption) than they already are. Something like 'cat', which is ~10 lines of C, would turn into several hundred.

    Third, which GUI toolkit do you standardize your CLI tools on? GTK? Qt? athena? openwindows? Then you have to make assumptions about which GUI libraries are installed on a given operating system, not to mention differences between one platform and the next. The simple bintools that used to be compilable on any platform would then grow exponentially in incompatibility and build complexity.

    The final reason not to do this is that you then force people into a GUI environment whether they want it or not. What's the point of GUI-linked tools on a system with only a serial console, for instance? You end up requiring a GUI toolkit on a system regardless of whether the system needs or even supports such a thing. You force the tools to become unstable, if only because a windowing system is inherently more complicated and therefore more likely to have problems.

    -- Greg
  • Take a look at AtheOS (www.atheos.cx), where the GUI is incorporated into the kernel. It's pretty slick! =)
  • There are GUI designs that don't prevent programmatic interaction. One example is Apple's recommendation on how to implement Mac OS's Apple Events. A well-factored application has its actions divided into a fairly concrete set of verbs, and then some special nouns for them to interact with. Finally, when it gets time to develop the UI portion of the application, the programmer hooks up the menus so that they send Apple Events back to the application itself. A program designed this way can then be manipulated programmatically with just as much functionality as with manual manipulation.

    Before someone pipes up about the deficiencies of Applescript, I just want to point out that Applescript is just one implementation of a system that uses Apple Events to interact with other programs. It's designed to be overly verbose and "English-like" to allow people to read scripts that they might not have the ability to write on their own. There are other languages that can send Apple Events to correctly factored programs. Frontier and Perl come to mind.

    TCL/Tk was originally designed for a similar type of interaction. The original idea for TCL was for it to be a language embedded into an application. The functionality of the program could be coded up as new TCL commands, and then small scripts could be written to bundle the functionality together. Tk was then an additional set of TCL commands that implemented a GUI toolkit which could interact with these extended TCL commands. A program built up as a set of TCL commands could be run either graphically as a Tk app or non-graphically through TCL. Then you can add to that the Tk "send" command, which allows you to pass TCL commands to a running Tk app, allowing you to programmatically interact with programs that are currently running graphically.
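    The factoring described above can be sketched in a few lines of Python (all names invented for illustration): the program's real functionality lives in a command table, and any front end, whether a CLI loop, a GUI menu, or a Tk send-style remote call, dispatches into the same table.

```python
# Sketch of the command-table factoring described above: functionality
# is registered as named commands, and every front end (CLI, GUI, or
# remote "send") goes through one dispatch entry point. Names are
# invented for illustration.
COMMANDS = {}

def command(name):
    """Decorator that registers a function as a named command."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("upper")
def upper(text):
    return text.upper()

@command("reverse")
def reverse(text):
    return text[::-1]

def dispatch(line):
    """One entry point shared by every front end."""
    name, _, arg = line.partition(" ")
    return COMMANDS[name](arg)
```

    The payoff is the one the comment describes for TCL: the same command strings work typed at a prompt, wired to a menu item, or sent in from another process.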
  • In Windows, use win+r for the RUN command

    This opens a single command line. History? Tab completion? cd? Hardly. It does not open a shell window. To open a shell window, you have to type command or bash into the dialog box that the Run... command creates. (To get bash and friends, download and install DJGPP [delorie.com] (DOS/Win9x) or Cygwin [redhat.com] (9x/NT).)


    Like Tetris? Like drugs? Ever try combining them? [pineight.com]
  • Um ... yes. Many people do, actually. (Right here.) Netscape, far from being an essential tool, is a slow, unstable program that I generally don't use if I can avoid it. In fact, it seems like overkill to me to run X just because you want a web browser. Just because non-GUI tools don't meet your needs doesn't mean that other people feel the same way.
  • No. The GUI things that most CLI programs would do would be quite similar, which suggests a higher-level library that the CLI programs can link to. The library would abstract the windowing system and GUI toolkit away from the CLI programs (which would then not care about the actual windowing system used). So, to address your concern: if you want no graphical stuff of any kind, just add the version of the said higher-level library that has all its functions empty. There ya go: no graphical bells and whistles, and no X libraries (or other GUI libraries).

    --
    SecretAsianMan (54.5% Slashdot pure)
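    A hedged sketch of that split, with invented names: the tool's core loop calls a tiny progress interface, and the "no graphics" build simply supplies the implementation whose methods do nothing.

```python
# Sketch of the stub-library idea: CLI tools talk to a tiny progress
# interface, and the "no graphics" build links the implementation
# whose methods are all empty. All names are invented for
# illustration; no real toolkit is involved.
class TextProgress:
    """Stand-in for the graphical (or at least visible) build."""
    def __init__(self):
        self.updates = []
    def update(self, pct):
        self.updates.append(pct)      # a real build would draw a bar

class NullProgress:
    """The all-functions-empty build: no toolkit, no output."""
    def update(self, pct):
        pass

def copy_bytes(data, ui):
    """A cat-like core loop that never knows which build it got."""
    out = bytearray()
    for i, b in enumerate(data, 1):
        out.append(b)
        ui.update(100 * i // len(data))
    return bytes(out)
```

    The tool's logic is identical either way; only the linked-in implementation decides whether anything graphical happens.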
  • The Lisp Machine GUI had windows that were much like a command line interface: you'd type commands and output would appear. However, the output wasn't just passive, it was an actual GUI. For example, in a directory listing, you could click on files and manipulate them. If you displayed an image, it would appear just like text output in the transcript and you could click on it and interact with it. You could also have input fields and other stuff appear in the output. In effect, each part of the output would stay "connected" to the code that generated that.

    Translating that into a UNIX environment is a bit harder, but it might be worth having another go at it, perhaps based on XML and web infrastructure.

  • by ca1v1n ( 135902 ) <snook@noSPam.guanotronic.com> on Sunday February 11, 2001 @11:44AM (#439532)
    I would like to see programs that are coded in such a way that all the core algorithms are compiled into one file, and you simply have to code an appropriate access module for each interface, plus whatever graphical goodies you might throw in. You'd probably be fine just coding everything so that it can theoretically run from CLI and then pipe that into your graphical front-end. That way everything is nice and portable, so people can use it in their graphics system of choice. Many windows programs do this with DLLs, and often with great success. Obviously you get problems with managing the DLLs and everything gets unstable after a little while, but that's the kind of thing maybe we could get right. I know there are some downsides to this method, such as requiring programmers to write portable code. On the other hand, if we lost half of the bad programmers to laziness and trained the other half to do things right, I think we'd all be better off.
  • Before everyone starts getting excited or annoyed, it should first be remembered that UNIX doesn't have a standard GUI, and for the UNIX tools to have GUI parts or GUI reactions we would have to settle on which GUI would make the best fit. Or perhaps a default toolkit that all GUIs interface with? Either way, I think CLI and GUI should remain separate.

    I think someone out there is trying to make UNIX easier for *them* to understand.

  • Beyond the disconnected punch cards and paper tapes, the first electronically connected user interface was logically the keyboard, where versatility came as a matter of character sequencing on the command line.

    In time the GUI became the next, and still logical, step in the evolution of user interfaces. It provided less versatility but greater ease of use for those being introduced to computers for the first time.

    But what both of these user interfaces have in common is the fact that they are both based on the use of icons or symbols. The GUI has less versatility than the keyboard but greater ease of use. And through various methods the sequencing versatility of the keyboard -- prewritten character sequences -- could be automated and attached to a GUI icon.

    But then there is the incredibly slow evolution of the logical third user interface: the interface that completes the natural triplet of the core user interface set. Not so unlike the value of having all three primary colors to mix paint or light from; removing any one of the primary colors greatly reduces the possibilities.

    There are various species, if you will, of this third user interface. Often it's spoken of in terms of programmers' APIs, but command-line piping and interprocess communication or messaging are other labels. Overall, this third user interface is the ability to send a running program, function library, or other functionality (such as DOS) commands from a common location external to its operation.

    As an example, the Commodore Amiga evolved with an AREXX port usually included in programs, libraries, etc. NOTE: the point is not AREXX itself but the fact that this side door for external control existed as a standard, even in some programs that didn't require AREXX to be running.

    So the three user interfaces are:
    1) the keyboard (ASCII) command-line interface
    2) the graphical user interface, typical mouse access
    3) the side door for functionality control and integration

    The keyboard, or manual character sequencer, can address both the command-line environment and the side door. It can also be used to define a character-sequence automation, attached to a GUI icon, that triggers a control communication to a specified side door.

    Of course there are other possibilities, as is obvious in the analogy of primary color triplets.

    But then there is even more that can be accomplished once these three user interfaces are established on a system.

    Dynamic integrated automation can evolve: from allowing the user to easily access, through a consistent interface, the details or descriptions of functionality... to making their own definitions to extend their system's dynamic automation ability. Providing the user with the ability to automate anything they do over and over again, regardless of how many programs or other pieces of functionality are used in the automation.

    Now isn't this really what computers are all about? The ability to define and activate the automation of complex functionality sequencing?

    Programmers do it all the time; that's what the point of programming is: to automate complexity to the point of being much simpler to activate than doing things manually.

    In fact, with such a general-purpose automation tool being used through the three user interfaces, it would be possible to automate the do's, don'ts and standards of any programming language, as well as any functionality inclusion that can be repeated, even dynamically.

    And as the automation continues, it compiles and executes the generated code, etc...

    But what I have found is that the pseudo programming industry doesn't really want to remove the foundation on which companies like Microsoft stand. For MS was built on the premise that programming is a difficult and complicated thing to do -- a premise identified and labeled in the 60s as the "software crisis", which probably evolved in the 50s out of IBM, a distraction in genuine computer science, along with the government's wartime decryption/encryption efforts.

    Sad to realize the computer industry got a major start from the practice of hiding, and trying to see, hidden communications.

    An apparent practice that seems inbred to the point of... well, where does inbreeding usually lead?

    That's going to end. Just call me Neo!
    Yep! the informant is real!

    Each one of the characters represents an action. Something we all do. Actions that can be programmed and used to automate dynamically.

    How bad do you really want the next advancement in user interfaces? Bad enough to become unplugged from the pseudo programming system, and that which supports it?


    3 S.E.A.S - Virtual Interaction Configuration (VIC) - VISION OF VISIONS!
  • by heroine ( 1220 ) on Sunday February 11, 2001 @11:45AM (#439541) Homepage
    Really we've seen that Linux does best as a server in a back room with only a web interface or an appliance with a button interface or a remote control. You wouldn't want users to have to interface it through a GUI or a command line.
  • Personally, I think a term that would take

    cat photo.png page.html

    and render the image and/or HTML on the page would be a wonderful addition to the command-line toolset. However, the ability to work in text-only mode is a critical part of the appeal of Unix.

    Sounds like you want XMLterm [xmlterm.com]. It does a lot more than just displaying, of course, since it is also meant to facilitate better interaction between command-line tools.

    Hmmm... looks like the project's been pretty quiet for a while. Anyway, it's still a good idea.

  • I am increasingly troubled by how many new applications are designed to work solely under a GUI. While this may make some amount of sense for people coming from the PC or Mac worlds, one of the strengths of Unix has always been the ability to use it from anywhere. These people don't seem to understand this. Of how much ultimate utility is that nifty new spreadsheet, editor, or debugger if I can't dial up from the field and run it on my vt100? Too often a tool that "does windows" is little more than a marketing gimmick to dazzle impressionable users into not noticing that they don't have the real functionality they need.

    GUI-minded programs seldom lend themselves to being used as components in larger tools. As such, they do not fit well into the UNIX tool-and-filter philosophy. Instead of each being a single program that modestly attempts to do one thing well, they are a throwback to the Bad Old Days when each program was a standalone, monolithic monster that didn't interface with anything else. It's all well and good to place a GUI wrapper around an existing tool, but to design a new application with only a GUI interface in mind is to forever limit that tool's flexibility. After all, how do you write a shell script that drives an automated xrn session? Providing programmability for the fancy graphics software remains an open problem.

    The most effective use of GUIs in UNIX environments is to design the nitty-gritty computational function as a "back end" that can be driven either manually or automatically. The GUI wrapper should be a separable module. If they're plug-replaceable, the application isn't irretrievably wedded to any specific GUI technology, such as SunView, NeWS, or even X11 or its children, like Open Look or Motif. Sending standard commands down a pipe the way the STDWIN or wafe packages behave is also a reasonable approach. This means your program should be runnable both with and without the GUI present, and accept input from a mouse or under programmed control. Preferably that means both a shell-level interface for convenience and a C-level interface for efficiency; Perl programmers could elect either method. That way, naive users can use push-button GUIs, blind users can use Braille terminals, and sophisticated users can program solutions to intricate problems.

    It has been noted that GUIs make simple things simple, and complex ones impossible. Certainly it is worthwhile to make simple things simple. But too often software is geared to only one level of expertise. That which is novice-friendly is too frequently expert-hostile, and vice versa. Being needlessly forced to click the mouse over a menu will slow down the expert user who is more comfortable with a keyboard interface. Gratuitous distractions from the keyboard only slow down the experienced user. A precision pointing device that didn't require taking your hands off the keyboard would help. There are cases where only a GUI makes sense, like a CAD system. Being able to delineate a region or draw a figure with a mouse is probably a reasonable use for it, but selection of a range of possibilities isn't, at least not after you've become familiar with the tool.

    -snellac
  • I speak my pictures by the names of their files. They're given the name which the artist assigned to them. Something about painting being poor at communicating a unique label.

    How do you paint a verb?

  • I don't think this fits with the UNIX philosophy. Let's face it, all of the important aspects of UNIX have been hammered out many years ago and we're just adding sprinkles right now. There are still pipes, signals, and processes, and we're probably going to have to deal with them for the rest of its lifetime.

    But let's not assume that we're going to be using UNIX-like OS's for the next 100 years (please dear God no) ... What our next OS needs to have is processes that can tell the outside world what they're doing. For instance, grep would say "I'm working on this stream of data, with these options, and I'm this percent complete..." and the GUI could display this info, perhaps in a flow graph format.

    I would like a system where the user could graphically drill down into a script and see the execution flow, monitor the data being slung around, and have some control over what is happening. My big beef with UNIX GUI commands (and Windows programs) is that they spawn off processes that spew output into some remote terminal somewhere, or terminate for no reason and you can't find out why. (Witness the RedHat installer -- at least the old versions)

    UNIX UIs aren't "all wrong". They're simply a reflection of the environment in which they must exist. We'll have something different when we build a different OS world for a UI to live in.

  • Imagine a smart shell: I type 'ls -<TAB>' and it shows me all the options to ls. There should be another keystroke, so that if I type 'ls -alR<magickey>' it would explain what those options do, then return my command line in the same state for further editing.

    What you are describing is programmable completion. Zsh [zsh.org] has features that could be used to do what you describe, although you will probably need to write your own functions that spit out the information because the unix man page system doesn't have a mechanism for bringing up specific sections. You could make zsh do everything you want if you are prepared for a learning curve and a lot of work (think of all those commands you'll have to write completion functions for), but you could do it fairly easily if you take it one command at a time. You'll probably also want to do the usual zsh completion stuff (eg. "tar -xvzf foo<tab>" brings up foo.tar.gz and foobar.tgz but not foobaz.html) so there's more work there. But once it's done you can share the results with everyone else.

    Zsh is a nice shell for other reasons too. I use it exclusively even though I haven't yet bothered to figure out the programmable completion (I just use completion controls other people have written).
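    For a taste of what the parent describes, here is a toy version in bash rather than zsh (bash's programmable completion is less powerful, but easier to show in a few lines; the option list is typed in by hand, since nothing here extracts it from the man page):

```shell
# Toy programmable completion for 'ls' in bash (zsh's system is richer).
# The option list below is hand-written -- nothing parses the man page.
_ls_opts() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    case "$cur" in
        -*) COMPREPLY=( $(compgen -W '-a -l -R -r -t -h' -- "$cur") ) ;;
        *)  COMPREPLY=() ;;
    esac
}
complete -F _ls_opts ls    # now 'ls -<TAB>' offers the options above
```

    The "explain these options" keystroke would be the same idea with a second function that prints help text instead of filling in COMPREPLY.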

  • But the same things can be said about the GUI, at least in terms of using the right tool for the task.

    For example, on the command line, doing the copy operation you mentioned is easy. But what if I want to choose a few disparate files (for example, somepic.jpg, athing.gif, and foo.c from ten or so files)? I think this is much easier to do in a GUI.

    Likewise, I think it is simpler to drag a picture into a WYSIWYG HTML program, positioning it (although, having the code on hand is nice as well).

    Further, when we start getting into larger icons that *all* contain thumbnails of pictures, miniature documents in thumbnails, miniature code in C++ programs, etc., it will be a lot easier to pick out "all the pictures with a red boat in them", and "all the documents that look like web pages with no extensions" and make file changes. The command line will lack here. (Look to Mac OS X and even Windows 2000 for examples of this "miniature view" of files beginning to take hold).

    To each his own. I agree that *NIX makes a hell of a file server. People are idiots if they think a server needs a GUI. That's a waste of resources. Still, for workstation tasks a highly-customizable, highly object-oriented interface (not simply in the C++ sense, but designating files as actual "things" with properties relative to their picture/document/code classification) is best.

    --------
    Carmack is an elitist, pseudonerd bastard.

  • I've said it before, and I'll say it again:

    People migrating from Windows to *nix, and Linux in particular, are wasting their time if they never leave GNOME/KDE/whatever.

    Aside from stability and philosophical issues, Linux doesn't have all that much to offer if you stay in the various desktop environments.

    The power of *nix, and therefore Linux, comes at the command line. Lots of little utilities, which are fairly easy to learn(if one decides to learn them, and takes the time to read the documentation) lend themselves to a great environment.

    I've found that in Linux, the more complex a task becomes, the proportionally more work you need to put into it. I think it's about 1:1.

    In Windows, the more complex a task is, the exponentially more time you end up spending on it. As soon as you try to do something that doesn't have a button somewhere, you've got to bend over backwards.

    Even when there is a button for it, you'll often have to find it, which isn't always easy.

    There, I've said it :)

    Anyways, I'm all for integrating the CLI into the GUI. I'd love to be able to type 'for i in *.txt; do mv "$i" "${i%.txt}.html"; done' in a URL bar.

    Look at that, I just renamed thousands of files that ended in .txt so they now end in .html. Hoo-YAH! :) Quick 'n easy, if hard to learn. Try doing the same without a quick 'n dirty shell script, or a script of some other sort.

    If GUIs start incorporating more shell-like features into them, we bring that up another layer, and make things easier.

    Okay, I'm babbling. You'll have to forgive me, I'm really terribly sick with the flu :)

    Dave

    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • I'm sleep deprived and a little buzzed, so...

    Imagine little snippets of unix cli like 'rm', 'mv', 'awk', and so on to be akin to genes. Atomic components that can be combined in endless variety to form all manner of little scriptlets to perform an endless variety of tasks. Combined completely randomly, of course, you'd get garbage. Apply a little bit of selection pressure, though, and what do you get?

    I often like to think of the world of programming (from humble shell scripts to obscure assembly constructs) as an ecology. We all continually adapt and combine our little text constructs to produce new forms, some good, some bad. Rinse, lather, repeat.

    The problem I have with GUIs is that I just don't see how they can easily mirror this important process of continual refinement and renewal. They're just too clunky. For example, I can show you a trick, like

    for i in `find . -iname "*.php"`; \
    do perl -pi -e "s|.*ddate.*||;" $i; \
    done

    and you can easily adapt and reuse this idea in ways I would never guess. Is there any kind of analogue to this type of rapid sharing of adaptable information in the GUI world? If so, please clue me in, because I don't see it.
  • Same here. The only time I've had to reboot Windows 2000 is because I installed some crap video drivers that fucked the screen. Rebooted, fixed the problem, and I haven't rebooted since.

    --------
    Carmack is an elitist, pseudonerd bastard.

  • For example, if I run tar inside X, then tar ought to pop up a completion bar.

    Hmmm...I thought that the UNIX philosophy was "no news is good news"...meaning that in this case tar should only show something if an error occurred.

  • ..is that bloody-minded insistence on character stream interfaces between programs. Sure it's more flexible than records, but it's an awfully low-level way to exchange data.

    Symbolics' Genera operating system used a system called Dynamic Windows, that nicely integrated the command processor and GUI. The command processor was both simpler and more powerful than Unix shells. Simpler, because scripting was provided by the Lisp interpreter. More powerful, because commands didn't just return a result code -- they returned entire objects (or multiple objects!). The presentation manager was responsible for formatting those objects in an appropriate manner. The Show Directory command returned a list of pathname objects, which would be shown as a list. The nice thing is that the GUI knew what they were, and they remained active objects: You could right-mouse-click on a pathname and get a menu of appropriate actions (Edit, Show, Delete, etc).

    Similarly, the commands themselves were integrated into the GUI. For example, the Command Processor knew that the Show File command took a pathname as argument. If you typed Edit File (without hitting return) all the pathname objects in the window would become active, and you could click on one to send it to the Edit File command. Or you could hit Ctrl-? to pop up a list of possibilities.

    If a command needed some more complicated interaction, it could send a dialog object to the command processor, which would append it to the window, allow the user to fill it out, then send the results back to the command.

    Full GUI apps were easy as well: even a GUI application had the same basic model; it just needed to turn off scrolling, turn off command echo, change the menu to an application-specific one, and send Dynamic Windows whatever presentation objects it needed to get cranked up. Menu entries, mouse clicks and other gestures, and accelerator keys were translated into commands and sent to the application's Command Processor.

    There was a lot more to the whole thing, of course, and this short description doesn't begin to touch the richness of the system, but the closest I can imagine a stream-based system getting is via XML, which is again an awfully low-level way of attacking the problem.

  • You're so expressive, so literate, so erudite. "it sucked", eh? If you're popping up windows through keyboard actions, you really DON'T want to have to switch to the mouse to transfer your keystrokes into the window.

    Perhaps you're using click-to-focus? Some of us don't like click-to-focus. We use focus-follows-mouse. Hey, to each their own. That doesn't mean that it "sucks" (I love that word. Sucks, sucks, sucks, sucks. Do you use that word in polite company? Do you realize that it refers to the act of fellatio [look it up]?), it just means that your experience is inadequate to understand it.
    -russ
    p.s. sigh.
  • That's one problem with Linux GUIs in general. There's no *synergy*. Even though you have all these GUI tools, the CLI versions are just plain more powerful. There is no reason for this, it just happens to be that way, because people think CLI first, GUI on top. It shouldn't be the other way, either. They should co-exist peacefully. I want to be able to draw a picture in my graphics apps, then run a script from the command line to do my favorite transform. I want to be able to manage my contacts graphically, but script it so I can add a hundred contacts with a simple program. Oh, wait, I just described BeOS's scripting environment!
  • I wrote about this some time ago. (Actually, I devoted about four months of my life to writing about it, which never went anywhere. Thus, Dan Learns The First Rule Of Design: If Thou Can't Code, Nothing's Gonna Change :-)

    Details available at http://www.doxpara.com/cluehunting.html [doxpara.com].

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • I think one of the most useful implementations of CLI-GUI integration is how under Mac OS X (or NeXTStep) you can just run open somefile from a terminal and the correct application will open somefile in the GUI. Very simple, and probably easy to implement, but I don't think Linux has the infrastructure to support something like this yet.
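    A rough sketch of what such an 'open' might look like as a shell script (the extension-to-handler table here is entirely made up; the real Mac OS X open asks the system's type database rather than matching extensions):

```shell
# Hypothetical 'open' for a Unix desktop: map a file to its handler.
# Echoes the dispatch instead of exec'ing so the logic is visible.
open_file() {
    case "$1" in
        *.html|*.htm) echo "browser $1" ;;
        *.png|*.jpg)  echo "viewer $1" ;;
        *.txt)        echo "editor $1" ;;
        *)            echo "no handler for $1" >&2; return 1 ;;
    esac
}

open_file index.html    # -> browser index.html
```

    A real implementation would exec the chosen application and read the mapping from per-user configuration instead of a hard-coded case statement.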
  • Ouch, that is way better than my suggestion.

    Way better. I like it.

    --
    SecretAsianMan (54.5% Slashdot pure)
  • UNIX doesn't have one UI; it has dozens of UIs. Some of them (Gnome, KDE) are quite good. Some are a bit more primitive. Each perfectly suits the needs of a small group of users. Most of them provide limited compatibility with the alternatives (meaning that typically applications will run but will not integrate well with other applications). The solution to this problem cannot be found in creating another GUI but in getting GUI creators to agree on basic standards: how to cut and paste, how to embed components, how to define a menu structure, how to put an icon on the desktop, etc. As long as there's more than one way of doing things, there will be inconsistencies making UNIX user-unfriendly.
  • There are lots of CLI users who want to be able to influence GUIs with their CLI, and KDE 2 may allow them to do this, but it is still not really a GUI revolution.

    Specifically, there is little "transfer" from the tricks the user has learned under the GUI to the tricks they will learn under the CLI, i.e. using the GUI does not teach you to use the dcop CLI. The relevant quote from your own thread would be "Wow! I had no idea you could do this." If it is possible for a user who knows about CLIs to not understand that they could use the CLI then something is wrong.

    We could have a much more tightly integrated CLI and GUI if the widget set automatically provided a CLI for every window and all menus had direct interpretations via the CLI, i.e. much like the old AutoCAD interface where all menu commands caused CLI commands. This allows you to build commands with the menus and edit/reuse them from the history.

    Actually, you could take this one step further by eliminating widget-based menus in favor of an improved cut and paste. The idea here is that cut and paste is critically important to the usability of any computer interface, so it must be optimized for speed (this Ctrl-C Ctrl-V shit does not cut it). The way to make sure that your cut and paste is good enough is to force the simulation of menus via cut and paste commands to the window's CLI. I think that Plan 9 does this with its editor.
  • Remember that this "No News" policy was partly based on the fact that the output device was a teletype machine (tty).

  • "Of all the problems in Windows mentioned here", at least 80% are hatred and ignorance. Windows works well on the desktop, whether some people like that fact or not.
  • by SecretAsianMan ( 45389 ) on Sunday February 11, 2001 @11:52AM (#439610) Homepage
    Actually, I had this idea a year or two ago, and I put it on my I'll-do-it-when-I-get-out-of-college-and-write-my-utopian-operating-system list. I think it's wonderful! In my opinion, command lines are the fastest human->computer interface, while graphics make the best computer->human interface. Things like this represent what I think is the best way of combining the two paradigms (which are not opposites, as many would have us think). Most of the objections I saw were like these:
    • What if I'm logged in remotely? Doesn't this assume I have a graphic terminal?
    • Won't this cause random stuff popping up when, say, cron scripts run?
    No, boneheads! Just provide a standard switch that means "don't do that GUI shit". That may be a problem for our current set of Unix tools, since it is probably impossible to find a switch that is unused in all Unix tools. But that doesn't preclude someone from implementing this idea in their new operating system, where the switch meanings aren't set largely by tradition.

    --
    SecretAsianMan (54.5% Slashdot pure)
  • by whydna ( 9312 ) <whydna.hotmail@com> on Sunday February 11, 2001 @11:56AM (#439614)
    I think this is a valid argument. For example, when I log into my school UNIX account to use Matlab from the console, I use the program as a console program. When I log in while in X, Matlab detects that and allows me to use all the nifty graphical stuff too. I know there are other programs that do this same thing too... emacs, etc.

    Why couldn't "normal" programs do the same thing? It wouldn't hurt cron jobs and the like (as everybody keeps complaining about), because cron isn't run on a terminal per se, and therefore cron programs wouldn't have access to X.

    I'm sure there could be an easy way to implement "I-don't-want-this-program-to-use-X"... perhaps a simple alias would work, e.g. alias tar="tar --noX" or something like that. The only problem I see with this is that it would require a rewrite of a bunch of stuff, but I'm sure there are coders who enjoy stuff like this (I'm up for it if people will help).

    It would make the GUI more informative. On "other" desktops, when I'm copying a bunch of files and it's going to take a while, I get a nice little progress bar... I like that. I know it's standard in UNIX that "no output is a good thing". But if a user is sitting behind a GUI, it's obvious that they're looking for output when things are going to take a while.

    Just my $0.02.
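    The environment probing this comment describes is cheap to sketch (the wrapper function and its messages are invented here; real programs like Matlab and emacs do the equivalent check internally):

```shell
# Sketch of the Matlab/emacs trick: probe for an X display and choose
# a mode. The echoed messages stand in for real GUI or console code.
run_mode() {
    if [ -n "$DISPLAY" ]; then
        echo "gui: would pop up a progress window"
    else
        echo "console: plain text output"
    fi
}
```

    A cron job or a vt100 login has no DISPLAY set, so it automatically gets the console branch with no extra flag needed.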
  • I see NT admins constantly grabbing UNIX tools to get the job done; I don't see UNIX admins moaning about the lack of a GUI.

    I see Wintel, Mac and Unix users not knowing about such a thing as a command line, nor caring about it, because they don't need it.

    My take: UNIX does not have a GUI for everything because there is no demand for it; the CLI is so expressive for programming and administrative tasks that demand is limited, to say the least.

    I can do with a one-liner in UNIX things that somebody using NT can only dream about (unless he/she is clever and installs a shell clone or perl).
  • by mr3038 ( 121693 ) on Monday February 12, 2001 @08:41AM (#439626)
    ...try tracking the progress of a long-lived pipe command...

    It would be nice, but in the general case this is impossible. Think about something simple like "cat * | sort". How could cat know how many bytes there are still to read after reading 50% of the filenames given as parameters? (It could be 99 files of 1 byte and the last one of 1GB... and another process could be appending to the last one!) Even less can sort say anything about how long it takes before it ends, because it has even less information about the data to be sorted. Now think of something like "cat /dev/urandom | grep a"...

    It's however a great advantage to be able to work with arbitrarily large files, because pipelined programs only need commands like "read next 4000 chars" and "output these 100 chars". If the OS can support 10TB files these little programs will too. Of course sort doesn't work with arbitrarily large files because it needs all the data at once. Using a pipeline also decreases memory requirements because commands are run in parallel, and if a program outputs more data than the next one can handle it will sleep until the next process is ready to process data again. No need for all the data to be stored by a single process.
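    What a pipe stage can do is meter bytes as they pass, without ever predicting a total. A line-oriented sketch of that pass-through idea (the function name and its report format are invented for illustration):

```shell
# Pass-through byte meter: copy stdin to stdout line by line, keeping a
# running total. It can report how much has gone by, never a percentage
# -- which is exactly the parent's point about pipes.
meter() {
    local total=0 line
    while IFS= read -r line; do
        printf '%s\n' "$line"
        total=$((total + ${#line} + 1))   # +1 for the stripped newline
    done
    printf 'meter: %d bytes passed\n' "$total" >&2
}
```

    Dropped into the middle of a pipeline, e.g. cat * | meter | sort, it leaves the data untouched on stdout and sends the running count to stderr.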
    _________________________

  • You lack even the imagination to reply.
    -russ
  • >The only time I've had to reboot Windows 2000 is because I installed some crap video drivers

    There's one problem: an enterprise-level OS that explodes because of video driver problems.

    That's why X is optionally separate from the kernel. When X explodes, the worst it's supposed to do is freeze the console (unless you are running it all as root... which, as we all know, is bad).

    Solution: Telnet/ssh into the box. Kill the X server. Fix the problem. Start up the X server again. Don't tell the clients, they won't even know it happened.
  • Let's just say you are average Joe POWERuser (the only people that need look at *nix anyway).

    You need xterm, mozilla, an image viewer, a mp3 player (optional), and fvwm2.

    Hell, throw in gimp and gnumeric for fun.

    I switched back to win98 for a while to play games, and discovered that I did all my work in IE or a terminal. I realized that I didn't need GNOME, KDE, or any of this.

    Let us all sing:

    X is for xterm, that's good enough for me,

    Oh, X is for xterm, that's good enough for me,

    Oh, X is for xterm, that's good enough for me,

    Oh, xterm, xterm, xterm starts with....c?

  • There's no reason you have to be doing all that stuff with the mouse. I'm sure there are plenty of window managers out there that give good keyboard shortcuts and don't get in your face (of course, some may be <COUGH>evil</COUGH>).

    What I want to write some day is my (and probably many other people's, though I don't see anyone writing one) tool builder idea: executables represented as icons on the desktop, with 3 nodes (stdin, stdout, stderr), each of which you can drag a line from, to a similar node on a different executable.

    Some sort of selection on the elements would allow you to set command options (including all the standard shell positional expansion, etc.). Once the tool's complete, you group them together and it writes a little shell script that calls things in the right way, which itself then becomes a little icon with 3 nodes.

    Whether it would work or not, I don't know. I'll write it some day. Unless someone beats me to it (likely).

  • by crucini ( 98210 ) on Sunday February 11, 2001 @12:08PM (#439671)
    For example, if I run tar inside X, then tar ought to pop up a completion bar.

    One way to implement such things would be to write a replacement for xterm and define a new terminal type. Just as xterm has escape codes to set the foreground and background colors, the new terminal could have additional escape codes to create or update certain display widgets. I'd rather such widgets were part of the terminal window than popped up as separate windows, which sounds somewhat uncontrollable. You'd also need a good scheme to kill the widgets if the process that requested them dies. If done this way, the gui enhancement would still work well over a 300 baud modem.
    If you're using a terminal that doesn't support these capabilities, the termcap database wouldn't return any codes for them, and everything would work as it used to.
    I used to use AutoCAD heavily, and I liked its interface, which was something like a Unix shell with the ability to mix mouse clicks with words, and to see the results of commands after you typed them. Imagine if you type 'ln -s /etc/foo.cfg .foorc' and you see a red line appear in a GUI reflecting the symlink. I think the most useful aspect of the GUI is visualization, and this is not used much by either Windows or X.
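    The escape-code scheme above can be sketched concretely (both the sequence and the terminal name are invented purely for illustration; a real design would register the capability in termcap/terminfo, as the comment says, so dumb terminals fall back gracefully):

```shell
# Invented private escape sequence for a progress widget:
# ESC ] 7 7 7 ; pct BEL. A widget-aware terminal (here called
# "widgetterm", also invented) would draw a bar; others get plain text.
progress() {
    case "$TERM" in
        widgetterm*) printf '\033]777;%d\007' "$1" ;;
        *)           printf 'progress: %d%%\n' "$1" ;;
    esac
}
```

    Since the whole widget update is a handful of bytes, it would indeed survive a 300 baud modem where a popped-up X window would not.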
  • by DickBreath ( 207180 ) on Sunday February 11, 2001 @12:30PM (#439680) Homepage
    I got seriously into the Mac back in 1984. One thing that always struck me as wrong was the way Windows was built on top of DOS.

    I mention the following because it is quite a different philosophy that is probably foreign to most unix people.

    Similarly to DOS/Windows, but without so many problems (or maybe with different problems), X is built on top of unix.

    In Mac, the GUI is fundamentally part of the OS. There is no command line. Period. Well okay, there is. It's MPW. But it's an optional add-on. And it runs on top of the OS and GUI. It is a powerful CLI environment with a rich set of utilities, compilers, disassemblers, make, sed, grep, command piping a la unix, etc., ad nauseam. MPW's CLI is executed in a window(s), but NOT in the same manner as xterm, as you might expect. (Takes too long to explain here.)

    The important point is that the CLI is on top of the GUI, not underneath of it.

    Please don't misinterpret me here. I'm not suggesting that one way or the other is wrong. Both have advantages. Linux is great as an embeddable OS. Just change the init scripts, or even the init program itself. You can use the OS for completely non-gui things. (Or non-cli for that matter.)

    A property of Apple's approach is that the GUI is not an afterthought. User friendliness is thought of first and foremost in everything. (The mouse, its drivers, etc. are fundamental parts of the system -- not add-ons.) This may be why the Mac has the legendary ease of use. And why it took so long for Microsoft, with their opposite approach, to achieve a similar level of friendliness.

    If you want to build gui-less tools (on Mac) that cannot be run by end users, then compile your program as an MPW tool. Most end users don't bother to get MPW. So your gui-less program automatically has a very limited audience. Just try writing a gui-less program for the mac. There just are no such concepts as "standard input", "standard output", etc.

    It's interesting that Apple is completely turning the system on its head in Mac OS X. This is probably a good thing for software portability. But a large part of the world has missed out on studying the different approach of the classic Mac OS. (Not that it's necessarily right, it's just different. Observing the workings of other systems, languages, etc. enriches you.)
  • by KevinMS ( 209602 ) on Sunday February 11, 2001 @12:11PM (#439681)

    Maybe the separation between the CLI and the GUI is artificially imposed by the window manager and all those windows. Lately I've gotten annoyed by all the dragging, resizing, mousing, and shading that my mouse and hand are perpetrating, and I was thinking, can't we get rid of these windows? Did the invention of the mouse lead us down the wrong path? I'm sure I've never seen a mouse on any of the Star Trek shows.
  • by Aglassis ( 10161 ) on Sunday February 11, 2001 @12:13PM (#439701)
  • There were some good points made in the article. But should we be bloating programs like tar with window-manager cooperation? Hell no! I don't even think that tar should have the -z option, as it can just be piped to gzip anyway. I see no problem with an alias to do it, but tar should be considered a completed product; it does what *it is supposed to do*. This is a major accomplishment for a program. If you want it to do something else, build a script that runs on the UI.

    The power of UNIX is that it has all of these 'infrastructure' type programs upon which so much more can be built. There is no reason to build an extra program within a program. This just adds complexity, which the UNIX design is against. If you look into the history of UNIX you will see that it has always favored stability over speed, and a small program that does what it is supposed to do rather than one large, bloated, 'featureful' program. Why should we get rid of the UNIX design philosophy when making a UI? It doesn't seem logical. As UNIX is a bottom-up system, so should be the UI (and it has been).

    People who are selling UNIX by saying that our UI is just as good as XX's are morons. We are really forgetting our strengths. A person could build an excellent windowed compression/decompression program that can drag and drop to other programs, without having to know how to write a compression program, how to write a window environment, or how to figure out drag and drop. To them, it is all pipes. I'm not saying that XX's products don't have these characteristics, but in UNIX it is so much easier to do. Let's not ignore this power.

    Taking the gecko engine (or whatever they call it now, 'nslayout' or something) and making your browsers render with it rather than writing a new engine for yours, using X to render rather than using the framebuffer in different ways (it seems really stupid to me to make a window manager in UNIX that can't be networked), or making a script to run tar and gzip in a friendly graphic way is my opinion on how UNIX GUIs can be a success. Adding a completion bar to tar and using a --nodisplay flag is not.
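    The composition this comment prefers over tar's built-in -z is a one-line pipeline (the directory and archive names below are just for the demo):

```shell
# Compression by composition instead of a built-in -z flag:
# tar produces a stream, gzip compresses it, the shell glues them.
cd "$(mktemp -d)"
mkdir demo && echo hello > demo/file.txt
tar cf - demo | gzip > demo.tar.gz
gzip -t demo.tar.gz && echo "archive ok"    # prints: archive ok
```

    An alias or two-line script gives the convenience of -z while tar itself stays a completed, single-purpose tool.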
  • by Starky ( 236203 ) on Sunday February 11, 2001 @12:41PM (#439716)
    I think the writer of the article is missing a number of critical points about Unix.

    People use Unix for servers in part because of the command line. I could be administering a server under a DoS attack with the CPU under an incredible load, and I can still securely log in and administer the box.

    Administering Windows with PCAnywhere or VNC, that would be almost impossible.

    Even at home, I value being able to check my e-mail and perform other useful functions on a remote box while at the same time downloading gigs of cruft over a 256K DSL connection. I also enjoy being able to work on a remote box without ever having to worry about the speed of my connection. Copying a bunch of files with

    cp *.out *.bak ~/foo

    is a hell of a lot faster and easier than opening up Windows Explorer via VNC and cutting and pasting.

    What would be of benefit would be to build hooks into commonly-used command line tools that would lend themselves easily to an event-based model. Then whoever wanted to build a GUI for these tools would be more than welcome.

    Personally, I think a term that would take

    cat photo.png page.html

    and render the image and/or HTML on the page would be a wonderful addition to the command-line toolset. However, the ability to work in text-only mode is a critical part of the appeal of Unix.

    I see this same common misperception that gave rise to this article amongst a lot of non-technical business-types that wander through a Unix shop. They see the command line and think that it looks incredibly primitive and atavistic.

    What they don't realize is that there are GUI tools that do the same thing as gzip, tar, top, ls, cp, rpm, and other command-line mainstays, but that no one who uses a Unix system professionally uses them. And there's a reason for that.
  • by Metrol ( 147060 ) on Sunday February 11, 2001 @12:24PM (#439734) Homepage
    While I find some of your arguments here have merit, I believe that you're looking at this with a sort of CLI blinder on. That, and breaking your post up into paragraphs would certainly help its readability.

    Yes, many applications that are GUI only tend to be so not due to any need beyond that's where the audience is. Folks used to think, why in the world would I need a spreadsheet run from a GUI? Then those same folks got to seeing all the other information that can be derived from changing a cell color, altering a font, or porting the information to a variety of graphs. Let's face it, the GUI spreadsheet was the real selling point of Windows back in the early days.

    As to remote access, from a dial-up modem I'm able to log into my office computer running NT and LapLink and utilize it. Delays in typing up stuff like e-mail are annoying, but only slightly more so than logging in via SSH to a remote BSD box and typing in Pine. In each case it just ain't like being there when you're doing the dial-up thing.

    I will firmly agree with one point you seem to be driving at, though. There is an over-reliance upon the mouse in many GUI-based apps that actually hinders usability. I personally find that NT does a better job of keyboard support than either KDE or Gnome. Of the two, I find that KDE seems to do a better job with keyboard support, but there's a fair amount of work still to be done.

    As a FreeBSD user, I personally don't see a lot of front line CLI based office apps for it. No, I do not count EMACS as a word processor. Nothing even approaching the level of VisiCalc on the spreadsheet front either. I'd love to see more work done on this myself, but the fact is that most folks today simply don't feel comfortable unless there's a GUI running the app. The only thing that will even dent this kind of paradigm are compelling tools for a CLI that folks will want to use. No amount of preaching will have as dramatic an impact as that.
  • by crucini ( 98210 ) on Sunday February 11, 2001 @12:53PM (#439762)
    Well that's cool, but is there any particular reason it needs to be graphical? Imagine a smart shell: I type 'ls -<TAB>' and it shows me all the options to ls. There should be another keystroke, so that if I type 'ls -alR<magickey>' it would explain what those options do, then return my command line in the same state for further editing.
