
Imagining the CLI For the Modern Machine

scc writes "TermKit is a re-think of the storied Unix terminal, where the human view, input, and data pipes are separated. Output viewers render any kind of data usefully. It may not be a new idea, but it's certainly a new take on it." I know you are quite comfortable in your shell of old, but this sort of thing sure gets my juices going. The best of both worlds.
  • Mac only. (Score:4, Interesting)

    by The MAZZTer ( 911996 ) <.moc.liamg. .ta. .tzzagem.> on Thursday May 19, 2011 @01:30PM (#36181736) Homepage
    This saddens me; I would really like Windows and Linux ports. There's a brief mention that it should work in a normal web browser, and it appears to use node.js, but I'm not sure exactly what to do, since I haven't done any coding with node.js.
  • Style over Substance (Score:4, Interesting)

    by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Thursday May 19, 2011 @01:36PM (#36181840) Journal

    RAM and bandwidth are cheap, why not add tons of bells and whistles? They may not make anything more functional, but they make it more fun, and that's what counts, right? Oh, it's only for Mac? Well that makes perfect sense.

  • by MarcQuadra ( 129430 ) on Thursday May 19, 2011 @01:37PM (#36181850)

    I like some of this idea, but frankly, it doesn't go far enough. Take a look at Windows PowerShell. Instead of the UNIX 'everything is a file' philosophy, it says 'everything is an object', and it's pretty cool.

    I would pay good money for a PowerShell implementation on Linux, and even more if Linux internals were exposed in the same way that WMI objects are on Windows.
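
    For anyone who hasn't played with it, this is roughly what that looks like (PowerShell 2.0 syntax, typed from memory, so treat it as a sketch rather than gospel):

        # Processes come back as objects; sort and select on properties, not text columns
        Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, Id, CPU

        # OS internals pulled straight out of WMI, no parsing of anything required
        Get-WmiObject Win32_OperatingSystem | Select-Object Caption, Version, TotalVisibleMemorySize

    A Linux equivalent would mean exposing /proc, udev, dbus and friends as queryable objects in the same way, which is exactly the part I'd pay for.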

    And this is from a thirteen-year Linux veteran.

  • Re:Advantages of CLI (Score:5, Interesting)

    by Alternate Interior ( 725192 ) <slashdot.alternateinterior@com> on Thursday May 19, 2011 @01:40PM (#36181902) Homepage

    This looks cool when everything works, but what happens when you try to `cat` a JSON file with a syntax error? The terminal is already the lowest common denominator. If you want better/easier/more user-friendly ways, they're out there, but doing it in the terminal layer seems like the wrong approach.

  • by bl8n8r ( 649187 ) on Thursday May 19, 2011 @02:24PM (#36182506)

    I like the simplicity of Xterm. It works well with SSH, can talk to endless serial devices (like console terminal logins on headless stuff) and can run over a modem. All I need is twm and Xterm and I have a nice, lightweight X desktop on a server for installing Oracle. There aren't a lot of dependencies, so I can keep the software footprint small. Updates are faster and fewer.

    Now, in KDE on a desktop, something like TermKit might be more practical. Don't forget, though, that eye candy comes at the expense of resources. You can't have all that bling without giving up CPU or RAM. In the end, is the payoff worth being able to run a screensaver in your terminal?

    Consider all the work that went into the Compiz bling: it's cool, but I just don't use it. The exploding windows are neat; I just don't see the point in having a desktop that contributes to my distractions.

  • Re:Advantages of CLI (Score:4, Interesting)

    by gilleain ( 1310105 ) on Thursday May 19, 2011 @02:25PM (#36182518)

    So, you'd like someone to write a natural language shell where you can describe what you'd like to happen, possibly badly, and the shell would magically know what you mean and do the right thing?

    COMPUTER : ENHANCE!

    Also, AppleScript is where people should go for "move file 'myfile.txt' to directory 'somedirectory'". OK, so not exactly that syntax, but still.

  • Re:PowerShell (Score:4, Interesting)

    by fusiongyro ( 55524 ) <faxfreemosquito@@@yahoo...com> on Thursday May 19, 2011 @02:43PM (#36182754) Homepage

    I think you have this all wrong. This is not a terminal emulator app; it is an attempt at creating a novel text-based user interface with a lot of the graphical niceties Mac OS X users are accustomed to. It preserves the REPL-style interaction method but replaces text output with HTML output, and line-of-text input with token input.

    The author is not on a mission to wean Unix-lovers like us from our Terminal.app, he's trying to make something like it for our friends who admire the power of Unix but aren't able to commit to it.

    Graphics in the terminal as you describe it is a fundamentally different thing from what's being attempted here. Yeah, we have ncurses and we have svgalib, but what we do not have is a set of Unix fundamentals that return graphical output to the command line interface, interleaved with the text of the commands. To do so would probably be impossible; svgalib takes over the whole screen, for example, and with ncurses you are dealing with characters rather than pixels. Think of it more as WebKit interpreting command output as HTML.

    So while a fair amount of the coding effort so far has gone into the server and the desktop app, as time goes on much more effort will be spent on wrapping existing Unix utilities so that they return HTML this thing can use, or on developing alternatives to the Unix standbys that are substantially different and more amenable to new users and to this interface.

    One capability the author talks about wanting is a way to highlight the command line arguments based on their relative safety or syntactic correctness. This will obviously require introducing a lot of additional information that just isn't there by itself, much like completion patterns for bash or zsh.

    In short, I think you've completely misunderstood what's going on here, and that's why you're missing the point.

  • by Alef ( 605149 ) on Thursday May 19, 2011 @02:55PM (#36182936)

    You missed the biggest pro of them all, central to the Unix philosophy: Composition of simple tools to do complex tasks.

    With a GUI, you are bound to whatever the GUI designer has included, and basic features are replicated endlessly in different GUIs. For example: if I want to process five files with some program on the command line, I can list them with ls or find, type them manually, or cat the list from a file, just to name a few ways. With a GUI, you often have only the Open File dialog, built right into the processing program, and that's it. In that case, creating the list of files is not separated from processing them.

  • by benjymouse ( 756774 ) on Thursday May 19, 2011 @03:27PM (#36183316)

    > The big advantage of the Unix philosophy is that plain text is human readable. 'Objects' have this terrible problem that you always need a specific program to read and write them.

    Not true. Objects can be rendered on the terminal as well. PowerShell does this all the time. For some object types a certain format/method has been registered, but for all other types PowerShell just falls back to default rendering - which is to render the properties. You don't need *any* specific program to write objects in PowerShell. Never. One distinct advantage of this is that you can actually *choose* exactly how you want the objects written without relying on each and every little CLI tool to include a whole battery of output options.
    ls|ft lists files/dirs in a table (ft is an alias for Format-Table): each property in its own column.
    ls|fl lists files/dirs in a list (fl is an alias for Format-List): each property on its own line.
    ls|fw lists files/dirs in "wide" format (fw is an alias for Format-Wide): multiple columns with just the name.
    The cool thing is that ps|fl works similarly: it lists processes with each property on its own line.
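
    To make the "you choose the format" point concrete (syntax from memory, so treat this as a sketch):

        # Default table rendering
        ls | ft

        # Pick the columns yourself; no per-tool output flags needed
        ls | Format-Table Name, Length, LastWriteTime -AutoSize

        # Or sort on a property and list only the fields you care about
        ls | Sort-Object Length -Descending | fl Name, Length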

    > you don't need to get some separate documentation that may be wrong, not up to date, or not even exist.

    PowerShell builds upon .NET, COM and WMI, which are all models that support discoverable objects. One of the first cmdlets a powersheller learns is gm, an alias for Get-Member. This cmdlet reflects over the objects piped to it and documents their types: properties, methods, events and so on. No need for external documentation that may be out of date.
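
    For example (again a rough sketch, but the cmdlets are real):

        # Ask the objects themselves what members they expose
        ls | gm
        Get-Process | gm -MemberType Property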

    > This means development and testing is simple, you do it one module at a time, type the input and watch the output. And you can very easily combine different programs in a way that no one tried before.

    Well, it is the same way with PowerShell. Even though the pipeline streams objects, the output of the last command in a pipeline is rendered on the terminal using the default or registered format (or a format you control). But PowerShell takes it a few steps further, e.g. by defining common infrastructure for transactions and for risk control, such as running cmdlets in simulated "whatif" or "confirm" mode in a unified, context-aware way, so that cmdlets executing within a script inherit the mode from the script invocation. The fact that *all* cmdlets support the -WhatIf parameter lets you try out even potentially state-changing scripts and cmdlets before actually executing them.
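
    For instance (paths made up, purely to illustrate the idea):

        # Dry run: reports what would be deleted without touching anything
        Remove-Item C:\Temp\*.log -Recurse -WhatIf

        # Or prompt before each individual state-changing operation
        Remove-Item C:\Temp\*.log -Recurse -Confirm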

    > I don't think powershell offers any advantage over the way Unix has been working for forty years.

    Frankly, based on the above, it doesn't seem like you know enough about PowerShell to pass that judgement. And the fact that something has worked for forty years doesn't mean it cannot be improved. I'll grant that PowerShell is a more natural fit for Windows, given that so much of the OS and its applications are exposed as objects.

  • Re:WHOOOOSH! (Score:5, Interesting)

    by Novus ( 182265 ) on Thursday May 19, 2011 @04:11PM (#36183916)

    In practically any sane terminal emulator, you're not seeing the bytes; you're seeing a picture generated from those bytes by interpreting them as text with embedded control codes. This is merely an extension of that concept: instead of just "clear the screen" and "switch text colour to red", you also have "display the following PNG". Considering that there are tons of different sets of escape sequences in use, one more would hardly be a problem. Since the author suggests that the metadata identifying the data type (MIME-style) would be separate from the actual data, legacy programs would presumably just ignore the additional information and behave like they used to.

  • by wasabii ( 693236 ) on Thursday May 19, 2011 @04:58PM (#36184508)

    Every example you just posted requires you to actually examine the output of each command and apply brittle, convoluted text-parsing constructs like grep and awk. All of these break when the author alters the output text format. PowerShell has none of those limitations. If the author of ps adds a new property to each object, he does not need to be concerned with previous users of his cmdlet, because nobody is actually parsing his output. His output is strongly typed objects. If a previous user didn't consume his new property, it doesn't matter; they'll continue to not consume it.

    Instead of being built on brittle text parsing, scripts are built on a self-documenting object model. There is no text parsing. That's extra work. Why do it?
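
    Roughly the difference between these two (sketching from memory):

        # Text-parsing style: breaks the moment ps changes its column layout
        #   ps aux | grep httpd | awk '{ print $2 }'

        # Object style: filter and select on typed properties, no parsing anywhere
        Get-Process | Where-Object { $_.Name -eq 'httpd' } | Select-Object Id, WorkingSet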

  • by gilgongo ( 57446 ) on Thursday May 19, 2011 @05:22PM (#36184774) Homepage Journal

    I've often wanted to have a CLUI that works with my GUI. Imagine I'm in Photoshop, mousing or tablet-ing away, and I have a layer on my canvas. Rather than trying to remember where in the menu structure a bunch of commands are in order to manipulate that layer, I just bring up my CLUI and type something like "resize 50%, flip, gamma -20". Or how about in Word: "Find foo replace bar, insert header from page 2-", and so on?

    Why are we forced to find commands in mouse-driven menu bars (or worse, "ribbons" and whatnot) when they could be available at any time in the app you're using?
