
Autopackage Universal Package Manager

nanday writes "I currently have an article on Linux.com about Autopackage. Autopackage is developing a universal package manager for the GNU/Linux desktop, separate from the package management for the system. It includes installation for individual users, a lot of concern for interface design and documentation, and some ideas about the future of package management that are sure to raise some debate." From the article: "Besides ... technical problems, the Autopackage team believes that managing system and desktop software together is a mistake. It requires developers to pay attention to desktop applications that are of secondary importance to them, and confuses end users with problems about dependencies and upgrades." Linux.com is a sister site to Slashdot. (say that three times fast)

  • Great Idea (Score:2, Interesting)

    by nbSouthPaw ( 935530 )
    As someone who is new to Linux, this is the one area I struggle with. While not difficult, installing software on Linux is quite different even among distributions. Autopackage could make desktop software quite a bit easier for those of us who don't want to mess with system software very much but would like more control over desktop applications.
    • Re:Great Idea (Score:5, Insightful)

      by Arandir ( 19206 ) on Friday December 02, 2005 @05:12PM (#14169579) Homepage Journal
      "Native package managers' dependency detection depends on a database. Autopackage, on the other hand-detects dependencies by actually scanning for them."

      Such a simple idea, such an absurdly simple idea. Yet it's one that 9 out of 10 distros just can't manage to get right. Building one library yourself should NOT break your entire package management system. A minor bugfix release to a library where no ABI has changed should NOT necessitate an update to every application that uses it.
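
      A minimal sketch of what scanning-based detection could look like, as opposed to a database query (library name hypothetical; ldconfig -p simply lists whatever shared libraries the linker cache knows about, however they were installed):

          # Look for the library itself, not a database record of it.
          if ldconfig -p | grep -q 'libgtk-x11-2.0.so.0'; then
              echo "GTK+ 2.x found"
          else
              echo "GTK+ 2.x missing" >&2
              exit 1
          fi
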
      • Re:Great Idea (Score:3, Informative)

        by /ASCII ( 86998 )
        I don't know enough about the package databases to know if there is a simple way around your first problem, but the second one is only a problem in cases of lazy packagers making bad dependency specifications. Package dependencies should be tied to the relevant ABI version of the library, which is usually stable across minor version numbers.
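
        For illustration (library name and versions are hypothetical), the soname is the ABI handle a dependency should point at, and a bugfix release leaves it untouched:

            $ objdump -p /usr/lib/libpng.so | grep SONAME
              SONAME      libpng.so.3
            # A 1.2.7 -> 1.2.8 bugfix still ships libpng.so.3, so packages
            # depending on that soname need no rebuild.
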
        • but the second one is only a problem in cases of lazy packagers making bad dependency specifications.

          But that's the whole point! 9 out of 10 package makers are lazy. While I don't agree that autopackage is the solution, they are correct in pointing out that there is a problem.
  • by keesh ( 202812 ) on Friday December 02, 2005 @05:09PM (#14169546) Homepage
    Autopackage is not exactly loved by the distro people. See commentary from Gentoo [gentoo.org], Debian [kitenet.net], more Debian [licquia.org]... Might be wise to keep those remarks in mind when considering using Autopackage packages on a distribution...
    • I agree. It's an interesting and potentially great idea to separate the system from the desktop, but it's poorly designed and implemented.
    • ...considering it's not posted to Index. :)
    • The biggest problem IMO is their claim that "managing system and desktop software together is a mistake"; this is just obviously wrong. Every Linux distro is essentially component-based; it's just not possible to say "here is the line in the sand."

      I'm just as likely to want to upgrade postgresql as to get "muine" ... and if I upgrade muine, what about my installed plugins? ... what about if I have the distro's muine installed and want a plugin that requires a newer

      • The plugins should be upgradable too. But what prevents you from doing that?

        The problem is that, unless you make a distinction between system and desktop, the user will be presented with a huge list of thousands of packages. If my dad wants to upgrade muine he really isn't interested in seeing glibc and qt in that list. It's more a user interface problem.
        • The problem is that, unless you make a distinction between system and desktop, the user will be presented with a huge list of thousands of packages. If my dad wants to upgrade muine he really isn't interested in seeing glibc and qt in that list. It's more a user interface problem.

          And you still have to solve this problem when I distribute an epoll-using application to a system using a glibc that doesn't have the epoll symbol. And this is such an obvious UI issue, I dearly hope you have a better example than this for why you screwed the pooch.

          • "And you still have to solve this problem when I distribute an epoll using application to a system using a glibc that doesn't have the epoll symbol. And this is such an obvious UI issue, I dearly hope you have a better example than this for why you screwed the pooch."

            In this case, yes. But usually this doesn't happen. Remember that autopackage targets desktop applications: i.e. the stuff that average users care about, not server applications. I don't know many desktop applications that use epoll or other

          • Huh? Programs that use epoll can either use relaytool, direct dlsym, or embed the syscalls directly (as Wine does). This is a non-issue; why is it even being debated? Programs don't grow dependencies by magic; developers add them, and it's up to the developers to decide what the minimum requirements of their app are. The lower they are, the larger their userbase can potentially be.
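
            As a concrete sketch (path and output vary by distro), you can ask the installed glibc directly whether it exports the epoll entry points:

                $ objdump -T /lib/libc.so.6 | grep -w epoll_create
                ...  GLIBC_2.3.2  epoll_create
                # If the symbol is absent, a dlsym()/relaytool fallback to
                # poll() keeps the app running instead of failing to load.
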
    • Phew, those are some fairly harsh commentaries. However, it would be nice if the bigger distributions would cooperate with the autopackage guys, simply because there is no way in hell any given distribution will have every piece of software everyone might want. What if Adobe wants to distribute Photoshop CS on Linux? I guess it's possible to do it like Crossover on Gentoo, which has an ebuild that installs the downloaded binary. But I don't think that solution scales well.

      • Believe me, we have tried. Unfortunately the discussion often boils down to:
        - We (= the distro guys) don't like proprietary software, so it's not our problem.
        - We don't care about inter-distro compatibility. Not our problem.
        - RPM/DEB rules all. Everything else is evil.
        Really, we tried to negotiate. So far all attempts failed.

        • - We (= the distro guys) don't like proprietary software, so it's not our problem.


          And you can't see why? Proprietary software is a security nightmare: if you can't see the sources, you can't be sure of what the binaries are doing. Closed source stuff belongs in one place and one place only, /opt, and it should never be given access to system resources that compromise the user's machine.
          • "And you can't see why?"

            Look at the post I replied to. "What if Adobe wants to distribute Photoshop CS on Linux?" And there currently is no alternative to Photoshop. If you even mention Gimp, you'll be instantly flamed down by thousands of Slashdotters who say Gimp is nowhere near being able to take on Photoshop.
            Or how about things like Flash? Or the NVidia drivers? No good open source alternatives.

            But that aside, even assuming that there *are* good open source alternatives, you'll still have a problem because
    • by FooBarWidget ( 556006 ) on Friday December 02, 2005 @05:59PM (#14169958)
      They don't like us because they want to centralize software packaging. Don't just blindly assume we're evil just because they're critical of us. Read what they write. For example:

      "To even unpack the package, you must run arbitrary code supplied by an untrusted source."
      Untrusted source? Is the upstream developer an untrusted source? If he cannot be trusted, then why would one trust third-party Gentoo packages?

      "They install directly to the live filesystem."
      We have this little feature called "installing to $HOME". It won't touch /usr if you don't want it to. How hard is that?

      "They do not support configuration protection."
      We back up files if they're being overwritten.

      "They can clobber arbitrary files on uninstall."
      No proof. Bug reports please, because we sure haven't received any from our users.

      "The entire format is completely unportable and highly tied to x86 Linux systems."
      That's because none of the developers has anything but x86 systems. It's a bit hard to support PPC if we don't have such a machine, right? We already have some stuff in place to make sure x86 packages aren't executed on other architectures.
      But that aside, let us go back to the original problem: software installation isn't easy enough for the average user. And what architecture do 99% of average users use? x86! That's the main reason we focus on x86: it's where our target group is! The software installation problem is pretty much unique to x86 Linux.
      Non-x86 architectures are usually servers, not desktops. They don't need autopackage, so why should we worry about them? PPC users most likely use MacOS instead of Linux, so there's not much point in supporting that either.

      As for joey's blog: the only thing he's complaining about is that at the moment you cannot programmatically extract files from a .package. Average users don't even care about that! They just want the damn software to be installed and to work!

      Yeah, I haven't replied to everything, but you get the point.
      • by Nevyn ( 5505 ) * on Friday December 02, 2005 @06:17PM (#14170088) Homepage Journal
        They don't like us because they want to centralize software packaging.

        Nobody wants that, but it's basically required. You can't solve the "I need to install mod_foobar, which requires Apache-httpd-2.2.0" problem without understanding what Apache-httpd is and what version you have, and how you get from here to there ... which requires a single way to query all software. This doesn't mean it can't be distributed, like DNS, just that you can't pretend you can have two roots and everything will be fine (or even blame those nasty root server operators for telling you otherwise when you try).

        Untrusted source? Is the upstream developer an untrusted source? If he cannot be trusted, then why would one trust third-party Gentoo packages?

        Say you're installing a GNOME theme; it's basically just XML+PNGs ... then, assuming the installer is secure, there is no reason you can't just install this "package" from anywhere and try it out. The same is true of installing backgrounds, or documentation, etc. (think CIA World Factbook).

        With arbitrary binary applications this is somewhat muddier, but there is still a delineation between trusting the package and trusting the package installer (the latter of which likely needs more privileges).

        For instance "conary" is one of the newer package management ideas, and to them packages are basically just collections of files which are tagged (Ie. doc/bin/shared-library). Installing/removing a package has basically zero security concerns.

        • "With aribtrary binary applications this is somewhat muddier, but there is still a deliniation between trusting the package and trusting the package installer (the later of which likely needs more privilages)."

          Not so with autopackage. You can run autopackages as a user and install to ~/.local. You don't have to give it root access if you don't want to. And as autopackage is open source, you can check the source code for trojans. Autopackages are tarballs with a shell script header. Anyone can check the shel

          • Not so with autopackage.

            Again, we know that, and as we just said, the fact that autopackage doesn't allow separate privileges between the package and the package installer is a _bug_, not a _feature_.

            You can run autopackages as a user and install to ~/.local

            Yes, well done ... in a few cases I might like to do that. However, 99% of the time I'll want them outside my home dir. And that still doesn't excuse not separating the package installer from the package. Again with a font/background/so

            • "Again, we know that and as we just said the fact that autopackage doesn't allow seperate privilages between the package and the package installer is a _bug_ not a _feature_."

              Does RPM/DEB provide that separation? Last time I checked (today), RPMs *must* be installed as root, and the preinstall and postinstall RPM scripts can do whatever they want, including rm -rf /. I also haven't found a way to install DEBs as non-root (on Debian and Ubuntu). Can dpkg prevent DEBs from wreaking havoc in pre- and postinst

              • Does RPM/DEB provide that separation? Last time I checked (today), RPMs *must* be installed as root.

                Then you haven't checked hard enough. You can create a user-level rpm database and use that to install as a user to a user-defined location. See the --prefix option; combine with --nodeps if you want to skip duplicating the system db.

                And the preinstall and postinstall RPM scripts can do whatever they want, including rm -rf /.

                See the --noscripts option.
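
                A sketch of the whole sequence (package name hypothetical; note that --prefix only relocates packages built as relocatable):

                    $ rpm --initdb --dbpath ~/.rpmdb
                    $ rpm -ivh --dbpath ~/.rpmdb --nodeps --noscripts \
                          --prefix ~/.local myapp-1.0-1.i386.rpm
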
                • "Then you haven't checked hard enough. You can create a user-level rpm database and use that to install as a user to a user-defined location. See the --prefix option combine with --nodeps if you want to skip duplicating the system db."

                  I know about that, but it isn't useful in practice. The fatal problems are:
                  1. The most important of all: programs packaged in RPMs are not designed to be relocatable. Path names for data file lookup are hardcoded into the binary. You can install RPMs to your home folder but

        • Nobody wants that, but it's basically required.

          It clearly isn't, as Windows and MacOS X - in fact, every commercial OS ever made - did not require the central repository scheme. They use things called "platforms" and we've been saying for a long time that Linux should have one.

          Until then, autopackages - which are basically the binary equivalent of source tarballs with extra features - are the next best thing for end users who just want to click in a web page and get what they expect (as opposed to som

          • It clearly isn't, as Windows and MacOS X - in fact, every commercial OS ever made - did not require the central repository scheme.

            Yes, they did; it's just that at the root of the repository was a large proprietary binary blob that couldn't be changed. And yes, I can see how that makes the incomplete solution easier ... but given that I can get complete solutions for less, I'll pass on your "next best thing".

            • You're assuming there's no complexity difference between 1 dependency and 100 - but this defies common sense and actual experience. If the "complete solution" were so great, how come there are so many articles and posts from people wishing for MacOS-type software management on Linux?
      • PPC users most likely use MacOS instead of Linux, so there's not much point in supporting that either.

        If you're really not interested in actually solving a problem, just say so. It's quite another thing to claim that real problems you have no intention of fixing Aren't Really Problems because your solution doesn't work for them.

        Existing packaging systems Just Work for Linux PPC. Yours doesn't. How is that possibly an improvement?

        • "If you're really not interested in actually solving a problem, just say so."

          I don't say so, which means I AM interested in solving the problem.

          "Existing packaging systems Just Work for Linux PPC. Yours doesn't. How is that possibly an improvement?"

          Autopackage's goal is not to be "better" than RPM/DEB. Autopackage's goal is not to surpass current packaging systems in every way possible. Autopackage is not supposed to completely replace RPM/DEB.

          Let us go back to defining the problem. The problem is: software
    • Rephrased, "why think for myself when other people can tell me what to think instead".

      • If a random guy from the street tells me my house is a fire waiting to happen, I might be curious and ask for reasons, or just say "I don't think so" and ignore him. If a guy trying to sell me fireproofing comes up and says that, I'm going to be very suspicious, and likely won't believe him no matter my opinion. If the local fire marshal says it ... I'm likely going to believe him.

        Thinking for yourself is all well and good, but when someone who knows what they are talking about says X then "

        • The fire marshal specializes in extinguishing fires, not in other areas. Likewise, the distro guys specialize in packaging their *own* distros, not in inter-distribution packaging. You still shouldn't let them do the thinking for you.

          Autopackage is used by high-profile projects like AbiWord, Inkscape and Gaim. Upstream trusts us. What do you think that means?
  • by dondelelcaro ( 81997 ) <don@donarmstrong.com> on Friday December 02, 2005 @05:21PM (#14169640) Homepage Journal
    By default, programs installed with root privileges are placed in /usr/share.
    I sure hope that's a typo for /usr/sbin... If not, well, that's so horribly broken that I don't even want to get started with it. FHS anyone?
    When a package is removed, its dependencies remain. Hearn and Lai point out that dependencies are not always removed by some other package management systems, either.
    Some package management systems may not do that, but at least they keep track of exactly which packages have been installed, so you have a chance of removing the dependencies at some point. With this solution you end up with files on your system that have no clear correspondence to any package, which kind of defeats the purpose of having a package in the first place. (To just expose my bias... aptitude, synaptic or deborphan anyone?)
    "A binary from one distribution doesn't always work on another, because of things like glibc symbols and C++ ABI." [...] "Native package managers' dependency detection depends on a database. Autopackage, on the other hand-detects dependencies by actually scanning for them."
    Right, and what exactly is autopackage going to do about these dependencies once it has found that they don't match up? Use LD_PRELOAD and have multiple copies of system libraries in place instead? Oh wait, autopackage is for "desktop packages only".

    Of course, all of that isn't to say that autopackage may not do something useful in the future, but it sure looks like some of the fundamental problems of developing and distributing packages which other packaging systems have already dealt with still remain to be solved.

    In any case, if you don't believe me, see what Scott Remnant has had to say on the matter (he's currently the dpkg maintainer, so he is at least passingly familiar with the issues surrounding a packaging format).
    • "I sure hope that's a typo for /usr/sbin... If not, well, that's so horribly broken that I don't even want to get strarted with it. FHS anyone?"

      It's either a typo, or the article's author got it wrong.

      If you install an autopackage as root, it'll be installed to /usr. This can be customized by editing a configuration option. If you install an autopackage as user, it'll be installed to ~/.local.

      "Some package management systems may not do that, but at least they keep track of exactly which packag
    • My (possible) solution was to not give autopackage my root password, which forces it to install everything to /home/me/.local

      It may result in multiple libraries and wasted space, but at least it can't confuse apt. Also, the security concerns are addressed this way.

      I really don't see why this isn't the default way to do autopackage.
      • "I really don't see why this isn't the default way to do autopackage."

        I don't know what you mean, as every time you install an autopackage, it asks you to enter the root password. If you want it to install to ~/.local, then just click "No Password" in that dialog. What is the problem?
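
        For the record, the non-root path is short (package name hypothetical; since a .package is a tarball with a shell script header, it can presumably be run directly):

            $ chmod +x inkscape-0.43.package
            $ ./inkscape-0.43.package   # choose "No Password" at the prompt
            $ ls ~/.local/bin           # everything lands under ~/.local
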
        • I mean that since most of what people complain about (rightly or wrongly) with autopackage is security (binaries being installed system-wide by a third party) and integration (files in the filesystem that weren't put there by the package manager), it seems like the default should be to always install to ~/.local. It's not a problem for me to always have it do this. I just thought it might be better to have this be the default behaviour.

          Also, maybe autopackage has been updated since I last used it, but I was
          • We allow root installs because some people want to install software system-wide. And some things just work better when you install to /usr as root (unfortunately, not all desktop environments fully support locating resources in $HOME). This is also why we *ask* the user whether he wants to enter the root password.

            All the complaints seem to come from power users who don't like the idea of a new packaging format. The inexperienced Linux users seem to love us, as can be seen from
    • Right, and what exactly is autopackage going to do about these dependencies once it has found that they don't match up?

      If a required dependency can't be found, the install fails, same as with a source tarball. It's the developer's job to ensure their dependencies are reasonable and widespread: projects like AbiWord or Inkscape haven't had an issue with doing this. There are various techniques you can use, like statically linking obscure/rare/unstable libraries or using relaytool/dlopen to access features which
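
      For instance (library names hypothetical), the link line for statically linking one obscure library while keeping common system libraries dynamic looks like:

          $ gcc -o myapp myapp.o \
                -Wl,-Bstatic -lrarelib \
                -Wl,-Bdynamic -lgtk-x11-2.0 -lglib-2.0
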

  • ...will it be ported to Windows?
  • If you don't mind keeping it separate, why not go with pkgsrc?

    It mostly works and it's available today.
  • We read about Autopackage months ago in an old Slashdot story [slashdot.org].

    Sounded sort of cool, but they always do. tar + gzip or tar + bzip2 is still the ultimate in packaging. I use Firefox on Slackware. What I end up doing is not installing the Firefox package that came with Slackware. It's difficult to update. Instead, I grab the latest tarballs of Firefox as they appear and install them manually in /usr/local. Configured my window manager with a button that points there. Of course doing it that way, the
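
    For reference, that manual routine is roughly (version number illustrative):

        $ tar -C /usr/local -xzf firefox-1.5.tar.gz
        $ ln -sf /usr/local/firefox/firefox /usr/local/bin/firefox
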

    • The reasons we don't deal with tarballs:
      1. No desktop integration. Tarballs don't install menu items and icons to the right place for you. We do, and we go to great lengths to deal with the myriad of different menu systems (see the sketch below).
      2. Binary compatibility. Tarballs are just that - tarballs. Part of the autopackage project is research into binary compatibility. We figure out why binaries don't work on other distros, and try to fix that. Without a solution to binary compatibility problems, using any format is meaningl
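
      To make point 1 concrete, here is the sketch promised above - the kind of step a plain tarball never performs. A freedesktop.org-style menu entry (names hypothetical; exact menu-system support varies):

          # Save as ~/.local/share/applications/myapp.desktop
          [Desktop Entry]
          Type=Application
          Name=MyApp
          Exec=myapp
          Icon=myapp
          Categories=Utility;
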
  • I think the "new package manager every month" people must all still be using redhat 9? I haven't thought about packaging since I switched to a Debian based distro (Ubuntu). The most intresting I've seen lately in packaging is Klik [kde.org] .
  • There's no doubt that Linux really is in the stone age as far as managing software goes. Every time I help a newbie install Linux I dread the next question: "How do I install software?" "Well, it's easy. We just edit this source.list file here and add these repositories from dag, freshrpms, etc., and then you can just apt-get it. Isn't that simple?" And they go running back to Windows. The point is that as developers and programmers and even tech-savvy people we can deal with this kind of thing,
    • Great post overall.

      For example, I have Java 1.3, 1.4 and 1.5(5) all installed. Mono, GTK, GNOME libraries and similar things should also go in this place. If I need an app that needs the GNOME 2.12 libraries, I should be able to install them alongside the 2.10 libraries, but in a clean, modular way that avoids DLL hell.

      Sounds good. The dynamic linker should be able to find and use the correctly versioned files for the libraries, and gcc should be able to find the versioned include files/dirs too.
    • "... download and install software for her Mac really makes me ashamed to show anyone how stuff is done on linux."

      That's the part that drives me nuts! I've been spoiled by package managers for too long now, and I can't bear the thought of having to prance around the internet trying to hunt down the latest builds of the software I want/need. I never want to have to go to gaim.sourceforge.net, mozilla.com, gimp.org, or hunt around for a new bittorrent client.

      THAT feels like the ice age to me. That is
      • No kidding...in my crontab, once a week, emerge --sync, updatedb, prelink -amR. Then once a week (I like to do the updates manually in case I have to adjust USE flags), emerge -uDNva world ; emerge depclean ; revdep-rebuild. I don't even think about updates anymore. Gentoo is freaking awesome...the only problem with Gentoo is that it's not a very good distro for those new to Linux or those without a decent amount of computer knowledge.
    • Portage (Gentoo) does something sort of similar. You can install multiple versions of some packages (example...kernels, GTK, GCC) in "slots". I always had the problems you mention with Linux (I used Redhat/RPM, bleh) until I switched to Gentoo. The package management in Gentoo is incredible. Even converting the whole system from GCC 3.3.5 to 3.4.4 (they aren't binary compatible with each other) is relatively painless. GCC 3.4.4 is installed in a new slot...switch to the new compiler, recompile system,
      • Thinking about having her try Ubuntu.

        Somehow I don't think Ubuntu can hold a candle to MacOS for her. Besides, MacOS runs so much nicer on her iBook than Linux does. Which brings up another Linux advocacy point: sometimes we promote Linux just to promote Linux, even though it is not a good fit for some, like my sister.
  • Why not either
    - compile the libraries in with the binary, leaving one binary and one conf file for each application

    or

    - keep the libraries separate from the binary, but store a copy of whatever libraries the binary expects in its own folder (/usr/bin/foo/lib)

    Disk space and RAM are both cheap. This kind of thing would suck on systems where resources are limited, but otherwise it would simplify things dramatically.
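
    A sketch of the second option as a launcher script (layout as proposed above, with the app's private copies in /usr/bin/foo/lib):

        #!/bin/sh
        # Prefer the app's bundled libraries, fall back to the system's.
        LD_LIBRARY_PATH="/usr/bin/foo/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
        export LD_LIBRARY_PATH
        exec /usr/bin/foo/foo "$@"
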

