Why not use a distribution and a package manager?

I have a few Linux computers, but they do not use a package manager. They’re not “Red Hat” computers, or “Debian”, or “Ubuntu”. Once, 13 years ago, they were Slackware. Briefly. I administer these boxes manually, for lack of a better word.

Maintaining a Linux computer manually is a fair amount of work. Installing new software is not always trivial, and sometimes things break in subtle ways that may take some effort to debug. I plan to start recording my adventures here, in part so that I can come back and see what I did the next time I upgrade something and it misbehaves in a familiar manner. Because I do things manually, I tend to run into problems that the majority of Linux users don’t experience. I often have to look on the web for answers to questions, so I hope my experiences can help out other people who, for whatever reason, come across one of these unusual problems.

What do I have against distributions and package managers? Nothing, really. They are very useful. I do have one computer that was installed from packages, a MythTV computer that I installed from a KnoppMyth CD. This is a good example of a place where package managers are useful. The computer is an appliance that I set up once, and then don’t ever modify. It’s not exposed to the Internet, and it isn’t going to change much. I don’t need to install new software on it, because it’s a dedicated single-purpose machine that already does what I want it to do. And yet, I’ve “broken the packages” on the box. There are files ostensibly under control of the Knoppix package manager that I have replaced with recompiled binaries, and which I am maintaining myself now. I’ll talk about that in a later post.

Here are some of the things that I think are good and useful about distributions and package managers (note that there are some exceptions to these rules, but most package managers supply at least some of these benefits):

  • They supply the entire filesystem in compiled form, allowing a new computer to be set up and running in under an hour with reasonable defaults, usually after asking just a handful of questions.
  • They usually are associated with a good setup tool that can configure the software correctly for the hardware attached to your computer.
  • They have a good, general-purpose kernel with modules ready to handle many situations.
  • They keep track of dependencies to help ensure that interdependent packages are correctly installed, so that the user doesn’t end up with an installed package that fails to work correctly.
  • They provide a single location for access to updates and security fixes. A user can simply ask the package manager to do an “update to latest packages”, and expect that they have all of the updates provided by the distribution.
  • If you have a dozen new computers to set up, possibly even on different architectures, it’s not a very big job with the correct installation media available.
  • Probably most importantly, distributions and package managers provide an easy way for people to administer their Linux computer without having to become Linux experts. The computer is a tool used to perform other activities, and a distribution lets the person work with the tool, instead of spending a lot of time maintaining the tool.
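As a concrete sketch of the “update to latest packages” step mentioned above, here is what it looks like on a Debian-style system (a hypothetical example; other distributions have equivalent commands such as dnf, zypper, or pacman):

```shell
# Hypothetical Debian/Ubuntu-style update cycle.
apt-get update     # refresh the package index from the mirrors
apt-get upgrade    # fetch and install every pending update
```

Two commands, run as root, and the distributor’s entire stream of bugfixes and security updates is applied. That convenience is real; the rest of this post is about what it costs.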

So, why don’t I use package managers? There are a few drawbacks to the use of package managers, and for me, they outweigh the benefits. Other people will have different priorities. I would never suggest to a newcomer to Linux that they should be going distribution-free. A person who maintains a large collection of computers on dissimilar hardware might also be poorly served by breaking the distributions (though I have actually done exactly that).

What don’t I like about package managers and distributions? Well, here’s a collection of drawbacks:

  • It isn’t always clear what your computer is doing. There may be packages or services installed that you don’t want, doing things you don’t understand. Somewhere in the 200 packages that were installed when you set up the computer, you may have wound up with, say, an FTP daemon you didn’t ask to have. When you’re installing software manually, you’re more likely to install only the things you really need.
  • Distributions tend to ship with older code. Distributors have to freeze their versions and do extensive testing, and by the time the packages are shipped there may have been improvements, bugfixes, or security fixes that didn’t make it into the base media.
  • Bugfixes and security fixes can be delayed as you wait for the distributor to build updated packages. While most Linux distributors get security fixes out within a small number of days, there is still some delay between the time a fix is produced and the time that updated packages are available.
  • Distributions are set up to be good for the general case, but there will be times when they do the wrong thing for a particular special use.
  • Package installers are generally forbidden from interacting with the user, otherwise a new install would be a tedious exercise in configuring every package as it came along. Consequently, packages are usually dropped in with some default configuration.
  • Many programs come with multiple compile-time configuration options. A media player may have support for multiple codecs, output devices, companion devices, and so on. A distribution will usually turn on as many of these options as possible. Some of these options might not be of interest to a specific user, but that user is still forced to install other packages holding libraries he or she doesn’t expect to use. These dependent libraries increase the interconnectedness of the packages, which can make what would be a simple upgrade of one package into a huge transaction that touches a dozen other packages and the kernel.
  • Because it’s easier for each file to be owned by a single package, even when a file controls the behaviour of multiple packages, distributions tend, where possible, to break such files into fragments that are gathered up from some other location. This can make it hard to figure out exactly what a specific application is doing.
  • Distributions and package managers don’t insulate the user in all cases. Some users with unusual requirements may still end up having to install software by hand and figure out how to tie the new software into the system correctly, and sometimes the package management system makes such efforts more difficult.
  • Most importantly, for me, a package manager hides too much of what is happening. You don’t have to learn how to configure a program, you don’t know what files it’s installing, it’s a bit too much of a black box for my tastes.

Given all this, I’ve decided that I prefer not to use package managers. Consequently, I’ve been manually maintaining my Linux computers for over 13 years now.