Chapter 5

GUIs: Graphical User Interfaces

There are two ways in which you can interact with Unix: you can use a text-based interface or you can use a graphical interface. In Chapter 4, I introduced you to using Unix by explaining what it is like to use a shared system that has a text-based interface. In this chapter, I am going to explain graphical interfaces: what they are, how and why they were developed, and which ones are in common use today. In Chapter 6, we'll talk about both types of interfaces, and I'll show you the details of how to manage your work sessions.

Before we do, I want to teach you the basic concepts about graphical interfaces: how to think about them, and their place in the Unix universe. Along the way, I have a few treats for you: a few jokes you will probably get; one joke you probably won't get; a true/false test to see if you are a KDE or a Gnome person (you'll see); and some sound advice on how to create a Grandmother Machine.


What is a GUI?

A GRAPHICAL USER INTERFACE or GUI is a program that allows you to interact with a computer using a keyboard, a pointing device (mouse, trackball or touchpad), and a monitor. Input comes from the keyboard and the pointing device; output is displayed on the monitor. The design of the interface is such that it uses not only characters but windows, pictures and icons (small pictures), all of which you can manipulate.

When it comes to displaying information, there are, broadly speaking, two types of data, text (characters) and graphics (images), hence the name graphical user interface. Both Microsoft Windows and the Macintosh use GUIs, so I am sure you are familiar with the idea.

— hint —

When you talk about GUIs, there are two ways to pronounce "GUI": either as three separate letters "G-U-I", or as a word in itself, "gooey".

Choose whichever pronunciation best fits your temperament and your audience. (I'm a "G-U-I" man, myself.)

Because of cultural inertia, most GUIs today follow the same basic design. Nevertheless, when you take a look at the various Unix GUIs, you will see some important differences compared to Windows and the Mac. Perhaps the most basic one is that, in the world of Unix, no one believes that one size fits all. As a Unix user, you have a lot of choice.

To work with a GUI, there are several basic ideas you need to understand and several skills you have to master. First, you need to learn to use two input devices cooperatively: the keyboard and a pointing device.

Most people use a mouse but, as I mentioned above, you may also see trackballs, touchpads, and so on. In this book, I will assume that you are using a mouse, but the differences are minor. (I prefer a trackball, by the way.)

Typically, as you move the mouse, a pointer on the screen follows the motion. This pointer is a small picture, often an arrow. With some GUIs, the pointer will change as you move from one region of the screen to another.

Pointing devices not only move the on-screen pointer, but they also have buttons for you to press. Microsoft Windows requires a mouse with two buttons; the Mac requires only a single button. Unix GUIs are more complex. Most of them are based on a system called X Window (explained in detail below). X Window uses three mouse buttons, although it is possible to get by with two.

By convention, the three buttons are numbered from left to right. Button number 1 is on the left, number 2 is in the middle, and number 3 is on the right. GUIs are designed so that you use button 1, the left button, most often. This is because, if you are right-handed and the mouse is on your right, the left button is the easiest one to press (using your right index finger). If you are left-handed, it is possible to change the order of the buttons, so you can move the mouse to your left and use it with your left hand.
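
For the curious, here is a minimal sketch of how a program might swap the button order, written in C using Xlib (the C library that programs use to talk to X). This is only an illustration of the idea; in practice, your desktop environment will offer a left-handed setting that does this for you.

    /* A minimal sketch: swap mouse buttons 1 and 3 for left-handed use.
       Compile with: cc swap.c -o swap -lX11 */
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *display = XOpenDisplay(NULL);   /* connect to the X server */
        if (display == NULL)
            return 1;

        unsigned char map[256];
        int buttons = XGetPointerMapping(display, map, 256);

        if (buttons >= 3) {
            unsigned char tmp = map[0];          /* exchange the actions of  */
            map[0] = map[2];                     /* physical buttons 1 and 3 */
            map[2] = tmp;
            XSetPointerMapping(display, map, buttons);
        }

        XCloseDisplay(display);
        return 0;
    }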

GUIs divide the screen into a number of bounded regions called WINDOWS. As with real windows, the boundary is usually, but not always, a rectangle. Unlike real windows, GUI windows can overlap on the screen, and you can change their sizes and positions whenever you want. (You can see this in Figures 5-3 and 5-4 later in the chapter.)

Each window contains the output and accepts input for a different activity. For example, you might be using five different windows, each of which contains a different program. As you work, it is easy to switch from one window to another, which allows you to switch back and forth from one program to another. If you don't want to look at a window for a while, you can shrink it or hide it, and when you are finished with it, you can close it permanently.

In Chapter 4, we talked about what it is like to use Unix with a text-based interface, one that emulates a character terminal. In such cases, you can only see one program at a time. With a GUI, you can see multiple programs at once, and it is easy to switch from one to another. In fact, one of the prime motivations behind the development of X Window — and of windowing systems in general — was to make it as easy as possible for people to work with more than one program at the same time.

There are other important ideas and skills that you need to understand in order to work with a Unix GUI, and we will discuss them in Chapter 6. In this chapter, we'll talk about the most important ideas relating to such systems. We'll start with the software that forms the basis for virtually all Unix GUIs: X Window.


X Window

X Window is a system that provides services to programs that work with graphical data. In the world of Unix, X Window is important in three ways. First, it is the basis of virtually all the GUIs you will encounter. Second, X Window allows you to run programs on a remote computer, while displaying full graphical output on your own computer (see Chapter 6). Third, X Window makes it possible for you to use a wide variety of hardware. Moreover, you can use more than one monitor at the same time.

Imagine yourself working at your computer and, in front of you, you have five open windows. Three of them are running programs on your computer; the other two are running programs on remote computers. All of them, however, are displaying the graphical elements that come with a GUI: icons, scroll bars, a pointer, pictures, and so on. It is X Window that makes this all possible. It does so by working behind the scenes to provide the supporting structure, so that programs needing to display graphical data and receive input from a mouse or keyboard don't have to bother with the details.
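
To make this concrete, here is a small sketch in C using Xlib. The only difference between drawing on your own screen and drawing on another machine's screen is the name of the display you connect to. (The host name remotehost is made up for the example; a real remote server would also have to be set up to accept your connection.)

    /* A sketch of the idea: the same client code can talk to a local
       or a remote X server. Compile with: cc display.c -o display -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        /* NULL means "use the DISPLAY environment variable" (usually local) */
        Display *local = XOpenDisplay(NULL);

        /* "remotehost:0" names screen 0 of the X server on remotehost */
        Display *remote = XOpenDisplay("remotehost:0");

        if (local != NULL)
            printf("Connected to %s\n", DisplayString(local));
        if (remote != NULL)
            printf("Connected to %s\n", DisplayString(remote));

        if (local != NULL)  XCloseDisplay(local);
        if (remote != NULL) XCloseDisplay(remote);
        return 0;
    }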

For convenience, we usually refer to X Window as X. Thus, you might ask a friend, "Did you know that most Unix GUIs are based on X?" (I know X is a strange name, but you will get used to it quickly if you hang around the right type of people.)

The roots of X extend back to MIT (Massachusetts Institute of Technology) in the mid-1980s. At the time, MIT wanted to build a network of graphical workstations (powerful, single-user computers) for teaching purposes. Unfortunately, what they had was a mishmash of mutually incompatible equipment and software from a variety of different vendors.

In 1984, MIT formed PROJECT ATHENA, a collaboration between researchers at MIT, IBM (International Business Machines Corporation) and DEC (Digital Equipment Corporation). Their goal was to create the first standardized, networked, graphical operating environment that was independent of specific hardware. This environment would then be used to build a large, campus-wide network called Athena.

To build Athena, MIT needed to connect a large amount of heterogeneous computing hardware into a functioning network, and it all had to be done in a way that would be suitable for students. This required the Athena developers to replace the complex gaggle of vendor-specific graphical interfaces with a single, well-designed interface: one that they hoped would become the industry standard.

Because of the demands of such an ambitious undertaking, they decided to name the project — and the network itself — after Athena, the Greek goddess of wisdom. (Athena was also the goddess of strategy and war which, in 1984, made her an important ally for anyone trying to connect computing equipment made by different vendors.)

Ultimately, Project Athena was successful in two important ways. First, the Athena programmers were able to create a vendor-neutral, network-friendly graphical interface, which they called X Window. X Window grew to achieve wide acceptance and, indeed, did become an industry standard (although not the only industry standard). Second, the programmers were able to build the Athena network and deploy it successfully, servicing hundreds of computers within the MIT community.

The first version of X Window (called X1) was released in June 1984. The second version (X6) was released in January 1985, and the third version (X9) was released the following September. (I am sure the numbering scheme made sense to someone.) The first popular version of X Window was X10, which was released in late 1985.

By now, X had started to attract attention outside of MIT. In February 1986, Project Athena released X to the outside world. This version was called X10R3: X Window version 10 release 3. The next major release was X11, which came out in September 1987.

Why am I telling you all this? To illustrate an interesting point. When a complex software product is new, it has yet to gather many users. This means that the developers can change the product radically without inconveniencing a lot of people or "breaking" programs that use that product. Once the product acquires a large installed base, and once programmers have written a large amount of software that depends on the product, it becomes a lot more difficult to make significant changes.

The more popular a product becomes, the more its development slows. This only makes sense: as more and more people — and more and more programs — come to depend on a piece of software, it becomes inconvenient to make major changes.

Thus, it came to pass that, in its first five years, X Window went through five major versions (X1, X6, X9, X10 and X11). X10 was the first popular version and X11 gathered an even bigger audience. X11 was so successful that it slowed down the development of X enormously. In fact, over 20 years later, the current version of X is still X11!

To be sure, X11 has been revised. After all, over a period of more than 20 years, hardware standards change and operating systems evolve. These revisions were called X11R2 (X Window version 11 release 2), X11R3, X11R4, X11R5 and X11R6, culminating in X11R7, which was released on December 21, 2005 (my birthday, by the way). However, none of the revisions was significant enough to be called X12.

Since 2005, X11R7 has been the standard. Again, there have been revisions, but they were relatively minor: X11R7.0, X11R7.1, X11R7.2, X11R7.3, and so on. There are people who talk about X12, but it's not going to happen anytime soon.

One way to make sense out of all this is by quoting the principle I mentioned above: a large user base retards future development. This is certainly true, and is something worth remembering, because it is one of the important long-term principles of software design that most people (and most companies) fail to appreciate.

However, there is another way to look at X Window development. You could say that the original MIT programmers designed such a good system that, more than 20 years later, the basic principles still work well, and the main changes that have had to be made are in the details: fixing bugs, supporting new hardware, and working with new versions of Unix.

What the X programmers showed is that, when you develop an important product and you want it to last a long time, it is worthwhile to take your time at the beginning. A flexible, well thought-out design gives a product enormous longevity. This too is a programming principle that many people fail to appreciate.

What's in a Name?

X Window, X


The roots of X Window lie in an operating system called V, developed by the Distributed Systems Group at Stanford University from 1981 to 1988.

When a windowing interface was developed for V, it was called W. Some time later, the W program was given to a programmer at MIT who used it as a basis for a new windowing system, which he called X.

Since then, the name has stuck, perhaps for two reasons. First, names of Unix systems often end in "x" or "ix", and X Window is used mostly with Unix. Second, if they kept changing the name, they would reach the end of the alphabet in just two more letters.

Notice, by the way, that the proper name for the system is X Window, not X Windows.


Who Is in Charge of X Window?

By 1987, X was becoming so popular that MIT wanted to relinquish the responsibility for running the show. (After all, they were a school, not a company.) At first, a group of vendors who wanted X development to remain neutral talked MIT into remaining in charge. Eventually, however, MIT stood firm and the responsibility for X was passed on: first to one organization (the MIT X Consortium), then to another (the X Consortium), and then to yet another (the Open Group).

Today, X is maintained by a fourth organization, an independent group called X.Org. (I bet you can guess their Web address.) X.Org was formed in January 2004, and has supervised the maintenance of X since X11R6.5.1.

In 1992, a project was started by three programmers to work on a version of X for PCs. In particular, they were working with PCs that used an Intel 386 processor, so they called their software XFree86. (The name is a pun, because "XFree86" sounds like "X386".)

Because XFree86 supported PC video cards, it came to be used with Linux and, as Linux grew in popularity, so did XFree86. During the late 1990s and early 2000s, when official X development had slowed to a crawl, the XFree86 developers took up the slack. Indeed, XFree86 became so widespread that, at one time, if you were using a PC with Unix and a GUI, you were probably using XFree86.

However, in 2004, the president of the XFree86 organization decided to make a change in the distribution license. His idea was to force people to give credit to the XFree86 development team whenever certain parts of the software were distributed.

It sounds like a noble idea, and XFree86 was still open source software. That idea never changed. However, the new license was incompatible with the standard GNU GPL distribution license (see Chapter 2). This bothered a lot of programmers as well as most of the Unix companies, because it would have resulted in terrible logistical problems.

The ultimate solution was for X.Org to take over the development of X, which they did, starting with the most recent version of XFree86 that was unencumbered by the new license. As a result, the XFree86 project lost most of its volunteer programmers, many of whom switched to X.Org.

I mention all of this for two reasons. First, from time to time, you will come across the name XFree86, and I want you to know what it means. Second, I want you to appreciate that, when open source software is distributed, the details of the license can be crucial to the future of the software. The more popular the software, the more important it is that the license be in harmony with previous licenses. We will see this again, later in the chapter, when we talk about a system called KDE.


Layers of Abstraction

As I have explained, X Window is a portable, hardware-independent windowing system that works with many different types of computing equipment. Moreover, X can run on virtually any type of Unix as well as certain non-Unix systems (such as OpenVMS, originally developed by DEC).

How can this be? How can a graphical windowing system work with so many operating systems and so many different types of computers, video cards, monitors, pointing devices, and so on?

As we discussed earlier, X was developed as part of Project Athena with the goal of providing a graphical operating environment that would run a variety of software on many different types of hardware. Thus, from the beginning, X was designed to be flexible.

To achieve this goal, the designers of X used what computer programmers call LAYERS OF ABSTRACTION. The idea is to define a large overall goal in terms of layers that can be visualized as being stacked from the bottom up, one on top of the next. Each layer is designed to provide services to the layer above and to request services from the layer below. There is no other interaction.

Let's take a quick, abstract example, and then we'll move on to something concrete. Let's say that a computing system is made up of five layers: A, B, C, D and E. Layer E is at the bottom; layer D is on top of E; layer C is on top of D; and so on. Layer A is on top.

Programs running in Layer A call upon programs in Layer B (and only Layer B) to perform various services; Layer B programs call upon programs in Layer C (and only Layer C); and so on.

If such a system is designed well, it means that a programmer working on, say, Layer C, does not have to know about all the other layers and how they function. All he has to know is how to call upon Layer D for services, and how to provide services for Layer B. Because he is concerned only with the details of his own layer, he doesn't care if someone makes changes to the internals of Layer A or Layer E.
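
Here is the same idea expressed as a toy C program of my own devising. Each layer offers one service to the layer above and calls only the layer directly below; no layer knows anything about the internals of the others.

    /* A toy model of five layers of abstraction: a request at the top
       ripples down, one layer at a time. */
    #include <stdio.h>

    static void layer_e(void) { printf("E: touching the hardware\n"); }
    static void layer_d(void) { printf("D: calling E\n"); layer_e(); }
    static void layer_c(void) { printf("C: calling D\n"); layer_d(); }
    static void layer_b(void) { printf("B: calling C\n"); layer_c(); }
    static void layer_a(void) { printf("A: calling B\n"); layer_b(); }

    int main(void)
    {
        layer_a();    /* Layer A asks for a service...         */
        return 0;     /* ...and never sees how E did the work. */
    }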


The Window Manager

To cement the idea of layers of abstraction, let's consider a real example.

From the beginning, X Window was designed to be a standardized interface between a GUI and the hardware. In itself, X does not furnish a graphical interface, nor is there any specification that describes what the user interface should look like. Providing the actual GUI is the job of another program called the WINDOW MANAGER.

The window manager controls the appearance and characteristics of the windows and other graphical elements (buttons, scroll bars, icons, and so on). What X does is bridge the gap between the window manager and the actual hardware.

For example, say that the window manager wants to draw a window on the screen of a monitor. It sends the request to X along with the relevant specifications (the shape of the window, the position of the window, the thickness of the borders, the colors, and so on). X causes the window to be drawn and sends back a message to the window manager once the task is completed.

In this way, the window manager doesn't have to know anything about how to draw a window. That is X's job. Similarly, X doesn't have to know anything about how to create an actual GUI. That is the window manager's job.
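
Here is a minimal sketch, in C with Xlib, of the kind of request we just described. The program hands X the specifications (position, size, border thickness, colors) and X does the actual drawing; nothing in the program knows anything about video cards or monitors.

    /* A minimal sketch: ask X to draw a window to given specifications.
       Compile with: cc window.c -o window -lX11 */
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *display = XOpenDisplay(NULL);
        if (display == NULL)
            return 1;

        int screen = DefaultScreen(display);
        Window window = XCreateSimpleWindow(
            display, RootWindow(display, screen),
            100, 100,                      /* position of the window  */
            400, 300,                      /* width and height        */
            2,                             /* thickness of the border */
            BlackPixel(display, screen),   /* border color            */
            WhitePixel(display, screen));  /* background color        */

        XSelectInput(display, window, ExposureMask);
        XMapWindow(display, window);       /* ask X to display it     */

        /* wait until X reports that the window has been drawn */
        XEvent event;
        do {
            XNextEvent(display, &event);
        } while (event.type != Expose);

        XCloseDisplay(display);
        return 0;
    }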

Thus you see the importance of levels of abstraction. A programmer working on one level can ignore the internal details of all the other levels. In this case, the window manager resides in a layer above X Window. When the window manager needs to display something, it calls upon X Window to do the job.

This means that a programmer working on the window manager doesn't need to care about the details of displaying data on monitors or capturing data from mice and keyboards (that's X's job). He is free to concentrate on creating the GUI and nothing else. As I mentioned, there is nothing in the design of X that mandates how a user's screen should look when he or she is using a GUI. That is up to the programmers who design the window manager.

From the beginning, the intention was that there would be as many different window managers as people were willing to build. Since each window manager would have its own characteristics, people would have a choice of GUIs.

When Project Athena released X10 — the first popular version of X — they included a rudimentary window manager named xwm (the X Window Manager). With X10R3, they included a new window manager, uwm. With X11, the first very successful version of X, there was another new window manager, twm.

(The name uwm stood for Ultrix Window Manager, Ultrix being DEC's version of Unix. Later, the name was changed to the Universal Window Manager. The name twm stood for Tom's Window Manager, because it was written by Tom LaStrange. Later, the name was changed to the Tab Window Manager.)

The twm window manager became very popular. In fact, it is still included with X11 and can be considered the default window manager; it is certainly the most influential. Over the years, twm has spawned a number of derivative products written by programmers who modified it to create window managers of their own. (Remember, all of X Window is open source, and anyone can change any part of it to create their own version of whatever they want.)

I mention xwm, uwm and twm because they are important for historical reasons, so you should know the names. Since then, there have been many other window managers written for X, each with its own special features, advantages, and disadvantages. However, out of the many new window managers that have been created, there are only two I want to mention: Metacity and kwm.

These are particularly important window managers. However, before I can explain why, I need to discuss the next layer of abstraction, the one that sits on top of the window manager: the desktop environment.


The Desktop Environment

As we discussed, it is the job of a window manager to provide a basic graphical interface. As such, it is the window manager that enables you to create windows, move and size them, click on icons, maneuver scroll bars, and so on.

However, using a modern computer system requires a lot more than a basic GUI. You need a well thought-out, consistent interface. Moreover, you want that interface to be attractive, sensible and flexible (just like the people in your life).

The power of a GUI comes from being able to provide a work environment in which you can manipulate the various elements in a way that makes sense and serves your needs well. Your interface needs to have an underlying logic to it, one that will allow you to solve problems from moment to moment as you work.

In the early days of X, people would interact directly with the window manager. However, the basic GUI provided by a window manager can only go so far. What it can't do is help you with the complex cognitive tasks associated with using a modern computer. This is the job of a more sophisticated system called the DESKTOP ENVIRONMENT or, sometimes, the DESKTOP MANAGER.

The name comes from the fact that, as you are working, you can imagine the screen of your monitor as being a desktop on which you place the objects with which you are working. The metaphor was chosen in the olden days, when graphical interfaces were still new, and GUI designers felt that it would be intuitive to untrained users to consider the screen as a desktop. Personally, I think the metaphor is misleading and confusing. I wish it had been discarded a long time ago(*).

* Footnote

However, when it comes to metaphors I am a lot more picky than other people. Consider, for example, the American humorist Will Rogers who used to say, "I never met-a-phor I didn't like."

The desktop environment allows you to answer such questions as: How do I start a program? How do I move or hide a window when I don't want to look at it? How do I find a file when I can't remember where it was? How do I move an icon from one place to another? How do I prioritize my files and programs so that the most important ones are easier to find?

Here is a specific example. Where a window manager might process our mouse movements and display icons and windows, a desktop environment would allow us to use the mouse to drag an icon and drop it on a window. In doing so, it is the desktop environment that brings meaning to the idea of dragging and dropping.

— hint —

Never let yourself be fooled into thinking that a computer interface should be so "intuitive" as to be immediately useful to a beginner. Complex tasks require complex tools, and complex tools take time to master.

In some cases, it is possible for designers to dumb down an interface so much that people with no experience can use it immediately. However, what's easy to use on the first day will not be what you want to use once you are experienced. In the long run, easy-to-learn interfaces are much more frustrating than powerful tools that take time to master.

When we use tools that are designed primarily to be easy to learn, we end up with systems in which the computer is in control. When we use well-designed, powerful tools that take time to learn, we end up with systems in which the user is in control.

That is the case with Unix.


Layers of Abstraction: Revisited

To continue with our layers of abstraction model, we can say that the window manager sits on top of X Window, and the desktop environment sits on top of the window manager.

You can see this in Figure 5-1, which shows the layers of abstraction that support a typical Unix GUI. Notice there are several layers I have not mentioned. At the bottom, X Window calls upon the operating system's device drivers to do the actual communication with the hardware. At the top, the user and the programs he or she runs call upon the desktop environment as needed.

Figure 5-1: Layers of Abstraction

Within Unix, a graphical working environment can be thought of as a system consisting of various levels of programs and hardware. At the top level, we have our application programs (including utilities), as well as the user. One level down is the desktop environment. Below the desktop environment is the window manager, and so on. At the very bottom is the actual computer. Philosophically, we can think of the entire system as a means of bridging the gap between human hardware and computing hardware.

 

There are two important concepts I want to emphasize about this model. First, as we discussed, the details of what is happening at any particular level are completely independent of any other level. Second, the only communication that exists takes place between adjacent levels, using a well-defined interface.

For example, the window manager communicates only with X Window below and the desktop environment above. This is adequate because the window manager doesn't care about the details of anything that is happening on any other level. It lives only to respond to requests from the level above (the desktop environment) and, in turn, it calls upon the level below (X Window) to service its own requests.


How the Unix Companies Blew It

When X Window was first developed, there were only window managers which, by today's standards, offered primitive GUIs. The idea that people might want a full-featured desktop environment developed over time, as programmers learned more about designing and implementing interfaces. Although there is no exact moment when window managers were replaced by desktop environments, here is more or less how it happened.

By 1990, the world of Unix was fragmented because there were a number of different Unix companies, each with its own type of Unix. In spite of promises to cooperate, there was a great deal of competitive sniping, and most companies were much more interested in dominating the marketplace than in working together. In the same way that young boys who fight in the playground will choose up sides, the Unix companies formed two umbrella organizations, each of which purported to be developing the one true Unix.

As we discussed in Chapter 2, by the mid-1980s, most types of Unix were based either on AT&T's UNIX or Berkeley's BSD or both. In October 1987, AT&T and Sun Microsystems announced their intention to work together on unifying UNIX and BSD once and for all. This upset the other Unix vendors and in May 1988, eight of them formed the OPEN SOFTWARE FOUNDATION (OSF) in order to develop their own "standard" Unix. The eight vendors included three of the most important Unix companies: DEC, IBM, and HP (Hewlett-Packard).

The formation of the OSF scared AT&T and Sun. They decided that, if they were to compete against OSF, they too needed their own organization. So in December 1989, they corralled a few smaller companies and formed UNIX INTERNATIONAL (UI).

Thus, by the early 1990s, there were two rival organizations, each of which was trying to create what it hoped would become the one true Unix. As part of their work, both OSF and UI developed their own window managers. OSF's was called mwm (Motif window manager), and UI's was called olwm (Open Look window manager). This meant that X Window users now had three popular choices for their window managers: mwm, olwm, and twm (which I mentioned earlier).

However, where twm was a plain vanilla window manager, both mwm and olwm were more complex and powerful. In fact, they were the ancestors of today's sophisticated desktop environments.

So, why aren't Motif and Open Look the most important GUIs today? The answer is that their sponsors, OSF and UI, spent so much time fighting that they lost their leadership in the Unix world. The details are incredibly boring, so I won't go into them(*). What is important is that, by the mid-1990s, there was a big gap in the world of Unix, a gap that was filled by Microsoft Windows NT and Linux. And along with Linux came two new GUIs, KDE and Gnome, which had nothing to do with either OSF or UI.

* Footnote

If you really want to know what happened, just go to a Unix programming conference, find an old person, and invite him for a drink. Once he gets settled in, ask him to tell you about the "Unix Wars".

If you are not sure whom to ask, just walk around the conference until you see someone with a ponytail and a faded Grateful Dead T-shirt.


KDE and Gnome

In 1996, Matthias Ettrich, a German student at the University of Tübingen, was dissatisfied with the current state of Unix GUIs. On October 14, 1996, he sent out a Usenet posting in which he proposed to remedy the problem by starting a new project called the Kool Desktop Environment (KDE). (See Figure 5-2.)

Figure 5-2: Matthias Ettrich, founder of the KDE project

Matthias Ettrich founded the KDE project in October 1996. Eventually, KDE would become so successful that Ettrich could be considered the Father of the Desktop Environment. This photo was taken a few months after the project had started.

Ettrich argued that the current window managers were deficient, that "a GUI should offer a complete, graphical environment. It should allow a user to do his everyday tasks with it, like starting applications, reading mail, configuring his desktop, editing some files, deleting some files, looking at some pictures, etc. All parts must fit together and work together."

Ettrich had noticed these deficiencies when he was configuring a Linux system for his girlfriend. He realized that, in spite of all his expertise, there was no way for him to put together a GUI that was integrated well and was easy for his girlfriend to use. He asked people to volunteer to work on KDE, promising that "one of the major goals is to provide a modern and common look & feel for all the applications." (*)

* Footnote

Presumably, Ettrich's girlfriend was not as technically inclined as he was, leading him to realize that the current GUIs, while tolerated by programmers, did not work well for regular people. Eventually, the KDE project inspired by Ettrich's experience would produce the very first integrated desktop environment, changing forever the way people thought about GUIs.

One can only wonder: If Ettrich's girlfriend had been, say, a tad less pretty and a tad more nerd-like, how long would it have taken to develop a true desktop environment? Since KDE would come to have a profound influence on the acceptance of Linux around the world, is this not, then, an argument that more of society's resources should be devoted to encouraging beautiful women to date programmers?

More specifically, Ettrich asked people to help create a control panel (with "nice" icons), a file manager, an email client, an easy-to-use text editor, a terminal program, an image viewer, a hypertext help system, system tools, games, documentation, and "lots of other small tools". Ettrich's invitation was answered by a variety of programmers, and the KDE project was formed.

One of Ettrich's major complaints was that none of the popular Unix applications worked alike or looked alike. The KDE programmers worked hard and, by early 1997, they were releasing large, important applications that worked together within an integrated desktop environment. In doing so, they produced a new, highly functional GUI that began to attract a great deal of interest.

Within a few months, however, a number of programmers within the Linux community began to voice concerns about KDE. Ettrich had chosen to build the new desktop environment using a programming toolkit called Qt. Qt had been written by a Norwegian company, Trolltech, which had licensed it in such a way that it was free for personal use, but not for commercial use.

To the KDE programmers, this was fine: from the beginning, they saw KDE as a non-commercial product. Other people, however, felt that Trolltech's licensing arrangement was not "free" enough. In particular, the programmers who were associated with the GNU project and the Free Software Foundation wanted a less restrictive license for KDE, either that, or an alternative to KDE that would be licensed under the GNU GPL. (See the discussion of free software in Chapter 2.)

In August 1997, two programmers, Miguel de Icaza and Federico Mena, started a project to create just such an alternative, which they called GNOME. Although KDE was already well-established, the Gnome project attracted a lot of attention and, within a year, there were about 200 programmers working on Gnome.

(You may remember that, earlier in the chapter, I mentioned two window managers, Metacity and kwm. At the time, I said that, out of the many window managers that are available, these two are important enough that I wanted you to know the names. The reason they are important is that Metacity is the window manager for Gnome, and kwm is the window manager for KDE.)

What's in a Name?

KDE, Gnome


The project to build KDE, the first X-based desktop environment, was started by a German university student, Matthias Ettrich. At the time Ettrich proposed the project, he suggested the name KDE, which would stand for Kool Desktop Environment. Later, however, this was changed to K Desktop Environment.

In the same way that X Window became X, the letter K is often used to stand for the KDE desktop environment. For example, within KDE, the native Web browser is called Konqueror; the CD ripper is KAudioCreator; the calculator program is called KCalc; and so on. In perhaps the most egregious use of the letter K, the KDE terminal emulator is called Konsole.

Within Gnome — and the GNU project in general — you see the same thing with the letter G. For example, the Photoshop-like program is called Gimp (GNU Image Manipulation Program); the instant messaging program is Gaim; the calculator program is Gcalctool; and so on.

The name Gnome stands for GNU Network Object Model Environment. "Gnome" is pronounced either "Guh-nome" or "Nome". In my experience, programming geeks pronounce GNU with a hard G ("Guh-new") and Gnome with a soft G ("Nome").


CDE and Total Cost of Ownership

By 1999, there were two popular, well-designed desktop environments: KDE and Gnome. Both GUIs enjoyed widespread support within the Linux community (and, to this day, they are used widely around the world).

In the meantime, the commercial Unix companies were still in business and, by now, they realized the importance of desktop environments. The Open Group organization I mentioned earlier had taken over development of the Motif window manager. In the early 1990s, they had started work on a new proprietary desktop environment — CDE (Common Desktop Environment) — based on Motif. After a large, multi-company effort, CDE was introduced in 1995. By 2000, CDE had become the GUI of choice for commercial Unix systems, such as AIX from IBM, HP-UX from HP, UnixWare from Novell, and Solaris from Sun.

You may wonder: why was there a need for CDE? Why would so many computer companies pay to develop a proprietary product when both KDE and Gnome were available at no cost? On a larger scale, why was there a need for commercial Unix at all? After all, Linux was available for free and the licensing terms were liberal. Why didn't every company simply switch to Linux and use either KDE or Gnome?

The answer has to do with one of the fundamental differences between the commercial and consumer markets, and the principle is so important that I want to take a moment to explain it.

As consumers, you and I want two things out of our software. First, it should be inexpensive (free, if possible); second, it should work. We realize that when we have problems, we are on our own. We can read the documentation, we can look for help on the Internet, or we can ask someone else for help. If we get desperate we can pay someone to help us but, in most cases, there is no great urgency. If we have to wait for a solution, it is inconvenient but not devastating.

In a company, especially a large company, the situation is different. Even a simple software problem can affect hundreds or thousands of people. Waiting for a solution can be very expensive, both to the company and to its customers. Because large companies can't afford serious problems, they employ full-time computer personnel to maintain networks, servers, and personal computers. For this reason, when companies evaluate a product — software or hardware — they don't focus on initial cost. They look at what is called the TOTAL COST OF OWNERSHIP or TCO.

To calculate the total cost of ownership, a company must answer the question: If we decide to use this product, what is it going to cost us in the long run?

Calculating the TCO for something as complex and as fundamental as a desktop environment is not simple. For you or me, the initial cost is the only cost. If we can get KDE or Gnome for free, that's all we care about. Software problems can be bothersome but, as I said, it's more a matter of inconvenience than money.

A large company looks at it differently. Although they do evaluate the initial purchase cost or licensing fees, they also perform a more complicated, long-term analysis. The details of such a calculation are beyond the scope of this book, but the ideas are important to understand, so I will give you a quick summary.

Before a company adopts a significant hardware or software system, their financial analysts look at what are called direct costs and indirect costs. The direct costs include both hardware and software: initial purchase or lease expenses, operations, tech support, and administration. The indirect costs have to do with lost productivity. They include the amount of time employees spend learning how to use the system; the amount of time some employees will lose because they are helping other employees (something that happens everywhere); and the cost of downtime due to failure and scheduled maintenance.

Once all these costs are estimated, they are converted to annual expenditures, a calculation that includes the depreciation and the cost of upgrades. The annual expenditures are then integrated into the company-wide budget, which is reconciled with the company's plan for long-term growth.

In most cases, when total cost of ownership for software or hardware is calculated, what we find is counter-intuitive: the initial costs are not that significant. In the long run, what counts the most are the ongoing expenditures and indirect costs.
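
To see why, consider a deliberately simplified calculation. All the numbers below are invented for illustration; a real TCO analysis would be far more detailed. Notice that even when the software itself is free, the initial outlay is dwarfed by the ongoing and indirect costs.

    /* A toy TCO calculation with invented numbers: "free" software,
       five-year life, ongoing and indirect costs included. */
    #include <stdio.h>

    int main(void)
    {
        double license  = 0.0;       /* free software: no purchase cost */
        double hardware = 50000.0;   /* one-time: servers, PCs          */
        double support  = 30000.0;   /* per year: tech support, admin   */
        double training = 20000.0;   /* per year: lost productivity     */
        double downtime = 10000.0;   /* per year: failures, maintenance */
        int years = 5;

        double initial = license + hardware;
        double ongoing = (support + training + downtime) * years;

        printf("Initial costs:            $%.0f\n", initial);
        printf("Ongoing costs (%d years): $%.0f\n", years, ongoing);
        printf("Total cost of ownership:  $%.0f\n", initial + ongoing);
        return 0;
    }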

Thus, when a company is thinking about adopting new software, they don't ask how much it costs to buy or license the product. They ask, how well will this software integrate into our existing environment? How does it fit into our long-term plans? How well does it serve our customers? How much will it cost to maintain on an ongoing basis?

Once these questions are answered, it becomes clear that, for corporate use, the best software is usually not free software that has been designed for individual or educational use. Business software must have features that are suitable for business. There must be a large family of well-maintained programming tools; there must be a well-defined, long-term development plan tailored to the needs of businesses, not individuals; most important, there must be excellent documentation and high-quality tech support. This is why most large businesses prefer to stick to commercial software. It is also why, in the corporate world, Linux has not replaced Microsoft Windows and probably never will.

This is not to say that large companies never use free software. They do when it makes sense to do so. For example, IBM offers not only their own version of Unix (AIX), but Linux as well. However, when a company like IBM offers an open source ("free") software product, they put a lot of money into it, supporting and enhancing that product. IBM, for example, has spent millions of dollars on Linux development and support. The truth is, for a large company, nothing is free.

To return to the desktop, you can see why, in the 1990s, it was so important for the corporate world to have its own desktop environment. Although both KDE and Gnome worked well, they didn't have the type of features and support that were needed by businesses.

That is why the Open Group was set up and why they developed CDE. And that is also why, at the end of the 1990s, CDE — not KDE or Gnome — was the desktop environment of choice for corporate users.

In the 2000s, as free software became more and more important, Unix companies started to offer their own versions of Linux as well as their own proprietary Unix. For example, IBM offers both AIX and Linux; Sun offers both Solaris and Linux; HP offers both HP-UX and Linux; and so on.

As you might expect, this also means that Unix companies also offer KDE and Gnome. For example, IBM offers their AIX users a choice of CDE, KDE and Gnome; Sun offers both CDE and Gnome; and HP offers both CDE and Gnome.

Of course, these versions of Linux, KDE and Gnome are not the same distributions that you or I would download for free from the Net with no support. They are commercial-quality products that come with commercial-quality tech support (at a price).


Choosing a Desktop Environment

When you use Unix, you get a strong feeling that the desktop environment is separate from the actual operating system. This is not the case with Windows or Mac OS, because Microsoft and Apple try hard to convince people that every important program that comes with the computer (including the browser, the file manager and the media player) is part of the operating system.

The truth is it isn't: it's just packaged that way. Because you are a Unix user, you can make the distinction between the operating system and everything else, which leaves you free to ask yourself, "What do I want on my desktop?"

Many companies and schools standardize on computer tools. If you work for one of these companies or go to one of these schools, you will have to use whichever desktop environment they tell you to use. However, if you are running Linux on your own computer — or if your organization gives you a choice — you will be able to decide for yourself what GUI to use.

So which desktop environment is best for you?

If you use Linux, there are many free desktop environments, so you have a lot of choice. (To see what I mean, just search on the Internet for "desktop environment".) However, virtually all Linux distributions come with either KDE or Gnome or both, and it is my advice that — unless you have a reason to choose otherwise — you start with one of these two GUIs. (See Figures 5-3 and 5-4. When you look at these pictures, please focus on the appearance and organization of the GUI — the windows, the icons, the toolbar, and so on — rather than on the content within the windows.)

Figure 5-3: KDE desktop environment

The project to create KDE, the first real desktop environment, was started in 1996 by Matthias Ettrich. His goal was to create "a complete, graphical environment" in which "all parts fit together and work together".

Figure 5-4: Gnome desktop environment

The Gnome project was started in 1997 by Miguel de Icaza and Federico Mena in order to create an alternative to KDE that would be distributed with more liberal licensing terms.

So let's narrow down the question: With respect to KDE and Gnome, which one is right for a person like you?

To start, take a look at the following five statements. Mark each statement either True or False, with respect to your personal preferences. We will then evaluate your answers to choose the desktop environment that's best for you.

1. I would rather drive a car with manual transmission than a car with automatic transmission.

2. I am more comfortable in a home that is simple and organized than a home that is decorated and has comfortable clutter.

3. When I have a personal discussion with my girlfriend/boyfriend or wife/husband, it is important to me that we take the time to figure out who is right.

4. After I bought my DVD player, I read at least part of the manual.

5. When I use Microsoft Windows or a Macintosh, I generally leave things the way they are. I don't mess around with the colors, the backgrounds, and so on.

Before we interpret your answers, I want you to appreciate that all desktop environments are merely a way of using the same underlying computing environment. So no matter which desktop environment you pick, it will be fine. Having said that, you will be more comfortable using a desktop environment that is suited to your personality, so let's move on with the analysis.

Regardless of your technical skill or your interest in computers, if you answered True 3, 4 or 5 times, use Gnome; if you answered True 0, 1 or 2 times, use KDE.
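
(For the programmers in the audience, here is the same scoring rule as a tiny C program. The sample answers are placeholders; substitute your own.)

    /* A whimsical sketch of the scoring rule: count the True answers;
       3 or more means Gnome, otherwise KDE. */
    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        /* your answers to statements 1 through 5 (sample values) */
        bool answers[5] = { true, false, true, true, false };

        int trues = 0;
        for (int i = 0; i < 5; i++)
            if (answers[i])
                trues++;

        printf("You are a %s person.\n", trues >= 3 ? "Gnome" : "KDE");
        return 0;
    }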

Notice that I said that your choice should not depend on how much you know about computers. Some very technical people prefer KDE; others prefer Gnome. Similarly, many non-technical people choose KDE, while others like Gnome.

The dichotomy has more to do with how you see the world, rather than how much you know. Gnome people thrive on simplicity and order. They want things to be logical. If necessary, they are willing to put in as much effort as it takes to make something work in a way that makes sense to them.

A Gnome person would agree with the dictum "form ever follows function", an idea expressed by the American architect Louis Sullivan in 1896. Sullivan observed that the appearance of natural objects was influenced by their function. Gnome people want the world to behave in a rational way, and they prefer tools whose appearance directly reflects their purpose.

Whereas Gnome people like to control how things work, KDE people like to control how things look. This is because they care less about "being right" than they do about living in a way that makes them emotionally comfortable.

KDE people see the world as a place filled with color, variation and, at times, confusion. They are inclined to accept much of life as it is, rather than putting in a lot of effort to fix small details. When they feel motivated to spend time customizing their working environment, they tend to make things that look nice and act nice.

Now, take another look at your answers to the true/false questions. Are you a KDE person or a Gnome person?

A question arises. We have two different desktop environments, each of which was created by its own group of people from around the world, working together. How could it be that there are personality traits that differentiate KDE people from Gnome people?

The answer lies in the genesis of each group. As we discussed earlier, the KDE group was started by people who were not satisfied with the status quo. They wanted to create a complete working environment that worked better and looked better than the window managers of the day.

The Gnome group was started by people who were dissatisfied with KDE because of an abstract legal problem related to licensing terms, a deficiency that — let's face it — most people would have ignored (as all the KDE people did). However, to a Gnome person — or, more precisely, to a Free Software Foundation-type of person — what's right is right and what isn't isn't, and that's all there is to it. (See the discussion of Richard Stallman in Chapter 2.)

Does it not make sense, then, that each group would create a desktop environment suitable for their type of person? Perhaps people who design consumer products (including software) should pay more attention to the KDE/Gnome dichotomy.


The Grandmother Machine

Now that we have discussed the most important GUIs and the most important types of Linux (Chapter 2), I want to end this chapter by answering a question I hear a lot:

"I am putting together a system using free software for someone who doesn't know a lot about computers. What software should I use?"

I call such a computer the Grandmother Machine, because it is the type of system you might create for your grandmother.

When you set up a Grandmother Machine, you must realize that you will bear permanent responsibility because, whenever your grandmother has a problem, she will call you. So you should use software that is dependable, easy to install, and easy to update. You also want a system that you can configure to make it easy for a beginner to access the Web, check email, and (in some cases) use word processing, spreadsheets, presentation graphics, and so on.

Here are my recommendations. When you read them, remember that conditions change over time. New software comes along and old software grows to become bloated and unusable. So concentrate on the general principles behind my choices, and not just the specific selections.

To create a Grandmother Machine, use the following:

• Ubuntu Linux: It's based on Debian Linux and is easy to install and maintain.

• Gnome: The Gnome desktop environment is simple to use, but robust enough that a beginner won't mess it up.

If the grandmother in question is a very KDE-like person, you can give her KDE. However, please stick with either Gnome or KDE. Regardless of your personal preferences, don't mess around with the less common desktop environments.

When you set up the GUI, be sure to take some time to make it as easy as possible for your grandmother to start her favorite applications. The best idea is to create a few icons on the control panel. (At the same time, you can remove the icons she will never use.)

• Firefox: The Firefox browser is easy to use and powerful. For email, get her a Web-based account (such as Google's Gmail) and let her use her browser to communicate.

Firefox is wonderful, but do plan on taking some time to show your grandmother how to use it. In addition, plan on answering questions over the phone until she gets used to using the Web. (The best way to avoid unnecessary questions is to create a link to Google, and to show your grandmother how to use it. When you do, be sure to take a few moments to explain how to make sense out of the search results.)

• Open Office: A suite of free productivity software (a word processor, a spreadsheet program, and so on), compatible with Microsoft Office.

One last piece of advice: I have a lot of experience helping people choose computer systems, and there is a general principle I have noticed that rarely gets mentioned.

When you choose a computer for someone, it doesn't work well to base your advice on what I might call their "hardware needs". What works best is to choose a system based on their psychological needs. This idea is so important that I will embody it in the form of a hint.

— hint —

Harley Hahn's Rules for Helping Someone Choose a Computer

1. When you are choosing a computer for someone to use as their personal machine, choose a system that meets their psychological and emotional needs.

In most cases, the person will not be able to articulate their needs, so you must figure them out for yourself. During this process, do not allow yourself to be sidetracked into long discussions of hardware specifications or other trivia.

2. When you choose a computer for someone who is working for a company, choose a system that is in harmony with the psychology of the person who will be approving the expenditure.

Within a corporate environment, people come and go, so don't choose a system based on the needs of a particular user. Choose a system that suits the company.

The best way to do this is to look for a computer that meets the psychological and emotional needs of the person writing the check, not the person who will be using the machine. This is especially true when you are dealing with a small business.




Exercises

Review Question #1:

When it comes to displaying information, there are, broadly speaking, two types of data. What are they?

Review Question #2:

What is the name of the system that supports most Unix graphical user interfaces (GUIs)? Where and when was it first developed? Name three important services it provides.

Review Question #3:

What are layers of abstraction? Name the six layers in a typical Unix GUI environment.

Review Question #4:

What is "total cost of ownership"?

Who uses the concept and why?

When total cost of ownership of a computing system is calculated, how important are the initial costs?

Review Question #5:

What is a desktop environment?

In the Linux world, what are the two most popular desktop environments?

For Further Thought #1:

As a general rule, when using a GUI, windows are rectangular. Why is this?

When might it make sense to use a round window?

For Further Thought #2:

You work for a large company that uses PCs running Windows. The company has standardized on Microsoft Office products (Word, Excel, PowerPoint, and so on). You are at a meeting where a young, newly graduated programmer proposes that the company change from Office to the free software alternative, Open Office. Why is this a bad idea?

Does it bother you to recommend that the company stick with Microsoft products? If so, why?
