Chapter 3: The Unix Connection

Being able to connect to different types of computers has always been an integral part of Unix. Indeed, the fact that Unix has this capability is one of the main reasons there are so many computer networks in the world. (The Internet, for example, has always depended on Unix connections.)

In this chapter, we'll discuss the basic concepts that make it possible to connect to other computers: on the Internet and on local networks. Along the way, you will learn how these concepts underlie the most fundamental Unix connection of all: the one between you and your computer.

Humans, Machines and Aliens

I'd like to start by talking about an idea that is rarely discussed explicitly. And yet, it is such an important idea that, if you don't understand it, a lot of Unix is going to seem mysterious and confusing. The idea concerns human beings and how we use machines.

Think about the most common machines in your life: telephones, cars, TVs, radios, and so on. Because you and the machine are separate entities, there must be a way for you to interact with it when you use it. We call this facility an INTERFACE.

Consider, for example, a mobile phone. The interface consists of a set of buttons, a speaker or earpiece, a small video screen, and a microphone. Consider a car: the interface consists of a key, the steering wheel, the accelerator pedal, the brake pedal, a variety of dials and displays, and a gaggle of levers, knobs and buttons.

The point is that every machine used by a human being can be thought of as having two components: the interface and everything else.

For example, with a desktop computer (we'll get to laptops in a moment), the interface consists of the monitor, the keyboard, the mouse, speakers and (possibly) a microphone. "Everything else" consists of the contents of the box: the hard disk, the CD drive, the processors, the memory, the video card, the network adapter, and so on.

In Unix terminology, we call the interface the TERMINAL (I'll explain why later), and we call everything else the HOST. Understanding these concepts is crucial, so I am going to talk about them in detail.

Since the terminal provides the interface, it has two main jobs: to accept input and to generate output. With a desktop computer, the input facility consists of the keyboard, mouse and microphone. The output facility consists of the monitor and the speakers.

To capture these ideas, we can describe any computer system by the following two simple equations:

COMPUTER = TERMINAL + HOST

TERMINAL = INPUT FACILITY + OUTPUT FACILITY

Has it ever occurred to you that, as a human being, you also consist of a terminal and a host? In other words, these same two equations describe you and me and everyone else.

Your "terminal" (that is, your interface to the rest of the world) provides your input facility and your output facility. Your input facility consists of your sense organs (eyes, ears, mouth, nose, and skin). Your output facility consists of the parts of your body that can make sounds (your mouth) and can create change in your environment (your hands and arms, your legs, and the muscles of facial expression).

What is your "host"? Everything else: your brain, your organs, your muscles and bones, your blood, your hormones, and so on.

It might seem artificial and a bit ludicrous to separate your "host" from your "terminal" because you are a single, self-contained unit. But think about a laptop computer. Even though all the components are built-in, we can still talk about the terminal (the screen, the keyboard, the touch pad, the speakers, and the microphone) and the host (everything else).

Imagine two aliens from another planet watching you use a laptop computer. One alien turns to the other and says, "Look, there is a human being who is using his interface to interact with the interface of a computer."

To the aliens, it doesn't matter that your interface is built-in because the laptop's interface is also built-in. The aliens, being from another planet, see what you and I don't normally see: as you use a computer, your interface communicates with the computer's interface. Indeed, this is the only way in which you can use a computer (or any other machine, for that matter).

However, what if the aliens happened to come from a Unix planet? After the first alien made his comment, the second alien would respond, "I see what you mean. Isn't it interesting how the human's terminal interacts with the computer's terminal?"

In the Olden Days, Computers Were Expensive

In Chapter 1, I mentioned that the very first version of Unix was developed in 1969 by Ken Thompson, a researcher at Bell Labs, New Jersey. (At the time, Bell Labs was part of AT&T.) Thompson had been working on a large, complex project called Multics, which was centered at MIT. When Bell Labs decided to end their support of Multics, Thompson returned full-time to New Jersey, where he and several others were determined to create their own small operating system. In particular, Thompson had a game called Space Travel that he had developed on Multics, and he wanted to be able to run the program on a system of his own.

At the time, there were no personal computers. Most computers were large, expensive, temperamental machines that required their own staff of programmers and administrators. (We now call such machines MAINFRAME COMPUTERS.)

Mainframe computers required their own special rooms, referred to whimsically as "glass houses". There were three reasons for glass houses. First, the machines were somewhat fragile, and they needed to be in an environment where the temperature and humidity could be controlled. Second, computers were very expensive, often costing millions of dollars. Such machines were far too valuable to allow just anyone to wander in and out of the computer room. A glass house could be kept locked and closed to everyone but the computer operators.

Finally, computers were not only complex; they were relatively rare. Putting such important machines in glass houses allowed companies (or universities) to show off their computers, especially to visitors. I have a memory as a young college student: standing in front of a huge computer room at the University of Waterloo, looking through the glass with awe, intimidated by the large number of mysterious boxes that made up three separate IBM mainframe computers.

Multics ran on a similar computer, a GE-645. The GE-645, like most mainframes, was expensive to buy, expensive to lease, and expensive to run. In those days, computer users were given budgets based on real money, and every time someone ran a program, he was charged for processing time and disk storage. For example, each time Thompson ran Space Travel on the GE-645, it cost about $75 just for the processing time ($445 in 2008 money).

Once Bell Labs moved Thompson and the others back to New Jersey, the researchers knew there was no way they would be able to get their hands on another large computer. Instead, they began looking around for something smaller and more accessible.

In those days, most computers cost well over $100,000, and coming up with a machine for personal research was not easy. However, in 1969, Thompson was looking around Bell Labs and he found an unused PDP-7.

(The PDP-7, made by DEC, the Digital Equipment Corporation, was a so-called MINICOMPUTER. It was smaller, cheaper and much more accessible than a mainframe. In 1965 dollars, the PDP-7 cost about $72,000; the GE-645 mainframe cost about $10 million. The name PDP was an abbreviation for "Programmed Data Processor".)

This particular PDP-7 had been ordered for a project that had floundered, so Thompson was able to commandeer it. He wrote a lot of software and was able to get Space Travel running. However, the PDP-7 was woefully inadequate, and Thompson and several others lobbied to get another computer.

Eventually, they were able to acquire a newer PDP-11, which was delivered in the summer of 1970. The main advantage of the PDP-11 was that its base cost was only(!) $10,800 ($64,300 in 2008 money). Thompson and a few others began to work with the PDP-11 and, within months, they had ported Unix to the new computer. (You can see Thompson and Dennis Ritchie, his Unix partner, hard at work in Figure 3-1.)

Figure 3-1: Ken Thompson, Dennis Ritchie, and the PDP-11

Ken Thompson (sitting), Dennis Ritchie (standing), and the Bell Labs' PDP-11 minicomputer. Thompson and Ritchie are using two Teletype ASR33 terminals to port Unix to the PDP-11.

Why am I telling you all of this? Because I want you to appreciate that, in the late 1960s and early 1970s, computers cost a lot and were difficult to use. (The PDP-11 was expensive and inadequate. The PDP-7 was very expensive and inadequate. And the GE-645 was very, very expensive and inadequate.) As a result, there was an enormous need to make computing, not only easier, but cheaper.

One of the biggest bottlenecks was that, with the software then in use, the PDP-11 could run only one program at a time. This meant, of course, that only one person could use the machine at a time.

The solution was to change Unix so that it would allow more than one program to run at a time. This was not easy, but by 1973 the goal had been achieved and Unix became a full-fledged multitasking system. (The old name for multitasking is MULTIPROGRAMMING.)

From there, it was but a step to enhance Unix to support more than one user at a time, turning it into a true multiuser system. (The old name is a TIME-SHARING SYSTEM.) Indeed, in 1974, when Thompson and Ritchie published the first paper that described Unix (see Chapter 2), they called it "The UNIX Time-Sharing System".

However, in order to make such a change, the Unix developers had to come to terms with a very important concept, the one you and I discussed earlier in the chapter: human beings could only use a machine if the machine had a suitable interface. Moreover, if more than one person were to use a computer at the same time, each person would need a separate interface.

This only makes sense. For example, if two people wanted to type commands at the same time, the computer would have to be connected to two different keyboards. However, in the early days of Unix, computer equipment was expensive and hard to come by. Where could Thompson and Ritchie come up with the equipment they needed to run a true multiuser system?

The answer to this question proved to be crucial, as it affected the basic design of Unix, not only for the very early Unix systems, but for every Unix system that ever existed (including System V, BSD, Linux, FreeBSD and OS X).

Hosts and Terminals

It was the early 1970s, and Ken Thompson and Dennis Ritchie had a problem. They wanted to turn Unix into a true multitasking, multiuser operating system. However, this meant that each user would need his own interface. Today, high quality color video monitors, keyboards and mice are cheap. In those days, however, everything was expensive. There was no such thing as a separate keyboard; there were no mice; and the cost of a separate computer monitor for each user was prohibitive.

As a solution, Thompson and Ritchie decided to use a machine that was inexpensive and available, even though it had been designed for a completely different purpose. This machine was the Teletype ASR33 (ASR stood for Automatic Send-Receive).

Teletype machines were originally developed to send and receive messages over telegraph lines. As such, the machines were called teletypewriters ("Teletype" was a brand name).

The original experimental teletypewriters were invented in the early 1900s. Throughout the first half of the twentieth century, teletypewriter technology became more and more sophisticated, to the point where Teletype machines were used around the world. AT&T (Bell Labs' parent company) was heavily involved in such services. In 1930, AT&T bought the Teletype company and, indeed, the name AT&T stands for American Telephone and Telegraph Company.

Thus, it came to pass that, in the early 1970s, Thompson and Ritchie were able to use Teletype machines as the interfaces to their new PDP-11 Unix system. You can see the actual machines in Figure 3-1 above, and a close-up view in Figures 3-2 and 3-3.

Figure 3-2: Teletype ASR33

A Teletype ASR33, similar to the ones used by Ken Thompson and Dennis Ritchie with the very early Unix systems.

Figure 3-3: Closeup of a Teletype ASR33

A close-up view of a Teletype ASR33. Notice the tall, cylindrical keys. A key had to be depressed about half an inch to generate a character. To the left, you can see the paper tape punch/reader. The tape is 1 inch wide.

As an interface, all the Teletype had was a keyboard for input and a wide roll of paper for printed output. To store programs and data, there was a paper tape punch that could make holes in a long, narrow band of paper, and a paper tape reader that could read the holes and convert them back to data.

Compared to today's equipment, the Teletype was primitive. Except for the power supply, everything was mechanical, not electronic. There was no video screen, no mouse and no sound. Moreover, the keyboard was uncomfortable and difficult to use: you had to depress a key about half an inch to generate a character. (Imagine what typing was like.)

What made the Teletype so valuable was that it was economical and it was available.

Here's where it all comes together. Thompson and Ritchie wanted to create a true multiuser system. Computing equipment was expensive. All they had were some Teletypes for the interfaces, and a single PDP-11 minicomputer to do the processing.

Like the aliens I mentioned above, Thompson and Ritchie realized that they could, conceptually, separate the interface from the rest of the system, and this is the way they designed Unix.

There would be a single processing element, which they called the host, along with multiple interface units, which they called terminals. At first, the host was the PDP-11 and the terminals were Teletypes. However, that was merely for convenience. In principle, Unix could be made to work with any host and any type of terminal. (It would take work, but not too much work.)

This design decision proved to be prescient. From the beginning, the connection that Unix forged between a user and the computer was dependent upon a specific design principle, not upon specific hardware. This meant that, year after year, no matter what new equipment happened to come along, the basic way in which Unix was organized would never have to change.

As terminals became more sophisticated, an old one could be thrown away and a new one swapped in to take its place. As computers became more complex and more powerful, Unix could be ported to a new host and everything would work as expected.

Compare this to Microsoft Windows. Because Windows was created specifically for single-user PCs, Microsoft never completely separated the terminal from the host. As a result, Windows is inelegant, inflexible, and is wedded permanently to the PC architecture. Unix is elegant, flexible, and can be made to work with any computer architecture. After all these years, the Unix terminal/host paradigm still works marvelously.

Terminal Rooms and Terminal Servers

As I explained, Unix was designed as a multiuser system. This meant that more than one person could use a computer at the same time, as long as (1) each person had his own terminal, and (2) that terminal was connected to a host.

So, imagine a room full of terminals. They are not computers. In fact, they are not much more than a keyboard, a monitor, and some basic circuitry. At the back of each terminal is a cable that snakes down into a hole in the floor and, from there, makes its way to an unseen host computer.

The room is occupied by a number of people. Some of them are sitting in front of a terminal, typing away or looking at their screens and thinking. These people are using the same host computer at the same time. Other people are patiently waiting for their turn. It happens to be a busy time, and there are not enough terminals for everyone.

The picture I just described is what it was like to use Unix in the late 1970s. At the time, computers — even minicomputers — were still expensive, and there weren't enough to go around. Terminals, however, were relatively inexpensive.

Since Unix was designed to support multiple users, it was common to see TERMINAL ROOMS filled with terminals, each of which was connected to a host. When you wanted to use the computer, you would go to the terminal room, and wait for a free terminal. Once you found one, you would log in by typing your user name and password. (We'll talk about this process in detail in Chapter 4.)

This setup is conceptualized in Figure 3-4.

Figure 3-4: Terminals in a terminal room

In the late 1970s, when computers were still expensive and terminals weren't, it was common to see terminal rooms, in which multiple terminals were connected to the same host.

Some organizations, such as university departments or companies, could afford more than one host computer. In this case, it only made sense to allow people to use any host from any terminal. To do so required a TERMINAL SERVER, a device that acted as a switch, connecting any terminal to any host.

To use a terminal server, you entered a command to tell it which computer you wanted to use. The terminal server would then connect you to that host. You would then enter your user name and password, and log in in the regular manner.

You can see such a system in Figure 3-5. In this drawing, I have shown only six terminals and three hosts. This is a bit unrealistic. In large organizations, it was common to have many tens of terminals, all over the building, connected to terminal servers that allowed access to a number of different hosts.

Figure 3-5: Terminals connected to a terminal server

In the late 1970s, some organizations could afford to have more than one computer available for their users. It was common to have all the terminals in the organization connect to a terminal server which would act as a switch, allowing a user to access any of the host computers from any terminal.

The Console

Out of all the terminals that might be connected to a host, there is one terminal that is special. It is the terminal that is considered to be part of the computer itself, and it is used to administer the system. This special terminal is called the CONSOLE.

To give you an example, I'd like to return, for a moment, to the late 1970s. We are being given a tour of a university department, and we see a locked room. Inside the room there is a PDP-11 minicomputer with a terminal next to it. The terminal is connected directly to the computer. This is the console, a special terminal that is used only by the system administrator. (You can see the console in Figure 3-4 above.) Down the hall, there is a terminal room with other terminals. These terminals are for the users, who access the computer remotely.

Now, let's jump forward in time to the present day. You are using a laptop computer on which you have installed Linux. Although Linux can support multiple users at the same time, you are the only person who ever uses the computer.

Do you have a console?

Yes, you do. Because you are using Unix, you must have a terminal. In this case, your terminal is built-in: the keyboard, the touch pad, the screen, and the speakers. That is also your console.

Typically, the console is used by the system administrator to manage the system. In the first example, when the system administrator wanted to use the console of the PDP-11, he would need to go into the computer room and sit down in front of the actual console. With your laptop, you are the administrator, and there is only one (built-in) terminal. Thus, any time you use your Linux laptop, whether you are actually managing the system or just doing work, you are using the console.

Why do you need to know about consoles and regular terminals? There are three reasons. First, Unix systems have always distinguished between consoles and regular terminals and, when you are learning about Unix and come across a reference to the "console", I want you to know what it means.

Second, if you are a system administrator (which is the case when you have your own Unix system), there are certain things that can only be done at the console, not from a remote terminal.

(Here is an example. If your system has a problem that arises during the boot process, you can only fix the problem from the console. This is because, until the system boots, you can't access it via a remote terminal.)

Finally, from time to time, a Unix system may need to display a very serious error message. Such messages are displayed on the console to ensure that the system administrator will see them.

Having said so much about consoles and why they are important, I'd like to pose the question: Are there computers that don't have consoles?

You betcha. There are lots of them. However, before I explain how a system can work without a console, I need to take a moment to talk about Unix and networks.

The Unix Connection

As we have discussed, Unix is designed so that the terminal (that is, the interface) is separate from the host (the processing unit). This means that more than one person can use the same Unix system at the same time, as long as each person has his or her own terminal.

Once you understand this idea, it makes sense to ask, how far apart can a terminal be from the host? The answer is as far as you want, as long as there is a connection between the terminal and the host.

When you run Unix on a laptop computer, the terminal and the host are connected directly. When you run Unix on a desktop computer, the terminal is connected to the host by cables. (Remember, the terminal consists of the keyboard, monitor, mouse, speakers, and microphone.)

What about a larger distance? Is it possible to connect a terminal to a host over a local area network (LAN)? Yes, it is.

For example, let's say you use a PC that is connected to a LAN on which there are many computers, three of which are Unix hosts. It is possible to use your PC as a terminal to access any one of the three Unix hosts. (Of course, before you can use any Unix host, you must have authorization to use that computer.)

When you use your computer to connect to a remote Unix host, you run a program that uses your hardware to EMULATE (act like) a terminal. This program then connects over the network to the remote host.

You can do this from any type of computer system: a Windows computer, a Macintosh, or another Unix computer. Typically, the terminal program runs in its own window, and you can have as many separate windows as you want.

For example, you might have three windows open at the same time, each running a terminal program. Each "terminal" can be connected to a different Unix host over the network. In this case, you would be working on four computers simultaneously: your own computer, and the three Unix hosts.

You can see this illustrated in Figure 3-6.

Figure 3-6: Unix/Linux computer on a local area network

A computer on a local area network, running four terminal programs, each in its own window. Three of the "terminals" are connected, via the network, to different remote hosts. The fourth "terminal" is running a program on the local computer.

In Figure 3-6, the network connections between the PC and the three Unix hosts are via cables, as in a traditional network. However, any type of network connection will do. In particular, you can use a wireless connection.

Here is an example. Let's say you have three geeky friends, Manny, Moe and Jack. Each of you has a laptop computer that runs Unix. You use Debian Linux; Manny uses Fedora Core Linux; Moe uses Gentoo Linux; and Jack uses FreeBSD. (Jack always was a bit odd.)

You get together for a Unix party (that is, computers, caffeinated drinks, and junk food), and you decide that each person should have access to the other three computers. First, each of you creates user accounts on your own computer for the other three people. (I won't go into the details here, but it's not hard.)

Then, you all use either the iwconfig or wiconfig command to configure your computers to connect, wirelessly, into a small network.
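Although the details vary from one system to another, here is a minimal sketch of what the setup might look like on one of the Linux laptops. (The interface name wlan0, the network name "party", and the addresses are all hypothetical; the commands must be run as superuser.)

    ifconfig wlan0 down                      # stop the wireless interface
    iwconfig wlan0 mode Ad-Hoc essid party   # join the ad-hoc network "party"
    ifconfig wlan0 192.168.1.1 up            # assign an address; restart the interface

Each person would do the same, using a different address (192.168.1.2, 192.168.1.3, and so on).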

Once the network is established, you each open three terminal windows on your own computer. Within each window, you connect to one of the three other computers.

You now have four people in the same room, each running a different type of Unix on his laptop computer, each of which also has access to the other three computers. Could anything be cooler?

So, we have talked about a terminal directly connected to a host (a laptop computer), a terminal connected to a host by cables (a desktop computer), a terminal connected to a host over a regular LAN, and a terminal connected to a host over a wireless LAN. Can we go further?

Yes. By using the Internet to connect a terminal to a host, we can have a connection that can reach anywhere in the world. Indeed, I regularly use the Net to connect to remote Unix hosts from my PC. To do so, I open a terminal window and connect to the remote host. As long as I have a good connection, it feels as if I were working on a computer down the hall.

Hosts Without Consoles

I mentioned earlier that there are many Unix host computers in the world that are not connected to terminals. This is because, if a computer can run on its own, without direct input from a human being, there is no need for a terminal. Such computers are referred to as HEADLESS SYSTEMS.

On the Internet, there are many Unix hosts that run just fine without terminals. For instance, there are millions of headless systems acting as Web servers and mail servers, silently doing their job without any human intervention. Many of these servers are running Unix, and most of them are not connected to a terminal.

(A WEB SERVER responds to requests for Web pages and sends out the appropriate data. A MAIL SERVER sends and receives email.)

If the need arises to directly control such a host computer — say, to configure the machine or to solve a problem — the system administrator will simply connect to the host over a network. When the system administrator is done, he disconnects from the host and leaves it to run on its own.

On the Internet, there are two very common types of hosts that run automatically without terminals. First, there are the servers, such as the Web servers and mail servers I mentioned above. We'll talk about them in a moment.

Second, there are the ROUTERS: special-purpose computers that relay data from one network to another. On the Internet, routers provide the connectivity that actually creates the network itself.

For example, when you send an email message, the data will pass through a series of routers on its way to the destination computer. This will happen automatically, without any human intervention whatsoever. There are millions of routers, all around the world, working automatically, 24 hours a day, and many of them are Unix hosts without a console.

What if there is a problem? In such cases, it is the work of a moment for a system administrator to open a terminal window on his PC, connect to the router, fix the problem, and then disconnect.

Some large companies with many Unix servers use a different approach. They will connect the console of every host computer to a special terminal server. That way, when there is a problem, a system administrator can use the terminal server to log in directly to the computer that has the problem. I have a friend who once worked at a company where 95 different Unix hosts were connected to a set of terminal servers that were used only for system administration.

The Client/Server Relationship

In computer terminology, a program that offers a service of some type is called a SERVER; a program that uses a service is called a CLIENT.

These terms, of course, are taken from the business world. If you go to see a lawyer or an accountant, you are the client and they serve you.

The client/server relationship is a fundamental concept, used in both networks and operating systems. Not only are clients and servers used extensively on the Internet, they are an important part of Unix (and Microsoft Windows, for that matter). Consider the following example.

As I am sure you know, to access the Web you use a program called a BROWSER. (The two most important browsers are Internet Explorer and Firefox. Internet Explorer is used more widely; Firefox is better.)

Let's say you decide to take a look at my Web site (www.harley.com). To start, you type the address into the address bar of your browser and press the Enter key. Your browser then sends a message to my Web server. (I'm skipping a few details here.) Upon receiving the request, the Web server responds by sending data back to your browser. The browser then displays the data for you in the form of a Web page (in this case, my home page).

What I have just described is a client/server relationship. A client (your browser) contacts a server on your behalf. The server sends back data. The client then processes the data.
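In fact, you can play the part of a Web client yourself and watch the conversation. Here is a sketch using the telnet command to connect to a Web server on port 80 and type a minimal request by hand. (The blank line ends the request; the server's reply, abbreviated here, is only an example.)

    $ telnet www.harley.com 80
    Connected to www.harley.com.
    GET / HTTP/1.0
    Host: www.harley.com

    HTTP/1.1 200 OK
    Content-Type: text/html
    ...

When you press <Enter> on the blank line, the request is complete, and the server sends back the data for the page, just as it would for a browser.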

Let's take another example. There are two ways to use email. You can use a Web-based email service (like Hotmail or Yahoo Mail), or you can run a program on your computer that sends and receives mail on your behalf. I'd like to talk about the second type of service.

When you run an email program on your computer, the program uses different systems for sending and receiving. To send mail, it uses SMTP (Simple Mail Transfer Protocol). To receive mail, it uses either POP (Post Office Protocol) or IMAP (Internet Message Access Protocol).

Let's say you have just finished composing an email message, and your email program is ready to send it. To do so, it temporarily becomes an SMTP client and connects to an SMTP server. Your SMTP client then calls upon the SMTP server to accept the message and send it on its way.

Similarly, when you check for incoming mail, your email program temporarily becomes a POP (or IMAP) client, and connects to your POP (or IMAP) server. It then asks the server if there is any incoming mail. If so, the server sends the mail to your client, which processes the messages appropriately.
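Clients and servers like these talk to one another using simple, well-defined messages. To make this concrete, here is a sketch of a typical SMTP conversation. (The host and email addresses are made up; the lines beginning with numbers are the server's replies.)

    $ telnet smtp.example.com 25
    220 smtp.example.com ESMTP ready
    HELO mycomputer.example.com
    250 Hello mycomputer.example.com
    MAIL FROM:<harley@example.com>
    250 OK
    RCPT TO:<friend@example.com>
    250 OK
    DATA
    354 End data with <CR><LF>.<CR><LF>
    Subject: Greetings

    Hello there.
    .
    250 OK: message queued
    QUIT
    221 Bye

Your email program carries on exactly this type of dialogue every time it sends a message; it just doesn't bother you with the details.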

My guess is that, even if you have sent and received email for years, you may have never heard of SMTP, POP and IMAP clients and servers. Similarly, you can use the Web for years without knowing that your browser is actually a Web client. This is because client/server systems generally work so well that the clients and servers are able to do their jobs without bothering the user (you) with the details.

Once you get to know Unix and the Internet, you will find that there are clients and servers all over the place. Let me leave you with three such examples.

First, to connect to a remote host, you use a client/server system called SSH. (The name stands for "secure shell".) To use SSH, you run an SSH client on your terminal, and your SSH client connects to an SSH server running on the host. Second, to upload and download files to a remote computer, you use a system called FTP (File Transfer Protocol). To use FTP, you run an FTP client on your computer. Your FTP client connects to an FTP server. The client and the server then work together to transfer the data according to your wishes.
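As a preview, here is roughly what these two clients look like from the command line. (The user name and host name are, of course, hypothetical.)

    $ ssh weedly@unixhost.example.com     # connect to a remote host
    weedly@unixhost.example.com's password:

    $ ftp unixhost.example.com            # transfer files to/from a remote host
    ftp> get notes.txt                    # download a file
    ftp> put report.txt                   # upload a file
    ftp> quit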

As you become an experienced Unix or Linux user, you will find yourself working with both these systems. As you do, you will come to appreciate the beauty and power of the client/server model.

Finally, you may have heard of USENET, the worldwide system of discussion groups. (If you haven't, go to http://www.harley.com/usenet.) To access Usenet, you run a Usenet client, called a newsreader. Your newsreader connects to a Usenet server called a news server. (I'll explain the names in a minute.)

All of these examples are different, but one thing is the same. In each case, the client requests a service from the server.

Strictly speaking, clients and servers are programs, not machines. However, informally, the term "server" sometimes refers to the computer on which the server program is running.

For example, suppose you are taking a tour of a company and you are shown a room with two computers in it. Your guide points to the computer on the left and says, "That is our Web server." He then points to the other computer and says, "And that is our mail server."

What's in a Name?

News, Newsgroups, Newsreader, News server


The Usenet system of worldwide discussion groups was started in 1979 by two graduate students at Duke University, Jim Ellis and Tom Truscott. Ellis and Truscott conceived of Usenet as a way to send news and announcements between two universities in North Carolina (University of North Carolina and Duke University).

Within a short time, Usenet spread to other schools and, within a few years, it had blossomed into a large system of discussion groups.

Because of its origin, Usenet is still referred to as the NEWS, even though it is not a news service. Similarly, the discussion groups are referred to as NEWSGROUPS, the clients are called NEWSREADERS, and the servers are called NEWS SERVERS.

What Happens When You Press a Key?

As you now understand, Unix is based on the idea of terminals and hosts. Your terminal acts as your interface; the host does the processing.

The terminal and the host can be part of the same computer, such as when you use a laptop or a desktop computer. Or the terminal and host can be completely separate from one another, as when you access a Unix host over a LAN or via the Internet.

Regardless, the terminal/host relationship is deeply embedded into the fabric of Unix. Having said that, I want to pose what seems like a simple question: "What happens when you press a key?" The answer is more complex than you might expect, and I bet it will surprise you.

Let's say you are using a Unix computer and you want to find out what time it is. The Unix command to display the time and date is date. So, you press the four keys: <d>, <a>, <t>, <e>, followed by the <Enter> key.
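On your screen, the whole exchange looks something like this (the output shown is, of course, just an example):

    $ date
    Tue Oct 21 10:30:00 PDT 2008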

As you press each letter, it appears on your screen, so it's natural to assume that your terminal is displaying the letters as you type them. Actually, this is not the case. It is the host, not the terminal, that is in charge of displaying what you have typed.

Each time you press a key, the terminal sends a signal to the host. It is up to the host to respond in such a way that the appropriate character is displayed on your screen.

For example, when you press the <d> key, the terminal sends a signal to the host that means "the user has just sent a d character". The host then sends back a signal that means "display the letter d on the screen of the terminal". When this happens, we say that the host ECHOES the character to your screen.
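You can prove to yourself that echoing is a service performed by the host, not a built-in reflex of your keyboard and screen. The stty command changes your terminal settings, and one of those settings is echoing. Try the following sketch: with echoing turned off, your keystrokes still reach the host; you just won't see them.

    stty -echo    # ask the host to stop echoing
    date          # you won't see yourself type this, but the output still appears
    stty echo     # turn echoing back on (you will be typing this "blind")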

The same thing happens when you use a mouse. Moving the mouse or clicking a button sends signals to the host. The host interprets these signals and sends instructions back to your terminal. Your terminal then makes the appropriate changes to your screen: move the pointer, resize a window, display a menu, and so on.

In most cases, it all happens so fast that it looks as if your keyboard and mouse are connected directly to your screen. However, if you are using a long-distance connection, say over the Internet, you may occasionally notice a delay between the time you press the key and the time you see the character appear on your screen. You may also see a delay when you move your mouse or press a mouse button and the screen is not updated right away. We refer to this delay as LAG.

You might ask, why was Unix designed so that the host echoes each character? Why not have the host silently accept whatever it receives and have the terminal do the echoing? Doing so would be faster, which would avoid lag.

The answer is that when the host does the echoing, you can see that what you are typing is being received successfully, and that the connection between your terminal and the host is intact. If the terminal did the echoing and you had a problem, you would have no way of knowing whether or not your connection to the host was working. This, of course, is most important when you are using a host that is physically separate from your terminal.

Aside from dependability, there is another reason why the Unix designers chose to have the host do the echoing. As I will discuss in Chapter 7, there are certain keys (such as <Backspace> or <Delete>) that you can press to make corrections as you type. Unix was designed to work with a wide variety of terminals, and it made sense for the operating system itself to handle the processing of these keypresses in a uniform way, rather than expect each different type of terminal to be able to do the job on its own.

— hint —

When you use Unix, the characters you type are echoed to your screen by the host, not by the terminal. Most of the time, the lag is so small that you won't even notice it. However, if you are using a distant host over a slow connection, there may be times when there will be a delay before the characters you type are displayed on your screen.

Unix allows you to type ahead many characters, so don't worry. Just keep typing, and eventually, the host will catch up. In almost all cases, no matter how long the lag, nothing will be lost.

Character Terminals and Graphics Terminals

Broadly speaking, there are two classes of terminals you can use with Unix. How you interact with Unix will depend on which type of terminal you are using.

Take a moment to look back at Figures 3-2 and 3-3, the photos of the Teletype ASR33. As we discussed, this machine was the very first Unix terminal. If you look at it carefully, you will see that the only input device was a rudimentary keyboard, and the only output device was a roll of paper upon which characters were printed.

Over the years, as hardware developed, Unix terminals became more advanced. The keyboard became more sophisticated and a lot easier to use, and the roll of paper was replaced by a monitor with a video screen.

Still, for a long time, one basic characteristic of Unix terminals did not change: the only form of input and output was characters (also called TEXT). In other words, there were letters, numbers, punctuation, and a few special keys to control things, but no pictures.

A terminal that only works with text is called a CHARACTER TERMINAL or a TEXT-BASED TERMINAL. As PC technology developed, a new type of terminal became available, the GRAPHICS TERMINAL. Graphics terminals had a keyboard and mouse for input and, for output, they took full advantage of the video hardware. Not only could they handle text; they could display just about anything that could be drawn on a screen using small dots: pictures, geometric shapes, shading, lines, colors, and so on.

Obviously, graphics terminals are more powerful than character terminals. When you use a character terminal, you are restricted to typing characters and reading characters. When you use a graphics terminal, you can use a full-fledged GUI (graphical user interface), with icons, windows, colors, pictures, and so on.

For this reason, you might think that graphics terminals are always better than character terminals. After all, isn't a GUI always better than plain text?

This is certainly true for PCs using Microsoft Windows and for Macintoshes. From the beginning, both Windows and the Macintosh operating systems were designed to use a GUI; in fact, they depend upon a GUI.

Unix is different.

Because Unix was developed in an era of character terminals, virtually all the power and function of the operating system is available with plain text. Although there are Unix GUIs (which we will discuss in Chapter 5) and you do need to learn how to use them, a great deal of what you do with Unix — including everything I teach you in this book — requires only plain text. With Unix, graphics are nice, but not necessary.

What does this mean in practical terms? When you use Unix on your own computer, you will be working within a GUI (using a mouse, manipulating windows, and so on). This means your computer will be emulating a graphics terminal.

However, much of the time, you will find yourself working within a window that acts as a character terminal. Within that window, all you will type is text, and all you will see is text. In other words, you will be using a character terminal. In the same way, when you connect to a remote host, you usually do so by opening a window to act as a character terminal.

When you find yourself working in this way, I want you to take a moment to think about this: you are using Unix in the same way that the original users used Unix back in the 1970s. What's interesting is that, over thirty years later, the system still works well. Most of the time, text is all you need.

The Most Common Types of Terminals

Over the years, Unix was developed to work with literally hundreds of different types of terminals. Today, of course, we don't use actual standalone hardware terminals: we use computers to emulate a terminal.

I have mentioned the idea of opening a window to emulate a character terminal. In most cases, the emulation is based on the characteristics of a very old terminal, called the VT100, which dates from 1978. (The VT100 was made by the Digital Equipment Corporation, the same company that made the PDP-11 computers we discussed at the beginning of the chapter.) Although actual VT100s haven't been used for years, they were so well designed and so popular that they set a permanent standard for character terminals. (You can see an actual VT100 in Figure 3-7 below.)

Graphics terminals, of course, have a different standard. As you will see (in Chapter 5), Unix GUIs are all based on a system called X Window, and the basic support for X Window is provided by a graphics terminal called the X TERMINAL. Today, the X terminal is the basis of graphics terminal emulation, the same way that the VT100 is the basis of character terminal emulation.

Thus, when you connect to a remote host, you have two choices. You can use a character terminal (the most common choice), in which case you will be emulating a VT100 or something like it. Or, if you want to use a GUI, you can use a graphics terminal, in which case you will be emulating an X terminal.

Although I won't go into the details now, I'll show you the two commands you will use. To connect to a remote host and emulate a character terminal, you use the ssh (secure shell) command. To emulate an X Window graphics terminal, you use the ssh -X command.
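For example, if your user name were weedly and the remote host were named unixhost (both hypothetical), the two variations would look like this:

    $ ssh weedly@unixhost        # character terminal: text only
    $ ssh -X weedly@unixhost     # graphics terminal: text plus X Window programs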

Figure 3-7: VT100 terminal

The most popular Unix terminal of all time, the VT100, was introduced in 1978 by the Digital Equipment Corporation. The VT100 was so popular that it set a permanent standard. Even today, most terminal emulation programs use specifications based on the VT100.


Exercises

Review Question #1:

What type of machine was used as the very first Unix terminal? Why was this machine chosen?

Review Question #2:

What are terminal rooms? Why were they necessary?

Review Question #3:

What is a headless system? Give two examples of headless systems that are used on the Internet to provide very important services.

Review Question #4:

What is a server? What is a client?

For Further Thought #1:

In 1969, Ken Thompson of AT&T Bell Labs was looking for a computer to create what, eventually, became the first Unix system. He found an unused PDP-7 minicomputer, which he was able to use. Suppose Thompson had not found the PDP-7. Would we have Unix today?

For Further Thought #2:

In the 1970s, computers (even minicomputers) were very expensive. Since no one had their own computer, people had to share.

Compared to today, computers in the 1970s were slow, with limited memory and storage facilities. Today, every programmer has his own computer with very fast processors, lots of memory, lots of disk space, sophisticated tools, and high-speed Internet access.

Who had more fun, programmers in the 1970s or programmers today?

How about programmers 20 years from now?
