Behind the scenes 1

As gifts when I was growing up, I would often get books about Star Trek. Our whole family had always enjoyed this show in reruns. These books were not the Star Trek novels, but rather books about the making of the TV show. Much of what I read was interesting not just because it was about that particular TV show but because it provided a behind-the-scenes glimpse at how TV shows are made.

Apparently they didn’t necessarily have the space to have all the sets built all of the time. Also, some of the sets performed multiple functions: for example, two different corridors inside the starship might be the same set with different lighting filmed from a different angle. In such a case they could first film all the scenes where people needed to use corridor number 1, which might be scattered throughout the show. Then they could change the lighting and camera placement and set up corridor number 2 and film those scenes. An alien planet might only be used in a single episode, so they could build that set, film all the scenes on the planet, and then take the set down and use the space for something else. Logically this makes sense but I hadn’t thought much about the possibility that the scenes in the show were not filmed in the sequence in which you see them.

This must be very odd for the actors, especially if they’re used to acting in stage plays. If the scenes are not filmed in sequence, the character development must be out of sequence as well. The actors must need to have a sense of what part of the story each particular scene comes from, and put their characters in the appropriate frame of mind. I find it very impressive that when you see the finished product, there’s nothing in the characters’ behavior that would give this away.

Movies must be done quite out-of-sequence as well. Often people have identified continuity errors in movies where, for example, a character is eating a sandwich that suddenly has a bite taken out of it or suddenly becomes whole after he’s eaten some of it, or a drink goes from full to empty to full again, or something like that. Or perhaps the sandwich changes into a hamburger and then back to a sandwich again. I find this interesting not just because somebody made a mistake, but because the scenes were filmed in such a fragmented way that it’s even possible for such things to happen. What looks to the viewer like a continuous conversation is evidently not even close to happening in real time. Perhaps a lot of credit also must go to the editors who assemble these scenes so that they seem to flow naturally.

In the real world it’s so much easier: we get to just live our lives in real time. It’s nice that everything happens in its actual sequence, but there have been those occasional times when I would have gladly used the fast-forward button.

Unauthorized information and software development

So what information are we talking about here, and who has to authorize it?

Software depends on knowledge to make something work. As a house is built upon its foundation, software is built on certain types of knowledge that the developer believes to be stable.

Much software is written with a certain operating system in mind. For example, someone might set out to write a program for Windows. How does the developer know how to make a program that will run on Windows? Well, Microsoft provides documentation on how to do this, and based on this documentation, the developer can access features in the operating system and produce a program that works correctly. This type of information is referred to nowadays as an Application Programming Interface (API). If someone creates software (such as an operating system) that is intended to be used by other software, they generally document an API that gives this other software some reliable knowledge to build on.

So how can this information be “unauthorized”? Let’s go back in time to the early 1980s. At that time, a home computer was likely to be one of these:

  • Apple ][
  • Atari 800
  • Commodore 64
  • IBM PC (at first this was more of an office machine)

Of these, only the IBM PC has true “descendants” to this day. In fact, it is possible that someone could have written a program to work on an early IBM PC, and this program could still run on a modern computer. (I know this for a fact, as I wrote such a program.) How could this be?

Back in the early 1980s, IBM published the source code to its BIOS (Basic Input/Output System). This code contained copious comments (comments are text that doesn’t do anything computer-wise but provides the reader of the code with information). These comments documented the API of how one could write programs that worked on an IBM PC.

As a quick digression, it took some creativity for someone to figure out how to legally create an IBM PC “clone” that could run programs compatibly. Source code is generally considered the property of its creator, and it would have been illegal for someone to take the source code of IBM’s BIOS and copy it into their own BIOS. IBM had published the source code, but they still owned the copyright to it. So the “clone” creators wrote their own document containing only the API information and passed it to their own developers, who were strictly forbidden from looking at the IBM source code (an approach now known as “clean room” design). It turned out to be legal for them to develop their own BIOS with the same API, as long as they were not actually copying their code from IBM’s source code.

Anyway, software developers are a creative bunch, and they wanted their programs to work as well as possible. This created a bit of a controversy. IBM had functions in the BIOS to create text and graphics on the screen, and they asked developers to please use this API. If developers did so, IBM would guarantee that the same API would keep working on future IBM computers that hadn’t been invented yet.

But developers decided to ignore this request from IBM. They could look at the BIOS source code and see the workings of these text and graphics functions. The developers found that if they accessed the graphics hardware directly instead of using IBM’s API, their programs would run noticeably faster.
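The tradeoff can be sketched with a toy Python analogy. Everything here is invented for illustration — real programs of that era called BIOS interrupt routines, and “direct access” meant writing straight into video memory — but the shape of the choice is the same: a documented, checked route versus a faster poke at the internals.

```python
# Toy analogy (all names invented; this is not real BIOS code).
# A "firmware" layer offers a documented, checked API, but its
# internal buffer can also be poked directly -- faster, and riskier.

class ToyBIOS:
    def __init__(self, width=10):
        self.video_memory = [" "] * width  # stand-in for the hardware buffer

    def write_char_api(self, pos, ch):
        """The documented route: validate the request, then write."""
        if not 0 <= pos < len(self.video_memory):
            raise IndexError("position off screen")
        if len(ch) != 1:
            raise ValueError("one character at a time")
        self.video_memory[pos] = ch

bios = ToyBIOS()

bios.write_char_api(0, "A")   # the sanctioned route: go through the API

bios.video_memory[1] = "B"    # the "accepted practice": poke the buffer
                              # directly, skipping the checks -- faster, but
                              # broken the day the buffer's layout changes

print("".join(bios.video_memory).rstrip())  # prints: AB
```

The second write works only because the program “knows” how the buffer happens to be laid out — which is exactly the unauthorized knowledge the developers were betting on.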

The developers were taking a risk here. Hypothetically, IBM could invent a new graphics card and change their graphics hardware completely. Along with this, IBM would also have to modify its BIOS code to work with this new graphics hardware. If developers used IBM’s API (as IBM had asked them to), their programs would continue to work. But if developers accessed the graphics hardware directly, their programs would no longer work with these hypothetical new graphics cards. If the developers complained to IBM, then IBM could say, “We warned you that you should have used our API. You should have listened to us.”

What actually happened, though, was that developers had almost universally decided that IBM’s BIOS functions were too inefficient and slow. I recall having to decide how graphics should work in a program I was writing, and by this time it was considered “accepted practice” to access the graphics hardware directly. I decided to jump on the bandwagon and follow the crowd, abandoning the BIOS graphics functions like everyone else. If IBM ever radically changed their graphics card, I wouldn’t be any worse off than all those other developers who had made the same decision.

In fact, the hypothetical situation never happened; IBM never changed the way their graphics cards worked. Maybe this was to make things easier for themselves. But another factor could have been that accessing the graphics hardware directly had become such widespread “accepted practice” that IBM was essentially trapped. Who would upgrade to IBM’s new graphics card if it wouldn’t work with any existing software?

So the “accepted practice” had won a victory over IBM’s documented API. As other manufacturers started to make their own graphics cards to use in these “clone” PCs, they all stayed compatible enough with IBM’s original design that existing software still worked.

But wait a minute, you say. Several paragraphs ago, I said that some of this old software would still work on a modern computer. But surely graphics cards have changed substantially in all those years. If I wrote a program in 1985 that accessed the graphics hardware directly, how can it still work over a quarter of a century later?

The answer is that this “accepted practice” won out in a bigger way than anyone would have expected. Back in the early 1990s, Windows 3.0 and then 3.1 quickly became the new standard for office computers. And Microsoft did a very clever thing. They knew all about the “accepted practice” of accessing graphics hardware directly, and they built into their operating system the appropriate stuff so that all of these old programs still worked!

I must admit I was rather stunned the first time I saw an old DOS-based program that I wrote using the “accepted practice” run perfectly fine inside an “MS-DOS window” in Windows 3.1. On an old IBM PC, my program occupied the whole screen and controlled the graphics hardware entirely on its own. But here it was running inside a window along with other windows running different programs that were on the screen at the same time. Windows 3.1 intercepted my program’s accesses to what my program “thought” was the graphics hardware, and it did the “equivalent” things so that my program would run correctly in a window!
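What Windows did can be imitated with a hypothetical Python sketch: hand the old program a look-alike object whose writes are trapped and redirected. (This is an analogy only, with invented names; the real mechanism, as I understand it, used the processor’s virtual-8086 mode to trap the program’s memory and port accesses.)

```python
# Toy analogy: instead of the real hardware buffer, the old program
# is handed a look-alike object that traps each write and does the
# "equivalent" thing for its on-screen window instead.

class TrappedVideoMemory:
    """Stand-in for hardware the program 'thinks' it is touching."""
    def __init__(self, width=10):
        self._cells = [" "] * width
        self.redraws = 0  # how many writes the emulator intercepted

    def __setitem__(self, pos, ch):
        # The interception point: every "direct" write lands here,
        # where the emulator can redraw the window to match.
        self._cells[pos] = ch
        self.redraws += 1

    def contents(self):
        return "".join(self._cells).rstrip()

def old_dos_program(video_memory):
    # The old program writes "directly to hardware", none the wiser.
    for i, ch in enumerate("HI"):
        video_memory[i] = ch

screen = TrappedVideoMemory()
old_dos_program(screen)
print(screen.contents(), screen.redraws)  # prints: HI 2
```

The old program’s code runs unchanged; only the thing on the other end of its writes is different.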

So does this mean that you should use unauthorized information? No, it just means that IBM didn’t have the authority that it had hoped. The information was in a sense “authorized” by common practice, so in just this special case, the developers won out by ignoring the rules.

Oh, but despite that, it’s really really bad to depend on unauthorized information. Not that you’d know that from this story, but most of the time, “unauthorized information = bad”. Oh, and besides, “the exception proves the rule,” so, um, right, this story really shows that if you depend on unauthorized information, it would be bad. Yup, you’d definitely want to not be doing that. If you did, it would be like, all risky and stuff.

The slide rule

Out of all the millions of people reading this, I’ll bet only a small percentage know what this is: 

[Image: photograph of a slide rule. Caption: “Identify this object”]

The title of this post may have given you a clue: it’s a slide rule. But if you don’t already know what a slide rule is, that may not tell you much. If you saw the movie Apollo 13, you may remember a scene where the people at Mission Control had to do a calculation, and they took out slide rules similar to the one pictured above. 

Basically, this is how technical folks calculated things before calculators were invented. I never used a slide rule except as a novelty item; by the time I had to calculate anything professionally, calculators were commonplace. 

To understand how a slide rule works, imagine two one-foot (or 30 cm) rulers that are mirror images, placed so that the measurement scales touch: 

[Image: two rulers placed with their scales touching. Caption: “Put two rulers together”]

If you push the top ruler two inches (or cm) to the right, as in the picture above, you can look under the 3 on the top ruler and see a 5 on the bottom ruler. Congratulations, you have just used a really lame method to calculate 2 + 3! 

A slide rule works essentially like this, except that the numbers are spaced according to their logarithms base 10 instead of their plain old values. The scale starts at 1, and ends at 10, smushing the numbers together more as they progress. From 1 to 2 is about 30% of the whole scale, while from 8 to 10 is only about 10% of the scale. 

So what, you say. Well, when you add logarithms, it multiplies the numbers that they’re the logarithms of. So if you used the lame ruler addition technique with logarithmically scaled rulers, you’d push the top ruler so its left edge (where 1 is) lines up with the 2 on the bottom, and you would find that the 3 on the top ruler lines up with a 6 on the bottom ruler, and now you have used a lame method to multiply 2 x 3. That’s basically how a slide rule works. Actually it’s a rather clever idea. 
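The trick can be sketched in a few lines of Python — a simulation of the idea, not of any particular slide rule:

```python
import math

def position(x):
    """How far x sits along a 1-to-10 logarithmic scale (0.0 to 1.0)."""
    return math.log10(x)

def slide_rule_multiply(a, b):
    """Slide one scale by position(a) and read under position(b):
    adding the logarithms, then converting the combined distance
    back into a number, multiplies a and b."""
    return 10 ** (position(a) + position(b))

print(round(slide_rule_multiply(2, 3), 6))   # 6.0
print(round(position(2), 3))                 # 0.301 -- '2' sits ~30% along
print(round(position(10) - position(8), 3))  # 0.097 -- 8-to-10 is ~10%
```

The last two lines also confirm the spacing claims above: 1-to-2 covers about 30% of the scale, 8-to-10 only about 10%.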

[Image: slide rule scales aligned to show 2 x 3 = 6]

What about numbers that aren’t between 1 and 10? Just move the decimal point. You’re on your own to figure out where the decimal point goes. Also, how accurate can the answer be? Not very; you can only get the first three digits or so. You can use a thin line on a movable clear piece of plastic to help judge exactly how the numbers line up. Accuracy is especially bad if your answer starts with an 8 or 9 (since those numbers are crammed together in the last 10%) and less bad if it starts with 1. 

Fancy slide rules have more scales. Some have a log-log scale (the logarithm of the logarithm) that lets you do exponents; others add sine and cosine scales, square root and cube root scales (logarithms stretched by a factor of 2 or 3), or multiplication scales with the 1 in the middle somewhere so you don’t have to slide the rulers quite so far. 
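The log-log scale in particular can be simulated the same way (again a sketch of the idea, not of any real rule): taking a logarithm twice turns exponentiation into the same add-two-distances move that plain multiplication uses.

```python
import math

# Since log(x**y) = y * log(x), laying out log10(ln(x)) on one scale
# and log10(y) on another means adding distances computes x**y.
# (Works for x > 1, where ln(x) is positive.)

def loglog_power(x, y):
    combined = math.log10(math.log(x)) + math.log10(y)  # add the two positions
    return math.exp(10 ** combined)  # 10**combined = y*ln(x); undo the ln

print(round(loglog_power(2, 10)))  # prints: 1024
print(round(loglog_power(3, 4)))   # prints: 81
```
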

Slide rules seem antiquated now, but they must have been useful in their day. One shouldn’t underestimate the power of such simple tools. After all, they did help bring astronauts back to Earth.