Back in 1983 the late Michael Crichton published a non-fiction book called 'Electronic Life: How to Think About Computers'. It was neither a cut-and-dried owner’s manual, of which there were already plenty at the time, nor a cutesy 'computers for dummies' type read featuring some illustrated caricature on the cover. It does incorporate aspects of both on a basic level but, in truth, the author aimed to provide readers with an altogether larger, more comprehensive understanding of the very idea of the home computer and the possible future such commercial technology might bring, speaking from a cultural, social and anthropological perspective--yet still managing to articulate these ideas in a common coinage easy enough for anyone to grasp. A fine balancing act. Of course, the catch here is that it was written 30 years ago; in terms of the evolution of computer technology, practically an eon.
And that, my friends, is what makes the book rather unique: an artifact shedding light on a past age, one where the multimedia global network of today could, at the very most, only be imagined -- pipedreamed -- in rudimentary micro-measurements. Indeed, many of the views Crichton expresses here are charmingly dated and naïve, but his broader philosophical bent is quite accurate, and there is perhaps even a wholesome core value system concerning Man and his interface with technology that one might take away from the book -- one long since cast into oblivion in favor of our current generation’s at once ultra-sophisticated savvy and dull acceptance of the digital world around us; our wisdom deficiency, if you will, on all things electronic life.
The book is loosely written in the form of a glossary, covering both PC terminology and more general topics about the computer revolution as perceived at the time. Some random excerpts:

Anatomy, Computer
Faced with an unfamiliar computer, experienced programmers do an interesting thing. They do nothing. They stand and look at the machine. After a moment, they make some simple observation like "Hmmm, it’s got a hard disk drive," or "I see it’s got a built-in printer."
What they’re doing is checking out the anatomy of the machine--finding where things are. This is a logical first step before trying to operate the machine. Beginners skip this step. Either they throw up their hands in horror and announce that computers are beyond them or they plop down at the keyboard and say, "Okay, what do I do?"
What you do first is nothing. Whether you’re in a store or a home or an office, first step back and look. Take your time.
...Don’t worry about any extra equipment. Notice it exists and ignore it.
Radically different art would literally be invisible. I’ve often been amused to think that domestic dogs practice an art form, right under our noses. After all, a dog walking down a street behaves just like a person in an art gallery, going from picture to picture, alert and interested. There’s a whole world of fascination for dogs that we can’t participate in, because we have no comparable sense of smell.
Less fancifully, Edmund Carpenter tells how seventeenth-century sailors enjoyed inviting aborigines aboard their ships and firing off cannons unexpectedly to scare them. But when the cannons were fired, the Indians didn’t even blink. They had no reaction at all. The loud sound meant nothing to them.
What is truly new does not create shock--it creates nothing. If we are shocked by art, we are shocked because our expectations are not met. And that means we already have expectations based on previous experience.
If one imagines artificial intelligence programs as an art form, then many objections to them disappear (and perhaps funding). We don't complain that the Sistine Chapel ceiling is not also The Last Supper. The very idea is absurd. Art is inherently limited; we appreciate it for what it is, not for what it isn't.
The earliest designs of any new object reflect older images. The first Pulsar digital watches looked like TV sets; the first home computers looked like typewriters mating with TV sets.
External computer design is still dominated by engineers, and there are a lot of ugly clunkers around. But there’s no excuse for making a machine that looks like a toilet on the space shuttle--and less excuse for buying one. I’m constantly surprised that people who wouldn’t put an ugly stereo on a side shelf will place one of the junky boxes right on the corner of their desk, as if they had no choice.
Apple, Atari, IBM, DEC, Epson, Xerox, and Olivetti make pleasant looking computers, although they’re all basically white or gray (I don’t know why computers don’t come in colors; eventually they will).
A set of instructions to make the computer do something. But that’s not a very useful way to think of a program, since instructions purely to the machine for its own use are often brief.
The bulk of many programs is devoted to helping people put data into the machine and get processed information back out again. One’s satisfaction with a program strongly reflects these areas of input and output.
People don’t understand this. When they look at a long program, they fail to recognize how much of it is directly geared to them. Don’t throw up your hands: this stuff matters to you. And it’s always the least computery part of the program.
For example, a BASIC program might begin with:
10 HOME
20 VTAB 3: HTAB 5
30 PRINT "WHAT IS YOUR NAME?"; : INPUT N$
40 PRINT N$
50 PRINT "IS YOUR NAME CORRECT? (Y/N)"; : INPUT A$
60 IF A$ = "N" THEN GOTO 20
70 PRINT "NICE TALKING WITH YOU, "; N$; "."
[Now, this means in English]
10 Clear the screen.
20 Tab down 3 lines, and horizontally five spaces.
30 Print "What is your name?" and stay on the same line. Wait for something to be typed in from the keyboard. Call whatever is input N$.
40 Print this N$ back out again, whatever it is.
50 Print "Is your name correct? (Y/N)." This prompts the user to answer Y or N. Get the answer, and call it A$.
60 If the answer is N for no, then you’ll need to start over again, so go back to line 20.
70 If the answer is not N, then the name must be okay, so print "Nice talking with you," and stay on the same line. Print the name N$, stay on the same line, and print a period (.).
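For comparison, the same dialogue can be sketched in modern Python. This is my own loose translation, not from the book; the injectable `read`/`write` parameters are an addition of mine so the routine can be driven by scripted input as easily as by a keyboard:

```python
# A loose modern translation of the BASIC dialogue above (my sketch,
# not Crichton's). read/write default to the keyboard and screen.
def name_dialogue(read=input, write=print):
    while True:
        write("WHAT IS YOUR NAME?")
        name = read()
        write(name)                      # echo it back, like line 40
        write("IS YOUR NAME CORRECT? (Y/N)")
        if read() != "N":                # anything but N accepts the name
            break                        # N loops back, like GOTO 20
    write(f"NICE TALKING WITH YOU, {name}.")
```

Note how little of either version is "instructions purely to the machine": almost every line exists to prompt, echo, or confirm -- exactly the input/output bulk Crichton describes.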
Human beings continually create models of reality. These models are always simplified and approximate, although we tend to forget that. Models are integral to our language and our tools. If we say a friend is angry, we ignore the fact that our friend is much more than just the emotion of anger. If we measure with a yardstick, we ignore the fact that the yardstick is inexact.
Because we use models consistently, and because models are built into our language, we tend to substitute models -- simplified constructs of reality -- for reality itself. In minor ways, we’ve all had the experience of seeing our models collapse, as when the furniture doesn’t fit in a room because our measurements were inexact, or when our angry friend tells a joke to his supposed enemy. At those times, we are sharply reminded that reality is more complex than our modeled version of it.
Programs, like scientific theories, can never be proved right--they can only be found wrong. The fact that a program works a thousand times does not guarantee that a bug won’t show up the next time.
Experienced programmers say, "There’s always one more bug."
Test a new program until you’re cross-eyed with exhaustion. And regard every running of a satisfactory program as one further test. There’s no way around this perpetual testing and caution.
It's the nature of the beast.
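Crichton's point is easy to demonstrate today. Here is a minimal sketch of my own (not the book's) of a function that passes a thousand tests and still hides a bug:

```python
import random

def average(xs):
    # Looks bulletproof -- and is, for every non-empty list.
    return sum(xs) / len(xs)

# A thousand runs, a thousand successes:
for _ in range(1000):
    xs = [random.random() for _ in range(random.randint(1, 50))]
    assert 0.0 <= average(xs) < 1.0

# Yet "there's always one more bug": nobody ever tried the empty list.
try:
    average([])            # sum([]) / len([]) divides by zero
except ZeroDivisionError:
    pass                   # the bug surfaces on run 1001
```

The thousand passing runs prove nothing about the one case that was never tried -- which is exactly why every satisfactory run of a program counts only as one further test.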