Thursday, July 6, 2017

Computer Education for Everyone, Part 0: History

When I first went to college, I had four interests: physics, mathematics, computer science, and music.  These are still my interests, but I want to focus on two of them.

Mathematics has been around for millennia; I mean, the Egyptians (whoever they were; they might or might not have been the ancestors of those living in Egypt today) used it, and so did the Arabs, for instance to navigate across the desert.  We know that the ancient Greeks knew a lot of mathematics, and we also know how they taught it.  The art of teaching mathematics is itself several thousand years old; there are new fashions in how to teach mathematics, but they have a tough job competing against ancient methods.  In contrast,

Computer Science only arrived in our midst about a century ago, but since then the techniques of programming, and of teaching computer science, have changed furiously, so it is incredibly difficult to keep up.  Back when I first started working, I taught the first several courses in computer science using methods and tools that seemed very simple to me.  But in 1995, everything changed.  When Windows took hold, and shortly afterwards Windows 95, the operating system native to the newer PCs, arrived, the programming game had to change.  This is, in retrospect, not hard to explain, though I was never happy with these changes.

The Elements of Programming
The computer is a clever device that has two aspects to it.  Firstly, it has storage, which is a huge number of places on the computer chip that can remember numbers.  Secondly (and this is the clever part), it has a unit that can obey instructions.  This is the part that boggles the mind of non-computer people: how can a computer obey instructions?

(By the way, the memory locations are numbered, typically from 0 up to 1,048,575 or so; the exact range depends on the machine.  It's an enormous block of apartments, each of which holds a number.)

Back to instructions.  Essentially, the basic computer chip obeys instructions such as: "Put this number in location 3."  More interesting instructions are like: "Check the number in location 7; if it is 0, go to step 8; otherwise, continue with the next instruction."  The wonderful thing is that, with a little work, several hundred instructions of this sort can play a Netflix movie for you, or solve an equation, or put an astronaut on the moon.
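To make this concrete, here is a minimal sketch in Python of such a machine: a block of numbered storage, plus a unit that walks through instructions one step at a time.  The opcode names (PUT, JUMP_IF_ZERO, HALT) are invented for illustration; no real chip uses exactly these.

```python
# A toy machine: numbered storage plus a unit that obeys instructions.
storage = [0] * 16          # a tiny "block of apartments", numbered 0..15

# Each instruction is a tuple; steps are numbered from 0.
program = [
    ("PUT", 3, 7),           # step 0: put the number 7 in location 3
    ("PUT", 7, 0),           # step 1: put the number 0 in location 7
    ("JUMP_IF_ZERO", 7, 4),  # step 2: if location 7 holds 0, go to step 4
    ("PUT", 3, 99),          # step 3: (jumped over in this run)
    ("HALT",),               # step 4: stop
]

step = 0
while True:
    instruction = program[step]
    if instruction[0] == "PUT":
        _, location, number = instruction
        storage[location] = number
        step += 1
    elif instruction[0] == "JUMP_IF_ZERO":
        _, location, target = instruction
        step = target if storage[location] == 0 else step + 1
    elif instruction[0] == "HALT":
        break

print(storage[3])  # prints 7; step 3 never ran
```

A few hundred such steps, suitably dressed up, really are all a processor ever executes.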

Back when I was teaching computer science (we were already past the punched-card stage), the only way to get any instructions into the computer, at the level at which we were teaching, was through the keyboard.  The only thing you could get out of the computer was what appeared on the screen.  This made things simple.

Still, many fun things could be done with these simple tools.  By packaging large sets of instructions together, we could make our own super-instructions.  For example, we could set it up so that the computer could sort a list of numbers in increasing order!  And if we got tired of having it do that just once, we could hand it 25 lists of numbers and have it sort each one.  Okay, that's pretty tame, but the interesting thing here is that once you solve a basic problem, like sorting, you can package that solution into what is generically called a module, and use that module in a more complex program.  You could call your module sort, and use it as if sort were an instruction, just like "Put this number in location 3" was.  You could add to the language.
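Here is a sketch of that idea in Python; the insertion sort below is written out by hand purely to show the packaging, since real languages ship a sort already.

```python
def sort(numbers):
    """Return a new list with the numbers in increasing order."""
    result = []
    for n in numbers:
        i = 0
        while i < len(result) and result[i] < n:
            i += 1                 # find where n belongs
        result.insert(i, n)        # and slot it in
    return result

# Once 'sort' exists, it reads like a built-in instruction.
lists = [[3, 1, 2], [9, 7, 8], [5, 4, 6]]     # imagine 25 of these
sorted_lists = [sort(each) for each in lists]
print(sorted_lists)                # [[1, 2, 3], [7, 8, 9], [4, 5, 6]]
```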

Objects.  In addition to these super-instruction modules, we could invent various gadgets.  For instance, using numbers, we could make gadgets called characters!  Remember, the basic things a computer uses are numbers.  But making characters is easy; we basically say something like 65 stands for A, 66 stands for B, and so on.  So, as long as the computer knows that you're interested in characters, when you say 69, it knows you want E.
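That convention is the ASCII code, and most languages build it in.  In Python, for instance, ord and chr translate in both directions:

```python
print(ord("A"))   # 65: the number that stands for A
print(ord("B"))   # 66
print(chr(69))    # E: the character that 69 stands for
```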

A second ago, we were talking about lists of numbers.  Well, we can make really nice lists of numbers; it only takes a bit of careful organizing, which, mercifully, the programmer does not need to do; the programming language takes care of it, needing only to know how big your list is.  We can do better.  For instance, I could invent a gadget called a student record, which is a mixture of different sorts of simpler gadgets: a name, a homework score, four test scores, a final score, and an average.  Now, you have seen things very much like this: it looks like a row in a spreadsheet, if you've used one of those.  Well, a spreadsheet is pretty much a huge rectangle of multi-purpose gadgets, set up so that each place can be one of several sorts of gadgets.
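A minimal sketch of that student record in Python (the field names and the weighting in the average are my own invention, just for illustration):

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    name: str          # a simpler gadget: a string of characters
    homework: float    # one homework score
    tests: list        # four test scores
    final: float
    average: float = 0.0

    def compute_average(self):
        # Equal weight to every score; a real course would choose weights.
        scores = [self.homework] + self.tests + [self.final]
        self.average = sum(scores) / len(scores)

record = StudentRecord("Ada", 95.0, [88.0, 92.0, 90.0, 85.0], 91.0)
record.compute_average()
print(record.average)   # 90.16...
```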

When Windows came along, the gadgets aspect of programming completely took over.  The new generation of gadgets were far more complex than just a name, or a score, or a list.  For instance, Microsoft programmers invented a thing called a Window that had various parts: the title bar, the width, the height, the position on the screen, any boxes in the window into which you might want to type things, how you move the window, what color the background is, and so on and so forth.  Further, Microsoft provided the gadgets it wanted you to use, in a collection called the Windows API, which is short for application programming interface.  At that point, students were taught to use built-in gadgets (objects), and to solve various problems with the objects that were available.  Fairly soon, they could advance to creating their own gadgets.  But now the gadget tail, the objects, was wagging the programming dog, which was a little difficult to adapt to.  Many computer science teachers have made the transition with ease (and I, too, have taught a few courses using the new object-oriented paradigm for special purposes), but the added layer of Windows seems, to me, to obscure the transparency of the programming process.
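To give the flavor (and nothing more; the real Windows API is vastly bigger), here is a toy Window gadget in Python, with a couple of its parts and one thing it can do:

```python
class Window:
    """A toy window: data bundled together with its own behavior."""
    def __init__(self, title, width, height, x, y, background="white"):
        self.title = title
        self.width, self.height = width, height
        self.x, self.y = x, y          # position on the screen
        self.background = background

    def move(self, new_x, new_y):
        """Change the window's position on the screen."""
        self.x, self.y = new_x, new_y

w = Window("My First Window", width=640, height=480, x=100, y=50)
w.move(200, 120)
print(w.title, w.x, w.y)   # My First Window 200 120
```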

I must be one of a small minority that rues this progress of the computer science environment.  The proportion of people going into programming has fallen off, it seems to me; most people are satisfied to just use computers, and not program them.  So millions of people are able to use word processors, such as Word or WordPerfect.  (There are others, and free ones, too: OpenOffice Writer, for instance.)  Or browsers, or spreadsheets, or PowerPoint.  (I'm not even sure what the generic word for software such as PowerPoint is; something like slide-show software, no doubt.)

Peripherals.  The programming setup of Windows changed other things as well.  The output was still the screen, but instead of being set up to show just letters of the alphabet, it could now show pictures.  Pretty soon, it could play music, and send out messages along the wires: email.  All this could have been handled with basic object-oriented programming, but once the fully object-oriented paradigm came to dominate the programming environment, and the accompanying languages (C, C++ and Java) became the only games in town, you had to come to terms with the vagaries and peccadilloes of those languages.  Among other things, this encouraged bad programming habits among weaker programmers, because there were shortcuts that they began to take en masse, which made it difficult to repair broken programs and to upgrade programs to keep up with user and hardware demands.

Operating systems did not impinge on the attention of the average citizen until a few decades ago.  These were initially supervisor programs that allowed a number of users to share the same mega-computer.  You "logged in", and used the same computer as several dozen others around you.  The operating system ran programs for you on request, such as an email program, or an editor, or a browser, or whatever.  Nobody paid much attention to what it was called.  Then, the innovators at Bell Labs designed a highly flexible operating system called UNIX, which was practically a programming language on its own; you could chain together UNIX commands to make it do more interesting things.  For the record, as per Wikipedia, it was designed and implemented by Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna, and intended to be a sandbox in which they played at Bell Labs.  But somehow, UNIX began to spread throughout the computer science university community, becoming something of a cultural artifact; nobody could call himself a computer scientist back in the later decades of the last century and still be ignorant of UNIX.  Teaching UNIX to sophomores was important, because it could be used to illustrate problems with file management, security systems and passwords, and so on.  For some reason, almost all operating systems began to look like UNIX, for example the DOS operating systems of Microsoft, and later, the Apple operating systems.  Its source code was licensed widely, especially to universities; in other words, within limits, people were permitted to port it to their own computers.

Today, of course, we are familiar with Android, the operating system developed by Google, which is a descendant of UNIX via Linux, a UNIX-like system originally written for the PC architecture.  (In fact, UNIX-style systems have been adapted to most computers available today.)

The task that computer educators face today is to find a balance between:
(*) teaching the cultural environment of programming versus programming itself,
(*) general principles of programming problem-solving versus specific solutions,
(*) applications in a peripheral-rich environment versus those in an environment with a simple set of inputs and outputs, and
(*) a programming environment specifically designed for beginners versus trivial applications for real-world hardware, such as a smart phone.

It used to be that, at my school, we taught basic programming even to nursing students.  The board that accredited nursing degrees required every graduating nurse to have a certain minimum of exposure to computers, since it could not anticipate in which direction medical technology would advance over the next few years.  Today, of course, nurses would never consider learning programming, but would settle for experience with hospital software of various kinds.  Regrettably, programming is evolving into a game for specialists only, and I am rooting for this process to be slowed, halted, or reversed.

Finally, women in computer science are increasingly alarmed at the drop-off in the proportion of women going into the math and computer science area.  It has been found that women make excellent programmers.  In fact, some of the earliest applied mathematicians and computer scientists were women, at a time when we would not have expected women to go into any technical field at all.  (For instance, Hedy Lamarr, a well-known Hollywood actress, co-invented a method for disguising the radio control signals of torpedoes in WW2.  Aspects of her work are said to be used in wireless security technology, but I might have misunderstood this piece of information.)

Into this environment comes the Raspberry Pi, a tiny computer you can buy for around $35, which was intended to encourage British kids to get interested in computer programming.  The very fact that it was an absolutely stripped-down piece of circuitry made it completely flexible.  It was to hardware what Linux was to software.  Now, four years after the first model was introduced, the third generation even has Bluetooth built in.  So much for basics!  On the plus side, more people are likely to get into it (already some 8,000,000 Raspberry Pi devices have been sold worldwide), which means that interest in programming could rise.
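The classic first program on a Pi is a few lines of Python that blink an LED.  Here is a minimal sketch, assuming an LED wired through a resistor to GPIO pin 18 (the pin choice is mine) and the RPi.GPIO library that ships with the Pi's standard system:

```python
import time
import RPi.GPIO as GPIO    # only available on a Raspberry Pi

GPIO.setmode(GPIO.BCM)     # refer to pins by their Broadcom numbers
GPIO.setup(18, GPIO.OUT)   # pin 18 will drive the LED

try:
    for _ in range(10):    # blink ten times
        GPIO.output(18, GPIO.HIGH)   # LED on
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)    # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()         # release the pin, even if interrupted
```

A dozen lines that make a light blink in the physical world: exactly the sort of hook that gets a beginner interested.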

Arch
