Compute like it's 1989


LSE Library. Modified by Opensource.com. CC BY-SA 4.0

For many of us, when we look around at the state of computing in 2016, we nod and think, "Yes, today is what I expected when I thought about what The Future would be like." Sure, we haven't got flying cars yet, but today's technology is flashy. We swipe fingers across screens instead of pressing buttons, and I'm told we are all very excited about all of the latest virtual reality headsets and augmented reality gadgets.

So now seems as good a time as any to look back at how people of the past used to compute, and back to the days when a "desktop" computer was so called because it took up 80 percent of your desktop. The days when the term "computer" actually meant "a machine for computation."

Why bother looking back at 30-year-old computing? After all, the computers back then were clunky, slow, and awkward, weren't they? Sure, they were, but the great thing about living in The Future is that we have the power to look back at the Old Ways and cherry-pick information from them for modern technology. The truth is, there's power in simplicity, and old computing was simple out of necessity.

Survival of the thriftiest

Have you ever played a survival video game? The kind in which you must survive a zombie apocalypse by hoarding canned food and using rolling pins as weapons? Computers are a little like that.

People complain about bloat in modern computing, and the cavalier answer is usually "CPU cycles are cheap!"—or in other words, "I'd prefer to treat the symptom rather than the cause." In the old days, CPU cycles were prohibitively expensive, and even now, "cheap" isn't the same as "free."

For $89, you can get a PocketCHIP and a keyboard, and have a tiny computer ready to use at any moment, but its CPU cycles are not cheap. What you save on its cost and power requirement you pay for in computing power, but you'll hardly notice, as long as you're okay with simplified computing.

The same is true for server slices or instances on shared servers, or within Crouton. The computer is there for the taking, but you have to learn to conserve your resources, whether it's bandwidth or CPU cycles or RAM.

A workflow of your own

Old computers weren't the pre-fab ready-to-wear appliances that we're used to now. Hardware is neat, but I'm not talking about the hardware. The stuff we interact with on a daily basis is the software, and "integrated" software is a relatively new idea.

Back before big monolithic applications got dumped onto the unsuspecting public, software came in batches on "shareware" diskettes and BBSes. If you wanted to create animation, you got software to help you draw images, used more software to string images into an animated sequence, used other software to produce sound effects, and finally used software to combine the sound and the image.

There was freedom in that, both for your computer, which didn't need to be capable of running one big app that tried to do everything, and for you, because it was up to you which applications you strung together to get the job done.

Diving in

One way to compute like it's 1989 is to go and get a Raspberry Pi or a PocketCHIP and dive in to the wonderful world of low-powered living. The good news is that running Linux today is surprisingly similar to running UNIX or Linux in the '90s. The bulk of the commands you know and love are there, and many of the applications and the general sensibility have endured.

Most everything you do on your computer now can be done in a terminal, which obviously is the lightest of lightweight interfaces.

File management

The old style of file management was far more direct than modern interfaces allow. Modern computers handle files automagically, according to a mimetype database. Mimetypes are useful when your main interaction with a file is pointing at it and double-clicking to open it, but they become largely irrelevant when you yourself are making the call about which application to use. It might surprise you to learn how much faith we tend to put in auto-detection now. For instance, who knew you could run a valid text-editing command with sed on a dynamic library, or see the metadata in a sound file with less? When you stop relying on pre-programmed decisions, you end up learning something new.
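To see for yourself, try something like this (the library name is hypothetical, and the sed trick is only safe when the replacement string is the same length as the original, so no byte offsets shift):

$ sed -i 's|/usr/local/etc|/usr/share/etc|' libexample.so
$ less song.ogg

The less example assumes your distribution ships the common lesspipe preprocessor; without it, less simply warns you that the file is binary.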

Most modern Linux users have at least heard of file management commands such as cp and mv, and that's an entirely valid, and certainly the most efficient, way to manage files. It's not the only option, though. If you yearn for a happy medium between constructing valid Bash commands and the intuitiveness of graphical interfaces, look to GNU Midnight Commander.

Midnight Commander

Midnight Commander (invoked as mc) is a DOS-style utility that provides a split-pane file manager in your terminal. It's primarily keyboard-driven, although, being written with the ncurses toolkit, it can also respond to mouse clicks.

The interface is wonderfully intuitive. All you need to know is that it's a file manager, and from there you can learn it quickly. Contextual commands are listed at the bottom of the window, each assigned to a function key, and a full menu (a "pull-down," accessed with F9) is always available at the top of the window.

Personal customization notwithstanding, there are always two panels in Midnight Commander, and you switch between them with the Tab key. All the usual file operations are either a menu selection or a keypress away. It's intuitive enough that you can poke at it for a few minutes and fall into using it without introduction, yet so efficient in design that you can quickly become a power user and control it with Emacs-like key combos. It is, for all intents and purposes, a "desktop" for an MS-DOS or ProDOS experience.
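If you'd like to try it, mc is packaged by virtually every distribution (the install commands below assume a Fedora- or Debian-style system):

$ sudo dnf install mc    # or: sudo apt install mc
$ mc

Once inside, Tab switches panels, F5 copies, F6 moves, F9 opens the pull-down menu, and F10 quits.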

Networking

To a lot of people, the Internet and the "www" subdomain are one and the same. What many people don't realize is that the "www" subdomain is really just the World Wide Web part of the larger Internet: generally, the part that serves content over HTTP(S).

What was called "Web 2.0" is a lot heavier than the Internet of old. With all the background videos, JavaScript pop-up requests for your email address, pleas for you to disable your ad-blocker, alerts about cookies, warnings that your browser is out of date, and everything else the modern web tries to throw into your browser, visiting the "www" in a non-mainstream browser is almost impossible. Luckily, there's a lot more happening on the Internet than just social media and comment wars.

Web browsers have conditioned most of us to view the web as a place in which you go and "hang out." You sit and idle; you go to it, but never bring it home. This, of course, isn't strictly true—you're downloading bits to a temporary cache, but that's all abstracted from you by the browser. You can still visit the modern web whilst living a digital retro lifestyle, but it's less about loitering and more about getting stuff done. Early Linux distributions shipped with the Lynx text browser (ELinks came along later), which provides the typical HTTP experience modern web users are used to, but there were and are many other ways to interact with the Internet:

Atom and RSS

The advantage of these is that they follow a "push" model instead of a "pull" one. You don't have to go out and check a website to see whether there are updated news items; the software sends you an alert instead. Much of my daily web browsing is taken care of with one look at newsbeuter or Mashpodder. Once you start using RSS and Atom, you might just find that the sheen of HTTP is a lot duller than it seemed before.

Of the two, newsbeuter is the easier to configure and use. Install it from your distribution's repository, and then launch it once to force it to instantiate its configuration file. Once that's done, you have only to edit your ~/.newsbeuter/urls file, a simple line-delimited list of the feeds you want to check. A sample from my current urls file:

$ head ~/.newsbeuter/urls 
https://opensource.com/feed
http://slackware-changelog.oprod.net/atom_feed/
http://fedoraplanet.org/rss20.xml
https://planetkde.org/rss20.xml 
http://planet.qt.io/rss20.xml 
http://planetpython.org/rss20.xml 
https://www.linux.com/feeds/rss 
http://gnuworldorder.info/ogg.atom.xml 
http://monsterjavaguns.com/podcast/feed 
http://twodeeten.blogspot.com/feeds/posts/default 
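With the urls file populated, there's nothing left but to launch the reader (the keys mentioned are newsbeuter's default bindings):

$ newsbeuter

Inside the reader, R reloads all feeds, Enter opens the selected feed or article, and q backs out and eventually quits.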

Wget, curl, fetch

Regardless of the UNIX you're running, you have some command available to access the network and fetch a file. It's browsing the web without the browsing, and it's sublime. Unfortunately, many modern sites obfuscate where the actual content is (if you're using wget or similar, you're not clicking on their ads), but for the practical websites out there, a quick download command is liberating and efficient.
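For example, to grab a single file with whichever tool your system ships (the URL is only a placeholder):

$ wget https://example.com/files/archive.tar.gz
$ curl -O https://example.com/files/archive.tar.gz

Both save the file into the current directory; curl's -O flag tells it to keep the remote filename.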

Git

Git itself is great, but another good thing about git's popularity is that people have actually started hosting blogs and other content in git repositories, which means you can easily grab that content using just a UNIX shell.
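Reading such a blog can be as simple as cloning the repository and paging through it (the URL and file path here are hypothetical):

$ git clone https://example.com/blog.git
$ less blog/posts/first-post.txt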

SSH

Thriving community servers are out there that are open to new users, and you can also build your own. You can find a list of free shell accounts, and now that you can get a computer for $35, setting up your own is one install and port forward away, even if only as an experiment to see how many friends you can get to join you.
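As a rough sketch of both sides, assuming a Debian-style package name and placeholder hostnames:

$ sudo apt install openssh-server    # on the machine hosting the accounts
$ ssh friend@shell.example.org       # what your friends run to join you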

Gopher

There's an unusual amount of nostalgia out there for the Gopher protocol. It's not the greatest system ever invented (Gopher servers sometimes have trouble parsing Gopher's markup), but it does underscore one point that much of the web seems to have forgotten: It's all about the content, not the ads. When your site is serving lists of text and binary files, you inherit an objectivity that just gets lost in modern sites. The Lynx browser still recognizes Gopher, so start your journey with it.

Floodgap Public Gopher Proxy
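For example, Floodgap's own long-running Gopher server is still up, and Lynx can visit it directly:

$ lynx gopher://gopher.floodgap.com/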

Email

Of course there's always email, the original social network. Too many people these days fall back on Gmail and other providers that use over-complex web interfaces that cease to function at even the slightest variation in browser vendor or version. There is a better way, and that is Mutt. It's a lightweight, simple, efficient, and effective email client with more configuration options than you could ever need. Better yet, it has next-to-transparent PGP integration, so you can start encrypting those emails from end to end.
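As a starting point, here's a minimal ~/.muttrc sketch for an IMAP account, assuming your mutt was built with IMAP and SMTP support (every name and address below is a placeholder):

set realname  = "Your Name"
set from      = "you@example.com"
set imap_user = "you@example.com"
set folder    = "imaps://mail.example.com/"
set spoolfile = "+INBOX"
set smtp_url  = "smtps://you@mail.example.com/"

Launch mutt, enter your password when prompted, and you're reading mail in your terminal.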

Your-Protocol-Here

Don't forget Usenet, Tor, GNUnet, and more. There are too many ways to access the worldwide network to list. If you look, you'll find all kinds of interesting lightweight technologies lying around out there.

Graphics without X

On some devices, an X server just isn't practical—sure, it might be possible, but you just know it's eating up a lot of precious RAM. You might run a lightweight desktop, but you'll still inherit the overhead of running X in the first place.

Mostly, when computing the Old Way, you don't need much by way of a graphical interface. A GUI just clutters things up, gets your hands away from the keyboard where they belong, and is painfully inefficient. If ever you're going to wish for a graphical display, it will be when you're online or checking email. People love graphics on the Internet, and people love embedding images into email.

Don't startx yet. What if I told you that you don't need to run X to display graphics on your screen? Thanks to the Linux framebuffer device, /dev/fb0, you can do just that.

There are a few different utilities to draw images straight to your screen without a graphics server. These don't work over remote or emulated connections (SSH, screen, tmux), but as long as you're sitting at the physical computer you're using, you can direct all kinds of output straight to the physical screen attached to it.

To view images, there's fbi (framebuffer image viewer) and its successor, fim (Fbi IMproved). Both essentially do the same thing: point one at a bitmap file and it'll paint the picture on your display, abruptly, without fanfare or apology. There are controls to zoom, pan, and step through a slideshow. It's easy and immediate, and it's exactly what you need.
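For example, from a virtual console (not a terminal emulator or SSH session; the file names are placeholders):

$ fbi vacation.jpg          # display a single image
$ fbi -a -t 5 photos/*.jpg  # autoscale and run a 5-second slideshow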

You can even play video without X, believe it or not. You need to make sure your username is a member of the "video" and "audio" groups (this is usually a default on even the bare-bones Linux distributions), and then:

$ mplayer -vo fbdev my_movie.mp4
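If you're not sure about your group memberships, check them, and add yourself if necessary (usermod ships with virtually every distribution):

$ groups
$ sudo usermod -aG video,audio $USER

Log out and back in for new group memberships to take effect.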

Understand, this isn't a gimmicky "convert your images to ASCII" scenario—these tools actually display the images and video on your screen without a GUI. Depending on which shell you're using, painting pixels this way can confuse your input. If your shell starts to act funny after using fbdev, use the reset command and everything ought to return to normal.

The "you" in UNIX

UNIX training videos produced in the 1980s made it abundantly clear that the intent of the operating system was, and is, to empower users to take small commands and string them together to accomplish complex tasks. The individual's workflows were meant to be unique and infinitely extensible.

Beyond the mimicry of old computer interfaces and the rejection of modern network chatter is the enduring principle that computerists should be eager to find new tools, useful programs, and exciting ways to piece things together to accomplish tasks and to make life better for everyone. In other words, put the "you" in UNIX.

Seth Kenlon
Seth Kenlon is a UNIX geek, free culture advocate, independent multimedia artist, and D&D nerd. He has worked in the film and computing industry, often at the same time.

12 Comments

Awesome post! I never thought I could watch videos or pictures without using X.
Greetings! I really like your web.

Pretty neat! I won't say doing GUI-like activity without X is exactly a seamless, perfect experience, but it's far better than starting X just to see one email attachment (that turns out to just be a person's fancy signature logo or something).

Glad to have helped.

In reply to Miguel R.

I'll admit I haven't really done much with PDFs without X. In fact, the task I usually find myself doing with a PDF is converting it to text with `pdftotext` and then re-arranging the raw text dump into an epub. I do this because my preferred format is text, so getting something out of PDF format as quickly as possible is often my first step. But that obviously wouldn't work if a PDF had forms or no embedded text, or if that just wasn't the goal.

That said, there may well be a good reader out there; I just haven't looked into it.

In reply to Pbj

I use pretty much all of them, and have for a long time. I still fondly use Lynx and ELinks, and my job allows me to use the CLI as much as possible. It's fast and wonderful.

Atom and RSS are push? Hmm.

I reckon I'm using the term "push" and "pull" in a very non-technical sense (although I realise there are technical implications). I meant to describe the process of finding content online. There are probably better analogies, although IRL this one seems to work well with people who have never heard of RSS and have no idea that you can get content from websites without actually going to the site in a browser.

In other words: your bug is acknowledged but resolved as won't-fix in the interest of non-technical readers :-)

In reply to Tim Locke

The photograph is a bit overdone. The IBM PC had been around since the early '80s, and the model of the "deskbox" was firmly established by the late '80s. What is pictured looks more like the dedicated secretarial text-entry machines sold by, e.g., CPT. Yes, things were clunkier because laser and inkjet printers were yet to come and hard drive technology was still large in volume. Graphics capability did exist; Lotus 1-2-3 was popular and provided "instant graphification," and LaTeX editors had been implemented, although things like word processing software (and file formats) were far from standardization. PCs were at their highest point of productivity in those days, despite the educational shortfalls and technology angst, largely because of single tasking and no proliferated internet.

Good eye! Yes, some artistic license was taken; the photo is in fact from 1981. I assume, however, that the people using computers in this photo were still using computers in 1989. Connection made!

Heck, we can even assume that the people in the photo eventually switched to Slackware Linux in 1993, which one could read about in this new article: http://opensource.com/article/16/12/yearbook-linux-test-driving-distros

In reply to Bruce W. Fowle…

Yes Seth!!!! Compute like it's 1989 indeed! Great article, thanks for the stroll down memory lane...

My computing in 1989 was a Sun 3/60, Motorola 68020; was it 4MB RAM? I think so.... 141MB hard drive. The whole point of the SunView windowing system (no it wasn't X, not then) was to let me have more than one terminal window open at the same time...

I was mostly programming in Sun Pascal though somewhat in C. Plus a lot of throw-away stuff in awk and csh. It all worked pretty well, though the graphics on the monochrome displays were a bit jaggy.

We had a Usenet connection but no Internet, just a local Ethernet LAN. IIRC three 3/60s and three 3/50s (those ran diskless with one of the 3/60s acting as an NFS server).

And wow that stuff was expensive by today's standards.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.