Big and small computing

I’m a utopian, in that I don’t believe that computers are a mistake. I have big criticisms about particular technical decisions, but I don’t think those decisions were inevitable. An alternate computer universe, as projected from trends thirty years ago and earlier, was possible; with care and effort, it still is.

The biggest structural problem I see is a failure to distinguish between two different kinds of computing that have fundamentally different needs.

Big computing is computing at scale. It’s the kind of thing anybody in the software industry is used to, and anyone not in the software industry is accustomed to complaining about. Big computing is client-server. Big computing processes big data. Big computing has millions of users. Big computing hides ‘advanced settings’ behind a checkbox or a button so ‘regular people’ don’t get intimidated. Big computing has maintainers, bug trackers, and devops on call. Big computing is worried about accidentally committing experimental code to prod. Big computing writes tests, cares about strong typing, and writes things in Java because it’s easier for HR to find qualified candidates that way. Big computing is worried about job security. Big computing has a project manager and stock options. Big computing ships.

Small computing never died, but you wouldn’t know it from reading Hacker News. Small computing has an average user count of one. Small computing is peer-to-peer and human-scale. Small computing does exactly what the end user wants, because the end user is the developer. Small computing doesn’t distinguish between programmer and non-programmer. Small computing doesn’t care about marketing. Small computing is open source because there’s no point in using a restrictive license, not because anyone will ever submit a pull request. Small computing is as unique as a GeoCities page. Small computing plays.

If you are being paid, you should be doing big computing. Big computing means scale, and scale means that your decisions have technical, social, and ethical ramifications that you have a responsibility to seriously consider. This means asking for permission. It means facing reality, caring about security, avoiding intellectual laziness with regard to tool choice, and maintaining familiarity with the lore. Major technical problems can often be traced back to the application of small-computing mantras (“move fast and break things”, “yagni”, “it’s better to ask for forgiveness than permission”) to big-computing situations. Big computing should be extremely conservative, and because of its centralized and hierarchical nature, we should be making decisions based on the categorical imperative: make a technical decision only if you think it would easily and unproblematically scale to every machine on the planet forever.

On the other hand, I consider small computing much more important than big computing. Big computing, because it is big money, gets all the attention; however, big computing is one-size-fits-all and therefore doesn’t quite fit anyone. Every programmer began in the context of small computing, and every programmer, in his or her off-time, operates in that context. Systems geared toward small computing (like REPLs, notebook interfaces, Smalltalk VMs, and the UNIX command line) are incredibly powerful. Unfortunately, small-computing systems are not made accessible to non-programmers, even though they absolutely could be.

Almost all user-facing interfaces should be small-computing. Big computing should only exist as a fallback when we, as developers, have failed to make small-computing-oriented systems sufficiently unintimidating. Users should be able to gradually learn to program, without reading manuals, simply by interacting naturally with their computer’s UI and performing the kinds of casual customizations we all do to optimize for our use cases. Within a few months, even a non-technical user’s system should be composed of 75–80% code written by that user.

On the other hand, big computing, because it is professional, should be subject to licensing. Licenses are not a guarantee of competence, but they are a mechanism that filters out those unwilling to make a minimal effort, and they also provide a way for ethical lapses to be effectively punished. (“Why don’t I have a license? Oh, Uber asked me to implement a fake surge pricing mechanism and I said yes. Oh, I lost my license because I complied with an NSA wiretapping request. I lost my license because I exposed a credit card database to an unvalidated input field. I lost my license because I didn’t implement buffer overflow checks. I lost my license for using unsalted SHA1 for password hashes.”) Big computing can ruin people’s lives, so professional developers and their employers should be legally liable for their decisions.
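
To make the last of those hypothetical lapses concrete, here is a minimal sketch in Python (the function names and parameters are illustrative, not a complete password-storage design) contrasting the negligent scheme named above, unsalted SHA1, with a defensible baseline: a per-user random salt plus a deliberately slow key-derivation function.

    import hashlib
    import hmac
    import os

    def negligent_hash(password: str) -> str:
        # Unsalted SHA1: identical passwords yield identical digests,
        # and the digest is cheap enough to brute-force offline.
        return hashlib.sha1(password.encode()).hexdigest()

    def defensible_hash(password: str) -> tuple[bytes, bytes]:
        # A random 16-byte salt per password, plus PBKDF2-HMAC-SHA256
        # with a high iteration count to make offline guessing expensive.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        # Recompute with the stored salt and compare in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

The second version costs perhaps ten extra lines of standard-library code, which is precisely why treating the first as a punishable lapse rather than an honest mistake seems reasonable.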