Small Computing and the Security Mindset

The story of modern computing is the story of the big-computing mindset (scale, centralization, elitism, and paternalism) infecting everything it touches as programming becomes more of a profession than a craft. In the process, it creates edifices of practices — useful in big-computing situations — that get unthinkingly applied outside of their appropriate bounds, forcing small-computing projects into the strictures of big-computing design. One major domain where we must begin to think critically about the big- vs small-computing distinction is security.

Small-computing systems ought to be secure. After all, they are our most personal environments! They are our diaries and our artworks and our dream journals! But computer security, as it has become professionalized, has become more and more focused on big-computing environments, and good security practices in those environments are inimical to the basic tenets of small computing.

In a big-computing environment, valuable secrets (like credit card numbers) and desirable powers (like the ability to tweet on behalf of the president) are kept on a set of machines owned by a single entity (the corporation) on behalf of the ostensible owners of that information and power (particular end-users) and protected from illegitimate access (hacking/cracking) by an elite set of professionals (software engineers, ops teams, security consultants) who use their monopoly on legitimate access to certain power (superuser & administrator privileges, commit access) to construct laws (security policies) that prohibit as many not-explicitly-allowed operations as possible. Because the adversaries are many, with infinite time and energy, and because the treasure is valuable, and because laws always have unseen loopholes, these elite professionals construct layers upon layers of rules to limit not only what users (legitimate or illegitimate) can do, but what kind of feedback they can receive.

This mentality has even made its way into language design: Java and C++ have a rudimentary form of access control whereby members can be marked private, and good style in these languages is to mark all member data private and write accessor functions, ostensibly so that proposed modifications can be checked for validity. All of this boilerplate gets written instead of the sensible thing, which is to intercept assignment itself so that it passes implicitly through an integrity check (as Lua allows with metatables and Python with properties or a custom __setattr__). Of course, such checks are rarely implemented, and they cannot distinguish between ‘authorized’ and ‘unauthorized’ calling classes anyhow: C++ has ‘friend classes’ that can modify private data directly, and both languages let inheritance hierarchies gate data access, but there is no granularity finer than kin/friend versus outsider. These access controls are borderline useless for anything besides ad-hoc plugging of holes in the type system and inflating the line count of codebases.
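As a minimal sketch (the class and the particular check are invented here for illustration), this is what an implicit integrity check on assignment looks like in Python; Lua reaches the same effect with the __newindex metamethod on a metatable:

```python
# A sketch of an implicit integrity check in Python: assigning to .age runs
# validation automatically, with no get_/set_ boilerplate at the call site.
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age              # routed through the property setter below

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, value):
        if not 0 <= value < 200:    # the integrity check
            raise ValueError(f"implausible age: {value!r}")
        self._age = value

p = Person("Ada", 36)
p.age = 37                          # looks like plain field assignment
try:
    p.age = -5                      # the check still runs, implicitly
except ValueError as err:
    print(err)                      # -> implausible age: -5
```

Callers write plain assignments and the check runs anyway, which is the point: the validation lives with the data rather than in accessor boilerplate scattered across the codebase.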

Systems that require big-computing-style security exist, and so do problems best suited to those systems: your bank ought not only to have big-computing-style security but to have substantially better security than it currently has. But this model is not sensible in many of the places it is used. For instance, Google Docs (which simulates a word processor, with some limited support for simultaneous editing by multiple users) is locked into this model only because it is client-server; a hypothetical local-first or peer-to-peer equivalent would not need to be so professionalized and stratified. Microsoft Word, being a local application, has no legitimate excuse at all (though the real reason, as with most big-computing systems, is that unnecessary centralization is a very effective way to squeeze money out of users who don’t know any better).

When I use Google Docs, I can modify the JavaScript running in my browser, modify the cookies being sent to the server, and modify URL parameters. If I do something wrong, I will get an entirely unhelpful error message from inside the black box of the remote server. This is because, by failing to fall precisely in line with the Alphabet Corporation’s desired behavior, I have become an adversary, and adversaries cannot be given information that might help them do whatever it is they want to do (since one of the things they might want to do is, for instance, harvest the credit card numbers of everyone who has ever bought an advertisement). Of course, Google engineers writing and maintaining Google Docs face the same situation. Outside of an adversarial situation, investigating a poorly-understood piece of code by poking at it and interpreting the error messages is called debugging, and part of the small-computing ethos is that users should not be prevented from debugging.

The difference between big computing and small computing is, in essence, that in small computing the user is never an adversary, because the running code is owned and controlled by the user. This goes beyond open source / free software, where the developer is not treated as an adversary but is still an elite professional, often working on behalf of a corporation behind a firewall, performing work that may well be detrimental to those who actually have to live with its effects.

What kinds of structures befit a small-computing system in an environment where networking exists, and what security models are appropriate for these structures?

For one thing, a multi-user client-server model makes no sense. In a client-server model, whoever controls the single server functionally controls all clients. There is, therefore, incentive to hoard power by locking vital functionalities away on the shared server, making every client dependent — slowed by latency when online, shit out of luck when offline, and always under threat of sudden unilateral changes in policy or protocol.

Instead, we should look to peer-to-peer systems: direct connections for real-time communication, and offline-first store-and-forward schemes for everything else. Asymmetric cryptography still makes sense here, for key exchange and for signing, as does hash-based content addressing for storage. Secure Scuttlebutt and IPFS are good models for what small-computing-oriented network technologies of the future might look like: fully distributed, yet resistant to the kinds of threats that regularly take down federated systems like ActivityPub and IRC, because all nodes are equal and all nodes replicate for each other (under cryptographically-enforced anti-spoofing measures).
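For the content-addressing piece specifically, a minimal sketch in Python (standard library only; the in-memory dict is a stand-in for a real blob store, and actual systems like IPFS use multihashes and richer encodings) might look like this:

```python
# A sketch of hash-based content addressing: blobs are stored and fetched by
# the hash of their bytes, so any peer can verify that what it received is
# exactly what it asked for, without trusting the machine that served it.
import hashlib

store = {}   # stand-in for a disk- or DHT-backed blob store

def put(content: bytes) -> str:
    """Store a blob and return its content address (hex SHA-256)."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    """Fetch a blob and check that it really hashes to its address."""
    content = store[address]
    if hashlib.sha256(content).hexdigest() != address:
        raise ValueError("blob does not match its address")
    return content

addr = put(b"hello, small computing")
assert get(addr) == b"hello, small computing"
```

Because the address is derived from the content, it does not matter which peer served the blob; a lying or compromised node can withhold data, but it cannot substitute it undetected.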

What does a threat model for small-computing infrastructure look like?

Well, unlike a big-computing system, a small-computing system does not (typically) have large numbers of highly motivated, dedicated attackers. Fuzzy Bear isn’t APTing your grandma’s laptop, because your grandma’s laptop has nothing on it but Christmas MIDIs and questionable nudes. The real threat is people running large-scale automated sweeps for low-hanging fruit. So small-computing threat modeling looks like everyday opsec: use encryption, don’t give strangers direct access to private spaces, limit the spaces they do have access to, distinguish between sensitive and non-sensitive data, and protect the integrity of the system from outsiders. Protect the network-facing portion of your machine while maximizing your own access to it.

In this context, technologies we absolutely do not need are: passwords, SSO, certificate authority hierarchies, name servers and host files, NAT firewalls, code signing, chroot jails, memory layout randomizers, executable symbol stripping, single-application containers, daemons running as ‘nobody’, web APIs for wrapping the web APIs around your web APIs, friend classes, and sudo.

Technologies we might want to look into: distributed hash tables, Chord routing, Merkle trees, functional languages, JIT compilation, fast copy-on-write, network-aware cache eviction policies, split-brain countermeasures, transitive blocking, store-and-forward, message passing, microversioning, journaling, and image-based environments.
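To make one of those items concrete: a toy Merkle tree, the structure that lets two peers compare a single root hash to decide whether their replicated data agrees, fits in a few lines of Python (the choices of SHA-256 and of duplicating the last node on odd-sized levels are illustrative assumptions, not taken from any particular protocol):

```python
# A toy Merkle tree: the root hash commits to every blob beneath it, so two
# peers can compare a single hash to learn whether their replicas agree.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blobs: list[bytes]) -> bytes:
    """Hash each blob, then hash adjacent pairs upward until one root remains."""
    if not blobs:
        return _h(b"")
    level = [_h(b) for b in blobs]
    while len(level) > 1:
        if len(level) % 2:          # odd count: duplicate the last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

same = merkle_root([b"msg 1", b"msg 2", b"msg 3"])
diff = merkle_root([b"msg 1", b"tampered", b"msg 3"])
assert same != diff                 # any change to any blob changes the root
```

Real designs add safeguards such as distinguishing leaf hashes from interior hashes and providing compact inclusion proofs, but the shape of the idea is the same.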