Programming languages shouldn't and needn't be Turing complete

by Gabriel Pickard

As we software engineers grow up, we are taught early on (practically in kindergarten) to associate Turing (equivalent) machines with computation per se: It's the paradigm that the vast majority of programming languages aspire to, and that many more which don't aspire to it fall into accidentally. If you grew up like me, you probably came to believe that Turing completeness was necessary to build any serious program, and that Turing incomplete languages were either toys, parsing tools or config files. Perhaps you had heard that SQL was Turing incomplete, but didn't think much of it.

Well, I have come to tell you that these matters aren't quite the way they seem: Turing incomplete languages can in fact be powerful enough to support pretty much any programming task that you or I or any web developer, data scientist, or devops engineer would face today. I will also tell you that Turing incompleteness holds real promise to drive a leap forward in the art of software engineering, on par with the likes of garbage collection or structured programming.

The goal is to leverage Turing incompleteness for:

  • safe collaborative sandboxing
  • predictable "serverless" cloud functions
  • distributed computing with provably fewer bugs
  • cryptocurrency "smart contracts" with fewer money-losing surprises
  • tractable automatic code generation from specification
  • safer interaction of untrusted components

While the path to getting there is not completely clear, Turing incompleteness is both promising and easy enough to warrant much more intense investigation and experimentation than we currently have. In service of more widespread adoption we shall go through some very simple dos and don'ts for Turing incomplete languages which hopefully will inspire you to build your own!

This post is an adaptation of a paper and talk I gave at HATRA 2020, a workshop during the SPLASH 2020 conference.

Ceding power to gain control

Computers do exactly what we tell them to do. Unfortunately human thinking is much less pedantic, and we write programs which don't do what we think they should. These discrepancies are called bugs, and I'm here to get rid of them. We may also encounter the opposite problem: We are faced with a complex code base that does something, and we are tasked with changing it without understanding what it really does. Either way, nobody sets out to write segfaulting code -- and yet we inevitably do, so long as the language allows it... Which is why garbage collectors are so popular these days.

Hence the history of programming language design has been a history of removing powers which were considered harmful and replacing them with tamer, more manageable and automated constructs. Here are a few examples:

  • Direct entry of binary code became assembly language.
  • Direct access to CPU registers and instructions gave way to higher-level compiled languages in the ALGOL tradition.
  • The infamous GOTO was banished in favor of structured programming.
  • Manual memory management is still around in certain areas, but garbage collection made memory safety so popular that now we have borrow checkers to write memory safe code even if we can't afford the garbage collector overhead.
  • OOP moved us from arbitrary access to information hiding and modularization.
  • Functional programming aficionados removed side-effects and mutability in favor of pure functions and persistent data structures. Nowadays programming in React is much better with immutability.

The list goes on and on... While not all application domains benefit from all the replacement steps above, it appears as if removing some expressive power can often enable a new kind of programming, even a new kind of application altogether. Unix was possible because of the C compiler. Photoshop could hardly have been built on the basis of GOTO. The JavaScript ecosystem would not have grown without garbage collection.

While we can't know exactly what the benefits will be, Turing incompleteness is a great candidate for expanding applicability by reducing expressive power. Furthermore, there is cause for high hopes: compared to most of the changes listed above, Turing incompleteness is more sweeping and more fundamental.

A great leap forward

Rice's theorem states that for a Turing complete language we cannot build an automatic checker that decides any non-trivial semantic property across all programs. Yet these kinds of non-trivial properties are exactly what we would be looking for if we want to equip languages with smarter developer tools and more in-depth feedback about correctness, runtime complexity, optimization and security.

These kinds of benefits have not gone completely unnoticed. There are some success stories out there, applying Turing incomplete languages. The main one that most people have dealt with is SQL. I posit that SQL would not have been as successful and widespread as it is, if database servers weren't as performant as they are -- and they wouldn't have been as performant if the input language had been less manageable, i.e. Turing complete.

In many ways we don't yet know exactly what the killer app for Turing incompleteness will be, but I do have some ideas for what I would like: Making the kind of stuff people do with fancy type systems more user-friendly. I want verification to become much more transparent and automatic. As anyone who has dealt with automation and transparent abstraction knows: It's easy to come away with a bad taste, and it's all about the edge cases -- the parts that make the abstraction leaky. Turing incompleteness could be a very big step toward cutting out the edge cases, making our abstractions more robust and making verification tools easier to build and more widespread.

Nowadays mission critical applications like Microsoft Windows are automatically checked for bugs at great expense using SMT solvers. This process is laborious, incomplete and full of pitfalls, which is why hardly anyone does it who doesn't have to. I think we can find a way out of this by fundamentally rethinking our computing paradigm.

What Turing incomplete languages can and can't do

I performed a small literature search and found two (2!) problems known to theoretical computer science which we cannot implement in a known Turing incomplete language. They are the following:

  • Simulating a Turing machine (and equivalents)
  • Building a self-interpreter for a language in itself, where the input code is a Gödel number (and there even is a big caveat to this)

If you can think of any others, let me know, but suffice it to say the list isn't long. Notably, the Ackermann function, SAT, A*, searching, sorting and so on can all be solved using a strongly normalizing (i.e. terminating) lambda calculus like Gödel's System T.
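
To make that concrete, here is a minimal Python sketch (my own, not taken from any particular source) of the System T trick for the Ackermann function: each level of the function is built as the bounded iteration of the level below it, so no unconstrained loops or recursion are needed.

def iterate(f, n, x):
    # apply f to x exactly n times -- bounded, structural iteration
    for _ in range(n):
        x = f(x)
    return x

def ackermann(m, n):
    # A(0, n) = n + 1; level m+1 applied to k is level m iterated (k + 1) times, starting from 1
    level = lambda k: k + 1
    for _ in range(m):                     # also bounded: exactly m iterations
        prev = level
        level = lambda k, prev=prev: iterate(prev, k + 1, 1)
    return level(n)

print(ackermann(2, 3))   # -> 9

Every loop above is bounded by one of its inputs, which is exactly the discipline a terminating language enforces.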

While almost any known problem is clearly solvable in a known Turing incomplete language, efficient algorithms are a different matter. Though it's clear that for any specific given problem we can come up with a set of Turing incomplete formalisms to model it efficiently, we cannot assume that there is one Turing incomplete language capable of representing all known algorithms as efficiently as C/C++ would on a von Neumann style computer. Luckily though, these caveats are akin to how it's hard to represent an efficient mutate-in-place algorithm like quicksort in a pure/immutable functional language like Haskell or Clojure -- and people still find a lot of use for those languages.

Algorithms of the type "do X until condition Y holds", often found in numerical computing, which do not iterate over a collection, are hard to model in a Turing incomplete language. Doing so in the current state of the art requires complex type systems and proofs of termination which, from my perspective, take us farther from the goal of user friendly assisted programming.

Luckily, the reality of modern application development is often extremely, extremely light on "capital A algorithms". I.e. for very broad swaths of the software engineering workforce it is entirely irrelevant whether they can efficiently implement in-place sorting or the Newton-Raphson method. In fact, rolling your own fundamental algorithms is generally considered an anti-pattern and bad industry practice. That is what, by and large, we have libraries for. It's not by accident that many fundamental numerical libraries consist of decades-old Fortran code. Why mess with something that works? Furthermore, the widespread adoption of virtual machines as compilation targets has made it much easier to bootstrap new (e.g. Turing incomplete) languages, while hitting the ground running with access to a full suite of practical libraries to draw on.

Turing incomplete languages can trivially handle anything having to do with conditionals, iterating over collections or trees, or performing a transformation repeatedly for a pre-determined number of steps. As such, I believe that a very simple set of constructs can handle anything a common full-stack web developer needs to do for most of their career. In the rare edge case one can always expand the range of language primitives or implement escape hatches.
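
As a small illustration of the "pre-determined number of steps" escape hatch, here is a hypothetical sketch of my own: Newton-Raphson square roots with a fixed iteration budget instead of an open-ended "loop until converged".

def sqrt_newton(x, steps=20):
    # a fixed iteration budget replaces the unconstrained "repeat until converged" loop
    guess = x if x > 1 else 1.0
    for _ in range(steps):
        guess = 0.5 * (guess + x / guess)
    return guess

print(sqrt_newton(2.0))   # ~1.4142135623730951

Twenty steps is overkill for double precision, but the point is that the bound is fixed up front, so the loop provably terminates.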

How to stay total

It's not too hard to take an imperative or functional language and turn it Turing incomplete. Mainly one just needs to remove unconstrained loops and recursion and replace them with more constrained constructs such as list comprehensions, foreach loops, or the map, reduce and filter suite of collection processing functions.

Here's a set of choices that will work for a Turing incomplete imperative programming language:

  • Mutable data structures
  • No mutable references to first class functions
  • No unconstrained for or while loops
  • A modified version of foreach loops, with a cycle checker and a way to iterate through nested data structures / trees (see the sketch below)
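
For instance, the cycle-checked foreach from the last bullet might look roughly like this sketch (my own, hypothetical; it assumes nodes expose a next attribute). In the host language it is still a while loop, but the visited-set guarantees termination on any finite structure, which is exactly what the built-in primitive would enforce:

def foreach_nodes(start, visit):
    # iterate a possibly-cyclic linked structure; stop as soon as a node repeats
    seen = set()
    node = start
    while node is not None and id(node) not in seen:
        seen.add(id(node))
        visit(node)
        node = node.next   # assumption: nodes expose a `next` attribute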

If instead you want to make a Turing incomplete functional language you could choose the following:

  • Immutable names and variables, with acyclic data structures
  • First class functions
  • No recursion
  • map, reduce and filter (with some variants for handling nested data structures and trees)

In both cases one would want a system of event listeners to handle user input without needing infinite loops. Any conditionals will work for either scheme, as well as list comprehensions and lazy data structures. Handling more complex loops than iterating over a list requires some beefed up variant of reduce: One could have a reduce operator that gets called for every nested element of a collection, i.e. not just the immediate members of a list, but also their members, if they themselves are collections. For trees it might be handy to support a reduce function with two components: one transformation for descending and one for ascending in the tree.
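
Here is a rough sketch (my own, hypothetical) of what such a nested reduce and a two-phase tree reduce could look like, assuming collections are plain nested lists; the structural recursion always terminates on finite, acyclic data:

def nested_reduce(f, coll, acc):
    # like reduce, but f also sees the members of nested collections
    for item in coll:
        if isinstance(item, (list, tuple)):
            acc = nested_reduce(f, item, acc)   # structural recursion over finite data
        else:
            acc = f(acc, item)
    return acc

def tree_reduce(descend, ascend, tree, acc):
    # one transformation on the way down, another on the way back up
    acc = descend(acc, tree)
    if isinstance(tree, (list, tuple)):
        for child in tree:
            acc = tree_reduce(descend, ascend, child, acc)
    return ascend(acc, tree)

print(nested_reduce(lambda a, x: a + x, [1, [2, [3]], 4], 0))   # -> 10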

Guido van Rossum is famously not a fan of reduce:

[reduce] is actually the one I've always hated most, because, apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do

He has a point, but consider the following code:

from functools import reduce   # in Python 3, reduce lives in functools

reduce(lambda x, y: foo(x, y), coll, val)

This is equivalent to / syntactic sugar for:

x = val
for y in coll:
    x = foo(x, y)

As such it of course makes sense not to use the explicit reduce function throughout most Python code and instead use the pretty syntax. Any Turing incomplete language aiming for user friendliness would aim for similarly readable constructs, either via macros or built-ins. Here's an example of what a pythonic nested reduce could look like (hypothetical syntax):

def flatten(nested_list):
    for item nested in nested_list with result = []:
        result.append(item)

    return result
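
For comparison, here is a minimal sketch in today's Python (my own) of what that hypothetical construct might desugar to; the worklist traversal below is the kind of thing the language would provide as a built-in, termination-checked primitive:

def flatten(nested_list):
    result = []
    worklist = list(nested_list)
    while worklist:                        # terminates on finite, acyclic input
        item = worklist.pop(0)
        if isinstance(item, list):
            worklist = item + worklist     # descend into nested lists first
        else:
            result.append(item)
    return result

print(flatten([1, [2, [3, 4]], 5]))   # -> [1, 2, 3, 4, 5]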

Leveraging Turing incompleteness

Above I have outlined some very simple constructs without fancy type/termination checking, which already can handle the vast majority of retail programmer use cases. This is good news, because once we start down the happy road of tractable language semantics and gain more experience, it stands to reason that even more intricate, efficiency sensitive "capital A algorithms" will be supported.

I have shown that it's not too hard to build a Turing incomplete language. What I haven't yet outlined is how to leverage Turing incompleteness. Partially, this is because we don't yet know exactly how. This would be a great realm for many different approaches to compete.

We can surmise that Turing incompleteness will help us write more correct software, but doing so will likely necessitate better ways of building specifications against which correctness can be checked. In my own opinion, expanding on type systems in the ML and Haskell tradition will not be the way to go, or at least not the only way to go.

I'm interested in metalogics that talk about program behavior, which do not follow the intuitionistic type theoretical approach. I.e. I'm interested in specifications with negations. The logic programming literature is full of ideas about stratification and the like, aimed at making negation a tractable part of a solver, checker or inference system.

Any metalogic or type system can suffer from the aesthetic problem that it increases the surface area of the language and adds more and different syntax to the operational core. In light of this, I have been interested in expressing program metalogic as code patterns and underdetermined traces. But this is a topic of active inquiry, with more to come...

Back to Gödel: historical remarks

The above concludes the core of my argument. If you are now wondering "Why, if Turing incompleteness is so promising and accessible, haven't I heard more about it?" let me give you my selective reading of the history of computer science as an attempt at explanation:

Well before computers became a thing, the logicians who invented computer science were more concerned with well-foundedness in mathematics than with writing bug-free software. Initially the most widespread definition of "effective procedure" was actually a terminating language: Gödel's seminal incompleteness theorem used a strictly terminating notion of effective procedure, namely what we now call the primitive recursive functions.

This definition was far from settled though: A convincing negative answer to the Entscheidungsproblem (whether there is an effective procedure to prove or disprove any mathematical statement) required a satisfying definition of "effective procedure" beyond all doubt, one for which the scientific community could agree that there was no "larger" sensible definition of "algorithm" in which the Entscheidungsproblem could happen to be true after all. Following considerable scientific debate, the general recursive functions as well as Turing machines and the lambda-calculus were accepted as equivalent, encompassing conceptions of effective procedures.

The choice of Turing machines as the definition was not based on user-friendliness or manageability; rather, it was chosen to be maximal, so as to eliminate all doubt with regard to an uncomfortable negative result. In particular, it made no claims about whether the difference between Turing machines and the terminating languages (such as the primitive recursive functions or Gödel's System T) contained any valuable or sensible algorithms.

Turing machines perhaps might have been relegated to a position of less prominence, relative to other (including total) formalisms in the lambda-calculus family, if early computers had not come about so swiftly and the von Neumann architecture hadn't so closely resembled a Turing machine. The comparison is apt: Computer memory functions as an (unfortunately finite) read-write tape, while registers roughly correspond to machine state and the CPU to the transition function.

Programming in binary and assembly obviously mirrored the von Neumann architecture in Turing completeness. Languages with abstract control structures soon followed. These programming languages roughly fell into two camps: those with loops (Fortran, COBOL, BASIC, C) and those using recursion for code repetition (Lisp, ML). Unfortunately, both approaches, in their chosen form, also entailed Turing completeness.

Why did the earliest programming languages all default to being Turing complete?

  • Turing completeness (accidental or not) is so easy to achieve that without focus on the matter it becomes inevitable in programming language design.
  • The Turing machine had a head start as the predominant paradigm for computation. Lambda calculi were hard to implement (for lack of complex data structures) and underdeveloped: The simply typed lambda calculus was too simple for real computation and Gödel's System T was not published until 1958 -- the same year that Lisp was invented.
  • For lack of meta-reasoning capabilities, complex type systems, abstract data structures and processing power, the early programming languages were unable to take advantage of the benefits of Turing incompleteness.
  • Infinite loops are an attractive concept for managing system-level tasks.

By the time calculi like System F were developed, the paradigm of arbitrary recursion had become so entrenched that almost by default Haskell became Turing complete, even though its type system is ostensibly based on F. Turing completeness has become synonymous with computing, whereas virtually no software in use requires the full arsenal of Turing completeness.

We largely owe this state of affairs to the popularity of arbitrary loops and unrestrained recursion. Nevertheless there is hope of escape: Turing incomplete constructs such as list comprehension, foreach-style iteration (together with lazy data structures and generators) and functions in the family of map, reduce and filter are gaining popularity and actually considered the more idiomatic means of writing repetitions in many popular programming languages such as Python and JavaScript.

We find ourselves in a situation where those who know about Turing incomplete languages are mostly academics who think that "computing" is about "capital A algorithms" (which are harder to represent efficiently), while the hordes of software engineers in industry, who know that most programs are really rather simple, think that Turing completeness is computing per se. I see an opportunity here.

So, what are you waiting for? Let's start terminating!

  

I'm not a tinkerer: The painful, hardly instructive story of how I became a decent engineer

by Gabriel Pickard

The RPG from Hell

Role-playing games seem to have an obsession with underground lairs full of monsters. (Dungeons & Dragons even has it in the name.) As a kid, I remember watching my brother playing Diablo II on his PC. Everything was droopy, gloomy, scary and dark. -- And that was just at the surface level! Then he descended into one of the dungeons, where the real action was! It was oppressively horrifying to my childish soul. There I sat with my eyes glazed over, my clammy little hands clenching the sides of my deeply unfashionable corduroy thrift store pants, transfixed yet wondering: Why would anyone want to go down there into those dungeons? There's nothing there! Only monsters! Why not just stay at the surface? Maybe hide, maybe get the Dark Lord's phone number and broker a deal from the safety of the village? How would that sound?

This is how I generally have felt when approaching a new piece of technology. Transfixed, but wondering how many dead-ends, unsolvable problems and wasted hours await me. I'd much rather spend time at the surface, learning about the theory, capability and higher-minded concepts. I do not, as a rule, favor getting my hands dirty.

In other words: I am not a tinkerer.

For this I have long considered myself at a serious disadvantage compared to many of my software engineering colleagues. A disadvantage only to be made up by technological brilliance. A brilliance so epic, that... I surely must falter at implementing it. This is the tragedy of my professional life, of which I have come to sing:

I don't do computers

I grew up a bookish kid, by all accounts destined for the humanities. My first and only failing grade in middle school was a remedial computer class. I didn't play with Lego Technic. It had never occurred to me that anyone would care to take the washing machine apart, just to find out how it works. When my best friend did just that and got into hefty trouble, I was astonished and admired him greatly for it, but was under no illusion that such a path was for me.

But there was something that drew me towards computing machines, despite all the odds: Ever since I picked a losing fight with the boss kid in kindergarten and found myself no longer welcome to the sandbox, I have been interested in power; How it works, who has it and how it may be gained. When I say "interested", of course I mean academically interested. Let's just say that a lot of my social experiences in childhood were of the, ahem, "outside-looking-in"-variety, if you catch my drift. Socially shunned and ill-adept, what's a greasy-haired kid to do? -- My answer, like many a kid, was to make my thoughts my own best friends. The world with all its physical objects was too scary and frankly just too heavy for my scrawny arms. But not in my fantasy, there I happened to be in the deadlifting business.

Enter computers: Again, computer games remained the domain of more adept individuals than me. Reflexes, practical mastery? Not my cup'a tea. Most games were also just too damn scary for my gentle soul. No, I had to stick to something less fast paced if I was going to keep up. It was the concept of programming that intrigued me: Here is a device that could bring thoughts to life! Concepts from my mind finding embodiment -- what a heady idea! At last I would find myself in power in a domain where I actually felt at home. It didn't hurt that society was talking about tech as the way of the future. -- Furthermore, who knows -- I might even manage to talk to girls? (A popular non-sequitur if there ever was one!)

So, as an angsty teen around the turn of the century -- with my remedial computing knowledge -- I set out to build an operating system for AI. I invented an xml-based "declarative" "programming"-"language" which I compiled to x86 assembly using a healthy pile of regular expressions in a sauce of Perl. Don't go looking for my usenet posts from back then. Please. Not much came of my youthful software ambitions, though I did learn a thing or two about Perl and Java along the way.

More impactful, surprising, and in a completely unrelated chain of events, I started talking to girls, which largely involved a few brave (or severely congested) souls willing to smell past the pronounced body odor problem I was nursing at the time.

Math for Rubes

I enrolled in Cognitive Science as an undergrad, hoping to do something between Philosophy and Computer Science, without perishing of the stuffiness of either. Little did I know that I was going to discover a pearl that had long been hidden from me: The joy of Mathematics and Logic.

As a child, I had viewed Math with the same disdain, fear and shame as I would a Rubik's cube: An inscrutable jumble, promising nothing but joyless toil and failure. Rubik's cubes were not a part of my world.

But now I learned that I didn't really need to be a tinkerer to do Math. I just needed to be a committed thinker! It was a realm of thought, with logic as a worthy opponent. What a joy it is to say: What I want to do is impossible, logic does not support it. However, what I want to do is reasonable, so what can I do instead? Perhaps I can bend my requirements, change my perspective, and express that which I want to formalize in such a way that logic will accept it nevertheless, if begrudgingly.

My advice for improving STEM participation: teach Math as the formally rigorous branch of the humanities that it is!

Machine Learning doesn't work yet

Next to Logic, in my studies I soaked up statistical NLP, Good Old Fashioned AI as well as Neural Nets. This was back in the day when Support Vector Machines were de rigueur. But there were some whispers around the edges about so-called "Deep Belief Networks".

The year was 2007 and I was going to build generative language models using auto-encoders. Only problem: Deep back-propagation didn't really work yet -- and you had to build it yourself, so if it didn't yield results you were never quite sure whether it was a bug or shortcomings of your algorithm that was thwarting you. I, for one, had a lot of bugs.

I failed at generative natural language models, proceeded to (re-)invent adversarial networks, but failed to use the new training algorithms that were pouring out of Geoffrey Hinton's lab and instead (in a stroke of self-defeating genius) decided to roll my own Evolutionary Algorithm. None of my supervisors thought it worth publishing by the time I managed to limp across the graduation line. Frustrated, I concluded that there wasn't much future for me in academia, and picked up web development.

Engineering is an anti-pattern

Remember, I was a Cognitive Science major. While I could nail together scripts, I didn't really know how to write software! But, the myth of Silicon Valley was wafting through the air and it drew me in, much like our dog will pull at her leash if she smells a pee-stain across the street.

I was going to build an app and it would be successful. -- What if I built a unified messenger app? That's something I don't know much about!

Unfortunately around the same time, health issues were taking over. I had been in chronic pain for a while and with limited attention span I had skated through college and grad school mostly soaking up information osmotically from lectures and classmates. Sitting down and reading a technical book was quite the undertaking. Sitting down and writing -- nay debugging half-understood code was the ultimate nasty, dark and gloomy dungeon to descend into. And yet I tried.

Debugging is a curious beast. As coders, we will run into situations where we don't even know how to start, at least once in a while -- problems and bugs where we have no clue. As neophytes we run into them all the time. What it takes to deal with those situations is incredible steadfastness, a willingness to try many things and a breadth of experience.

People with a predisposition to start small, prod, experiment and generally muck around are at a distinct advantage. They get into fewer inscrutable dead ends, gain a breadth of experience much more quickly and generally build stuff that actually works instead of stuff that merely sounds like a good idea. Through my bleary eyes, staring at the editor, it dawned on me: If I wanted to make it in software, I had to learn to imitate those kinds of people: tinkerers.

The consummate tech CEO

One must imagine Sisyphus as an ancient Greek Elon Musk. Really -- have a look at the Wikipedia page: The similarity is striking. Sisyphus was a busy-body. Before his punishment, he was running around Hellas, out-innovating mortality, squeezing the Olympian shorts placed on him by Zeus, generally being hyper-active and full of himself. The craftiest of men. Vapid yet gargantuan projects such as Mars colonies, saving our wasteful car-driving lifestyle in the face of reality or rolling rocks up a hill are truly a just form of punishment for that kind of guy.

I was not that kind of guy. I was a tech wannabe. I would work on ambitious projects: Unified messaging, or a document database with indexing based on anti-unification, or a reddit clone where subreddits were replaced by embeddings into a non-euclidean space. That kind of stuff. Unlike those Muskian achievers I didn't make it big. I actually didn't really make anything that people actually got to see and use. Much like Sisyphus' punishment the stone of my attention would always slip -- before I had a chance to ship.

There's no way to sugar-coat it: I fundamentally confused writing software with yak shaving. I would push myself deeper and deeper into theory and prescient optimizations, until the project just kind of became inert... sat there a while... and then I slowly, slowly, imperceptibly gave up on it.

How amazing is it that for his punishment by the Gods, Sisyphus basically gets turned into a me! Before, he's achieving, he's meeting deadlines, he's outrunning the greatest deadline of all: Death itself. Afterwards, he's striving, striving, striving... But without results.

It is the moment of failure in this myth that truly intrigues me. That instant where the rock slips out of his hand and rolls back down. Surely, he must get it, right? But no, he doesn't get it. The slip is but an afterthought, an unsolved riddle to be glossed over. The Sisyphus solution is: try harder.

I, for one, have tried trying harder. To build a successful product. And I've failed at it.

But en passant, while rolling my own rock up the hill of technological hubris, I actually did learn enough jargon to pass tech interviews.

FizzBoss

You know how interviewers used to use fizzbuzz to weed out people who couldn't write code? Lucky for me I did know how to solve fizzbuzz. I actually knew more than that: I was able to talk intelligently enough about advanced technical topics. All those fancy technologies I had been fantasizing about. Problem was: I couldn't for the life of me get a simple software project across the finish line.

Indeed, I was a 0.1x programmer trapped in the body of a conversant technologist. Suffice it to say: I had trouble holding down a job. But I could pass interviews, so I could get a new one. Bit by bit things changed; I learned to stick it out. I was blessed to meet some of the kindest people in tech who endured me and my meandering incompetence. If there's any moral in this sordid tale: Talking can help and hiding can hurt.

We do not actually live in a world of mythical punishment. Simply by showing up and failing, after a while, I somehow acquired enough practical knowledge to actually do my job. In a mysterious way, many important things happen while we're not paying attention. Sometimes, all it takes is the follow-through to keep doing it. Which is really the benefit that tinkerers have: They'll just keep on playing around with something and gain experience while doing so.

My difficulty is that I will be scared and I will freeze and shrink away from taking action and gaining hands-on experience with the code beast. So now, if I'm trying to learn something expansive and complex, I often choose to first work on stuff that's related, yet non-threatening. It's a true art-form to find preparatory projects that don't feel like homework. I want to be concerned with the deep and conceptual, not the fiddly work! But finding just the right on-ramp for my proclivities and capabilities can be an analytical task! And if there's one thing I can do, it's analytical tasks.

My mind will always tend towards analyzing and conceptualizing over acting and manipulating. So I have developed tricks to transform activity-heavy work into a sequence of analysis interspersed with non-threatening manipulations.

A warm and fuzzy codebase

It all came back to that fear of dungeons. Familiarity dissolves the fear. When I was watching my brother playing Diablo, he wasn't overwhelmed by fear; He may have had some spooky excitement, but mainly he was enjoying a feeling of mastery and accomplishment, on the basis of a certain sense of safety -- of knowing how the game works.

Nowadays, I sometimes can even enjoy tinkering around a little. I still find it hard to focus if I can't relate my task at hand to some larger concept. One key factor in this professional growth has been to respect my own brain and its peculiar workings. If I sit myself down with nothing but a vague notion of what it is I'm about to do -- and expect to achieve results, then I'm setting myself up for failure.

I have had to learn to:

  1. Not do Engineering -- resist, as best I can, the compulsion to come up with "the right solution" and dwell on the conceptual.
  2. Manage my fear of finding out how the machine actually works -- familiarity breeds contempt, and contempt is what you want when setting out to slay dragons.
  3. Work with the kind of person that I am: I cannot simply sit down at a keyboard and write code freeform. I need a rough and realistic plan for what I'm going to do algorithmically.
  4. Know what to punch into Google when there's a problem.

I have come to taste the joy of doing manageable tasks in a code-base that I know like the back of my hand. Sometimes my requirements still turn out too steep, concepts too abstract, implementations too tricky. It remains a struggle not to drop the rock.

But that's how I roll.

  

How video conferencing fails the cocktail party effect

by Gabriel Pickard

I've been attending a lot of video conferences lately. Some of them meet the definition of a meeting: 3-20 people in a room, with one person speaking while everyone else listens. This is a well-understood UX paradigm, decently served by the likes of Zoom, Google Hangouts and Jitsi, which originated in the business world as a natural corollary to conference calls over the telephone.

Zoom parties are pathetic

A lot of my recent video calls do not at all fit into the video-conferencing paradigm: They are virtual social gatherings, intended to fulfill the need for community, human connection and fun in times of social distancing. Most of them do not meet the basic criterion of a meeting: There's no agenda. Usually there's more than one topic and more than one person wanting to speak at any given time. These get-togethers are shoe-horned into a User Experience that has very little to do with their intended reality: a loose, multi-stranded interaction. In normal social gatherings, even if it's just four or five people, there is a natural ebb-and-flow from joint conversation to breaking off to talk about something in private, or even overhearing a word and joining back in (the famed cocktail party effect).

The cocktail party effect is related to a basic human need: Ambient social awareness, which makes social gatherings interesting, fulfilling and comforting. We want to know who is talking to whom. But, we don't want to be forced to listen to every word they say. In video calls it's so easy to get impatient, waiting for a turn to get a word in edgewise. While I love my friends, they can be tedious at times. This is exacerbated by the fact that most video conferencing apps practice active noise-suppression: They are specifically targeted at letting only one source speak at any given time.

There have been some great attempts at enabling multiple parallel conversations, using breakout rooms: sub-sessions of a conference call which are easily reachable from the main context. However, the hard break between main and breakout room still remains.

On the internet, software shapes social interaction. This is nothing new, in the real world we shape our social interactions all the time. There are clear differences between the kinds of connections you can have in a noisy dive bar, an office break room, at home in your living room, on a factory floor, or while out together on a forest hike. As developers of social apps we should be able to formulate clearly what kind of social interaction we are looking to shape. Is it productive? Confrontational? In-depth or fleeting? Warm or impersonal?

Zoom calls shape a kind of social connection that may be effective for jointly pursuing a work-related agenda. But it falls flat when trying to, you know, actually connect. We also should not kid ourselves into believing that social needs are purely a matter of private life. If we are to make remote work viable at scale, we have to take them into account.

Criteria for a good social gathering app

In short, we want:

  • Multiple adjacent conversations,
  • aware of each other's participants and content,
  • without interfering with each other.

Additionally we may also appreciate:

  • a shared context or world to inhabit, things to look at or manipulate together.
  • the ability to pass by, casually join or leave a conversation: in-between-states.

The most natural realm in which these sorts of casual/world-driven dynamics have played out online is multiplayer games. For many individuals and some social groups, games have been a great way to spend time together: It's not by accident that games such as Animal Crossing have been a hit in the recent COVID-months. But even the best of games will be attractive to some and boring to others; even amongst people who "like games" there will be a variety of opinions about which game to choose. So if you want to have a social gathering, focused on the social interaction rather than on the world it's embedded in, you may want to turn elsewhere.

Spatial social software

One compelling idea has been to embed people as avatars in a simple virtual 2D or 3D space, using text, audio and/or video communication. (See: Spatial Software, Avatar chat apps, Spontaneous social apps) The watch-word here is simple: most recent approaches have eschewed going full-bore Second Life -- instead they offer more abstract UX primitives, such as an abstract 2D space, in an attempt to sufficiently model some selection of the criteria for a good social gathering app I put forth above.

"Conversation adjacency" in these apps is organized in three rough categories:

  • a list or tree of individual rooms
  • two-dimensional spatial embedding
  • three-dimensional embedding

Veeparty

Many of these realtime-social apps look really fun and interesting, but most have either a spatial context (with artificial avatars) or video chat. Ultimately I actually want to see and hear my friends in an explorable setting. Under normal circumstances my gripes with video conferencing would have lain fallow as I waited for someone else to fix it, but as fate willed it I had some time on my hands and was stuck at home. -- So I put together a webrtc app to play around with the above requirements list: https://veeparty.horse

Veeparty is a video chat app embedded in a zoomable 2D space.

  • Users are represented by a circular video avatar that they can move around freely.
  • Every user has a defined "earshot radius" in which they can hear others.
  • Other participants beyond that radius cannot be heard, but you can see a real-time transcription of what they are saying at a glance (hence approximating the cocktail party effect; see the sketch after this list).
  • Users can draw and write on the background and embed images, making the space their own and having fun.
  • You can create a new space for you and your friends or join an existing public space.
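
The audibility rule is simple enough to sketch (hypothetical code of my own, not the actual Veeparty implementation): within the earshot radius you get audio, beyond it only the live transcript.

import math

def audibility(listener_pos, speaker_pos, earshot_radius):
    # positions are (x, y) coordinates in the 2D space
    distance = math.hypot(speaker_pos[0] - listener_pos[0],
                          speaker_pos[1] - listener_pos[1])
    return "audio" if distance <= earshot_radius else "transcript only"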

Online Town is a closely related app. It has a simple interface, embedded in a 2D space, people are able to move around and there is a notion of distance and radius-of-hearing. I do have a number of differences of opinion when it comes to User Experience, but I invite you to form your own opinion!

Zoom meets Minecraft

In playing around with veeparty, I've had the growing sense that there's maybe an interesting new category of app here: Something that's decidedly not a game, but nevertheless integrates many playful / world building aspects. An app that you might even play games inside of but that fundamentally is just a simple world for people to meet in and interact. In a way this hearkens back to the early era of facebook apps (and perhaps there are a number of lessons to be learned from that cautionary tale).

For now the plan is to keep adding simple primitives for fun interactions between friends or strangers. My app is still very rough around the edges and only supported on Desktop Firefox and Chrome, so check it out with that in mind.

  

How bad will it be? Economic and political effects of COVID19

by Gabriel Pickard

So you're entering lockdown mode and now you have two questions: How long will this last? And how bad will this be for the economy? Here I provide my answer to the second question: the economic and political effects of the pandemic. In the first part of this series I give an estimate of how long it will take and what it will be like for us to live through this grand experiment.

After months of underestimating COVID19, the pendulum is finally swinging the other way. People are worried! But we need to remain calm. Mitigating this disease will indeed be very costly, but we are not looking at a bottomless pit, both in terms of time and money. This will not be the end of civilization.

plague doctors

Politics

Donald Trump's hold on power will likely strengthen with COVID19. If it hadn't been for the administration's recent about-face (thanks Tucker), Trump would be doomed. But now, the federal government is responding to the recession in an unprecedented manner, including direct payments to consumers, with the Republican party overtaking mainstream Democrats on the left (economically). I am surprised at the speed with which political orthodoxy is changing and yet again Donald Trump stands a chance to escape the clutches of his own idiocy which was on display with his initial "it's just a flu" denialism and criminal suppression of testing.

As it stands, a COVID-recession alone will not keep Trump from holding onto the White House in November. Even if the economy were to slide even deeper, the presumptive Democratic nominee Joe Biden is poorly suited to beat him. The Democrats have been becoming the party of affluent suburbanites and may irrevocably be forsaking the working class.

If this trend continues and Trump keeps overtaking Democrats on the left, the rising left-populist movement currently associated with the Sanders campaign will be looking for a new home. There will be attempts at starting / joining a third party. We may also see socially left-wing, even socialist candidates popping up in Republican primaries!

China will rise. The People's Republic will benefit from being less affected than everyone else. China already is the largest industrial power in the world. As economies around the globe grind to a halt, China will play a similar role as the US did in the world wars: A well-industrialized and less-affected hinterland. Jack Ma earned China seriously good PR with his donations, and China looks very competent in comparison to the west.

Those Chinese exporters that have survived, though badly hurt, will now face captive markets: locked-down economies abroad will first need medical supplies, but later demand for all sorts of goods should take off, because factories in those countries will have been shut down.

The Trump administration is trying to gin up anger between reactionaries and woke people by branding COVID19 as the "Chinese Virus". This is classic divide-and-conquer strategy: It distracts the Trump base from his failure to acknowledge the threat in time, which will needlessly kill hundreds of thousands of his very own constituents, while costing the economy dearly (compared to milder measures which would have been possible).

This squabble over naming also coopts the legitimate movement to repatriate manufacturing to the US, in a way which alienates the Chinese. Framing the rise of China as a world power in antagonistic terms is a dangerous game, which will likely end very badly for the United States. Instead we should follow the model pioneered by the British Empire: In its decline, the Empire closely aligned with one rising power (the US) against another rising power (Germany).

At this point the United States has managed to antagonize and unite most rising powers (Russia, China, Iran) against it. If Tulsi Gabbard ever became president (there's some chance she might end up being Biden's VP), we might see an effective alignment with India. This might make sense if Prime Minister Modi's party weren't so fascist.

The European Union is ripping apart. Countries at the periphery of the Eurozone, such as Spain and Greece, have been savaged by austerity and never recovered from the 2008 financial crisis. They will fall off a cliff unless the center, led by Germany, takes extremely radical action; even then, the odds don't look good.

Economy

Like a shark, capitalism needs to keep moving in order to survive.

This received wisdom will be put to the test in the next few months. The danger scenario is the following: In a lockdown consumers stop patronizing businesses, which in turn go bankrupt, default on their loans and lay off staff. As people come out of lockdown they find themselves out of a job. Banks are unable to provide loans to start new businesses because their balance sheets are under water due to all the bankruptcies. The situation looks dire.

In reality, things are slightly less bleak: The government is able to step in and freeze loan and rent payments, it can dump money on banks, businesses and consumers directly, in order to keep all the economic players in place, without severing employment and customer ties. This would be the equivalent of holding a shark in place while pushing water through its gills with a hose. Then, as soon as the lockdown is over, everyone in the economy can go back to business as usual, with their job, their business, their suppliers and customers still in place. I.e. the shark can just start swimming again... Or can it?

Perhaps the biggest, least-discussed problem in the US economy is at-will employment. In most states companies can fire their employees at any time, without reason. This is fantastically harmful to the economic scenario we are facing right now. People are getting laid off in droves. So, when the economy reawakens, there will be no true resuscitation because of demand destruction: Unemployed people can't afford to buy stuff, pay rent, mortgages etc. so the businesses that served them go bankrupt, banks underwater, more people get laid off, everyone suffers -- the economy enters a downhill spiral.

This downhill spiral usually continues until business, consumer and finance expectations shift from "more decline" to "growth" and they start buying, investing, lending and hiring again. Often this change in expectation comes due to a catalyzing event, like a government stimulus package. The economics of a pandemic actually have such an event built-in: once lockdowns are lifted, we can expect at least some pent-up demand to hit the markets: things that people had wanted to buy and put off until after, restaurant meals, visits to relatives (plus all the travel they had to cancel but still have flight credits for), nights out and going to the movies etc. Of course all of these activities would still be constrained, however we should expect significant growth compared to the in-lockdown economy.

Post-lockdown pent-up consumer demand could be quite potent if paired with payments directly from the government to consumers. Furthermore, during the commercial shutdown, some stocks of manufactured goods will have been depleted, so as businesses come back online they may have to hire additional people in manufacturing and logistics to meet demand.

We can expect a modicum of economic recovery in the middle of this year, following the institution of a reopen-and-test regime. But it may end up being weak and short: There are a number of factors beyond infectious disease that would indicate a continuing recession. International trade will still be disrupted and recovery disjointed, oil and natural gas markets will have cratered, popping the fracking bubble going on in the United States. There are bound to be bad loans made within the "easy money regime" of the last decade, which now will sit on balance sheets and could inhibit investment. (What remains to be seen is whether we can get out of this with more easy money.)

One thing seems likely: Governments will spend. It's what saved them last time (2008) and they will try it again. To the extent that this spending will finally go into the pockets of ordinary people (not just the banks), expect a rise of inflation. Current generations of bureaucrats do not have experience with an inflationary economy, mistakes will likely be made.

We may see some countries transition to a kind of (temporary) command economy, as governments step in, perhaps (partially) nationalizing key industries. This is a good move, since it allows the government to keep people on the payroll. After the clear failure of austerity following 2008, the pretense of free-market liberalism is finally crumbling.

Historically speaking, the economic effects of pandemics have been surprisingly mild (cf. Spanish flu, Black death). Shrinking the workforce leads to better conditions for workers and more resources available to survivors. However, we cannot simply apply this to our situation:

  • We are unwilling to just let people die.
  • With modern medical technology we can combat the disease. This is costly.
  • We have a strong enough state and willingness to institute curfew measures. It stops the virus, but costs the economy.
  • Our economy is based on markets instead of farming -- we are more complex and more fragile. Lockdowns and quarantine (which are not new) hurt us more than in the past.
  • Our current market society is global and excessively connected. We spread diseases much, much faster than we used to. Instead of rolling local crises, we have crises almost everywhere concurrently.

Nevertheless, there is hope. Our current recession is driven by an external factor which is manageable. Controlling the virus will not be easy, however the mitigation program itself can provide jobs and some economic stimulus.

Ultimately the disease will fade into the background. In the mean time we have a ramshackle plan to prop up our economy with massive infusions of cash. Luckily the last financial crisis put Keynesian Economics back onto the map. (Otherwise we would not survive this.) We have a decent chance of getting out of this pandemic with just a "normal" recession. Not a depression death spiral. But getting there will take extraordinary political will to completely restructure the economy if necessary.

If however the European Union (or some other world power) collapses economically, the US will likely get pulled back down, possibly into depression as well, in the later half of 2020 or next year.

Even when the economy as a whole comes back eventually, you or people you care for may be very badly hurt in the mean time, because the shutdown is so harsh and mitigation measures so crude. The only way out is to self-organize in solidarity. Many people are eager, eager to help, if we ask for help. I will go out and help as many people around me as I can. I happen to have a little prepper supply of pandemic essentials. Particularly the old and the homeless and the lonely will need it. Even just talking to someone at a safe distance can be a good thing. I hope you'll join in helping if you can!

Environment

COVID19 is perhaps the best thing to happen to our climate within the last 100 years.

  

COVID19 lockdown: How long will it take and how will it change us?

by Gabriel Pickard

So you're entering lockdown mode and now you have two questions: How long will this last? And how bad will this be for the economy? Here I will give my answer to the first question: how long it will take. The second part of this series deals with the political and economic ramifications of the pandemic.

After months of underestimating COVID19, the pendulum is finally swinging the other way. People are very worried! But we need to remain calm. Mitigating this disease will indeed be very costly, but we are not looking at a bottomless pit, both in terms of time and money. This will not be the end of civilization.

plague doctors

How long will it last?

The good news is: With a focused and capable public health response, lockdown does not have to last forever. It can be a matter of just a few months. The bad news is that we have to really get our act together: We need a competent administration.

(In the following I will use the term "lockdown" imprecisely, for any significant measures to make people stay at home. Eskimos may or may not have many words for snow, but we haven't yet developed a popular vocabulary for pandemic mitigation efforts.)

A lockdown for COVID19 needs to be strict and take at the very least 3 weeks. That is about the time required for the disease to run its course. This prevents people with mild or no symptoms from moving in society and infecting others. Once the number of new cases slows and starts dropping the main determining question becomes: How long does it take to set up an effective task-force to run post-lockdown control measures?

We should not go ahead with reopening until we have supplies and staff for ubiquitous testing and contact tracing in place (as well as replenished hospitals). I presume we would simply copy the Chinese mitigation model -- though the US does have a penchant for "roll your own".

The approach taken in China (and several other countries) involves tracking people's movements, taking their temperature at the entrance to every gathering place and setting up quick-serve fever clinics for SARS-CoV-2 testing. If someone tests positive, staff track down and test all their recent contacts.

In theory such a mitigation strategy would work, though currently there isn't much indication of a focused effort to prepare for post-lockdown in the US. Our health system is still mainly struggling to keep us from entering a world of death panels and mass graves.

Even when we get past that stage, mitigation is no joke. There can be second wave outbreaks, which is why a half-assed response won't cut it. Americans will have to adapt to constant controls. Will we need to track people's movements? I hate the privacy implications, but I also hate infectious diseases killing our grandparents.

Americans need to finally start wearing masks. This kind of disinformation to discourage mask usage is disingenuous and harmful. Currently the US has a shortage of masks, which indeed are very useful and are needed by medical professionals. Instead of admitting to poor planning and rationing masks, we are being misled. While it is true that masks do not perfectly protect you against getting infected, if everyone wears a mask, those who are infected are much less likely to pass it on to others.

The Chinese administration loosened their lockdown after about two months -- I expect the US to take about as long, possibly a bit longer. Our administration has the advantage of more information about the illness, but likely a less efficient apparatus. -- It may well happen that we open things up and then have to close them down again in some places, due to insufficient control measures.

Health impacts

A lot of people will get sick and somewhere between 1% and 10% of old and infirm people will die. Public health as a topic will be front and center in people's minds, favoring long-term advancements, such as better health-care funding and insurance. In the US in particular, there is a chance that we will get universal healthcare, but sadly enough it's not a done deal, by a long shot, even now. -- It's absurd.

Of all professions, expect healthcare to be hit hardest. The size of the infecting dose can strongly affect outcomes and healthcare workers are dealing with the sickest patients, while beset by overwork, exhaustion and shortages of protective equipment. In the near-to-midterm there will be worse health outcomes for everyone. Mid-to-longterm the bargaining position of healthcare professionals will be strengthened. Due to increased demand, we will see initiatives to make working in this field more attractive and ameliorate the inhumane conditions that workers are subject to. I expect job opportunities for public health specialists and epidemiologists, all the way from local government to hedge funds.

People with pre-existing illness will suffer. Particularly drug addicts may well find themselves in a desperate situation. With lockdowns and borders closed, supply will be tight. Also drug addicts generally need constant cash flow and have limited reserves - once commerce dries up many will find themselves unable to pay.

Isolation and deaths of despair will peak. People will get fat. Homeless people, of which we have many, will suffer incredibly. Most of the public places that they rely on for sanitation will be closed, shelters disease-riddled, food banks struggling and cash alms scarce.

Culture

Locking large swaths of the population at home for weeks will probably have a variety of cultural effects. Here are some of them:

  • Many will be bored. Expect anything that can capture people's attention to get lots of it (after the initial anxiety hype dies down).
  • Of course video conferencing will rule the world.
  • Streaming is already popular and it will keep growing. As a medium for experiencing companionship, streaming may have its "mainstreaming" moment, similar to when the boomer generation picked up Facebook.
  • Some neighborhood ties will be strengthened. Today I attended a local "social distancing sing-along" on our street; this can be fun too!
  • A whole generation will come face-to-face with the basics of life: food, health, family, death. This is long overdue and to our betterment.
  • A lot of people will experience cognitive dissonance: Everything around you seems fine, the birds are chirping, and yet there is anxiety and extreme precaution. Look out for "hidden enemy" themes cropping up in movies and other popular narratives.

The history of modern human society has been a history of a growing ability to shape our environment. We had a good run, but this age is very, very slowly coming to a close. We are entering a phase of history in which we get to experience, first hand, just how limited our power over nature is. We are learning that our attempts to control our environment are subject to the law of diminishing returns.

Every natural disaster, such as the COVID-19 pandemic, serves as one lesson in our collective syllabus. Don't expect this course to be finished in our lifetime, but with each step along the way, more people will take heed. As an example: the negative returns of complex, globalized supply chains are becoming all too obvious. Not only will we repatriate industry; some of us will come out of this pandemic panic with a new-found appreciation for the energy efficiency, cost-effectiveness, resiliency and simple joy of having a small vegetable garden!

The inconvenience, sacrifice and true hardship of this pandemic may seem like a catastrophe. Anxiety itself will probably kill a number of people. Don't be one of them! Many Americans are still quite fortunate, with enough to eat and access to education and entertainment. Some people around us do not have such luxuries and we need to care for them; pandemics are a team sport! Ultimately we can count our blessings that the death rate of this disease is closer to the Spanish flu's and nothing like Ebola's or the Black Death's. We are also under the protection of a government that is strong enough to implement public health measures, as incompetent as it may sometimes seem.

The broader picture

Lockdown measures will give way to a new reality during and after the pandemic. If you're interested in the political and economic effects, check out my part 2 article.

  

Data Science is dying out

by Gabriel Pickard

As a Data Scientist you usually do one of two things:

  1. You comb through a pile of numbers derived from a SQL database, S3 or an HDFS cluster, in order to answer questions like "What keeps customers from converting?" and make predictions like "How much is a given user worth?"
  2. You build Machine Learning models that do actual ongoing work in deployed applications.

For both of these task groups there already exists a different job title with a similar job description:

  1. If you replace the SQL with Excel spreadsheets, then Analysts have been and still are answering the same kind of questions.
  2. Software Engineers have been doing Machine Learning for a long time. Long enough that there now is such a thing as a Machine Learning Engineer.

Data Scientists distinguish themselves from Analysts by knowing a bit more about programming. They distinguish themselves from Software Engineers by knowing less about programming. They differ from both in that Data Scientists usually are expected to have gone to grad school and know a thing or two more about math.

The "Data Scientist" job title was a way for industry to hire academics to do the kind of computational modeling that PhD candidates usually do in academia. Also, it was a way for managers to feel cool, hip and with it.

But it's dying.

Why, you say? Well, I'm glad you asked:

  1. Analysts can and do learn how to write SQL queries. Often (not always) the math involved in answering business questions can be kept relatively simple; you don't need a PhD to do it. Doesn't hurt though, I guess.
  2. Machine learning is getting commodified. You no longer need to write your own backpropagation algorithm in order to do Deep Learning. (It's sad, I know.) All you need is GPUs and money to burn (a sketch of just how little code it takes follows right after this list). Well, that and feature engineering; you do need feature engineering from time to time. And some experience in the field and domain knowledge. But all of those can be acquired by many a bright individual. Also, if your models are to do real work, they usually need to be integrated into real code. Here it comes in handy to know a thing or two about Software Engineering. Hence the rise in popularity of the ML Engineer.
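
To put that in perspective, here is a minimal sketch of how commodified the machinery has become, assuming nothing beyond scikit-learn and synthetic data; every number and name below is a placeholder, not a recommendation:

    # Commodified ML: a neural-network classifier with no hand-rolled
    # backpropagation in sight. Synthetic data, toy hyperparameters.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(X_train, y_train)   # backpropagation happens in here, for free
    print("test accuracy:", model.score(X_test, y_test))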

Now I know for a fact that once you give a group of people in the professional class a title and a decent salary, they'll make sure their field will continue to be needed, come hell or high water. So, in fact: I lied.

Sorry.

Data Science is going nowhere. It's just that if you happen not to have Data Scientists on hand at your organization, don't fret: you can still get stuff done.

But if you decide you do desperately need someone to make sweet, sweet Science to your Data; I'm a Sciencer of Data myself, possibly accepting clients. -- Or maybe I'm an ML Engineer, or something like that... Just give me a ring.

  

Developer Experience: Fundamentally harder than normal UX

by Gabriel Pickard

Many (maybe most) of us Software Engineers are deeply involved in projects where UI and UX are very important concerns (or at least they should be). Throughout the short history of our profession, we have become more and more focused on understanding and serving the people who will actually be subjected to the stuff we build: users. In many cases, this focus on UI/UX isn't because we particularly enjoy worrying about such matters -- no, it's simply because we have to if we want to build successful products!

There has been tremendous growth in the field of UX/UI. Without a doubt, today's applications are much more user-friendly than those from 30 years ago. Back then, users were given features, interface be damned. It was only over time that customers came to appreciate, expect, demand and reward those applications that actually cared to consider their needs. Consumer demand drove UX as a discipline.

This process has been fast in some areas, slow in others. Nowhere has it been slower than in the realm of programming tools. We coders still put up with horrid UX/UI when programming.

[Image: Visual Studio]

Why DX lags behind

In my view the User Interface of programming encompasses a lot: from the type system of the programming language you use and its error messages, to the editor you're writing code in, the websites you go to for help, all the way to the cloud hosting systems you deploy on. Developer Interface / Developer Experience comprises all of this.

  • Counter to any UI/UX philosophy, as programmers we find ourselves maintaining vast background knowledge about the structure and dynamics of our programs, with nary a visual cue to help us.
  • When something goes wrong, it is often maddeningly hard to figure out what exactly happened.
  • A lot of tools are truly ugly!

When was the last time you heard of a programming language discussed in terms of discoverability, succinctness, relevance, let alone beauty?

I believe there are two reasons for the discrepancy between general UX and DX:

[Image: ENIAC Computers]

  1. Coding tools were around before UI/UX was a thing. We've simply gotten used to them: dealing with the idiosyncrasies of bash, vi, or the JavaScript type system has become part of the professional hazing process. They may be suboptimal, but they're ours! This is compounded by the fact that the history of technology is so path dependent. In so many cases it is much easier and more profitable to build on something existing rather than reinvent the wheel -- even if the existing wheel is full of arbitrary decisions and outright impediments to what you are trying to do. Just look at the long reign of the x86 architecture, consider that basically all operating systems these days follow the Unix architecture, or that JavaScript is what it is. I don't mean to say that these technologies are without merit, but they also have considerable flaws, and it has been more efficient to work with and around them than to replace them entirely. Certainly every mature technology has to deal with technical debt deep in its guts; I have no quarrel with that. However, if the gory details are in the guts, then we coders are the GI surgeons having to deal with them -- and we need sharper scalpels, better imaging and protective gear. In short: better tools.
  2. Fundamentally, programming is a Turing complete business. You can view a UI as a form of (visual) formal language. For most GUIs the language complexity (the set of different states that the application interface can find itself in) largely corresponds to regular languages (some may be context-free, but that's pushing it). These visual languages are "flat", predictable, without feedback loops or long-distance dependencies. It is in such an environment that UI/UX principles have been able to thrive. But what did our Intro to Theoretical CS course teach us in college? You can't assume that statements about regular languages will still hold for linear bounded automata, let alone Turing machines. Fundamental principles of conventional UI/UX design simply no longer apply when it comes to programming. DI/DX is a very, very special case of UI/UX. (A toy sketch of this contrast follows right after this list.)
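
To make the contrast in point 2 a bit more tangible, here is a toy sketch; the screens, events and mini-language are all invented for this illustration and don't come from any formal treatment. The GUI fits in a finite table of states and transitions, which is exactly the kind of thing conventional UX methods can reason about; the little interpreter, with jumps and an unbounded accumulator, does not.

    # A GUI as a finite-state machine: finitely many screens, finitely many
    # transitions. You can draw the whole diagram and click through every path.
    GUI_TRANSITIONS = {
        ("logged_out", "log_in"):       "dashboard",
        ("dashboard", "open_settings"): "settings",
        ("settings", "back"):           "dashboard",
        ("dashboard", "log_out"):       "logged_out",
    }

    def gui_step(state, event):
        # Unknown events leave the current screen unchanged.
        return GUI_TRANSITIONS.get((state, event), state)

    # A program as a machine with unbounded state: this toy interpreter has
    # jumps and an accumulator that can grow without limit, so no finite
    # diagram of "screens and transitions" captures everything it might do.
    def run(prog, x=0):
        pc = 0
        while pc < len(prog):
            op, arg = prog[pc]
            if op == "add":
                x += arg
                pc += 1
            elif op == "jump_if_positive":
                pc = arg if x > 0 else pc + 1
        return x

    print(gui_step("dashboard", "open_settings"))                    # settings
    print(run([("add", 5), ("add", -1), ("jump_if_positive", 1)]))   # 0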

So, what I believe has happened is that in many cases we programmers have tilted at the windmill of improving our tooling -- the eternal yak shave -- only to be thwarted by the complexity of our own work, inevitably giving up and settling for the most rudimentary system capable of somehow doing the job, even if it sucks.

Toward better DX

[Image: Ironman interface]

So, what's the alternative? Here are my suggestions for a world with better Developer Experience:

  1. Watch what coders actually do -- this is one principle from UX that applies readily. Library developers rarely know which error messages users most commonly get stuck on; how about user studies? We track every click that people make on our websites and agonize over button sizes, yet know precious little about which call paths in our frameworks are most common. The elephant in the room here, of course, is privacy.
  2. Address the many components that make up programming in concert: Editor, shell, repl, language, version control host, cloud host and framework. Generally these pieces are not very aware of each other. There is some movement on this front: Microsoft is gearing up to tightly integrate VSCode, GitHub and Azure. This portends market domination. Competitors take note!
  3. Search -- It's our dirty little secret how much programming nowadays depends on Google. Arguably it is the most important item in our toolbelt. However, search engines currently do a uniquely poor job of supporting developers. Most recent Natural Language Understanding developments have made search less precise and less useful for coders, while hardly any new features have been added with our needs in mind. I find this topic fascinating and I'll be writing more about it -- coming up!
  4. Standardize fundamental questions -- How do I run this thing? Where does the code start? Is my system configured correctly? Project Readmes are hardly ever up to date. Why do we treat this as a moral failing instead of a usability issue?
  5. Tests are a usability dead end -- I know this may be contentious, but I believe test suites are another realm of excessive moralizing in lieu of better tools and better processes. Too often they function as a security blanket that simply encases the parts of the code that are unit testable, while leaving the vulnerable, untestable bits fluttering in the wind. Approaches such as generative testing seem more promising (see the sketch right after this list).
  6. Intelligently hide details -- Writing code is about control; understanding code, however, is not. Most environments throw up their hands and overwhelm you with the entire pile of everything the program does, in minute detail. Others take a different tack and try to hide the bitter realities behind magic -- impenetrable and beautiful until you inevitably do have to care about what's behind the curtain. Why do we have so few systems that would allow us to zoom in and out? Get you a coding environment that can do both.
  7. Coding is a social process -- The success of GitHub is a testament to this. I'm not sure, but maybe we can do even better along these lines.
  8. Programming Languages are User Interfaces -- The most fundamental unit of Developer Experience is the programming language. The Elm language is one of few examples where this reality was considered explicitly in the design process.
  9. Programming is about empowering -- dumbing things down is not enough. I believe minimalism is a cheap approach to UX in general, but it definitely doesn't work for DX -- it fundamentally misses the point of what programming is about: expressive power. I believe this is why many "graphical"/beginner programming languages have failed.
  10. Engage with Theoretical Computer Science -- In a way we are hunting for UI widgets (or other kinds of artifacts) that can faithfully represent the full computational complexity of algorithms. I have been actively working on this front and it's a long term project that you hopefully will be hearing more about. Representing computational complexity is also the one area where Bret Victor's excellent work may have fallen short of what we ultimately need.
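
As for the generative testing mentioned in point 5, here is a rough sketch of what it can look like with Python's hypothesis library; slugify is a made-up stand-in for whatever function you actually care about:

    # Generative (property-based) testing with hypothesis: instead of
    # hand-picking a few blessed inputs, we state properties and let the
    # tool hunt for counterexamples. Run these under pytest.
    from hypothesis import given, strategies as st

    def slugify(title: str) -> str:
        # Hypothetical function under test.
        return "-".join(title.lower().split())

    @given(st.text())
    def test_slug_has_no_whitespace(title):
        assert not any(c.isspace() for c in slugify(title))

    @given(st.text())
    def test_slug_is_idempotent(title):
        assert slugify(slugify(title)) == slugify(title)

The point is that the tool, not the author, chooses the inputs, including the weird ones nobody would have thought to write down.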

[Image: UX Meme]

DX is worth it

Interest in this topic is growing. It's definitely worth looking into. The software industry is so big and expensive that even small improvements can have considerable impact. Furthermore, Software Engineers actually do respond to better UX -- and they wield considerable purchasing power. Consider the following example:

I once worked at a small company which, for regulatory reasons, couldn't use the normal cloud-hosted GitHub. Our engineering department strong-armed the rest of the company into purchasing on-premise GitHub Enterprise, at a cost equivalent to hiring one or two additional engineers, just so we could use pull requests! We recognized the difference it made in our productivity and we demanded it -- only the best of tools would do.

  

I declare blog

by Gabriel Pickard

This is yet another attempt I'm making to start a blog. This time will be different! Also this is a test post.