How should we think about no-code? This technology class is a challenge because it is both disruptively new and an evolution of what has come before. Shifts in our thinking that go with it can be startlingly sudden - and sometimes insidiously gradual. We might not realize our thinking has changed until prompted.
I received such a prompt when interviewing a PHP developer with deep, valuable experience building what many call "monolith" applications. He was trying to broaden his horizons and understand more recent technological developments. This intelligent, thoughtful person asked: what is the deal with microservices? (For those who have not heard of microservice architecture, I will explain it a few paragraphs down.) Why not just put the code together myself? The monolith had worked perfectly well for him for years, and clients paid well for the work. What is the upside?
I see this general philosophy in many professional developers who are "considering" no-code tools but return to those they know and love. These are, by and large, all brilliant people, but they do not see the value that seems so clear to me. Where is the gap? Is it some bias - sticking to one's knitting? A sort of reverse-Luddism? Is there something else going on here?
The issue is the mental model we bring to code vs. that required for no-code. Mental models are the way we think about how a part of the world works. The story that we tell ourselves informs how we predict the behavior of a system and integrate it into our lives. In technology, broadly adopted mental models evolve with the state of the art, tooling, and training.
A technology with enormous potential (a fantastic state of the art) that is arcane (bad tooling) and that nobody knows how to use (lack of training) generates little economic value. One that is hard to work with but has a phalanx of trained people can drive value bounded by the size of that trained population. But when the technology becomes well-tooled, the critical skill shifts from wrangling the technology itself to applying it to one's job, and the scale of application grows accordingly.
We can see this evolution in the history of information technology itself.
In the beginning, there was code, and it was tough to understand. Even leaving aside the olden days of punch cards, low-level assembly languages - and then languages like Pascal (remember that?) and C in the 1970s - allowed a programmer to impart instructions to machines. These languages looked arcane to most people, but they were attempts at building a bridge between how the computer thought and how a human could express that thinking. Most of us cannot dream in binary.
The people who could be successful at the outset were coders - people who could articulate their intents using these languages. As the languages became higher-level, the skills required to work with them generally declined, and a more expansive universe of professional developers - many without college degrees - could be successful.
The burgeoning popularity of open source made package managers much more powerful. Open source is a legal construct in which the owner of one bit of code - say, a dependency - permits another person to use that code in their project - say, an application - without a fee. Standardized, liberal open-source licenses like the LGPL and MIT made including dependencies much safer and, eventually, the standard way of working.
The availability of these expertly-built projects shifted the critical skills for developers from writing the whole thing to linking these dependencies with a layer of business logic and an intended user experience. The amount of code in a given application that one would "write" shrank as dependencies handled more low-to-intermediate concerns.
Many modern developers hold a mental model of software based on this environment. They happily pull a gem or NPM package to extend the functionality of their intended application. In this way, they compose their code, which is tantalizingly close to the mental model for no-code.
However, this model sees the world through the lens of compilation. The output is all under their control. The packages work together in the same memory space - like multiple organs in the same organism. The objective is to build the "Six Million Dollar Man" - a single person with extraordinary abilities.
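That "same memory space" idea can be sketched in a few lines of JavaScript. The helper functions below stand in for in-process dependencies (an npm package would play the same role); the business rule, the discount rate, and the tax rate are all illustrative, and everything runs inside one program - one organism:

```javascript
// In-process "dependencies": in practice these would come from an
// npm package pulled into the same runtime.
const applyDiscount = (total, rate) => total * (1 - rate);
const addTax = (total, rate) => total * (1 + rate);

// The business logic simply composes them - plain function calls,
// one process, one memory space.
function checkoutTotal(subtotal) {
  return addTax(applyDiscount(subtotal, 0.10), 0.08);
}

console.log(checkoutTotal(100).toFixed(2)); // prints "97.20"
```

Everything succeeds or fails together, which is precisely the monolithic bargain.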
Microservices as a model take us closer to no-code. In microservices, multiple programs working in concert execute the intended jobs. These programs collaborate, but they are separate processes. Each program can be smaller and - in theory - simpler. One tricky part is breaking up the concerns to be handled by more atomic microservices, and another is managing the links between them. As a result, complex system management emerges as a primary skill set.
But even here, using microservices is like hiring employees to help you do your work. You are no longer doing all the work yourself, and your skillset moves from doing to managing. But they all work for the same firm, so you are responsible for training them all.
Utilizing third-party software-as-a-service is more like outsourcing to agencies and bureaus. The concern I see outsourced most is authentication - managing secure access to one's resources. Services like Auth0, Firebase, and AWS Cognito specialize in making sure that the person asking to use your resources is who they say they are. Leveraging their expertise has two effects: first, it improves one's security profile, and second, it decreases the complexity of one's system. Increased benefit at reduced cost - what could be better? The number of hosted services that can similarly outsource concerns is growing by the day. We are moving from managing servers to being "server-less" and from integrating code inside our runtime to being "service-ful."
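The outsourced-authentication shape can be sketched like this. Here `verifyWithProvider` is a hypothetical stand-in for a hosted service such as Auth0 or Cognito - in reality it would be an HTTPS call through the vendor's SDK, and nothing below is any vendor's actual API:

```javascript
// Hypothetical stand-in for the hosted auth provider. The real thing
// would be a network call; the token table here is purely illustrative.
function verifyWithProvider(token) {
  const knownTokens = { 'tok-123': { userId: 'alice' } };
  return knownTokens[token] || null;
}

// Your application keeps only a thin layer: it asks the specialist
// "who is this?" instead of storing and checking credentials itself.
function handleRequest(token) {
  const identity = verifyWithProvider(token);
  return identity ? `hello, ${identity.userId}` : 'access denied';
}

console.log(handleRequest('tok-123')); // prints "hello, alice"
console.log(handleRequest('tok-999')); // prints "access denied"
```

The password hashing, breach monitoring, and multi-factor machinery all live on the provider's side of that one call - that is the complexity being outsourced.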
We can best understand no-code with this "service-ful" model in mind. One orchestrates many outside services to create outsized value. The critical skills are in selection, orchestration, and composition, rather than crafting using a particular tool.
No-code is a mental model that, to me, seems like a predictable evolution of software development. Emerging technologies (e.g., quantum, AI, distributed databases) will - for a while - require code because they are not yet as refined. That's what makes them "emerging"! But for a vast array of business-applicable use cases, this model will drive value.
The evolution of the mental model makes me think that the win lies not in convincing a developer with a decade's experience to start implementing projects on no-code tooling, but in reaching the many non-developers who want to solve their own problems. They could not have made the leap to code-based solutions without a lot of help, but the "some assembly required" mental model looks much more like that of children's toys.
While no-code is more accessible for a business-side person to understand, the issues are not necessarily evident to the untrained observer. The complexity lies in the business needs rather than in the technical understanding required to be successful. And that complexity is where the business expert has the experience and the advantage to unleash value.
Where does this relationship go next? How can we unmoor ourselves from the perceptions that led to this moment, such that we can be free to imagine what is next? What mental model starts where the technology is today and helps us with the subsequent evolutions? And what new business models will arise from those mental models that will unleash value for us all?
These questions keep me up at night because the answers will be the keys to making this technology work for businesses in the coming years. What worked today will not work tomorrow; that goes for business practices, technology, and our mental models.
I am grateful to the community at foster.co for editorial input on drafts of this essay. As always, errors are my own!