The Tools We Choose: Learning from History’s Information Revolutions

A response to concerns about AI tools and the preservation of craft, because writing blog posts about the future of a world with AI is very fashionable today.

The recent critique of AI tools and their impact on human expertise raises important questions about technological change and the preservation of craft. I, your humble narrator with hot takes, will take you through a brief history of inflection points in the last 1,000 years of human history. I’m not trying to convince you that we shouldn’t worry. Instead, I want to help you understand where we need to focus our energies.
As you read this funny-but-serious prose, I hope you see that civilisation has tackled similar changes many times in the past. They’ve always been difficult, disruptive (in bad ways as well as good), and they’ve left lessons for us to follow if we’re smart enough to stop, look, listen and think.

If you’re not down with complex and nuanced takes, if you prefer poster boards which say “AI BAD!”, you may want to stop. This post won’t be for you. But before you go, check out this AI-generated image just for you!

How correct will I be? Probably as correct as if this blog post had been written by one of the many AI tools freely available today. Armchair commentary is ingrained in the human psyche, and AI doesn’t mean we need to stop doing that.

At its heart, the AI revolution is simultaneously a technological, communication and information revolution. Our species has been on an ever-accelerating journey in all of these spaces since our ancestors learned to bang rocks together to make noises and create sparks. The velocity has increased greatly in the last 50 years and it probably won’t slow down much at all.

And so we start our story in delightfully familiar territory: standing at another inflection point in humanity’s long relationship with transformative technologies, complete with the same apocalyptic predictions that have accompanied every major technological shift. The fears expressed about AI replacing human expertise echo through history with remarkable consistency, suggesting either that humanity has learned absolutely nothing from previous technological panics, or that each generation genuinely believes they’ve stumbled upon the first technology that might actually end civilisation as we know it.

Let’s begin…

The Rhythm of Information Revolution

Human civilisation has been fundamentally shaped by how quickly information can spread. Each leap in communication speed has triggered profound social, economic, and cultural transformations that initially seemed (or actually were) threatening to existing power structures and ways of life.

For millennia, information moved only as fast as human speech: from person to person, village to village, at walking pace. Knowledge was precious, localised, and controlled by those who could memorise and preserve it. Oh, and the ones who could manipulate it, can’t forget those people. They’re going to be part of our running narrative too.

The invention of writing began to change this, but it remained the domain of scribes and elites. The printing press, arriving in Europe around 1450, started to break these constraints. It didn’t just make books cheaper, it democratised knowledge itself – as long as you could afford to buy a press or hire someone with a press. Still, it was a start.

Within fifty years over 15 million books were in circulation across Europe. The Catholic Church, which had controlled religious knowledge through hand-copied manuscripts, found this arrangement quite convenient for maintaining spiritual authority. Suddenly it faced a world where any literate peasant could own a Bible and develop inconvenient opinions about papal infallibility. And boy did they! In 1517 Martin Luther wrote his 95 Theses. These were printed, nailed to the door of the Castle Church in Wittenberg, and then distributed widely, sparking the Protestant Reformation not because his ideas were necessarily revolutionary, but because they could spread faster than church authorities could dispatch inquisitors to burn the evidence. Book burnings aren’t just a recent phenomenon.

The transformation started but it wasn’t without cost. Monasteries that had spent centuries preserving knowledge through manual copying – while enjoying a lucrative monopoly on literacy – saw their economic foundation crumble. The scribal arts that had developed over generations became about as useful as expertise in buggy-whip manufacturing is today. Yet the net effect on society was extraordinary: literacy rates began their centuries-long climb, scientific knowledge accelerated through shared publications, and the modern concept of public discourse emerged, giving humanity its first taste of the joys of everyone having an opinion about everything. Access to technology or its output products lifts communities up.

In the 1840s, the telegraph collapsed distances for the first time in human history. Messages that once took weeks to cross continents now traveled in minutes. This didn’t just change commerce, it created the first truly global markets and began the process of cultural homogenisation that continues today. Radio and telephone extended this revolution, creating shared national experiences and enabling new forms of social coordination.

Email and the early internet represented another fundamental shift, making person-to-person communication nearly instantaneous and virtually free, as long as you lived in a country like the US with free local phone calls (I didn’t). A development that seemed miraculous until we discovered that most people would use this incredible technology primarily to forward chain letters and argue with strangers about politics. Or write missives about the demise of society like this one.

Social media then amplified this democratisation while creating new forms of information warfare and manipulation, proving that humanity’s capacity for turning revolutionary communication tools into weapons of mass annoyance knows no bounds.

Each transition brought similar patterns: existing information gatekeepers lost power, new forms of expertise became valuable while others became obsolete, and society eventually adapted to new equilibriums. Though not without significant disruption and human cost along the way.

Serious note: While I might joke about the above, we mustn’t forget the role social media has also played in events like the genocide of the Rohingya people in Myanmar and the rise of the far-right and white supremacism by amplifying not just annoying or silly content, but actually harmful and destructive content. Yet this isn’t new either. The printing presses in 1790s Paris also carried destructive messages that contributed to the march toward the Reign of Terror – a 10-month period of mass executions by Madame Guillotine of anyone deemed to be an enemy of the Revolution. Humans are really good at abusing technology and communication for the most harmful outcomes.

The Eternal Dance of Power and Technology

Lords, CEOs, and oligarchs have always wanted to amass power and money. But the relationship between technological change and power concentration is more complex than a simple narrative of the powerful exploiting new tools.

Communication revolutions have historically been double-edged for existing power structures. While new technologies can be co-opted by elites, they also create opportunities for power to be distributed more widely. The printing press that enabled Luther’s challenge to church authority also allowed rulers to communicate directly with subjects, bypassing traditional intermediaries. The internet that enables massive corporate data collection also allows activists to organise globally and individuals to access information that was once the preserve of institutions.

The key variable isn’t the technology itself, but how societies choose to regulate and deploy it. The same digital infrastructure that enables surveillance can enable transparency. The same AI tools that might replace human workers can augment human capabilities and create new forms of social and economic value.

This time feels different not because the powerful are more greedy – they always have been, bless their consistently avaricious hearts – but because the pace of change has accelerated beyond many of our adaptive institutions and our own ability as humans to keep up. Democratic processes, legal frameworks, and social norms that evolved over decades or centuries now struggle to keep pace with technological changes that happen in years or months, leaving us with the delightful spectacle of octogenarian senators trying to regulate technologies they can barely pronounce, let alone understand.

The Guild System: Lessons in Craft, Competition, and Change

An area of great concern for everyone – except the thieving asshats who like to steal the work and sometimes even the likeness and voices of others – is the impact on people who depend on creative output for their livelihoods. 

The European guild system offers us crucial insights into how craft expertise intersects with technological and economic change. From roughly 1000 to 1500 CE, guilds controlled most skilled trades across European cities, creating elaborate systems of apprenticeship, journeymanship, and mastery that preserved knowledge while limiting competition.

These weren’t simply economic arrangements, they were comprehensive social institutions. A weaver’s guild didn’t just control who could operate looms, it provided social insurance, regulated quality standards, controlled pricing, maintained religious and social traditions, and determined who could marry whom within the trade. Guild membership often passed from father to son, creating hereditary castes of skilled workers.

The system worked remarkably well for preserving craft knowledge and maintaining quality standards, assuming you enjoyed living in a society where your career prospects were determined at birth and innovation was treated as a suspicious deviation from time-honored tradition. Guild-produced goods were often extraordinary in their craftsmanship. Think of the intricate stonework of Gothic cathedrals or the precision of medieval armor, created by artisans who had the luxury of spending seven years learning to do one thing extremely well, since economic mobility was about as common as unicorn sightings.

But guilds also stifled innovation and excluded competition with the efficiency of a modern HOA enforcement committee. They prevented talented outsiders from entering trades, maintained artificially high prices, and often resisted technological improvements that might threaten their members’ livelihoods because nothing says “commitment to quality” quite like legally prohibiting anyone from trying to do your job better. Urban guilds frequently prevented rural craftsmen from selling in cities and different guilds jealously guarded their territorial boundaries.

The collapse of the guild system between 1500 and 1800 was driven by multiple forces: the rise of merchant capitalism, technological innovations that required new forms of organisation, political changes that reduced guild power, and growing populations that created demand for cheaper goods. The process was often violent. Guild members rioted against new technologies and business practices. The most famous of these revolts took their name from the possibly mythical Ned Ludd, which is where the term Luddite comes from.

The transition created both winners and losers. Society gained access to cheaper goods, more innovation, and greater economic mobility. Previously excluded groups – women, religious minorities, rural populations – could just start to access economic opportunities that guilds had denied them. But the loss of guild social protections left workers more vulnerable to market forces and a great deal of traditional craft knowledge was lost forever.

Was it bad that these skills were lost? In that moment, for those people, it certainly was.
For society as a whole? In the long run, no. Over time we developed better, cheaper ways to clothe the population.

Similar shifts happened with agriculture, impacting entire supply chains from farmers to markets, but over time they allowed us to improve nutrition and feed growing populations.

Local pain, global gain. These were terrible situations but the extended times between technological, communication and information revolutions made it harder to learn the necessary lessons to do better.

Today: The Reality of Software Development

Let’s be real: So much of software development sucks. It can be repetitive and grinding. Much of programming involves routine tasks: compiling, packaging, debugging, testing, documenting, maintaining legacy systems that were obviously written by your programming buddy’s evil twin during a particularly vindictive coffee shortage.

That. Isn’t. Useful. Work. It isn’t fun work. It isn’t productive. And worst of all we have to set aside all of the unique traits in our brains which make us uniquely human to spend time on this drudgery. We forget the intellectual challenges that make programming compelling: architecting systems that can scale, designing interfaces that feel intuitive, solving novel computational problems, optimising performance under constraints, and translating complex human needs into executable logic. If there are parts of the software development life cycle that almost universally bring joy to people in the extremely small sample set I polled, it’s these.

The value of having humans in code review is to catch errors that automated testing misses. Usually the kind of subtle bugs that only manifest when users do exactly what they’re not supposed to do, which is to say, exactly what users always do. Refactoring improves maintainability in ways that original authors can’t anticipate, partly because most code is written under deadline pressure by developers who are optimistically assuming their future selves will be both more competent and more patient.
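
To make that concrete, here’s a tiny hypothetical sketch in Python, my own illustration rather than anything from a real codebase, of the sort of bug that sails through a green test suite and gets caught by a human reviewer instead:

```python
# Hypothetical example: a pagination helper plus its happy-path test.
from typing import List, TypeVar

T = TypeVar("T")


def paginate(items: List[T], page: int, page_size: int = 10) -> List[T]:
    """Return one page of results. Pages are zero-indexed."""
    start = page * page_size
    return items[start:start + page_size]


def test_paginate() -> None:
    data = list(range(25))
    assert paginate(data, 0) == list(range(10))        # passes
    assert paginate(data, 2) == [20, 21, 22, 23, 24]   # passes


if __name__ == "__main__":
    test_paginate()
    # The tests are green and CI is happy. A reviewer who has shipped
    # pagination before asks the awkward questions:
    #   * The web frontend sends page=1 for the *first* page, so every
    #     user silently skips the first ten results. No error, no alert.
    #   * A negative page quietly returns an empty list thanks to
    #     Python's negative slicing, which the UI renders as "no results".
    print(paginate(list(range(25)), -1))  # [] (silent, and wrong)
```

None of those cases throws an exception, so nothing fails automatically. Spotting them takes someone who has watched real users do exactly what they weren’t supposed to do.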

The human element in this process isn’t just about craftsmanship, it’s about judgment. Experienced programmers know when to break rules, when “good enough” is actually good enough, when apparent inefficiencies serve important purposes, and how seemingly minor decisions can have major downstream consequences. This knowledge comes from years of seeing systems fail in subtle ways and learning to anticipate problems that don’t show up in initial testing.

AI tools should remove the drudgery of this work. They should free our time up to focus on things that actually require human judgement to go from OK to good to great. You’ll notice I said “years of seeing systems fail”. That isn’t going to change. If anything, AI-built systems may fail more often and in different ways. In order to learn from those experiences, the next generations of developers need senior folks to be less grouchy about how bad things will get and to focus on learning, teaching, and engaging with the world around us as it is now.

How we teach them is going to be different to how we learned. We all need to adapt.

The Creative Resilience of Humanity

The concern about losing traditional crafts like artisanal weaving or rocket engine manufacturing reflects genuine cultural loss. When the last master of a traditional technique dies without passing on their knowledge, something irreplaceable disappears from human culture. The inability to rebuild Apollo-era rocket engines represents a real limitation on current capabilities.

But this loss, while regrettable, doesn’t represent an existential threat to human creativity or capability. Humans are extraordinarily adaptable, and our capacity for innovation has consistently exceeded our rate of knowledge loss over historical timescales.

Consider what we’ve gained even as we’ve lost traditional crafts: modern textiles that are more durable, comfortable, and affordable than historical equivalents. Though admittedly lacking the character that comes from being woven by someone who could tell you the life story of every sheep that contributed to the yarn. Rocket engines that, while different from the F-1, are more efficient and reliable, even if they can’t claim the romantic heritage of being built by engineers who used slide rules and had never heard of PowerPoint. Manufacturing techniques that can produce goods of consistent quality at scales that would have been unimaginable to medieval craftsmen, who had the luxury of knowing the name of everyone who would ever use their products.

Human creativity doesn’t depend on preserving specific techniques. It emerges from our ability to identify problems and devise solutions. This is at the heart of every creative person. The master weavers who lost their livelihoods to power looms were genuinely skilled, but many of their descendants found new outlets for creativity in industrial design, fashion, or entirely different fields. The social cost of this transition was real and shouldn’t be minimised, but the underlying human capacity for creative work remained intact.

This same pattern appears throughout history. The scribes displaced by printing presses were succeeded by editors, publishers, and journalists, proving that humanity’s capacity to find new ways to argue about the proper use of semicolons is essentially limitless (there are none in this blog post). Telegraph operators were replaced by telephone switchboard operators, who were in turn replaced by new forms of communication work, each transition creating jobs that the previous generation couldn’t have imagined, mostly because they involved technologies that would have seemed like witchcraft to anyone born before indoor plumbing became commonplace.

The key insight is that creativity itself is more fundamental than any particular creative practice. While we should work to preserve traditional crafts that have cultural value, we shouldn’t assume that losing specific techniques represents a loss of human creative capacity more broadly.

We need to focus on managing those transitions in ways that are fair, safe and supportive of the people who will be the most impacted by them.

AI as Tool, Not Replacement

The current concerns about AI tools replacing human expertise echo the historical pattern of anxiety that accompanies every major technological shift, with roughly the same level of measured rational discourse that typically characterises humanity’s response to change. Which is to say, somewhere between a toddler’s reaction to having their favorite toy taken away and a medieval village’s response to a solar eclipse. End-of-the-world, cataclysmic events.

They also reflect some genuine misunderstandings about what these tools can and cannot do, often fueled by marketing departments and blog posts from self-proclaimed experts like yours truly.

AI excels at pattern recognition and generation within well-defined domains. It can write code that follows established patterns, generate images in familiar styles, and produce text that sounds plausible. What it cannot do, at least in its current form, is make the kind of contextual judgments that characterise true expertise, true art, true ingenuity. Yet people keep trying to make AI do those things.

An experienced programmer doesn’t just write code that works. They write code that will be maintainable by their colleagues, that will perform well under unexpected load (because users will always find creative ways to break things), that follows patterns that make sense within the broader system architecture (a noble goal in systems that actually have coherent architecture), and that anticipates future requirements that aren’t yet fully specified.

Similarly, a master craftsperson doesn’t just follow established techniques. They know when and how to deviate from standard approaches based on specific materials, environmental conditions, and functional requirements. This kind of adaptive expertise emerges from years of experiencing how things fail and learning to anticipate problems before they occur. It occurs from practicing one’s skills and creativity over and over again, discovering new ways to solve problems, express the beauty of a sunrise, or lift the spirits of their listeners through music and song.

AI tools can certainly augment human expertise by handling routine tasks, suggesting solutions, and providing information quickly. They may eventually become sophisticated enough to handle more complex judgments. But the path from current capabilities to true replacement of human expertise is much longer and more uncertain than current hype suggests.

And far, FAR longer than any tech-bro would have you believe, no matter how much they want it to be real today.

The Inevitable Corporate Learning Curve

Let’s be clear about what’s going to happen next: we absolutely will have an adjustment period where well-meaning but misguided corporate leaders will think AI can replace people wholesale, only to discover, sometimes (hopefully?) painfully, that you still need skilled experts who understand what they’re doing. This isn’t pessimism; it’s the same pattern we’ve seen with every major technological shift, and there’s plenty of evidence this is already underway.

I’ve been lucky enough to witness multiple seismic shifts in technology in my lifetime where these effects came true very quickly. The shift to cloud computing was the most notable: work for old-school sysadmins rapidly dried up because suddenly any developer could create and manage their own infrastructure!
Companies forgot, and then quickly relearned, that systems and infrastructure are a knowledge domain that still needs experts. From that we got both the DevOps and SRE movements, the widespread understanding that blameless incident reviews are critical to learning, and Kubernetes.

That last one was surely a mistake though.

More and more companies are scrapping their AI initiatives. This is happening while other companies have either already made cuts to their staffing as a result of deploying AI or have said this will happen in the future. Of the companies doing layoffs under the guise of AI, we’re already hearing stories of them needing to re-hire staff.

The corporate enthusiasm for AI-driven cost-cutting has produced some spectacular failures that would be amusing if they weren’t so predictable. Air Canada was ordered by the Civil Resolution Tribunal to pay damages to a customer and honor a bereavement fare policy that was hallucinated by a support chatbot, which incorrectly stated that customers could retroactively request a bereavement discount within 90 days of the date the ticket was issued. A policy that never actually existed! A federal judge fined a New York City law firm $5,000 after a lawyer used ChatGPT to draft a brief for a personal injury case. The text was full of falsehoods, including more than six entirely fabricated past cases meant to establish precedent.

This learning curve isn’t going away anytime soon. We’re seeing more mentions that corporate AI initiatives fail, yet companies continue to pursue them with the enthusiasm of someone who just discovered cryptocurrency in 2021. The pattern is depressingly consistent: executives see AI demos that work under controlled conditions, extrapolate wildly about potential cost savings, cut human staff, then discover that real-world complexity requires the kind of judgment and adaptability that comes from actual expertise.

And in almost every case it’s clear they didn’t take the multitude of human experiences and interactions into account. The linked story about Klarna is a perfect example of this.

Evolving Software Engineering Education: From Code Junkies to System Architects

The emergence of AI tools that can generate code does require us to fundamentally rethink how we train new software engineers, but not in the direction that most critics assume. Rather than mourning the potential loss of coding grunt work, we should celebrate the opportunity to focus on what actually makes someone a good engineer: the ability to think systematically about problems, understand trade-offs, and make sound architectural decisions at every level of a person’s career.

This is the difference between learning to be a chef by washing dishes and chopping vegetables versus learning to be a chef by studying flavor profiles, understanding nutrition, and learning to compose meals. The mechanical skills matter, but they’re not the essence of the craft. You need them, but you don’t need to practice them for many years first.

Traditional computer science education has long focused on having students implement data structures from scratch, debug assembly code, and manually manage memory. Essentially the equivalent of having medical students grind their own aspirin before they can learn to diagnose patients. While we think this built character and understanding, it also meant that students spent enormous amounts of time on implementation details that are now largely automated, leaving less time for the higher-order thinking skills that actually distinguish great engineers from code-generating tools.

OK, I don’t know if debugging assembly and manual memory management in C are still taught in schools. Forgive me, I’m old. I’m confident though that if they aren’t, they’ve likely been replaced by something else just as tedious.

Future software engineers need to develop architectural thinking: the ability to see systems holistically, understand how different components interact, anticipate failure modes, and make decisions that optimise for long-term maintainability rather than short-term functionality.
They need to understand when the obvious solution is wrong, when performance matters and when it doesn’t.
They need to know how to evaluate trade-offs between different approaches, and how to build systems that can evolve as requirements change.
They need to know how to properly organise code for maintainability and understand how users might interact with the system.
They need to think continuously about resilience, recovery and the security of systems.
They need to think expansively and critically.
And they need to leave the basic, boring, repetitive tasks to the machines.

And when I say “future engineers”, I don’t just mean new entrants to the field. I mean anyone who wants to be a successful software engineer in 5-10 years time, including people already doing the work today.

This means spending more time on system design, learning to read and understand existing codebases, practicing debugging skills that go beyond syntax errors, studying the history of software failures and successes, and developing the kind of intuitive understanding of software behavior that comes from seeing many systems break in creative ways. It means learning to ask better questions: not “how do I implement a binary search tree?” but “when is a binary search tree the right choice, and what are the alternatives?”
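
As a rough illustration of what that “better question” looks like in practice (a sketch of my own in Python, not a prescription), the answer is often that you don’t need the textbook data structure at all:

```python
# Hypothetical sketch: when is a binary search tree the right choice?
import bisect

# Only need fast membership tests? A hash set is simpler and faster.
seen = set()
seen.add(42)
print(42 in seen)  # True, average O(1) lookups

# Need ordered data with range queries, but writes are rare?
# A sorted list plus bisect beats a hand-rolled tree for simplicity.
prices = [3, 7, 11, 19, 23]
lo = bisect.bisect_left(prices, 5)
hi = bisect.bisect_right(prices, 20)
print(prices[lo:hi])  # [7, 11, 19], everything in the range 5..20

# A balanced tree (or a database B-tree) earns its keep when you need
# ordered iteration *and* frequent inserts and deletes at scale, and at
# that point you usually reach for a library or the database rather
# than an interview-style implementation.
```

The point isn’t the snippet itself, it’s the habit of asking about access patterns, data volume and change frequency before reaching for whichever structure you happen to know how to implement.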

And yes: The people who aren’t willing to make this transition will have a harder time in the future. The good news is that this switch is available to practically anyone who is willing to put in the time, effort and energy required.

The goal isn’t to create engineers who can’t code without AI assistance. That would be as limiting as creating drivers who can’t function without GPS (which we’ve also done highly successfully!). The goal is to create engineers who understand systems well enough to know when the AI-generated code is appropriate, when it’s dangerous, and how to modify it when requirements change in ways the AI couldn’t anticipate.

This represents a maturation of the field, not a dumbing-down. We’re moving from an era where programming was primarily a craft practiced by individuals to one where it’s an engineering discipline practiced by teams working on complex systems. Just as civil engineers don’t typically pour their own concrete but need to understand material properties and structural principles, software engineers of the future will need to focus more on architecture and systems thinking than on the mechanical details of code generation.

Navigation

If we accept that AI represents another inflection point in humanity’s long relationship with information technology, what can we learn from previous transitions to navigate this one more successfully?

First, we can recognise that the benefits and harms of new technologies aren’t predetermined. They depend on how we choose to deploy them. The printing press could have remained a tool for authoritarian control if societies hadn’t developed concepts of free speech and press freedom (not you, United Kingdom). The internet could have become purely a surveillance apparatus if we hadn’t fought for privacy rights and open protocols.

Second, we can acknowledge that technological transitions always create both winners and losers, and plan accordingly. Rather than assuming that market forces will automatically create good outcomes, we can proactively design policies that help people adapt to changing economic conditions while preserving valuable aspects of existing systems. The EU is already leading the way on this by crafting legislation that requires companies to think about how their AI systems are used, the impact they have and how the risks of those systems can be mitigated. They also have a general fund to directly help individuals whose jobs are impacted by automation. These measures aren’t perfect but they’re a good start and I expect more will come.

Third, we can distinguish between preservation of specific techniques and preservation of underlying human capabilities. While we may not be able to save every traditional craft, we can ensure that humans retain the cognitive and creative skills that enable adaptation to new circumstances. Critical thinking should be a core subject in schools. This has started too, with classroom lessons on how to read news reports, look for biases, and evaluate context and presented “facts”.

Fourth, we can focus on maintaining human agency in technological systems. The real risk isn’t that AI will become too capable, but that we’ll design systems that reduce human choice and control. By keeping humans in meaningful decision-making roles, we preserve the possibility of changing course if we don’t like where current trends are leading.

Finally, we can learn from both the guild system’s successes and its failures. Like the guilds, we need institutions that preserve valuable knowledge and maintain quality standards. Unlike the guilds, these institutions should be open to newcomers, responsive to innovation, and oriented toward serving broader social needs rather than just protecting existing interests.

Learning from History: Doing Better This Time

The choice before us isn’t between embracing AI tools uncritically or rejecting them entirely. It’s between thoughtful deployment that serves human flourishing and hasty adoption that prioritises short-term efficiency and monetary gain over long-term consequences.

This means developing AI tools that augment rather than replace human judgment. It means creating economic policies that help people adapt to changing job markets rather than leaving them to face technological displacement alone. It means preserving educational institutions that teach fundamental skills rather than just tool-specific techniques. And it means maintaining democratic control over how these powerful technologies are developed and deployed.

Most importantly, it means remembering that we have agency in this process. The future isn’t something that happens to us unless we let it. It’s something we create through the choices we make today and constantly monitor to make sure we’re going the right way.
By learning from history’s information revolutions we can work to ensure that this one serves human flourishing rather than undermining it.

The tools we create and choose to use will shape the world our children inherit. Let’s make sure we choose wisely.

Did I use AI to write this?

With “AI” in the broadest sense: yes, I most certainly did. And I followed the advice I shared with you all above.

I’m not a learned student of the humanities but over my lifetime I’ve picked up enough useful knowledge to be able to construct a philosophical narrative that makes sense. At least, it does to me. I hope it did to you too.

I used Claude Sonnet 4 to help me research topics where I needed more details:

  • The establishment and ultimate dissolution of the Guild systems
  • Details of the changes in societies at each of the inflection points I already knew existed
  • Research on recent articles and publications about how AI is being implemented at companies, where that’s going well, and where it isn’t.

It also helped me polish words in paragraphs. I often write my polemics at 1am, and having such a rubber duck is helpful.

With this refreshed knowledge in hand I was able to thread a needle and weave it in a way that I’ve been wanting to for a while now.

Even having seen the communications revolutions created by email (1990s) and social media (2010s), I didn’t expect us to have another leap forward for at least a decade. I couldn’t predict the rise of AI in this way 5 years ago. And now we’re here. We’re already seeing AI-driven systems being used to help, but also actively harm populations. Such activity isn’t new, but it almost always carries risks for a population. This becomes super-charged in our new world. The genie isn’t going back in the bottle, and it’s up to us to make the right wishes for our future.
