This is a story that will sound all too familiar to anyone who has ever watched good money chase bad ideas.
Somewhere in Uganda, there sits a computer lab. It cost a small fortune—gleaming machines purchased with the best of intentions by well-meaning donors who wanted to drag education into the digital age. The computers are impressive pieces of kit, the kind that would make students in London or New York feel right at home.
There’s just one problem: they don’t work. Not because they’re broken, but because the internet connection they depend on is about as reliable as a chocolate teapot. And the teachers who were meant to use them? They received barely any training before being left to figure things out on their own.
This is a textbook case of what economists call the “planning fallacy”—the tendency to focus on the best-case scenario while ignoring all the ways things can go spectacularly wrong. And in Uganda’s education system, things have been going wrong in precisely this way for years.
The Iron Law of Technology Deployment: Context Is King
Here’s what the computer lab donors missed, and it’s a lesson that echoes through decades of development economics research: technology is only as good as the ecosystem that supports it.
In the 1990s, economists began studying why some technology interventions succeeded while others failed. The pattern was clear: success depended far less on the sophistication of the technology than on whether it fit the local context. A brilliant innovation in Boston might be completely useless in Kampala—not because the people are different, but because the infrastructure, training systems, and cultural context are different.
The computer labs scattered across Uganda are a perfect illustration of this principle. They represent what development economists call “planners’ solutions”—top-down interventions that look logical on paper but ignore the messy reality of implementation.
A Lesson in Appropriate Technology
Now consider a different approach, one in the spirit of E. F. Schumacher’s “Small Is Beautiful”: a solution that fits the context rather than forcing the context to fit the solution.
Since 2023, Impact Bridge has been quietly conducting an experiment that turns conventional wisdom on its head. Instead of expensive computers that need constant internet connectivity, they’re using Raspberry Pi microcomputers: devices that cost a fraction of what traditional desktops do and work perfectly well offline.
But here’s the clever bit: these tiny machines come preloaded with Kiwix, an offline reader that packs entire websites into compressed archives stored right on the device. Wikipedia, educational videos, interactive lessons: all accessible instantly, with no internet connection required.
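To make the setup concrete, here is a minimal sketch of how such a device might serve its content. kiwix-serve is Kiwix’s actual command-line server, but the file path, port, and startup check below are illustrative assumptions, not Impact Bridge’s documented configuration.

```python
# A minimal sketch: launch a Kiwix server on a Raspberry Pi and confirm
# it answers with no internet connection. The ZIM file path and port are
# hypothetical placeholders.
import subprocess
import time
import urllib.request

ZIM_FILE = "/home/pi/zim/wikipedia_en_all_maxi.zim"  # hypothetical path
PORT = 8080

# Start the offline HTTP server. Everything it serves lives inside the
# .zim archive on the SD card, so no network connection is needed.
server = subprocess.Popen(["kiwix-serve", f"--port={PORT}", ZIM_FILE])
try:
    time.sleep(2)  # give the server a moment to come up
    with urllib.request.urlopen(f"http://localhost:{PORT}/", timeout=5) as resp:
        print(f"Kiwix answering locally: HTTP {resp.status}")
finally:
    server.terminate()
```

In a classroom, any phone or laptop on the Pi’s local network can then browse the content through an ordinary web browser; nothing ever has to leave the SD card.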
Why Expertise Matters More Than Equipment
But here’s where the story gets really interesting, and where Impact Bridge has stumbled onto something that decades of education research made obvious: the teacher matters more than the technology.
There’s a famous study by economist Raj Chetty and colleagues that tracked millions of students from childhood into adulthood. The finding was remarkable: having a high-quality teacher for just one year could raise a classroom’s collective lifetime earnings by roughly a quarter of a million dollars. A terrible teacher could do comparable damage in the opposite direction.
Yet most technology interventions in education focus obsessively on the gadgets while treating teacher training as an afterthought. Impact Bridge flips this script entirely.
Impact Bridge has developed what it calls “Champion Teachers”: educators who don’t just learn to use the technology but become evangelists for a completely different way of thinking about learning. These champions then train other teachers, creating what network theorists call a “cascade”: each newly trained teacher becomes a trainer in turn, so the innovation’s reach compounds with every cycle.
It’s a process that mirrors how innovations actually spread in the real world: not through top-down mandates, but through peer-to-peer influence and demonstration effects.
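To see why that compounding matters, here is a back-of-the-envelope model of the cascade. The fan-out of three and the five training cycles are assumptions chosen for illustration, not Impact Bridge’s actual figures.

```python
# Illustrative train-the-trainer arithmetic. FAN_OUT and CYCLES are
# assumptions for the sake of the example, not programme data.
FAN_OUT = 3   # teachers each trainer goes on to train per cycle
CYCLES = 5    # training cycles, e.g. one per school year

trainers = 1          # one Champion Teacher to start
total_trained = 1
for cycle in range(1, CYCLES + 1):
    newly_trained = trainers * FAN_OUT
    total_trained += newly_trained
    trainers = newly_trained  # the newly trained become trainers
    print(f"Cycle {cycle}: {newly_trained} new, {total_trained} total")

# Ends at 364 teachers reached from a single champion: geometric growth,
# which is why peer-to-peer cascades can outpace one-off workshops.
```

One champion reaching hundreds of colleagues in a handful of cycles is the arithmetic behind “self-sustaining”: the growth curve belongs to the network, not to the donor.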
How Do You Know If Anything Is Working?
This brings us to one of the most persistent problems in development work: measurement. How do you know if your intervention is actually making a difference, or if you’re just fooling yourself with feel-good anecdotes?
Impact Bridge has absorbed the lessons of the evaluation failures that plagued earlier development efforts. They’re not just counting how many computers they’ve installed or how many teachers they’ve trained, vanity metrics that tell you nothing about actual impact. Instead, they’re tracking student progress through assessments and monitoring teacher confidence through regular surveys.
This approach reflects what economists call “revealed preference”—judging success not by what people say, but by what they do. If teachers are using the technology months after the training ends, and if student performance is improving, then you might actually be onto something.
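As a sketch of what that tracking might look like in its simplest form, consider the following. Every name and number here is a hypothetical placeholder; it illustrates the logic of the measurements, not Impact Bridge’s actual instruments.

```python
# A toy version of outcome tracking: assessment gains and sustained usage.
# All data below are hypothetical placeholders.
from statistics import mean

assessments = [
    {"pupil": "A", "baseline": 41, "followup": 58},
    {"pupil": "B", "baseline": 55, "followup": 61},
    {"pupil": "C", "baseline": 38, "followup": 52},
]
mean_gain = mean(r["followup"] - r["baseline"] for r in assessments)
print(f"Mean score gain: {mean_gain:.1f} points")

# Revealed preference: are teachers still using the devices long after
# the trainers have gone home?
active_teachers = {"month_1": 19, "month_6": 17}
retention = active_teachers["month_6"] / active_teachers["month_1"]
print(f"Six-month usage retention: {retention:.0%}")
```

The two numbers answer different questions: the first, whether students are actually learning more; the second, whether the behaviour survives once the trainers go home.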
Why Success Breeds Success
Early results suggest something fascinating is happening. In their pilot school, the intervention seems to be creating what economists call “positive externalities”—benefits that extend beyond the immediate participants.
Students who were previously disengaged are participating actively in lessons. Teachers report increased confidence. But perhaps most importantly, the model appears to be self-sustaining. Champion Teachers are training new Champion Teachers. Champion Schools are inspiring other schools to adopt similar approaches.
This is the holy grail of development economics: interventions that create their own momentum rather than requiring constant external support.
Can Lightning Strike Twice?
Of course, skepticism is warranted. The development literature is littered with pilot programs that showed tremendous promise but failed when scaled up. Do interventions lose effectiveness as they expand beyond their original context? Of course they do.
Impact Bridge’s five-year plan shows they understand this challenge. Rather than rushing to scale, they’re focusing on getting the fundamentals right: training 50+ teachers across a few schools, perfecting their curriculum integration, and building sustainable mentorship models.
This patience reflects what implementation science has taught us: successful scaling requires what researchers call “fidelity”—maintaining the core elements that made the original intervention work while adapting to new contexts.
So What Did We Learn?
First, that good intentions are necessary but not sufficient. The road to development hell is paved with expensive equipment that nobody knows how to use.
Second, that appropriate technology beats sophisticated technology every time. A simple solution that works is infinitely better than an elegant solution that doesn’t.
Third, that investing in people pays higher dividends than investing in gadgets. No amount of technology can compensate for poor teaching, but good teaching can work miracles with even basic tools.
And finally, that measurement matters. If you can’t demonstrate impact, you’re probably not having any.
The story of Impact Bridge is still being written. Their approach might fail when scaled, or encounter unforeseen obstacles, or simply fall victim to the same implementation challenges that have derailed countless well-intentioned interventions.
But for now, in classrooms across Uganda where students are exploring Wikipedia for the first time and teachers are discovering new ways to engage their classes, it looks remarkably like success.
Want to support evidence-based development that actually works? Consider backing Impact Bridge’s carefully measured approach to educational transformation. Because sometimes, the best way to solve tomorrow’s problems is to learn from yesterday’s mistakes.