Each year in the United States ~600,000 people die of cancer, 40,000 people die in car crashes, and millions of kids lose their love of learning in school.
To address these challenges:
- Health care turns to tests with mice and carefully controlled clinical trials.
- Car makers rely on digital simulations and crash dummies.
- Education experiments with … little children?
There is no way to do meaningful tests on the quality of a learning innovation without introducing it directly to human subjects.
Which makes it really hard to try new stuff.
For adults, the difficulty of introducing new learning solutions is problematic, but surmountable. We let adults self-select for novel educational experiences — like coding school, or ayahuasca, or what have you — regardless of whether they are proven, or even safe. Adults’ ability to volunteer, and their willingness to pay to try new things that might improve their lives, ensures that there is a meaningful amount of experimentation in the adult learning market¹. Entrepreneurs are commercially incentivized to introduce new products and techniques, which, over time, hopefully leads to the emergence of standout learning innovations that can widely improve the human experience².
Kids are another story. Acting in children’s best interests, society massively constrains the range of experimentation with new learning approaches in order to keep kids safe. Since we lack the ability to test a new educational program or teaching technique in a simulator — much less let mice try a new calculator design — the cost of this conservatism is that we lose the opportunity to discover unconventional breakthroughs.
We HAVE to protect kids. That is non-negotiable. And the tradeoff of honoring and protecting each individual child — which is so, so right — is that we have limited means to innovate and determine what’s actually best for kids overall. Without the ability to run bold experiments, we’re left with mostly feeble and false ‘inventions’ that end up further undermining the appeal of innovation. The negative feedback loop looks something like this:
- The status quo is considered ‘default safe’, primarily because it is known³. (Even though everyone agrees that its outcomes are unsatisfactory.)
- New solutions cannot risk the safety of children, so only guarded experiments that are a lot like the status quo are permitted.
- The ‘new’, ‘cutting edge’ stuff in education doesn’t make that much of a difference, because it’s not actually that different from what we were already doing.
- The ‘failure’ or ‘minimal impact’ of each ‘innovation experiment’ further entrenches comfort with, or at least tolerance of, the status quo and reduces our societal willingness to devote resources to try new things in education, especially unconventional new things.
Repeat steps 1–4. Run for a century. You get some pretty retrograde schools.
Ultimately, the experiments we undertake in education are so, so much smaller, and so much less likely to succeed, than our attempts to address cancer and car crashes. It’s no one’s fault. It’s definitely not science’s fault. It’s just a very discouraging reality: the difficulty of innovating on behalf of kids is rooted in immutable scientific limitations.
If safety is the essential factor setting the parameters for appropriate experimentation in education, what exactly does it mean for a new learning product for kids to be ‘safe’?
¹ Without going on about it, this quality of voluntariness seems absolutely essential. Volunteers — adults who consent to try new products even though doing so might end up hurting them, not helping them — are basically the key to our whole drug industry. We’re never going to have this type of voluntariness in education, because kids will always need a guardian’s permission to participate in something new, and they will very rarely get it.
² It’s still hard to scientifically validate new approaches for adults (ayahuasca, anyone?!), and, unfortunately, it’s hard for me to think of many examples of learning innovations / best practices that have been widely adopted by adults worldwide. (With the HUGELY notable exceptions of the internet / Google.)
³ There are some fascinating parallels to autonomous cars here: even though the status quo is provably bad (tons of crashes and deaths), there is very little societal tolerance for replacing it with a new system that is merely somewhat less bad. It’s not good enough for autonomous cars to reduce crashes by 25%, because every driverless-car death is going to be magnified a thousandfold by the media — which means that the first ‘products’ on the road need to be nearly perfect. In lots of ways, the same extraordinarily high standards of safety and effectiveness for autonomous cars apply to new learning innovations. (Although the budgets / investor interest for R&D are pretty darn different, let me tell you!)