1,000 Moonshots: Why the Future of AI is More Human (and Surprising) Than You Think
- Introduction: Beyond the “Doomer” vs. “Accelerationist” Standoff
The current discourse surrounding Artificial Intelligence is frequently trapped in an uncompromising, binary struggle. On one side, “accelerationists” push for rapid development with minimal oversight; on the other, “doomers” warn of existential risks and advocate for a pause or heavy regulation. As a policy analyst, I find this polarization misses the most critical reality: the outcome of AI development is not a foregone conclusion, but a result of proactive effort.
In the groundbreaking paper Shaping AI’s Impact on Billions of Lives, a coalition of senior computer scientists, legal experts, and “rising stars”—including Jeff Dean, John Hennessy, and specialists in low-resource languages—argues that we must move past these antagonistic positions. Their perspective is further bolstered by interviews with luminaries such as President Barack Obama, former National Security Advisor Susan Rice, and Nobelist John Jumper. These voices suggest that AI progress is “the best bet going forward” and should not be slowed down; rather, it must be directed. We are at a pivotal moment where practitioners and policymakers can consciously steer the technology toward the common good.
“Our view is that we are still in the early days of practical AI, and that focused efforts by practitioners, policymakers, and other stakeholders can still maximize the upsides of AI and minimize its downsides.”
- The Elasticity Paradox: Why Productivity Doesn’t Always Kill Jobs
Public anxiety regarding AI centers on the fear of job replacement. However, a rigorous analysis of labor economics reveals a counter-intuitive reality I call the “Elasticity Paradox.” Whether productivity gains destroy or create jobs depends entirely on the nature of demand.
The economic mechanism is straightforward: in “elastic” fields, a decrease in price (driven by productivity gains) results in a large increase in the quantity of services acquired. Consider the U.S. Census data from 1970 to 2020: despite massive technological advancements, the number of programmers increased 11-fold and the number of commercial airline pilots increased 8-fold. Conversely, “inelastic” demand means consumption remains capped regardless of efficiency. Agriculture is the classic example: the human capacity to consume food is finite. Consequently, as farming became more efficient, the U.S. agricultural workforce plummeted from 40% in 1900 to 20% in 1940, then to 4% in 1970, and finally to just 2% today.
Market Demand and Job Impact (U.S. Data 1970–2020)
- Elastic Fields (High Job Growth):
- Programming: 11-fold increase as tools made coding cheaper and more ubiquitous.
- Aviation: 8-fold increase in pilots as jet engines and autopilots lowered travel costs, sparking a massive surge in demand.
- Legal Services: 4-fold increase in lawyers; these professionals now handle their own digital tasks, absorbing the work once done by typists and operators.
- Inelastic Fields (Job Decline):
- Agriculture: Workforce collapsed from 40% in 1900 to 2% today because efficiency gains far outpaced the limited human demand for food.
- Administrative Support: Typist jobs fell ~50-fold and telephone operators shrank ~300-fold as their specific tasks were absorbed by professionals using new technology.
- The Sisyphus Strategy: Automating Drudgery to Save Meaningful Work
To build the social license necessary for AI adoption, we must adopt the “Sisyphus Strategy”: targeting the menial, repetitive tasks that currently prevent experts from performing meaningful work. This approach is not just about efficiency; it is a psychological imperative. By focusing on the “drudgery” of current tasks, we build trust with professionals, making them more likely to embrace and safeguard AI tools in the long term.
The paper outlines specific “milestones” to guide this effort:
- The Teacher’s Aide Milestone: AI should prioritize automating lesson plans, grading, and recordkeeping. This frees teachers to focus on student inspiration and interaction—the human-centric core of the profession.
- The Healthcare Aide Milestone: Instead of attempting to replace diagnosis immediately, AI should first absorb the “endless insurance documentation” that leads to physician burnout.
Targeting these “unattractive aspects” ensures that human experts remain in the decision path while making their work more enjoyable and meaningful.
“If policymakers and practitioners first target AI systems that automate menial and unfulfilling aspects of current jobs, they can make work more meaningful and enjoyable.”
- A Tale of Two Geographies: Expertise Scarcity vs. Displacement
The impact of AI is not uniform; it is dictated by geography and existing social safety nets. In advanced economies like the U.S. and Canada, the primary fear is the displacement of highly trained professionals. However, the policy environment significantly changes the stakes. For instance, displacement is a sharper concern in the U.S. because healthcare is tied to employment and unemployment insurance is significantly shorter (≤26 weeks in the U.S. vs. ≤45 weeks in Canada).
In contrast, lean economies face a crisis of expert scarcity rather than displacement. While the U.S. has approximately 3.6 physicians per 1,000 people, the world average is 1.7, and in some lean economies, the number drops to a staggering 0.5 per 1,000. In these regions, AI is not a competitor but a vital bridge. Much like the mobile phone leapfrogged landline infrastructure, AI-powered healthcare and educational aides can provide high-quality expertise to populations that currently have almost none.
Expert Availability (Per 1,000 People)
- United States: 3.6 Physicians / 4.0 Lawyers
- World Average: 1.7 Physicians
- Lean Economies (e.g., Ethiopia/Haiti): ~0.5 Physicians / 1.7 Lawyers (or fewer)
- The New Innovation Blueprint: Launching 1,000 Moonshots
Historically, transformative technological shifts were state-funded: the Manhattan Project (~$27B in today’s dollars) and the Space Race (~$318B today). Today’s AI “Space Race” is unique because it is predominantly backed by private industry. This necessitates a new innovation infrastructure—a coordinated public-private partnership that prioritizes the public good over purely commercial interests.
The authors propose a “pluralistic” model of innovation. We should not fund one single “moonshot,” but rather 1,000 diverse efforts—from protein folding to civic discourse. This blueprint relies on two pillars:
- Inducement Prizes: $1M+ rewards (modeled after the XPRIZE and DARPA Grand Challenges) designed to stimulate research on specific targets, such as a “Rapid Upskilling Prize” to help displaced workers.
- Multidisciplinary Research Centers: Ad hoc, high-impact centers modeled after successful historical precedents like the UNIX project, RADLab, or Par Lab.
Critically, the authors are not calling for a government handout. They argue that funding should come from the philanthropy of those who have prospered in the computer industry, organized through entities like the Laude Institute—a nonprofit dedicated to supporting these prizes and centers.
“The fact is we can use AI to launch a thousand moonshots. … If we create the right blueprint for innovation, we don’t have to pick one moon.”
- Conclusion: From Polarization to Pluralism
The future of AI is not a fixed destination but a path shaped by “directed efforts.” Beyond its economic potential—which could raise U.S. GDP growth to 3%—AI holds a profound social promise. It can act as a digital mediator, rephrasing polarized comments to be more diplomatic and “promoting respect, understanding, and democratic reciprocity.”
To realize this, we must shift our focus from commercial interest alone to a model that serves the public interest. The authors are not asking for government funding, but for government collaboration on a blueprint that empowers humanity rather than replacing it. Whether we use AI to solve our greatest societal challenges or merely to optimize advertising remains the defining policy question of our era.
Are we prepared to fund and prioritize AI that solves our greatest societal challenges rather than just our most profitable ones?
Acknowledgements & Further Reading
This blog post is based on the paper Shaping AI’s Impact on Billions of Lives by Dean, Hennessy, et al., on the societal impact of AI and its future development. The blog authors gratefully acknowledge the help of Gemini Generative AI in producing this overview.