Towards Humanist Superintelligence
A humanist future
Here's a question that's not getting the attention it deserves: what kind of AI does the world really want? I think it's probably the most important question of our time.
For several years now, progress has been phenomenal. We're breezing past the great milestones. The Turing Test, a guiding inspiration for many in the field for 70 years, was effectively passed without any fanfare and hardly any acknowledgement. With the arrival of thinking and reasoning models, we've crossed an inflection point on the journey towards superintelligence. If AGI is often seen as the point at which an AI can match human performance at all tasks, then superintelligence is when it can go far beyond that performance.
Instead of endlessly debating capabilities or timing, it's time to think hard about the purpose of technology, what we want from it, what its limitations should be, and how we're going to ensure this incredible tech always benefits humanity.
At Microsoft AI, we're working towards Humanist Superintelligence (HSI): incredibly advanced AI capabilities that always work for, and in service of, people and humanity more generally. We think of it as systems that are problem-oriented and tend towards the domain specific. Not an unbounded and unlimited entity with high degrees of autonomy – but AI that is carefully calibrated, contextualized, and kept within limits. We want to both explore and prioritize how the most advanced forms of AI can keep humanity in control while at the same time accelerating our path towards tackling our most pressing global challenges.
To do this we have formed the MAI Superintelligence Team, led by me as part of Microsoft AI. We want it to be the world's best place to research and build AI, bar none. I think about it as humanist superintelligence to clearly indicate this isn't about some directionless technological goal, an empty challenge, a mountain for its own sake. We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable. We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity.
In doing this we reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavour to improve our lives and future prospects. We also reject binaries of boom and doom; we're in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right.
The strength of humanism through history has been its enduring ability to fight off orthodoxy, totalitarian tendencies and pessimism, and to help us preserve human dignity and the freedom to reason in pursuit of moral human progress. In that spirit, we think this approach will help humanity unlock almost all the benefits of AI, while avoiding the most extreme risks.
Climbing the exponential slope
The rate of progress has been eye-watering. This year it feels like everyone in AI is talking about the dawn of superintelligence. Such a system will have an open-ended capacity for "learning to learn", the ultimate meta-skill. It would therefore likely keep improving, going far beyond human-level performance across all conceivable activities. It will be more valuable than anything we've ever known.
But to what end?
The prize for humanity is enormous. A world of rapid advances in living standards and science, and a time of new art forms, culture and growth. It's a truly inspiring mission, and one that has motivated me for decades. We should celebrate and accelerate technology because it's been the greatest engine of human progress in history. That's why we need much, much more of it.
In the last 250 years, our intelligence drove a beautiful process of scientific discovery and entrepreneurial application that more than doubled life expectancy, from 30 to 75. It's our intelligence, and the technologies we've invented, that have delivered food, light, shelter, healthcare, entertainment and knowledge to a population that grew from 1b to 8b people in that period.
It's technology that enables us to fly around the globe, treat an infection with antibiotics, stare into the furthest reaches of outer space, and, yes, share a cat meme with millions of people we've never met. Walk into any modern supermarket, hospital, school or office and what you're seeing is a marvel of human ingenuity. AI is the next phase in this journey. This is what Satya means when he talks about increasing global GDP growth to 10%; a transformative boost. As a platform of platforms, this is core to Microsoft's mission of enabling others to create and invent at global scale.
When you hear about AI, then, this is what it's worth keeping in mind. This is about making us collectively the best version of ourselves. AI is the path to better healthcare for everyone. AI is how our society levels up, escapes an increasingly zero-sum world. It's how we grow the economy to increase wealth broadly, and enable a higher standard of living across society. Or let me put it another way: take AI out of the picture and the gains over the next decades look much harder to come by. It's the next step on the long road of human creativity and invention, pushing the boundaries of what we can make, think and do. It's how we discover new kinds of energy generation, new modes of entertainment.
AI - HSI - is how we rebuild.
Containment is necessary
At the same time we have to ask ourselves, how are we going to contain (secure and control), let alone align (make it "care" enough about humans not to harm us) a system that is – by design – intended to keep getting smarter than us? We simply don't know what might emerge from autonomous, constantly evolving and improving systems that know every aspect of our science and society.
And since this kind of superintelligence can continuously improve itself, we'll need to contain and align it not just once, but constantly, in perpetuity.
And it gets more complicated. It's not just the "we" in today's frontier AI research labs that has to do it. All of humanity needs to do it, together, all the time. Every commercial lab, every start-up, every government needs to be constantly alert and engaged in a project of alignment and containment – and that's before we even deal with the bad actors and the crazy garage tinkerers.
No AI developer, no safety researcher, no policy expert, no person I've encountered has a reassuring answer to the question: how do we guarantee it's safe? If you think that's overly dramatic, I'd love to hear your rebuttal. Perhaps I'm missing something.
Creating superintelligence is one thing; but creating provable, robust containment and alignment alongside it is the urgent challenge facing humanity in the 21st century. And until we have that answer, we need to understand all the avenues facing us – both towards and away from superintelligence, or perhaps to an altogether alternative form of it.
The purpose of technology
Technology's purpose is to help advance human civilization. It should help everyone live happier, healthier lives. It should help us invent a future where humanity and our environment truly prosper.
I think Albert Einstein put it best when he said: "The concern for man and his destiny must always be the chief interest of all technical effort... in order that the creations of our mind shall be a blessing and not a curse to mankind."
Any technology that doesn't achieve this is a failure. And we should reject it.
That remains the test of the coming wave of superintelligence and it's the question we must ask over and over: how do we know, for sure, that this technology will do much more good than harm? As we get closer to superintelligence in the coming years, how certain are we that we won't lose control? And who makes that assessment? And most importantly, amid the uncertainty of that question, what kind of superintelligence should we build, with what limitations and guardrails?
These questions are central to everything we do at the MAI Superintelligence Team and guide us day to day as we make decisions. The core, long term interests of human beings should be clearly prioritized over any research and development agenda.
Towards humanist superintelligence
I think we technologists need to do a better job of imagining a future that most people in the world actually want to live in.
Humanist superintelligence (HSI) offers an alternative vision, anchored in both a non-negotiable human-centrism and a commitment to accelerating technological innovation... but in that order. The order is key. It means proactively avoiding harm first, and then accelerating.
Instead of being designed to beat all humans at all tasks and dominate everything, HSI is rooted from the start in specific societal challenges, where progress directly improves human well-being. Our recent paper on expert AI medical diagnosis is a good directional example of this (more on this below).
That work clearly shows signs of progress towards a medical superintelligence, and when it makes its way into production it will be truly transformational. And yet, because it's envisaged as a more focused series of domain-specific superintelligences, it poses less severe alignment and containment challenges.
Quite simply, HSI is built to get all the goodness of science and invention without the "uncontrollable risks" part. It is, we hope, a common-sense approach to the field.
It may seem absurd to have to declare it, but HSI is a vision to ensure humanity remains at the top of the food chain. It's a vision of AI that's always on humanity's side. That always works for all of us. That helps support and grow human roles, not take them away; that makes us smarter, not the opposite as some increasingly fear. That always serves our interests and makes our planet healthier, wealthier and protects our fragile natural environment, regardless of the status of frontier safety and alignment research.
We owe it to the future to deliver a world palpably improved from the one we inherited. Sometimes it's easy to overlook the amazing things technology has already delivered. When you put on a jacket because the office AC is too cold, get frustrated by the lines at airport check-in during the holidays, or agonize over what to watch on your smart TV: that's the extraordinary privilege afforded to us by technology. Each moment would have bewildered our ancestors. And so would our grumbling. If we get this right, something similar is possible again.
Where Humanist Superintelligence will count
Here are three application domains that inspire us at Microsoft AI. There are, however, many more, and I'll be outlining them in future.
An AI companion for everyone – Everyone who wants one will have a perfect and cheap AI companion helping them learn, act, be productive and feel supported. Many of us feel ground down by the everyday mental load; overwhelmed and distracted; rattled by a persistent drumbeat of information and pressures that never seems to stop. If we get it right, an AI companion will help shoulder that load, get things done, and be a personal and creative sounding board. AI Companions will be personalized, adapting to the contours of our lives but not afraid to push back in our best interests, built always to support, rather than replace, human connection, and designed with trust and responsibility at their heart.
AI Companions will also have a profound impact on how we learn. They'll work with the strengths and weaknesses of every student, alongside teachers, to ensure they can achieve their full potential and encourage their intellectual curiosity. That means tailored learning methods, adaptive curricula, completely customized exercises. "One size fits all" education will seem as bizarre to the next generation as rote learning Latin does to us.
Medical Superintelligence – We will see the arrival of medical superintelligence in the next few years. This is the kind of domain-specific humanist superintelligence we need more than anything. We'll have expert-level performance across the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings. For as long as I've been working in AI, solving this challenge has been my passion. It will mean world-class clinical knowledge, intervention and treatment are available everywhere.
As I mentioned above, our recent work demonstrates the value of this narrower, domain-specific form of superintelligence. The New England Journal of Medicine includes a Case Challenge in every issue – a set of symptoms and a patient to diagnose. It's fiendishly difficult, with pass rates in the low single digits even for domain experts, let alone the average doctor. Our orchestrator, MAI-DxO, reached 85% across the Case Challenges; human doctors max out at about 20%, while ordering many more expensive tests. In our view, clinicians and patients alike would welcome the extra support. This work only hints at the potential to revolutionize healthcare.
Plentiful clean energy – Energy drives the cost of everything. We need more of it, more cheaply and more cleanly. Electricity consumption is estimated to rise 34% through 2050, driven in no small part by rising datacentre demand. I predict we will have cheap and abundant renewable generation and storage before 2040, and AI will play a big part in delivering it. It will help create and manage the workflows for designing and deploying scientific breakthroughs. These advances will help produce everything from carbon-negative materials to far cheaper and lighter batteries, and far more efficient utilization of existing resources like grid infrastructure, water systems, manufacturing processes and supply chains. It will suggest and help implement viable carbon-removal strategies at meaningful scale. And AI will also help push the breakthroughs that finally crack fusion power.
These breakthroughs alongside many others are coming with HSI, and they'll profoundly improve our civilization. They will make a transformative difference to billions of people. This next decade may well be the most productive in history. And yet, the risks are growing faster than ever before.
A safer superintelligence
Alongside spelling out very precisely the kind of superintelligence we should build, the time has come to also consider what societal boundaries, norms and laws we want around this process. At MAI this is a discussion, and a set of actions, that we welcome.
Doing this requires real trade-offs and tough decisions, made in an environment of immense competitive pressure and also opportunity. There are numerous challenges and obstacles to delivering the vision, from recruitment, security and mindset to the structure of the market and the calibration of research paths that steer between harnessing the upside and avoiding the downsides. There is at present a collective action problem: less safe models of superintelligence may be able to develop faster and operate more freely.
Overcoming this, as with all such problems, is an immense challenge that will require meaningful coordination across companies, governments and beyond. But it starts, I believe, with a willingness to be open about our vision, and open to conversations with others in the field, with regulators and with the public. That's why I'm publishing this – to start a process, and to make clear that we are not building a superintelligence at any cost, with no limits. There's a lot more to say (and of course do) on all of this, and over the next months and years you can expect more from me and MAI candidly explaining and exploring our work in this area.
Humans matter more than AI
Ultimately what HSI requires is an industry shift in approach. Are those building AI optimizing for AI or for humanity, and who gets to judge? At Microsoft AI, we believe humans matter more than AI. We want to build AI that deeply reflects our wider mission to empower every person on the planet.
Humanist superintelligence keeps us humans at the centre of the picture. It's AI that's on humanity's team: a subordinate, controllable AI, one that won't, that can't, open Pandora's box. Contained, value aligned, safe – these are basics but not enough. HSI keeps humanity in the driving seat, always. Optimized for specific domains, with real restrictions on autonomy, my hope is that this can avoid some of the risks and leave precious space for human flourishing, for us to keep improving, engaging and trying, as we always have.
Unlocking the true benefits of the most advanced forms of AI is not something we can do alone. Accountability and oversight are to be welcomed when the stakes are this high. Superintelligence could be the best invention ever – but only if it puts the interests of humans above everything else. Only if it's in service to humanity.
This – humanist, applied – is the superintelligence I believe the world wants. It's the superintelligence I want to build. And it's the superintelligence we're going to build on MAI's Superintelligence Team.