When AGI isn't enough: taxonomizing problems for AI
A lens on different kinds of problems, in light of progress in AI
It is very hard to reason about what will happen in a world with ubiquitous intelligence.
That said, the world has (so far) evolved bound by human incentives; rather than focusing on the technical specifics of what might or might not happen, a useful lens is categorizing types of problems by whether or not adding more intelligence helps solve them.
My hypothesis is that while AI will increase the number of beings with intelligence and the amount of intelligence in the world, its progress will remain constrained by human incentive structures (for now). The same incentives that encourage banks to invest, engineers to build, and architects to design would still apply when it’s not a human working on the problem, because someone has to direct an AI tool to work on it, and that costs both time and money.
While existing structures have driven significant progress, they have also given rise to perverse incentives: for example, doctors prescribe more of a drug when they receive money from a pharmaceutical company tied to it. The key question I want to explore is how these incentives may diverge from the rest of society’s interests, in light of AI.
Viewed from this perspective, the core issues introduced by AI are less about an epic battle between humans and AI, and more about an exacerbation of imbalances in the current misaligned human markets and systems. Understanding how existing problems are tied to the human systems enacting them can help anticipate the coming effects of AI.
Taxonomy of different kinds of problems
It’s possible to think through how AI might change a particular problem by simulating what adding more humans to the problem would do. Adding more humans doesn’t always help — no matter how many of my friends I get together, I won’t be able to beat a chess grandmaster, and adding more people won’t split 4 spring rolls amongst 5 people (without access to a knife)1.
There are five categories I've identified2:
Problems that are solved with more entities that are intelligent
Problems that are solved with more intelligence
Problems that are solved by specifying the problem
Problems solved by putting things online
Problems that remain difficult, despite more intelligence
I. Problems that are solved with more entities that are intelligent
Description of problems: Some problems are directly addressed by increasing the automation we already have access to today. Historically, this has centered on bureaucratic tasks that are easily specified but time-consuming to do. Examples include some legal contract drafting, certain forms of market analysis, and customer support centers.
These kinds of problems are characterized by a person completing a known task that has a clear metric of success.
My general prediction is that these kinds of problems will be dramatically transformed by LLMs and similar technologies, given how these fields are progressing today. Work in these fields will become far more productive in terms of raw output of goods and services. It’s worth noting that some of these sectors may be exposed to labor replacement for the first time.
II. Problems that are solved with more intelligence
Description of problems: Some problems are more difficult than we currently know how to solve or automate. Solving them requires greater intelligence, rather than more entities with intelligence.
In human terms, there are tasks that are hard for middle school students no matter how many middle school students you get in a room. It seems entirely possible that solving the Riemann Hypothesis requires someone more intelligent than the people currently working on it.3
These kinds of problems are characterized by a person completing an unknown task that has a clear metric of success. This can arise either because coordinating achievable tasks is hard, even when the individual tasks are easy, or because the task itself is very hard.
My general prediction is that progress on these kinds of problems will depend most heavily on the specific path that research takes and the technical specifics that emerge. These are also the problems least illuminated by thinking about incentive structures.
III. Problems solved by specifying the problem (coordination and planning)
Description of problems: Some problems are more difficult than we currently know how to solve or automate because coordinating achievable tasks is hard, even when the individual tasks are easy. For example, even if AI could produce perfect code to run experiments, producing a meaningful research project still requires someone to devise, schedule, and iterate on the set of experiments. As another example, a party planner has the general goal of planning a party; each step may be fairly straightforward and automatable, but coordinating the steps is a separate task that, so far, we have found more difficult.
These kinds of problems are characterized by a person completing many known tasks that each have their own clear measure of success, as well as a clear measure of success for the final goal.
My general prediction is that these kinds of problems will become increasingly tractable for LLMs and similar technologies, especially as progress on the individual tasks (of category I) continues to improve. At the moment, it seems somewhat likely that additional progress on reasoning and reinforcement learning will be able to handle these problems directly.
IV. Problems solved by putting things online
Description of problems: Some problems would be simple for computers were it not for physical constraints (e.g. it is difficult for a laptop to read a physical piece of paper). These kinds of problems would be solved by the “simple” action of putting things online and producing more physical-to-internet interfaces. Going back to the party-planner example from the previous section: that is not only a coordination problem; many of its tasks involve physical actions that are currently difficult to automate (e.g. moving a table, showing someone a venue, reading flyers that are formatted differently).
These kinds of problems are characterized by being related to both physical items and interpersonal relationships4.
My general prediction is that progress here will continue rapidly, as there are clear market incentives for it. We are already seeing tremendous gains in making the world more legible to AI models, powered by recent progress in AI on simple tasks like reading PDFs, as well as the explosion of work on applying AI to robotics. However, there will likely be a long tail of difficult tasks here.
Further, progress on many of these kinds of tasks requires more research on UI/UX (and not solely AI), as there will be a huge friction point where humans interface with the technology. For example, having your AI call a restaurant to schedule a reservation requires the other party to play ball too.
V. Problems that remain difficult, despite more intelligence
Description of problems: Some problems do not fall into the previous categories, and there are good reasons to think they won’t. This final set of problems is particularly interesting because it is often overlooked, and because, in my view, it is actually possible to make real predictions about what happens in these settings by looking at the incentive structures present in the systems.
I’ll deviate slightly from the framework I used in the previous sections, because I’d like to drill into these problems more carefully.
Within this general category, I would break the problems down further into a few subcategories:
Problems that are intractable
Moral issues (e.g. picking between axioms)
Problems that don’t have “correct” solutions and have opposing parties
Problems we don’t want solved
Problems that appear simple but we don’t know how they work well enough
Going into each of these:
Problems that are intractable. For example, there may be parties (whom we value equally) with contradictory desires, e.g. trying to allocate 5 houses to 6 people, or two parties wanting the same land. Other examples include trying to achieve consistency, availability, and partition tolerance simultaneously, or the halting problem. These kinds of problems are often characterized by resource constraints.
Moral issues (picking between axioms). For example, there may be two methods for allocating resources that are both “reasonable,” and one must choose between them (e.g. some notions of fairness are categorically incompatible).
Problems that don’t have “correct” solutions and have opposing parties
Legal disputes: There are two parties, the defense and the prosecution; they cannot both win. Increasing intelligence or capability evenly on both sides does not change the balance. It is worth mentioning that tools to facilitate “automated mediation” could help by providing more resources to the judge, rather than to the parties5.
US healthcare system: The healthcare system is complex, and complexity grows when parties gain financially from adding it; there is no countervailing pressure toward simplicity. Adding either intelligence or intelligences doesn’t change this.
Labor issues: You have people in power attempting to use other people (with much less power) as cheaply as possible (e.g. business owners “vs” workers). When considering how power may shift, it’s valuable to look at which tools are being built around these dynamics. Notably, no one is building tools aimed at rebalancing the power and class imbalance present in most workplaces, because there isn’t money in it. There’s little reason to think this will change.
Problems we don’t want solved:
Educating our children with human values: While we may be interested in using agents to aid in education, I’m not under the impression we are interested in teaching our children alternative sets of values.
Liability: From a legal and moral perspective, humans (and lawyers) prefer it when there is a clear human responsible for an action, someone who can be held accountable and punished. Our incentive-alignment systems rely on liability, and we likely don’t want to remove it.
Human companionship: We don’t want AI nurses for our parents; we want them to feel heard and appreciated. At the moment, this can only be done by a human, and I suspect this won’t change for a very long time. This does not mean there isn’t great value in using tools like LLMs (or general automation) to help nurses care for patients. I draw a distinction between companionship and care: AI doctors, for example, would not fall under this category, and we primarily do want doctors to be automated.
As a final subproblem I also include:
Problems that appear simple but we don’t know how they work well enough. The main example I have here is poverty, or “why do food deserts appear?” Most people would clearly articulate that poverty is bad for individuals and bad for humanity, yet serious poverty persists despite people trying to solve it. This points strongly toward the conclusion that this is not a problem of intelligence (at the very least, it is a problem of political will); more likely, it is a problem of the goals of those in power not aligning with the goals of those without.6
There is a large set of (economic) problems that I believe we don’t understand well enough; the example I provide here is just one, and certainly not exhaustive.
These kinds of problems perhaps fall under category III (problems solved by specifying the problem), but I am not convinced that they have solutions that don’t contradict society’s other desires, so I leave them here.
My general prediction for problems that remain difficult (despite intelligence) is:
AI will not make progress on points 1 or 2 (as they are impossible).
People with more resources will start “winning more” of the problems in point 3 (problems that don’t have “correct” solutions and have opposing parties), in the same way that people with more money already win more legal battles, especially if they are able to use AI tools to “DoS” weaker parties.
Given resource constraints, more of the problems in point 4 (problems we don’t want solved) will be addressed the same way we address problems of type 3 (problems that don’t have “correct” solutions and have opposing parties).
I’m not certain how to think about problem 5.
In general, solutions to “problems that remain difficult” will likely evolve along existing incentives, trending either toward unchanged progress or toward consolidating power in the hands of those who already have power and money. The rate at which power imbalances are exacerbated will depend on how much each individual task can be broken down into problems of types I-IV.
What to do with this Taxonomy? Planning for different worlds
Okay, cool man! You just listed a bunch of problems — now what?
My ending thoughts are that our society does often function fairly well — there are many problems in the world, but a lot of what we see around us is an example of alignment gone right. That said, disparities and tensions between parts of society are certainly growing, and more than misalignment between humans and AI, we should be cautious of increased misalignment between people with power and people without.
I put forth this framework as a way to view what kinds of problems AI will help with, and what kinds may get worse. In an ideal world, it can help us reason about where to apply our efforts to prevent human incentives from harming humans. AGI won't arrive into a world lacking power structures and incentives, so we should expect the consequences of AGI to be shaped as much by the technology as by the existing relations that structure our society.
I’d be interested in exploring this more in future writings, but I am erring on the side of posting before things are “perfect” — reach out if you’re interested.
Thank you to Eric Lu and Ben Murphy for comments.
1. In some sense, it will actually make it harder.
2. It’s worth saying that these 5 groups do have blurry edges, and some problems may not fit neatly into a single category.
3. No offense if you’re currently working on the Riemann Hypothesis. You can do it. I believe in you.
4. Partially because many tasks that are not interpersonal have already been brought online.
5. It’s worth asking if warfare (cyber or physical) is one of these.
6. One could argue that we just aren’t interested in solving the problem, but enough people are interested in solving the problem(s) that it seems more complicated than that.