AI in 10 minutes | Susan Athey |

The term artificial intelligence is used very broadly. It can encompass machine learning, which is basically a type of statistics where you use the data to select the model. It can also encompass much more complex algorithms, like the algorithms you would use to play a board game like chess, to have a robot climb a wall, or to build decision tools that help firms make decisions. Right now most AI applications are actually very simple, things like recognizing digits and digitizing paper documents. As you get more sophisticated, you can use AI in areas like anti-money laundering or fraud detection. And as you get more and more sophisticated, you can have chatbots that interact with people, helping them get advice and make investment decisions.

When I’m thinking about how governments should approach policies for AI, it’s important to think about what direction we want this innovation to go in. Firms have incentives to reduce costs and to improve their profits, and that has actually led to a lot of amazing innovation. But these innovations are not just applicable in the for-profit sector; they can also help the government be more efficient, and they can help the social impact sector. We may need a bit of a nudge, and some leadership, to get the innovation applied in those contexts and applied well, and also to do the kinds of basic research that are most beneficial for social impact. For example, a manufacturing firm might think more about replacing humans to make the assembly line more efficient, while in the social impact sector you might focus more on augmenting humans and making them more efficient. If, in the R&D policy of our universities and our research policy more broadly, we do some of the basic research that helps us learn how to use AI to augment humans and make them more productive, then when that research is done it becomes more appealing for the private sector to apply its R&D in that way too, and firms may find that augmenting humans is indeed more profitable in the for-profit sector as well. Of course the for-profit sector will already do some of that, and indeed it already is, but I think we can put a thumb on the scale in terms of publicly funded R&D to push innovation more in that direction.

Another big policy issue for AI is how we make sure it is used safely, that it is used in a way that doesn’t have unintended consequences, and that it is adopted where it can be efficient. Taking that last point: when I think about impediments to adoption in financial services, for example, there are applications where regulators have traditionally monitored the process that a bank uses, and as long as you follow the process, you don’t get in trouble; banks are very worried about getting in trouble, although it’s not always apparent to the consumer. So if you don’t have a regulatory process that really recognizes the realities of machine learning and AI, firms won’t adopt it in places where it could improve efficiency. One of the key things about using machine learning to accomplish a task is that it has a demonstrable error rate. Even if you follow the process, you will make mistakes, and you know that you make mistakes. In fact, you can’t build an algorithm without measuring your mistakes. And that’s actually quite a profound problem for a regulated financial institution.
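To make the “demonstrable error rate” and the cost-benefit point that follows concrete, here is a minimal sketch in Python. It assumes scikit-learn is available and uses synthetic data with made-up costs, loosely evoking a fraud-detection setting; none of it reflects any institution’s actual practice.

```python
# Minimal sketch: a trained classifier comes with a measurable, documentable
# error rate, and a cost-benefit analysis can set its decision threshold.
# Synthetic data and cost figures are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced toy data, loosely evoking rare-event detection such as fraud.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The held-out error rate is an auditable property of the system.
print(f"held-out error rate: {1 - model.score(X_test, y_test):.3f}")

# Cost-benefit analysis: choose the threshold that minimizes expected cost,
# given hypothetical costs for a missed case versus a false alarm.
COST_MISS, COST_FALSE_ALARM = 100.0, 1.0
probs = model.predict_proba(X_test)[:, 1]
thresholds = np.linspace(0.01, 0.99, 99)
costs = [COST_MISS * np.sum((probs < t) & (y_test == 1)) +
         COST_FALSE_ALARM * np.sum((probs >= t) & (y_test == 0))
         for t in thresholds]
print(f"cost-minimizing threshold: {thresholds[int(np.argmin(costs))]:.2f}")
```

The point of the exercise is the two printed lines: the error rate is explicit and measured, and accepting it at a particular threshold is a documented cost-benefit decision rather than a process violation.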
For a regulated financial institution, it’s very hard to come to grips with having it written down, well documented, that “I am willing to accept an error rate.” It requires thinking about cost-benefit analysis. Now, cost-benefit analysis is of course very natural to economists, but it’s not always natural to the legal system, so we may actually need to change our regulation to be more explicit about the need for it. We may also need to build expertise among regulators, because it turns out that even academics at the frontier of research have not actually written down all of the best practices for applying AI in the wild. From discrimination issues and fairness issues to other types of risk, we haven’t actually fleshed out what the best practices are. That’s a topic for applied research, and one that differs sector by sector. My view is that we will need specialized research for regulating AI in the financial sector, and that will look a bit different from regulating the use of AI in social media, which in turn will look a bit different from regulating it in manufacturing. Each of these has some common themes but also really specific problems that require domain knowledge.

In the cases where decision makers and governments do actually open themselves up, there are many opportunities for government to improve its efficiency. We have seen applications all over the world with simple digital technology. For example, just tracking: do doctors go to the clinics where they are employed to provide healthcare? Do teachers go to the schools where they are employed to work? It can be difficult to monitor like that at scale, but once you have digital technology in cell phones, it actually becomes very inexpensive. Similarly, it can become easy to monitor at scale the condition of roads, traffic conditions, the efficiency of transportation, and the efficiency of inspectors: do government inspectors actually go to inspect the buildings they are supposed to inspect? Do child welfare workers actually visit the families they are supposed to protect? And we can use digital technology to capture images and videos, which both document that the worker did their job and allow a second look, an audit of the quality of the work, to make sure that conditions really do match the descriptions and the decisions that were made.

We can also really reduce the cost of information gathering and provide decision support. You might have a social worker who has to make a decision about whether to further investigate a family after a claim of child abuse, and they might only have ten minutes on a particular case before they move on to the next one. In that time period they don’t have the ability to process and find all the relevant information, but artificial intelligence can gather that information and make sure that highly risky cases get flagged for extra time, prioritizing the use of resources and allowing for better services (a minimal sketch of this kind of triage follows below). These are all examples where we can get more bang for our buck in government: we can make sure that the dollars we spend are effective. Generally the workers themselves are happy to have that monitoring in those settings as well, because a worker also doesn’t want to be responsible for causing an accident; they would like feedback on their performance. I did some research looking at the monitoring of Uber drivers using a mobile phone application, and there I found that Uber drivers drove more safely than taxi drivers.
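As promised above, here is a minimal sketch of risk-based triage for a caseworker’s queue. The risk model, features, and capacity limit are all hypothetical placeholders standing in for whatever a real agency’s trained model would provide.

```python
# Minimal sketch of risk-based triage: score incoming cases and flag the
# highest-risk ones for extra review time. Every name and number here is a
# hypothetical placeholder, not any agency's actual system.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    prior_reports: int     # number of previous claims on file (toy feature)
    days_since_last: int   # days since the most recent report (toy feature)

def risk_score(case: Case) -> float:
    """Toy stand-in for a trained model's predicted risk, on a 0-1 scale."""
    history = min(1.0, case.prior_reports / 5)
    recency = max(0.0, 1.0 - case.days_since_last / 365)
    return 0.6 * history + 0.4 * recency

def triage(cases: list[Case], capacity: int) -> list[Case]:
    """Return the highest-risk cases, up to the reviewers' capacity."""
    return sorted(cases, key=risk_score, reverse=True)[:capacity]

queue = [
    Case("A-101", prior_reports=0, days_since_last=300),
    Case("A-102", prior_reports=4, days_since_last=12),
    Case("A-103", prior_reports=1, days_since_last=45),
]
for c in triage(queue, capacity=2):
    print(c.case_id, f"risk={risk_score(c):.2f}")
```

The design choice worth noting is that the model only reorders the queue; the human still reviews every flagged case, which is the augment-rather-than-replace pattern discussed later in this piece.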
And in that same research, I also found that when you provided Uber drivers with information about their driving, they improved their driving even when it didn’t affect their wages, without any explicit incentive. That is consistent with the idea that people generally want to do a good job; most people’s motivation is not to do a bad job.

The Stanford Institute for Human-Centered Artificial Intelligence is an exciting new venture we officially launched last March with a big opening conference. We are an interdisciplinary institute that is really designed to make sure that innovation in artificial intelligence is beneficial to humans. There are three big pillars that we work on. The first is trying to make artificial intelligence more human-like, to really make it more intelligent. Right now most applications of artificial intelligence are, paradoxically, not very intelligent. They study a static environment and classify images, say into cats and dogs, but they don’t think, they don’t do a good job of understanding cause and effect, and they can’t reason about scenarios they haven’t seen before. There’s a variety of approaches we’re taking on that dimension. One large group is thinking about neuroscience-based artificial intelligence. My team is looking at causal inference, trying to bring in some of the knowledge from statistics and the social sciences about how to use data to reason about choices between alternatives, and especially to bring in domain knowledge and modelling assumptions that allow us to think about scenarios we haven’t seen before. In economics we’ve done things like model the effects of a potential merger that hasn’t happened yet; to do that, we have to make assumptions about both consumer behavior and firm behavior and incorporate those into a statistical model. So my belief is that, for certain applications, making artificial intelligence more intelligent requires techniques like that, which don’t just treat everything like a black box but actually use information about the setting.

The second area we’re focusing on at the Institute for Human-Centered Artificial Intelligence is the desire to make artificial intelligence augment humans rather than just replace them, and we have to do a lot more research in a variety of areas to make that a reality. In practice, for a variety of reasons, we probably will not be able to just replace a human with a robotic decision-maker: there are too many special circumstances, and too many scenarios that are not covered in the training data, for the machine to really know what to do. But the human brain is an amazing thing. If you can bring in a lot of background information, if you can synthesize large amounts of information that might be difficult for humans to process and summarize it in a way that is meaningful and that the human understands, you can make their decisions much better and much faster. In those situations we need to make the artificial intelligence understandable by the humans. We need to express to the human decision-maker where its limitations are. We need to be able to tell the human decision-maker: in this scenario we actually don’t have a lot of relevant training data, and we don’t know what decision you should make; while in this other scenario we’re making a recommendation that might seem counterintuitive to you, but you should trust us, because there is really solid evidence that it’s the right thing to do.
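One simple way to express “we don’t have a lot of relevant training data here” is to check how far a new case sits from anything the model was trained on, and abstain when support is thin. Here is a minimal sketch of that idea using a nearest-neighbor distance as the coverage check; the distance threshold is an illustrative assumption, not a standard value.

```python
# Minimal sketch: a decision-support tool that abstains when a new case falls
# outside the support of its training data, so the human decision-maker knows
# a recommendation would be unsupported. Toy data; the threshold is assumed.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))          # toy training cases
y_train = (X_train[:, 0] > 0).astype(int)    # toy recorded decisions

def recommend(x: np.ndarray, max_dist: float = 1.5) -> str:
    """Recommend the nearest training case's decision, or abstain."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > max_dist:
        return "no recommendation: little relevant training data for this case"
    return (f"recommend option {y_train[nearest]} "
            f"(nearest case at distance {dists[nearest]:.2f})")

print(recommend(np.zeros(4)))       # a case well inside the training data
print(recommend(np.full(4, 10.0)))  # a case far outside the training data
```

A real system would use calibrated uncertainty rather than a raw distance, but the interface is the point: the tool tells the human when it does, and does not, know.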
I also want to inspire people about the importance of business people and researchers becoming engineers, because in the future a lot of services that today are delivered in a very analogue way will be delivered digitally. The delivery of services from education to health to the whole array of government services will have a big digital component. The fact that they’re digital means that we can optimize them, and that means it’s possible to make them better. We should all get out of the mode of thinking that we’re going to study things from fifty thousand feet and hope that in ten years somebody listens to us. We need to move into the mode where, if we see a problem, we can fix it, and at a relatively low cost. If it’s a digital service built in software, you can make it better, and even small groups of people with relatively small amounts of resources can deliver quite effective services to large groups of people. That means we shouldn’t sit around whining about the problems that face the world. We should get out of our chairs and go fix them.
