
David Autor

44th Parl. 1st Sess.
November 20, 2023
12:08:22 p.m.
Thank you for having me. My name is David Autor, and I am the Ford Professor of Economics in the MIT Department of Economics and co-director of the MIT Shaping the Future of Work initiative. I am honoured to speak with you today about my research on artificial intelligence and the future of work, and I apologize for my cold.

AI presents obvious threats to workers and the labour force. While machines of the past could automate only routine tasks with clear rules, AI can quickly adapt to problems that require creativity and judgment. It seems reasonable to worry that AI will suddenly make huge swaths of human work redundant. I believe these concerns are somewhat misplaced, however. Strong demand for labour has persisted through past periods of technical change, like the industrial and computing revolutions, and all signs point to growing labour scarcity, not the opposite, in most industrialized countries, including Canada.

Instead, the important question to ask is how AI will affect the value of human expertise, by which I mean the skills and judgment in specific domains like medicine, teaching and software development, or modern crafts such as electrical work or plumbing. Will new technologies augment the value of human expertise, or will they make human judgment worthless?

In industrialized economies, expertise is the primary source of labour's market value. Consider the jobs of air traffic controllers in comparison with crossing guards, both of whom protect lives by preventing vehicle collisions. Air traffic controllers in the U.S. are paid four times more than crossing guards. Why? It's because they have scarce expertise, painstakingly acquired and necessary for their important work. The value of that expertise is augmented by tools: without GPS, radar and two-way radio, an air traffic controller is basically a person in a field staring at the sky. Crossing guards provide a similarly valuable social service, but most able-bodied adults can serve as crossing guards without formal training and without any expertise, and this virtually guarantees low wages.

While technology makes air traffic controllers' expertise valuable, it can also make human expertise redundant. London cab drivers used to train for years, memorizing all the streets of London. GPS made this expertise economically irrelevant. It's no longer necessary.

You might ask, why isn't all expertise eventually made superfluous by automation? The answer is that human expertise remains relevant because its domain expands with social needs. Jobs like software developer, laparoscopic surgeon and hospice care worker emerged only when technological or social innovations made them necessary. In fact, my co-authors and I estimate that around 60% of the jobs people do in the U.S. today didn't exist in 1940. Technology and other social forces can just as readily create opportunities for high-quality work as automate it away.

I believe that AI can create novel opportunities for non-college workers, meaning workers with low and middle levels of education. With the support of AI tools, these workers could perform tasks that had previously required more costly training and highly specific knowledge. For example, medical professionals with less training than doctors could tackle more complicated tasks with the assistance of AI.
In the U.S., in part due to technological innovations such as software that prevents the dispensing of harmful drug interactions, nurse practitioners have proven effective at tasks formerly reserved for doctors with five more years of medical education. AI could push this further, helping workers with less training deliver high-quality care. This is not to say that AI makes expertise irrelevant. Just the opposite: AI can enable valuable expertise to go further. AI tools enable less experienced programmers to write better code faster. They help awkward writers produce more fluid prose.

This positive future of which I'm speaking is not guaranteed. We must make collective decisions to build it. For example, China has made substantial investments in AI technology, in part to create the most effective surveillance and censorship systems in human history. That outcome is not preordained by AI, although it depends on AI; it is the result of a particular vision of how to use this new tool. Similarly, it is far from inevitable that AI will automate all of our jobs. That is a vision that many AI pioneers are pursuing, and I think it would be a mistake. To shape this protean technology to constructive ends, political leaders must work with industry, NGOs, labour and universities to build a future in which machines work in service of minds.

Let me turn to what government can do. I don't claim to have complete answers here, but let me say a couple of things. First, governments should seed and fund human-complementary AI research. The current path of private sector development has a bias towards automation. Government can correct this by supporting the development of worker-augmenting AI in industries like health care, education and skilled crafts work. Second, I would prioritize protections for workers. Using AI for undue surveillance, for high-stakes decisions like hiring and firing, and to appropriate workers' creative works without compensation should be disallowed. Empowering workers to bargain collectively and including them in rule-making is a critical step. I'm also concerned about AI safety, and I think governments are comparatively well equipped to regulate safety.

Let me end by saying that rather than asking, “What will AI do to us?”, we should ask, “What do we want AI to do for us?” Answering that question thoughtfully and acting decisively will help us build a future that we all will want to inhabit and that we will want our children to inherit. Thank you very much. I welcome your questions.
12:33:36 p.m.
I think it's a very big issue. Depending on the regulatory regime, there is no guarantee of privacy in what can and cannot be tracked. Our phones are full-time surveillance devices that not only know all the things we do but report that information to third parties for money, and it is then resold. Privacy will be compromised unless regulation prevents it and unless people have ownership of the right to privacy. I think it's a very serious concern.

If I may, Mr. Chair, I'll respond very quickly to something that Ms. Janssen just said about AI and jobs. We should not take it as a historical fact that technology has always improved jobs. The Luddites were absolutely correct that power mills wiped out their employment. Not only that, but wages didn't rise for six decades, growth was stunted and starvation increased. I'm not saying that these advances weren't ultimately beneficial, but technological changes are never uniformly an improvement for all jobs or all people. There are almost always losers, people whose expertise is devalued, and when we make these big transitions, we should be prepared to help people adjust to them. This will not be costless—
12:39:29 p.m.
It's difficult, because this is moving so fast, as Ms. Janssen and others noted. It would be helpful, I think, to involve the private sector in order to get a picture of what is happening. Even in the U.S.—which is not the world's leader in information collection, by any stretch of the imagination—we now run large surveys on who is using AI and what they are doing with it. Even so, we don't have a good sense. One thing is to understand what tasks AI is being applied to, in what sectors and for what activities. Another is to look at how jobs are changing: which occupations are growing or shrinking and what wages are being paid. Ideally, we would also hear from workers about how their work is changing. Coming at it from both the workers' and the firms' perspectives would be complementary.
12:55:04 p.m.
Thank you, Mr. Chair. I do support the idea of a federal advisory council, as everyone here today has testified. This technology is moving very fast, and it poses new opportunities and new challenges. Bringing in top expertise in an advisory role is an excellent idea.

Of the three topics I would most want such a council to address, the first is how to use the technology to augment labour rather than automate it. I don't think we should take it as a given that augmentation necessarily occurs. Countries steer technologies. Nuclear energy is used by North Korea solely for offensive weapons; it is used by Japan solely for energy generation, and Japan has no offensive nuclear weapons. That is a choice made by a country; it is not a characteristic of the technology. How to use AI well to augment workers is the first thing.

The second is protection for workers. As I noted, undue surveillance, high-stakes decision-making by opaque algorithms and AI's appropriation of workers' creative work without compensation should be regulated. We have fair use when it comes to intellectual property, but those laws were not written with AI in mind.

The final topic is visibility into these technologies. They are opaque. They are making high-stakes decisions, and often the creators of these technologies will not even disclose what sources of data were used for training. I don't think that's acceptable. There is a public interest in making sure that machines that make important decisions—and valuable ones; I use and support AI—are understandable to regulators and to consumers.