
James Bessen

44th Parl. 1st Sess.
November 20, 2023
11:04:50 a.m.
Sure. I'll say just a few words. AI has gotten an awful lot of media hype, and I think that makes it very confusing to understand what its impact will be. I tend to view it as much more continuous with the kinds of changes that information technology has been bringing about for the last 70 years, particularly regarding the role of automation. There are tremendous and exciting things that AI can do. Some of them are very impressive. Many of them, unfortunately, are still very far removed from the point at which they can replace labour. In fact, what tends to happen—and this has been true throughout the period—is that automation mainly applies to specific tasks of a job rather than to the entire job, and a lot of people misunderstand that. There are very few jobs that have been completely automated by technology. I looked at the U.S. census and identified occupations that had been completely automated by technology. I found only one, which was elevator operator. Other jobs were lost and other occupations disappeared because technology became obsolete or tastes changed, so we no longer have telegraph operators and we no longer have housekeepers of boarding houses. That's been over a period in which technology has had a tremendous impact on automating tasks and affecting labour and productivity.

What that means, basically, is that there's been a lot of fearmongering about AI causing massive unemployment. We've been using AI since the 1980s, and we're not seeing massive unemployment. I don't think we're going to see massive unemployment any time in the next couple of decades, but we are going to see many specific jobs being challenged or disappearing, and new jobs being created. The real challenge of AI for the labour force is not that it will create mass unemployment but that it will require people to change jobs, to acquire new skills, to perhaps change locations or to learn new occupations. These transitions are costly, can become burdensome, and are a major concern.

There's a second thing I'll point out, but I don't want to be long here. Another major impact—and this has been true of information technology for the last two decades—is that AI has done a lot to increase the dominance of large firms. We see that large firms are acquiring a larger share of their markets. They're much less likely to be disrupted by innovators in the traditional Schumpeterian fashion, where the start-up comes along with the bright new idea and replaces the incumbent. That's happening less frequently.

That's important for a number of reasons, but it also affects the labour force in a couple of ways. One is that large firms tend to pay more, in part because they have advanced technology, and this tends to increase wage inequality. Information technology has been widening differences in pay, even for the same occupations: the same job description will pay much more at a large firm. The second thing is that, partly because of that, there's a significant talent war. These new technologies require specific skills, and not just STEM skills: people with all sorts of skills who have experience adapting the way they work to the technology are in great demand, and large firms have the upper hand in the talent wars. They'll pay more; therefore, they can recruit more readily.
There's nothing wrong with their paying more—we want labour to earn more—but at the same time, it means that smaller firms, particularly innovative start-ups, are having a harder time growing. We see that the growth of start-ups declines in areas where large-firm hiring is predominant. That becomes an indirect concern for labour. I will just wrap it up with that. Thank you.
11:22:35 a.m.
Yes, definitely. We surveyed AI start-ups about the kinds of ethical issues they were attempting to control, and they saw a very definite need. We were surprised, actually. We thought that ethics would be the last thing on their radar, but in fact the majority were implementing things and taking actions that had some teeth in them. In some cases, they let people go. There were concerns about bias that might arise in training. So yes, ethics has been important. I think it's going to become more important as these systems develop and we understand more about what they can do and what their effects will be.
11:23:35 a.m.
I'm sorry. I'm not a Canadian, so I'm not that familiar with Canada's privacy laws.
11:23:53 a.m.
Oh, absolutely. There are a bunch of things. First off, there's a huge issue in terms of intellectual property, copyright in particular. These large language models like ChatGPT are trained on a great deal of data that is out there on the Internet, much of which is under copyright protection. That can result in cases, and some of these have been very clearly demonstrated, in which they more or less reproduce copyrighted material without permission.

Antitrust is also an issue. I referred earlier to the effect of information technology generally increasing the dominance of large firms. I believe AI is going to accelerate that tendency. It's not directly an immediate problem for antitrust law, but it means that antitrust law is going to become that much more important as the dominance of these firms grows. I will also—
11:25:18 a.m.
In terms of intellectual property, I think there are some strong recommendations about copyright that need to be put into effect, and that's going to be a big problem to work out. Privacy law is much more difficult; it concerns the extent to which privacy-protected information is being made available to AI systems that may reuse it in a different way. This is the problem the other speaker referred to. It is—
11:44:42 a.m.
I'm not sure we've learned much about AI specifically. There have been a number of studies on using AI to assist writers. There's some evidence that it helps less-skilled writers do a better job. I don't think AI is anywhere near the point where it can really replace writers. That was being talked about, but I don't see any evidence that it's about to happen or can happen. My own experience—and the experience of a number of other people who have tried to do writing with ChatGPT or similar tools—is that there are some huge limitations on using this technology at this point.
11:46:40 a.m.
That's certainly an interesting idea and one I hadn't thought about before. I immediately see that it runs into a problem, which is that all of the regulations and requirements that go into approvals are not something an AI system can simply ignore. AI may be helpful. You can see ways in which the various regulators might use AI to analyze the various reports and speed up that process, but they'd have to be willing to do so. You might also see ways in which AI could help compile all the various approvals. There are possibilities for it to work, but I think it's a difficult problem, because there's a big interaction between the regulations and laws and the technology. You can very easily see a situation in which AI would be used and then there would be a lawsuit because somebody didn't like the outcome.
11:48:23 a.m.
Yes. AI can make recommendations about what to trim, but it can't do the trimming itself. Obviously, trimming the process requires legal and regulatory approval. It's a good idea.