SoVote


House Committee

44th Parl. 1st Sess.
November 20, 2023
  • 11:02:08 a.m.
Members, the clerk has advised me that we have a quorum and that all witnesses have been sound-tested successfully. With that, I call to order meeting number 89 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities. Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force. Today's meeting is taking place in a hybrid format, meaning that members are attending in person in the room and virtually. You may participate in the official language of your choice; those appearing virtually can use the globe icon at the bottom of their Surface. If there is an interruption in the translation services, please get my attention by using the "raise hand" icon, and we'll suspend while it's being corrected. I will remind members in the room to keep their headsets away from the microphone to avoid causing audio feedback for the interpreters. I would also ask members to speak clearly and slowly for the benefit of the interpreters. We have two panels today. On the first panel we have, as an individual, appearing by video conference, James Bessen, professor and director of the technology and policy research initiative at Boston University; in person in the room, Angus Lockhart, senior policy analyst at the Dais at Toronto Metropolitan University; and, appearing by video conference, Olivier Carrière, executive assistant director to the Quebec director of Unifor. Welcome back, Mr. Carrière. I believe there were technical issues last time; thank you for coming again. We will begin with opening statements, starting with you, Mr. Bessen, if you are ready, for five minutes or less.
  • 11:04:50 a.m.
Sure. I'll say just a few words. AI has gotten an awful lot of media hype, and I think that makes it very hard to understand what its impact will be. I tend to view it as much more continuous with the kinds of changes that information technology has been bringing about for the last 70 years, particularly regarding the role of automation. There are tremendous and exciting things that AI can do. Some of them are very impressive. Many of them, unfortunately, are still very far removed from the point at which they can replace labour. In fact, what tends to happen—and this has been true throughout the period—is that automation mainly automates specific tasks of a job rather than the entire job, and a lot of people misunderstand that. There are very few jobs that have been completely automated by technology. I looked at the U.S. census and identified occupations that had been completely automated by technology. I found only one: elevator operator. Other jobs were lost and other occupations disappeared because the technology became obsolete or tastes changed, so we no longer have telegraph operators and we no longer have boardinghouse keepers. That's been over a period in which technology has had a tremendous impact on automating tasks and affecting labour and productivity. What it means, basically, is that there's been a lot of fearmongering about AI causing massive unemployment. We've been using AI since the 1980s, and we're not seeing massive unemployment. I don't think we're going to see massive unemployment any time in the next couple of decades, but we are going to see many specific jobs being challenged or disappearing, and new jobs being created. The real challenge of AI for the labour force is not that it will create mass unemployment but that it will require people to change jobs, to acquire new skills, to maybe change locations or to learn new occupations.
These transitions are very costly, can become burdensome and are a major concern. There's a second thing I'll point out, and I don't want to be long here. Another major impact—and this has been true of information technology for the last two decades—is that AI has done a lot to increase the dominance of large firms. We see that large firms are acquiring a larger share of their markets. They're much less likely to be disrupted by innovators in the traditional Schumpeterian fashion, where the start-up comes along with the bright new idea and replaces the incumbent. That's happening less frequently. That's important for a number of reasons, but it also affects the labour force in a couple of ways. One is that large firms tend to pay more, in part because they have advanced technology, and this tends to increase wage inequality. Information technology has been boosting differences in pay, even within the same occupations: the same job description will pay much more at a large firm. The second thing is that, partly because of that, there's a significant talent war, with these new technologies requiring specific skills to work with the technology. I'm talking not just about STEM skills but all sorts of skills of people who have experience adapting their work to the technology. They're in great demand, and large firms have the upper hand in the talent wars: they pay more, so they can recruit more readily. There's nothing wrong with their paying more—we want labour to earn more—but it means that smaller firms, particularly innovative start-ups, have a harder time growing. We see that start-up growth declines in areas where large-firm hiring is predominant. That becomes an indirect concern for labour. I will wrap up with that. Thank you.
  • 11:10:00 a.m.
Thank you, Mr. Bessen. Mr. Lockhart, go ahead for five minutes, please.
  • 11:10:07 a.m.
Thank you, Mr. Chair, for the invitation to address this committee today. My name is Angus Lockhart. I'm a senior policy analyst at the Dais, a policy think tank at Toronto Metropolitan University, where we develop the people and ideas we need to advance an inclusive, innovative economy, education system and democracy for Canada. I feel privileged to be able to contribute to this important conversation today. In addition, I have a brief I co-authored with Viet Vu. I'm submitting it, and it will, hopefully, be available soon.
  • 11:10:39 a.m.
Today I would like to talk about three things—what we know about past waves of automation in Canada, what the Dais has learned from our research into the impact of automation on workers, and how the current wave of automation is different from what we have seen before. First, I want to set some context for my remarks. The concern for workers in an age of automation is not new. In fact, it has been ongoing for more than 200 years, since machines started to enter the economy. What we have seen through many waves of automation, in the end, is not mass unemployment for the most part, but increased prosperity. Our research at the Dais suggests that AI is much like past waves of automation. The risk from AI to those whose jobs are likely to be impacted is smaller than the risks to Canada of not keeping pace with technological change, both on productivity and on remaining internationally competitive. This, however, does not mean that there aren't any bad ways to use this technology, or that adoption won't hurt at least some workers and specific industries. The question has to be how we can support workers and be thoughtful about how we adopt AI, not whether we should move ahead with automation. The good news is that we're still in the early stages of AI adoption in Canadian workplaces. Our recent research shows that just 4% of businesses employing 15% of the Canadian workforce have adopted AI so far. Less than 2% of online job postings this September cited AI skills. Most people are not yet exposed to AI in their workplace. This is likely and hopefully going to change over the next decade, making now the time to act and put in place frameworks that support responsible adoption and workers. In order to do so, we ought to understand how this technology differs from what came before it. Probably the biggest change in the latest wave of large language models is how easy they are to use and how easy it is to judge the quality of their outputs. 
Both the inputs and outputs of tools like ChatGPT are interpretable by workers without specialized technical skills, unlike previous waves of automation, which required technical skills to implement in the workplace and produced outputs that lower-skilled workers often could not interpret. This means that the new wave of AI tools is uniquely positioned to support lower-skilled workers rather than simply automating the tasks they previously did. Evidence from some initial experimental research suggests that in moderately skilled writing tasks, the support of a GPT tool helps bridge the gap in quality between weaker and stronger writers. That said, we also want to acknowledge that previous waves of automation and digitization in Canada have not had fully equitable outcomes. While, in general, increased prosperity has improved quality of life for all Canadians, the benefits have nonetheless been disproportionately concentrated among historically advantaged groups. With AI, we run the risk of this again being the case. It's currently being adopted most quickly by large businesses in Canada, and those tend to be owned by men. However, because we are still in the early stages of AI adoption in Canada, there is time to make sure that is not the case. We can't afford to miss out on the prosperity that AI offers, but we need that prosperity to uplift all Canadians, not just a select few. I want to end by saying there's still a lot of work to do here. At the Dais we're going to continue to research and try to understand how generative AI can be and already is being used in the Canadian workplace and what the impacts are for working Canadians. Our work relies on data collected by Statistics Canada in surveys like the "Survey of Digital Technology and Internet Use". We're glad to see that this committee is taking a serious look at this issue. Continued support for and interest in this kind of research puts Canada in a better position to tackle these challenges.
Thank you again for the opportunity. I will be happy to answer questions when we get there.
  • 11:14:07 a.m.
Thank you, Mr. Lockhart. Now, Mr. Carrière, please go ahead for five minutes.
  • 11:14:23 a.m.
Thank you, Mr. Chair. The fundamental problem with algorithmic management is that we have no information. There's no framework for any of these elements. There seems to be a wish to pass this problem on to unions and employers, but unions can't be the solution for managing artificial intelligence in the workplace when we know that the unionization rate is around 15% in the private sector. This will require a regulatory framework deployed by every level of government. Nothing is known. No doubt the clauses in collective agreements relating to technological change were used to address artificial intelligence issues, and that was a mistake. It was a mistake because, often, the triggers for technological change clauses are related to job losses or potential job losses. Unfortunately, that doesn't address issues related to artificial intelligence, which raises a multitude of issues in situations that don't involve job loss. We hear about artificial intelligence as if it's something positive that will lighten the load on workers. Unfortunately, there's a downside, such as reduced autonomy and increasingly intrusive surveillance. Workers are constantly being monitored, since algorithms need data to do their jobs. We don't know how this data is stored, how it's analyzed or how it's reused. The ability to collect data is not regulated. We therefore need to regulate data and what is done with it, but above all we need to regulate and mandate dialogue between employers and employees to understand the whole issue of explainability and transparency. There isn't any. For years now, we've been using tools that make decisions on behalf of workers, but they haven't been presented as algorithmic management or artificial intelligence tools. They were simply described as new tools. For example, at Bell Canada, there's the Blueprint tool for customer service staff.
When speaking with a customer, workers are required to follow a decision tree that tells them what to do based on the customer's stated problems. The employee's judgment is completely removed from the process. What's more, the employee must enter data into the tool to ensure that the various interpretation scenarios are effective and appropriate for the customer. This is done in various industries, such as transportation, where algorithms make decisions for truckers, whether it's about the best route or the best driving practice to use. This completely eliminates the individual's judgment and ability to drive their vehicle as they see fit. They are required to follow the tool's instructions. These tools must be managed. The Organisation for Economic Co-operation and Development, or OECD, has laid down four principles: artificial intelligence must be oriented toward sustainable development, it must be human-centred, it must be transparent and explainable, and the system must be robust and accountable. At present, we have none of those things, because there's no disclosure obligation. In our view, this is the first step that needs to be taken. It's about knowing the tools, understanding their effects and then implementing solutions so that companies truly benefit from the efficiency or added value of these technological tools. We're in a period marked by a labour shortage. It is simply untrue that we're going to transform a customer service operator into someone who will program or manage algorithmic tools. In any case, in Quebec, there's currently a shortage of 9,000 to 10,000 workers in the IT sector, and our workers can't fill the gap. This is a kind of vicious circle that has to stop, and it has to start with the implementation of mandatory disclosure and mandatory dialogue between employers and their employees. Thank you very much.
  • 11:19:21 a.m.
Thank you, Mr. Carrière.
  • 11:19:34 a.m.
We will begin the first six-minute round of questions with Ms. Gray. Please proceed, Ms. Gray.
  • 11:19:34 a.m.
Thank you, Mr. Chair, and thank you to all the witnesses for being here. My first questions are for Angus Lockhart from Toronto Metropolitan University. You stated, as part of an article, that, “While some medical practices benefit from the inclusion of AI, there are serious privacy risks in feeding private medical data into a computer model that must be addressed.” I just want to confirm that this was something you wrote. Do you believe Canada's privacy laws are adequate to address these privacy issues?
  • 11:20:09 a.m.
Yes, that is something we wrote. I think I co-authored that with Viet Vu as well. I don't know, strictly speaking, whether the medical privacy laws are adequate. I do know that AI is going to require new forms of medical privacy protection. As data gets fed into these large algorithms, there's an opportunity for the algorithms to spew it back out in a way that we don't or can't anticipate. That requires a degree of care that is larger and more significant than with previous tools. We've seen with ChatGPT and tools like it that there's a risk that whatever gets fed into them can come back out. It's very challenging to incorporate systems that will prevent that from happening, or at least to be extremely confident that it won't happen.
  • 11:20:57 a.m.
Thank you. Do you believe there are security and privacy concerns that are currently barriers to the adoption of AI?
  • 11:21:06 a.m.
We did a study on the adoption of AI in Canada. We found that, for the most part, very few businesses actually see security and privacy concerns as a barrier. Something like 3% or less of businesses that have yet to adopt artificial intelligence cite anything like that as a concern. For the most part, people really just don't know what tools are available to their business.
  • 11:21:34 a.m.
Thank you. If government were to amend privacy laws, do you believe that would help remove some of those barriers? Are there concerns that privacy laws in Canada may not be helpful to protect people's privacy?
  • 11:21:53 a.m.
Yes. I think there's room to provide more clarity for businesses on what the privacy concerns are and what they need to be really careful about. To a large degree, a lot of that will probably have to fall on the developers of the actual AI tools rather than the businesses implementing them. In general, I think there is always room to help support that, but it probably wouldn't be a massive driver of increased adoption in Canada, even if it were improved.
  • 11:22:20 a.m.
Thank you. My next questions are for you, Professor Bessen. You contributed to a paper last year that talked about AI start-ups. Do you think AI development poses ethical and data access issues?
  • 11:22:35 a.m.
Yes, definitely. We surveyed AI start-ups about the kinds of ethical issues they were attempting to control, and they saw a very definite need. We were surprised, actually. We thought that ethics would be the last thing on their radar, but in fact the majority were actually implementing measures that had some teeth in them. In some cases, they let people go. There were concerns about bias that might arise in training. So yes, ethics has been important, and I think it's going to become more important as these systems develop and we understand more about what they can do and what their effects will be.
  • 11:23:28 a.m.
Thank you. Do you believe Canada's privacy legislation and protections are sufficient to address concerns with AI development?
  • 11:23:35 a.m.
I'm sorry. I'm not a Canadian, so I'm not that familiar with Canada's privacy laws.
  • 11:23:42 a.m.
Do you believe new AI technologies will create issues for workers with respect to intellectual property and antitrust issues—issues around ownership of data and privacy?