SoVote

Decentralized Democracy

Angus Lockhart

44th Parl. 1st Sess.
November 20, 2023
  • 11:10:07 a.m.
Thank you, Mr. Chair, for the invitation to address this committee today. My name is Angus Lockhart. I'm a senior policy analyst at the Dais, a policy think tank at Toronto Metropolitan University, where we develop the people and ideas we need to advance an inclusive, innovative economy, education system and democracy for Canada. I feel privileged to be able to contribute to this important conversation today. In addition, I have a brief I co-authored with Viet Vu. I'm submitting it, and it will, hopefully, be available soon.
  • 11:10:39 a.m.
Today I would like to talk about three things—what we know about past waves of automation in Canada, what the Dais has learned from our research into the impact of automation on workers, and how the current wave of automation is different from what we have seen before.

First, I want to set some context for my remarks. The concern for workers in an age of automation is not new. In fact, it has been ongoing for more than 200 years, since machines started to enter the economy. What we have seen through many waves of automation, in the end, is not mass unemployment for the most part, but increased prosperity. Our research at the Dais suggests that AI is much like past waves of automation. The risk from AI to those whose jobs are likely to be impacted is smaller than the risks to Canada of not keeping pace with technological change, both on productivity and on remaining internationally competitive. This, however, does not mean that there aren't any bad ways to use this technology, or that adoption won't hurt at least some workers and specific industries. The question has to be how we can support workers and be thoughtful about how we adopt AI, not whether we should move ahead with automation.

The good news is that we're still in the early stages of AI adoption in Canadian workplaces. Our recent research shows that just 4% of businesses, employing 15% of the Canadian workforce, have adopted AI so far. Less than 2% of online job postings this September cited AI skills. Most people are not yet exposed to AI in their workplace. This is likely and hopefully going to change over the next decade, making now the time to act and put in place frameworks that support responsible adoption and workers. In order to do so, we ought to understand how this technology differs from what came before it. Probably the biggest change in the latest wave of large language models is how easy they are to use and how easy it is to judge the quality of their outputs.
Both the inputs and outputs of tools like ChatGPT are interpretable by workers without specialized technical skills, in contrast to previous waves of automation, which required technical skills to implement in the workplace and produced outputs that were often not interpretable by lower-skilled workers. This means that the new wave of AI tools is uniquely positioned to support lower-skilled workers rather than automating entire tasks that they previously did. Evidence from some initial experimental research suggests that in moderately skilful writing tasks, the support of a GPT tool helps bridge the gap in quality between weaker and stronger writers.

That said, we also want to acknowledge that previous waves of automation and digitization in Canada have not had fully equitable outcomes. While, in general, increased prosperity has improved quality of life for all Canadians, the benefits have nonetheless been disproportionately concentrated among historically advantaged groups. With AI we run the risk of this again being the case. It's currently being adopted most quickly by large businesses in Canada, and those tend to be owned by men. However, because we are still in the early stages of AI adoption in Canada, there is time to make sure that's not the case. We can't afford to miss out on the prosperity that AI offers, but we need that prosperity to uplift all Canadians and not just a select few.

I want to end by saying there's still a lot of work to do here. At the Dais we're going to continue to research and try to understand how generative AI can be and already is used in the Canadian workplace and what the impacts for working Canadians are. Our work relies on data collected by Statistics Canada in surveys like the “Survey of Digital Technology and Internet Use”. We're glad to see that this committee is taking a serious look at this issue. Continued support for and interest in this kind of research puts Canada in a better position to tackle these challenges.
Thank you again for the opportunity. I will be happy to answer questions when we get there.
  • 11:20:09 a.m.
Yes, that is something we wrote. I think I co-authored that with Viet Vu as well. I don't know, strictly speaking, if the medical laws are accurate. I do know that AI is going to require new forms of medical privacy. As data gets fed into these large algorithms, there's an opportunity for the algorithms to spew that back out in a way that we don't or can't anticipate. It requires a degree of care that is larger and more significant than previous tools. We've seen with ChatGPT and tools like it that there's a risk that whatever gets fed into them can come back out. It's very challenging to incorporate systems that will prevent that from happening, or at least to make sure you're extremely confident that it won't happen.
  • 11:21:06 a.m.
We did a study on the adoption of AI in Canada. We found that, for the most part, very few businesses actually see security and privacy concerns as a barrier. Something like 3% or less of businesses that have yet to adopt artificial intelligence cite anything like that as a concern. For the most part, people really just don't know what tools are available to their business.
  • 11:21:53 a.m.
Yes. I think there's room to provide more clarity for businesses on what the privacy concerns are and what they need to be really careful about. To a large degree, a lot of that will probably have to fall on the developers of the actual AI tools rather than on the businesses implementing them. In general, I think there is always room to help support that, but it probably wouldn't be a massive driver of increased adoption in Canada, even if it were improved.
  • 11:30:37 a.m.
That makes total sense. We saw that just 2% of all job postings ask for any kind of AI skills. You're exactly right in saying those AI skills are traditional tech-based skills—things that require advanced training to use. There is a generation of new, generative tools that take natural language inputs and don't require the same technical skills to use. That said, there is still a whole range of technologies that require those digital and technical skills to use. The new technologies aren't necessarily replacing them. They're more additive. They're operating in new areas in which the old technologies didn't help. There is still going to be increased demand and need for AI skills, broadly. The same workers who don't have AI skills but are being asked for them will be able to adopt the new tools, even if they can't necessarily use any of the older, existing tools.
  • 11:39:43 a.m.
I think that's probably a very challenging question to answer in a short time. What we certainly view as part of a responsible framework is making sure that when artificial intelligence is implemented, it's not being done in a way that's explicitly harmful to the workers who are using it. There are always risks of increased workplace surveillance and facial recognition being used in the workplace, and we definitely want to avoid any kind of negative impacts from that. Beyond that, there's a huge risk that businesses will be able to implement AI and reduce labour, and that the increased productivity and benefits from that could be concentrated among just the ownership of the business. That runs the risk, obviously, of increasing wealth inequality in Canada. At the Dais we strongly believe that prosperity and GDP growth are beneficial for Canadians, but only when they are distributed among all groups. I don't think I have an answer for how to make sure the benefits that come from increased worker productivity are distributed among all of the workers and the people in the firm, but I do know that's going to be an important part of keeping up with AI adoption.
  • 11:41:07 a.m.
I think that is definitely a path that needs to be investigated. I think that when you do that, you need to make sure all groups are represented. Obviously, you need to make sure industry's represented. Having unions there is important. I think the trickiest part is making sure you have non-unionized workers represented there in some capacity, because a large portion of Canada's workforce is not unionized. If those voices aren't present at the table, then you really run the risk of a two-tiered system of unionized versus non-unionized workers.
  • 11:48:44 a.m.
I would say two things. The first is that when we switch from talking about AI use in private workplaces to AI use by government, that raises a lot of different questions and a lot of different issues. In the private sector, a lot of the time we get to focus on just productivity, but in the public sector there's a lot more to consider than productivity. You can't just talk about making the process faster, because I think there's an important equity concern here, even when it comes to housing applications. Handing over to an AI tool any kind of judgment on that makes for a real challenge. The second is more on the topic of using AI to cut down on regulations. I think you're going to really run into a challenge there, because there are real social considerations, as opposed to just productivity or efficiency considerations, that go into that kind of regulation system. It seems to me that it's probably better left to humans and human decision-making for now.
  • 12:02:09 p.m.
AI has the potential both to promote equity and to harm it. If we look specifically at persons with disabilities, there are examples in which AI has been used to improve the capacity of people with disabilities to operate in a workplace. There is a café that recently opened in Tokyo that uses robots to help increase the motor function of people with disabilities in order to help them fully operate within that workplace. At the same time, if you don't take an equity lens when you're implementing artificial intelligence, those marginalized groups—people with disabilities and other groups like them—are going to be the first people harmed by the introduction of AI in the workplace. You have to start from a place of asking how AI can help uplift and increase the participation of everyone, and use that as your framework, instead of starting with, “We have AI. What can we get rid of with it?”