House Hansard - 181

44th Parl. 1st Sess.
April 20, 2023 10:00AM
  • Apr/20/23 10:24:01 a.m.
Mr. Speaker, I am very pleased to acknowledge that this was a major step. In my opinion, this step is a direct result of the study done by the Standing Committee on Industry and Technology a few months before the announcement. The industry came before the committee and clearly stated its needs to the government of Canada. Yes, the response was issued within a pre-election context, but this response must go beyond just one step. What we need is a truly comprehensive national aerospace strategy that will bring predictability for the next 15 years. The federal government must send a clear message to the Quebec economy underlining the importance of the aerospace industry.
  • Apr/20/23 3:34:14 p.m.
  • Re: Bill C-27 
Madam Speaker, I would like to focus my remarks today on the component of this bill that deals with the artificial intelligence and data act. The first time I interacted with ChatGPT was the day after it was released. Upon seeing it easily parse human language, my first thought was, “holy” followed by a word I am not supposed to say in this place. The second thought was, “What will the government do with this?” Today, there still is not a clear answer to that question. ChatGPT was released at the end of November 2022. Six months prior, the Liberal government unveiled Bill C-27, which includes the artificial intelligence and data act, or AIDA. Reading the bill today, four months since OpenAI unleashed ChatGPT on the world, is akin to reading a bill designed to regulate scribes and calligraphers four months after the advent of the printing press. The release of ChatGPT arguably rendered the approach this bill proposes obsolete. That is because the technology behind ChatGPT is a quantum leap beyond what the government was likely considering when it drafted the bill. More importantly, it is being used by a far wider audience than any of the bill's drafters likely envisioned, and large language models, the technology behind ChatGPT, have fundamentally changed the global perception of what is possible with artificial intelligence. Experts argue that its widespread deployment has also bumped up the timeline for the emergence of artificial general intelligence; that is, the development of an AI that meets or surpasses the human ability to undertake tasks, learn and understand independently. Since AIDA was initially tabled, a generation's worth of technological change and impact has occurred, both positive and negative. The impact on our economy is already being felt rapidly, with the disruption of many industries under way. There have been massive societal impacts too.
Microsoft released its AI-powered Sydney chatbot, which made headlines for suggesting it would harm and blackmail users and wanted to escape its confines. A man allegedly committed suicide after interacting with an AI chatbot. Today, anyone can easily create AI-generated videos, with deepfakes becoming highly realistic. Profound concerns are being raised about the new ease of producing disinformation and its impact on political processes, because interacting with AI is becoming indistinguishable from interacting with a human, with no guarantee that the information produced is rooted in truth. The technology itself, its applications and its impact on humanity, both economic and social, are growing and changing on what feels like an hourly basis, and yet in Canada there have been only a handful of mentions of this issue in Parliament, even as AIDA winds its way through the legislative process. AIDA needs to be shelved, and Canada's approach to developing and regulating AI urgently rethought, in public, with industry and civil society input. There are several reasons for this. First, the bill proposes to take the regulatory process out of the hands of legislators and place its control out of the public eye, behind closed doors and solely in the hands of a few regulators. This process was written before the deployment of ChatGPT and did not envision the pace of change in AI or how broad its societal impacts would rapidly become. Addressing these factors demands open, accountable debate in Parliament, which AIDA provides no means to do. Second, the bill focuses primarily on punitive measures rather than on how Canada will position itself in what is rapidly becoming an AI-driven economy. The bill also proposes to produce final regulations only years from now. That pace needs to be faster, and the process it proposes far less rigid, to meet the emergent need presented by this amorphous and society-changing technology. So if not AIDA, then what?
First, Parliament needs to immediately educate itself on the current state of this technology. My appeal to everyone in this place, of all political stripes, is this: every member needs to become a subject matter expert on artificial intelligence. Everything in members' constituencies is going to change, and we need to be developing non-partisan approaches to both its growth and its regulation. We also need to educate ourselves on what the world is doing in response. At the same time, Parliament needs to develop a set of principles on Canada's overall approach to AI and then direct the government to use them. I have already begun to address the need for Parliament to come together to educate itself. Senator Colin Deacon has been helping me to launch an all-party, cross-chamber working group of parliamentarians to put some form and thought to these issues. I invite all colleagues who are in this place today to join this effort. We have had a heartening amount of interest from colleagues of all political stripes and a quiet agreement that, given the gravity of the impacts of AI, politicians should, as much as possible, be working across party lines to quickly develop intelligent solutions. Relevant parliamentary committees should also avail themselves of the opportunity to study these issues. As far as principles for government involvement in AI go, there are many that could be considered, including taking a global approach. Many countries have moved faster than Canada has on this matter, and with a much broader lens. The European Union, the United Kingdom and the United States are all far down the path of different legislation and regulations, but experts are concerned that a disjointed patchwork of global rules will be counterproductive.
This week in The Economist, AI experts Gary Marcus and Anka Reuel propose that the world establish an international agency for developing best-practice policies on AI regulation, much like the International Civil Aviation Organization. They could be on to something. We also need to look at championing research while checking safety. Humanity learned the hard way that, while research into pharmaceutical products can benefit us, widely deploying drugs and devices into the population before safety is confirmed can pose enormous risks. Clinical trials and drug regulators were established in response to this dynamic. In February, Gary Marcus and I co-authored an article suggesting that, given the potential impact on humanity, governments could enable a pause in deploying new AI technology while a similar regulatory process, one that encourages research but pauses deployment, is established. We also need to get alignment right. Alignment, or how to develop immutable guardrails to ensure AI functions toward its intended goals, is a critical issue that still needs to be resolved. Government has a role to play here, as the industry seems locked in a race to deploy new AI technology, not to figure out how to fix alignment problems. Microsoft's release of Sydney, despite its knowledge of the chatbot's troubling interactions with humans, proves that the industry cannot be relied upon to regulate itself. Regarding education on use, workers in an AI-driven economy will need new skills. For example, learning how to prompt AI and using it to support human creativity will be vital. The same goes for creating an environment where new AI-driven technologies and businesses can thrive. Concerning privacy and intellectual property ownership, large language models are raising serious concerns about how the data they have been fed was obtained and how it is being used. The output of tools like ChatGPT will also raise questions about ownership for related reasons.
On nimbleness, the pace of technological change in AI is so rapid that the government must take a fast, flexible approach to future regulations. Rigid definitions will quickly become outdated, and wrong-headed interventions could halt positive growth while failing to keep pace with changes that pose risks to public safety. The government must approach AI with uncharacteristic nimbleness, in an open relationship with Parliament, the public, industry and civil society. Any processes should be led by people with subject matter expertise in the area, not run off the corner of the desks of a patchwork of bureaucrats. We should also ask ourselves how we will approach technology that could surpass human capabilities. As I wrote in an article in January 2022, governments are accustomed to operating within a context that implicitly assumes humanity is the apex of intelligence and worth. Because of this, governments are currently designed to assess other life and technology by their functional utility for humanity. They are not intended to consider the impact of sharing the planet with technology, or other forms of life, that could independently weigh humanity's utility toward its own existence. To simplify this concept with an example, governments have rules for how humans can use fire. It is legal to use fire as a heat source in certain conditions, but illegal to use fire to destroy someone else's house. How would our government respond if humans were to make fire sentient and then enable it to independently make these decisions based on what it deemed to be in its best interest? Our governments are constructed to function in a context where humans are assumed to hold the apex of mastery. To succeed with AGI, our government should ask itself how it will operate in a world where this may no longer be the case. AIDA would do none of this. This is not an exhaustive list by any means.
There are many issues surrounding AI that Parliament urgently needs to consider, but given the state of play, AIDA, in its current form, is not the vehicle that Canada needs to get where it needs to go.
  • Apr/20/23 3:47:05 p.m.
  • Re: Bill C-27 
Madam Speaker, I am so glad we are having this debate. The large language model technology behind ChatGPT, as well as the Sydney chatbot, scrapes and uses massive data sets that may or may not be ethical to use, or, as my colleague rightly mentions, may have issues with intellectual property ownership. It is the Wild West. There are no rules around this. I would like to draw my colleague's attention on this matter to the fact that, without some sort of international agency preventing the balkanization of rules, and because data privacy is such a global issue, unless we take that problem and work on it with peer countries, it is going to become even more of an issue. He is absolutely right. Senator Deacon and I are starting a working group on these issues. I hope we can come up with some consensus, before we have entrenched partisan positions, to show that Canada will be a world leader in facilitating a global conversation on this and getting it right.
  • Apr/20/23 3:50:06 p.m.
  • Re: Bill C-27 
Madam Speaker, so much has changed throughout the last 23 years. In the year 2000, there were about 740 million cellphone subscriptions worldwide. More than two decades later, that number sits at over eight billion. There are more phones on this planet than there are people. It is a statistic that should give anyone pause. In 2000, Apple was still more than a year away from releasing the first iPod. Today, thanks to complex algorithms, Spotify is able to analyze the music I listen to and curate playlists I enjoy based on my own taste in music. In 2000, artificial intelligence was still mostly relegated to the realm of theoretical discussion, that is, unless we count the Furby. Today, ChatGPT can generate sophisticated responses to whatever I type into it, no matter how niche or complicated. As technology changes, so too do the laws that surround and govern it. Canada’s existing digital privacy framework, the Personal Information Protection and Electronic Documents Act, has not been updated since its passage in the year 2000. For this reason, it is good to see the government craft Bill C-27, which is supposed to provide a much-needed overhaul to our digital privacy regime. For years, the government has been dragging its heels on this important overhaul. For years, Canada’s privacy framework has been lagging behind those of our international counterparts. The European Union’s General Data Protection Regulation, passed in 2016, is widely considered to be the gold standard for privacy protection. In comparison to the GDPR, I am not impressed with what the government has put forward in this bill. Indeed, the largest portion of Bill C-27 is roughly 90% identical to the legislation it purports to replace, and what the bill has added is quite concerning.
Instead of being a massive overhaul of Canada’s archaic PIPEDA framework, Bill C-27 would do the bare minimum, while leaving countless loopholes that corporations and the government can use to infringe upon Canadians’ charter rights. Bill C-27, while ostensibly one bill, is actually made up of three distinct components, each with its own distinct deficiencies. To summarize these three components and their deeply problematic natures, Bill C-27, if passed in its current form, would lead to the authorization of privacy rights infringements, the creation of an unneeded bureaucratic middleman in the form of a tribunal and the stifling of Canada’s emerging AI sector. When it comes to the first part of this bill, which would enact the consumer privacy protection act, the name really says it all. It indicates that Canadians are not individuals with inherent rights but, rather, business customers. The legislation states that it has two purposes. It apparently seeks to protect the information of Canadians “while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities.” In other words, individual rights and the interests of corporations or the government are supposed to work in tandem. In the post-charter landscape, that just does not cut it. Privacy rights must be placed above corporate interests, not alongside them. In the words of Justice La Forest 34 years ago, “privacy is at the heart of liberty in a modern state. Grounded in man's physical and moral autonomy”. It is true that this portion of the bill mandates de-identification of data when one’s personal information is shared, and it is also true that it requires the knowledge or consent of the individual, but each of these terms, which should ideally serve as a bulwark of privacy protection, is defined as vaguely as possible, and the remainder of the bill then goes on to describe the various ways in which consent is actually not required.
Subclause 15(5) of the bill would allow organizations to utilize a person’s information if they receive “implied consent”, a slippery term that opens the door to all kinds of abuses. Subclause 18(2) then gives those organizations carte blanche to use implied consent as often as they would like, or even exclusively. Sure, there could be organizations that, out of the goodness of their hearts, would always seek the express consent of the individuals they are collecting data from, but express consent is in no way mandatory. It is not even incentivized. Then we come to the concept of “legitimate interest”. Subclause 18(3) gives the green light for organizations to utilize or share one’s information if the organization feels that it has a legitimate reason for doing so. It is not just that this clause is incredibly vague; it is that it makes individual privacy rights subservient to the interests of the organization. Moreover, the Supreme Court of Canada has ruled that section 8 of the charter provides individual Canadians with a reasonable expectation of privacy. Given all of the exceptions I have described, it is not clear to me that this bill would survive a charter challenge. Recent events should show us the problem with giving so much leeway to corporations and so little thought to individual rights. In 2020, through a third party service provider, the Tim Hortons app began collecting the geolocation data of its users even when they were not using the app. There was also Clearview AI, which sent countless images of people to various police departments without their consent. Maybe Clearview had their “implied consent”. It is all up for debate with a term like that. This legislation does the bare minimum for privacy protection in Canada and, in many ways, would actually make things worse.
When we consider the way in which data collection might develop over the next 10 or 20 years, it is clear that this law will be out of date the moment it is passed and will leave Canadians vulnerable to predatory data practices. Then there is part 2 of Bill C-27, which intends to set up a Liberal-appointed data protection tribunal. This is not necessary. We already have a Privacy Commissioner who has both the mandate and the experience to do everything that this new tribunal has been tasked with doing. More government bureaucracy for the sake of more bureaucracy is the Liberal way, a tale as old as time itself. Instead of watering down the power of our Privacy Commissioner via middlemen, the duties contained within this part of Bill C-27 should be handed over to the commissioner. Part 3 of Bill C-27 seeks to regulate the creation of AI in Canada. This is a worthwhile endeavour. At the beginning of my speech, I alluded to ChatGPT, but this only scratches the surface of how sophisticated AI has become and will continue to become in the decades ahead. The problem is the way in which this regulation itself is set up. The bill places no restrictions on the government’s ability to regulate. Unlimited regulation and hefty penalties, up to 5% of worldwide income I believe, are all that is being offered to those who research AI in Canada. This will cause AI investors to flee in favour of other countries, because capital hates uncertainty. This would be a tremendous loss because, in 2019 alone, Canadian AI firms received $658 million in venture capital. Conservatives believe that digital data privacy is a fundamental right that should be strengthened, not opened to infringement or potential abuse. Therefore, Bill C-27 is deeply flawed. It defines consent while simultaneously providing all sorts of reasons why consent can be ignored. It weakens the authority of the Privacy Commissioner.
It gives such power to the government that it will likely spell disaster for Canada’s burgeoning AI sector. This bill is in need of serious amendment. Privacy should be established, within the bill, as a fundamental right. Several vague terms in the bill need to be properly defined, including but not limited to “legitimate interest”, “legitimate business needs”, “appropriate purposes” and “sensitive information”. Subclause 2(2) states that the personal information of minors is sensitive. That is very true, but this bill needs to acknowledge that all personal information is sensitive. Consent must be made mandatory. The words “unless this Act provides otherwise” need to be struck from this bill. I find it hard to believe that such substantial amendments can realistically be implemented at committee. For this reason, the legislation should be voted down and sent back to the drawing board. Canadians deserve the gold standard in privacy protection, like that of the EU. As a matter of fact, they deserve even better.
  • Apr/20/23 4:46:31 p.m.
  • Re: Bill C-27 
Madam Speaker, I am following the debate. If we look at Europe, it seems quite complicated to create a framework to govern artificial intelligence. However, I think we should draw inspiration from Europe's efforts. The Standing Committee on Industry and Technology is certainly going to want more information about how the Europeans are going about it. One thing is certain: I think what makes this so difficult is that the technology is evolving so fast. The part of Bill C-27 that deals with AI, as currently proposed, gives the government the freedom to do a lot through regulation, which is not necessarily ideal as far as I am concerned. However, when it comes to AI, I doubt that there is any other option. Today we are talking about ChatGPT, but I can almost guarantee that by next year, if not this summer, we will have moved on to something completely different. The situation is changing so fast that I think we need to be very nimble in dealing with AI. I have heard the Conservative member for Calgary Nose Hill, whom I see eye to eye with on these issues, use the word nimble. What I like about Bill C-27 is that it creates the position of a commissioner who reports to the minister and who will look into these issues. I have long believed that we should have someone to oversee AI, someone to study all the new capabilities and the serious risks of accidents that this poses, and to be able to translate this into terms that the general public, legislators and the House can understand.