House Hansard - 214

44th Parl. 1st Sess.
June 15, 2023 10:00AM
  • Jun/15/23 1:14:58 p.m.
  • Re: Bill C-27 
moved: That it be an instruction to the Standing Committee on Industry and Technology that, during its consideration of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, the committee be granted the power to divide the bill into three pieces of legislation: (a) Bill C-27A, An Act to enact the Consumer Privacy Protection Act, containing Part 1 and the schedule to section 2; (b) Bill C-27B, An Act to enact the Personal Information and Data Protection Tribunal Act, containing Part 2; and (c) Bill C-27C, An Act to enact the Artificial Intelligence and Data Act, containing Part 3.

He said: Mr. Speaker, I am happy to be here today to speak to this motion. I will be splitting my time today with the member for South Shore—St. Margarets.

Bill C-27 is a very important bill. We have been talking about privacy legislation now for about eight or nine months. Our whole premise has been that privacy should always be a fundamental right of Canadians. We talked about the limitations of this bill when the government announced it; recognition of privacy as a fundamental right was missing from the bill. The bill has three parts: the first would replace the “PIP” in “PIPEDA”; the second would establish a tribunal; and the third is about AI. This motion asks that the bill be split into three parts so the committee can examine and vote on each part individually.

Why is that needed at this point? It is very simple: the third part, on AI, is the most flawed. Having debated the bill in its entirety, we certainly hope to see it go to the industry committee. The government delayed sending it there, but I am hoping it will be in committee in the early fall, and we want to debate, for the most part, the AI section.
I stand today to shed light on a topic that has captured the imagination of many and yet poses significant risk to our society: the dangers of artificial intelligence, or AI. While AI has the potential to revolutionize our world, we must also be aware of the dangers it presents and take proactive steps to mitigate them.

For decades, AI and the imaginary and real threats it brings have been a subject of fascination in popular culture. I remember, as a child, watching a movie called WarGames. A teenager wanted to change his grades, so he hacked into a computer to try to do that, and the computer offered to play a game of nuclear annihilation. It ended up that the U.S.S.R., through this computer, appeared to be about to attack the U.S. NORAD thought it was happening and was ready to strike back, and somehow the computer could not figure out what was right or wrong. The only way the student was able to stop it was to have the computer play a game of tic-tac-toe that it found it could never win. At the end, after playing the nuclear game it could never win, the computer said it would rather play a nice game of chess, because that is easier: someone wins, someone loses and it is safe. This was AI in 1984.

My favourite movie with AI was The Matrix. In The Matrix, humans were batteries in a world taken over and owned by machines, until Neo saved them and gave them freedom. Another movie I remember as a kid was Terminator 2, and we know how that one ended. It was pretty good. We are not sure it has even ended yet; I think there is another one coming, and Arnold Schwarzenegger is still alive.

We find ourselves in a season of alarmism over artificial intelligence, with warnings from experts of the need to prioritize the mitigation of AI risks. One of the greatest concerns around AI is the potential loss of jobs as automation and intelligent machines rise. Has anyone ever heard of the Texas McDonald's that is run entirely without people? It is coming.
They have figured out how to use robots and machines to eliminate staff positions. Even though it is not AI, all of us now go to the grocery store and check out on our own. When we shop, we see many different ways, whether at Amazon or elsewhere, that companies are using AI for robotics. We have heard of dark industrial warehouses where robots operate in the dark, moving products from entrance to exit, and people are not needed. It is a big problem for job losses.

Another major risk of AI lies in the erosion of privacy and personal data security. As AI becomes more integrated into our lives, it gathers vast amounts of data about individuals, which can be used to manipulate behaviour, target individuals and our children with personalized advertisements, and infringe upon our civil liberties. The first part of Bill C-27 relates to the third part, but they are not the same. We must establish strong regulations and ethical guidelines to protect our privacy rights and prevent the misuse of personal data. Transparency and accountability should be at the forefront of AI development, ensuring that individuals have control over their own information.

Moreover, the rapid advancement of AI brings with it the potential for unintended consequences. AI systems, while designed to learn and improve, can also develop biases. We saw at the ethics committee, when experts came to speak on facial recognition technology, that, alarmingly, Black females were misidentified by computers 34% of the time. It was called “digital racism”. White males were misidentified only 1% of the time. Again, this is technology that we have allowed, in some instances, to be used by the RCMP and by the forces. All the experts asked for a moratorium on that technology, much as we are seeing with AI, because without proper oversight and diverse representation in the development of AI algorithms, we risk entrenching societal biases within these systems.
It is imperative that we prioritize diversity and inclusion in AI development to ensure fairness and to avoid exacerbating existing inequalities.

The security implications of AI cannot be overlooked either. As AI becomes more sophisticated, it could be weaponized or manipulated by malicious actors. Cyber-attacks exploiting AI vulnerabilities could lead to significant disruptions in critical systems, such as health care, transportation and defence. They say the greatest risk of war right now comes not from sticks and stones but from computers and joysticks, and that AI could infiltrate our systems. One thing I was reading about the other week is the risk of a solar storm that could knock out all our technology; AI and cyber-attacks could do the same. Can members imagine what our world would be like if we did not have the Internet for a day, a week or a month? We certainly saw that with the Rogers outage last summer, but imagine if it had been malicious in intent.

Last, we must address the ethical dilemmas posed by AI. As AI systems become more autonomous, they raise complex questions about accountability and decision-making. We have heard about Tesla automobiles that have gone off course, with the computer making the life-or-death decision about where the car is going. The other day I heard a report about autonomous vehicles in L.A., run by Tesla or operating as taxis, that fire trucks and ambulances could not get past, because the vehicles were programmed to stop and put their four-way lights on. Due to those AI decisions, the fire trucks could not get by; firefighters had to smash the windshields to get the vehicles out of the way, and they lost precious minutes getting to the scene of a fire.

While AI holds immense potential to improve our lives, we must remain vigilant to the dangers it presents. We cannot afford to turn a blind eye to the risks of job displacement, privacy breaches, bias, security threats or ethical concerns.
It is our responsibility to shape the future of AI in a way that benefits all of humanity while mitigating its potential harms. We need to work together to foster a world where AI is harnessed for the greater good, ensuring that progress is made with compassion, fairness and responsible stewardship.

Let us shift for a moment to the positive aspects of AI, because AI for good does exist. AI is working right now in health care diagnostics: algorithms are being developed to analyze medical images, such as X-rays and MRIs, to assist doctors in diagnosing diseases like cancer, enabling earlier detection and improved treatment outcomes. We have disease prevention and prediction: AI models can analyze large datasets of patient information and genetic data to identify patterns and predict the likelihood of individuals developing certain diseases. There is environmental conservation: AI-powered systems are being used to monitor and analyze environmental data. I have heard of farmers who are using computer systems to monitor the nitrogen in their soil, so they can gauge how much water and fertilizer they need to apply, which is helping our environment. There is disaster response and management: AI is used to analyze social media posts and other data sources during natural disasters to provide real-time information, identify critical needs, and coordinate rescue and relief efforts. In education and personalized learning, AI is changing the way people learn right now.

The greatest thing we have is ChatGPT, and ChatGPT has revolutionized research. Of course, we are looking at the possibility of jobs being lost. It has even helped me with my speech today. A lot of great things are happening, and in the bill we certainly are going to be looking at how we change and monitor that. The bill should be split into three sections.
We need to make sure we treat privacy as a fundamental human right in Bill C-27 as number one; the tribunal is number two; and AI is number three. We need to hear from as many witnesses as possible to make sure we get it right, and we need to work with our G7 partners so that we all look at AI, its benefits and its shortcomings for society in Canada, now and in the future.
  • Jun/15/23 1:26:45 p.m.
Mr. Speaker, we do talk about child care. This bill actually looks at protecting the privacy of our children. It is disappointing to hear the suggestion that we are interested in one but not the other; we are interested in all of this for our children, and in privacy specifically, because children who are using tablets and cellphones are having their data scraped from the Internet and sold to companies. Sometimes their location is shared, and it puts them in harm's way. This legislation looks at that. What the government has not done is recognize that privacy is a fundamental human right. The Conservatives have recognized that this is the case for this bill and certainly for our children. This bill is as important as anything else for our children and their futures, and we are certainly going to focus on that.
  • Jun/15/23 1:28:24 p.m.
Mr. Speaker, everyone knows ChatGPT. The member mentioned at one point that it helped write a question. It is phenomenal how quick it is and how it helps with research and advancement. I even had it help with my speech. However, there are certainly a lot of risks, and the legislation falls short in several areas. Number one, it leaves the details of AI governance to future regulations, which the government has not yet examined or studied. It focuses on addressing individual harms while excluding collective harms, such as threats to democracy and the environment and the reinforcement of existing inequalities. Additionally, AIDA primarily applies to the private sector, leaving out high-impact government applications of AI. In short, the bill is really narrowly focused. When we bring it to committee, we are going to bring in a large number of witnesses to get great testimony. Certainly, when a Conservative government gets in power, we are going to table great legislation that not only maximizes AI for good but also protects Canadians from harms.
  • Jun/15/23 1:30:13 p.m.
Mr. Speaker, data is valuable. Right now we live in both a tangible, old-style economy and an intangible economy. That means data and intellectual property are very valuable to corporations. They are valuable to advertisers. I dare say they are valuable to the government, which, of course, holds swaths of information. When we think about all the data out there, it is in every movement we make. Every time someone makes a sound, Siri asks, “What was that?” We see it every time we use our Apple watches. Our Apple watches even track our temperatures and, in the U.S., track women's menstrual cycles. It is very concerning. That data is worth something to everyone. It is a balance. We should look, first of all, at protecting individuals from harms, making sure we have a fundamental human right to privacy protection. We should also recognize that some companies need data for good, as I mentioned earlier regarding health research and development. We want to balance that. Data is valuable. Let us make sure we do it right and do it together.