#AI; #CanadaImmigration; #RefugeeSystem; #HumanRights; #Toronto; #IHRP; #CitizenLab
Toronto, Sept 27 (Canadian-Media): The growing replacement of human decision-makers in Canada’s immigration and refugee system with algorithms and artificial intelligence risks violating fundamental human rights, says a report released Wednesday by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy.
AI in Canada's Immigration System / Courtesy of the University of Toronto
Since at least 2014, these automated techniques have been used to evaluate immigrant and visitor applications, such as Express Entry applications for permanent residence, and the government has recently announced plans to expand the use of these technologies to a variety of applications and immigration decisions in the coming years.
Exploring new technologies and innovations, while useful for reducing delays in the immigration system, should not come at the expense of proper oversight mechanisms, accountability, and transparency.
Adopting these technologies irresponsibly, or relying on discriminatory and stereotypical markers such as appearance, religion, or travel patterns, can result in severe rights violations, including discrimination and threats to life and liberty, both nationally and internationally.
Canada should adopt a new framework of accountability, the report continues, with the same safeguards and review processes that exist for the frailties of human decision-making, in order to better understand the current and prospective impacts of automated decision-system technologies on human rights and the public interest.
(Reporting by Asha Bajaj)