
RESPONSIBLE AI REQUIRES STRONGER PRIVACY PROTECTIONS AND INCLUSIVE DEMOCRATIC GOVERNANCE
Monday, February 19, 2018
By Ariane Quintal, BSc, Matthew Sample, PhD, Eric Racine, PhD
Introduction
In this comment, we raise concerns regarding two of the principles included in the Montreal Declaration for Responsible AI, namely privacy and democracy. Our comments are not comprehensive; rather, they highlight what we take to be the primary issues left unresolved by these two principles.
Privacy
The Declaration names privacy as one of its key tenets and states that people should be able to access their personal information and data used by algorithms. In choosing this formulation, the Declaration invokes a culture of data sharing by default. From a commercial perspective, this is understandable: access to user data enables innovation and improvements in the performance of algorithms, and the centralized treatment of data can be profitable. Some publics, however, are wary of such developments when they fail to integrate user perspectives, and desire privacy protection through social or regulatory means (1). In a recent paper titled "Four ethical priorities for neurotechnologies and AI," the Morningside group strongly defends privacy and articulates it as the need to keep data private by default, particularly when it is of neural origin (2). We believe that similar protection should extend to non-neural and personal data used by algorithms, which can be equally sensitive and identifying. Moreover, there is an alternative to the method and culture of data sharing by default: federated learning, or decentralized machine learning, in which training occurs on the user's device and only the resulting model updates, rather than the data itself, are sent back for aggregation (2).
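To make this alternative concrete, below is a minimal sketch of the federated-learning idea, written in Python with NumPy on simulated linear-regression data. All names and parameters here (make_client_data, local_update, the learning rate, the number of rounds) are illustrative assumptions rather than any particular system's API; the point is only that each simulated device trains locally and shares model weights, never raw data, with the server.

```python
# Minimal federated-averaging sketch (illustrative, not a production API):
# each simulated "device" trains on its own data locally, and only the
# resulting model weights, never the raw data, reach the server.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=100, d=5):
    """Generate private data that never leaves the simulated device."""
    X = rng.normal(size=(n, d))
    true_w = np.arange(1, d + 1, dtype=float)  # ground-truth weights [1..5]
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=5):
    """Run gradient-descent steps for linear regression on local data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w  # only the updated weights are shared

# The server holds the global model; the clients hold the data.
clients = [make_client_data() for _ in range(10)]
global_w = np.zeros(5)

for _ in range(20):  # communication rounds
    # Each client trains locally and reports weights, not data.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server aggregates by simple averaging.
    global_w = np.mean(local_ws, axis=0)

print("Learned weights:", np.round(global_w, 2))  # approaches [1, 2, 3, 4, 5]
```

The server in this sketch never observes X or y, only weight vectors, which is precisely the privacy property at stake. In practice, further safeguards (e.g., secure aggregation or differential privacy) would still be needed, since model updates themselves can leak information about the underlying data.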
Democracy
We support the inclusion of democracy as a core tenet of the Declaration. First, democracy should indeed shape the Declaration itself. To this end, we appreciate that the Declaration committee organizes gatherings in public libraries and CEGEPs to engage diverse publics in thinking about responsible AI development, but we fear that the Declaration, drafted prior to these initiatives, may rigidly frame public discussions. These public engagements risk functioning merely as a way to lend legitimacy to an existing document. Instead, the public should have been meaningfully engaged in deliberating on the contents of the Declaration from the very beginning. Similarly worrying, while the Declaration committee is hosting a co-construction day, access is restricted to invitees, raising concerns of representativeness and inclusion. Taken together, these shortcomings in public engagement may render the Declaration technocratic or politically illegitimate in the eyes of the public and of policymakers.
Second, we believe that the governance of AI should itself be responsible, as implied by the principle of democracy. Here the Declaration positively distinguishes itself from the Morningside group's paper, which, despite many references to the clear necessity of public input, fails to explicitly acknowledge the role of democracy and democratic institutions. Yet, despite the good will of the Declaration committee, industry lacks incentives to exert systematic and sustained self-governance in AI R&D (3,4). The Declaration alone may therefore prove ineffective, which warrants meaningful government involvement and broader public engagement. We acknowledge, however, that these democratic initiatives may be challenging to implement: regulation evolves slowly relative to the rapid pace of innovation, and the public has limited knowledge of how algorithms are designed. Without sufficient regulation, companies could use algorithms (and eventually, AI) to confine debate to stances that they judge acceptable, hindering the free sharing of ideas and the development of critical thinking (5).
Conclusion
To conclude, we believe that the Declaration should include stronger safeguards for the privacy of user data. Regarding democracy, we are concerned that the process used to develop the Declaration may be insufficiently inclusive. Lastly, AI governance should also be democratic, involving not only public input and declarations of values but also effective and responsive government regulation.
Ariane Quintal, BSc
MA Candidate in Bioethics, École de santé publique, Université de Montréal
Graduate researcher, Neuroethics Research Unit, Institut de recherches cliniques de Montréal
Matthew Sample, PhD
Postdoctoral Researcher, Neuroethics Research Unit, Institut de recherches cliniques de Montréal
Department of Neurology and Neurosurgery, McGill University
Eric Racine, PhD
Director, Neuroethics Research Unit
Full Research Professor, IRCM
Department of Medicine and Department of Social and Preventive Medicine, Université de Montréal
Department of Neurology and Neurosurgery, Medicine & Biomedical Ethics Unit, McGill University
References
1. Lehoux P, Miller FA, Grimard D, Gauthier P. Anticipating health innovations in 2030-2040: Where does responsibility lie for the publics? Public Underst Sci. 2017 Aug 1;963662517725715.
2. Yuste R, Goering S, Arcas BAY, Bi G, Carmena JM, Carter A, et al. Four ethical priorities for neurotechnologies and AI. Nature. 2017 Nov 8;551(7679):159–63.
3. Crawford K, Calo R. There is a blind spot in AI research. Nature. 2016 Oct 20;538(7625):311–3.
4. Campolo A, Sanfilippo M, Whittaker M, Crawford K. AI Now 2017 Report [Internet]. AI Now; 2017 [cited 2018 Feb 15]. Available from: https://ainowinstitute.org/AI_Now_2017_Report.pdf
5. Perez S. Twitter adds more anti-abuse measures focused on banning accounts, silencing bullying. TechCrunch [Internet]. 2017 Mar 1 [cited 2018 Feb 5]. Available from: http://social.techcrunch.com/2017/03/01/twitter-adds-more-anti-abuse-measures-focused-on-banning-accounts-silencing-bullying/