Artificial intelligence in education: ethical and critical issues

By Simon Collin, University of Quebec at Montreal (Canada) and Emmanuelle Marceau, Cégep du Vieux Montreal-CRDP (Canada)

Introduction: potential of artificial intelligence for education

Artificial intelligence (AI) has attracted growing educational and scientific interest over the past thirty years, an interest that has recently accelerated with improvements in the technical performance of AI (Becker, 2018).

In their systematic literature review, Zawacki-Richter et al. (2019) identified four key applications of AI in higher education:

1/ profiling and prediction (e.g., admission to a study programme, school dropout);

2/ intelligent tutoring systems (e.g., educational content, feedback);

3/ measurement and evaluation (e.g., automatic assessment, academic engagement);

4/ adaptive and personalized systems (e.g., recommendation and selection of personalized content).

However, the ethical and critical issues raised by AI remain little studied in higher education (Zawacki-Richter et al., 2019), and in education more broadly (Krutka et al., 2021). To contribute to this emerging reflection, we propose to address some ethical and critical issues of AI, without claiming to be exhaustive, and to outline some actions to better take them into account, from the standpoints of both design and use.

It is important to bear in mind that most of the issues below are not specific to AI, insofar as they also arise with other technologies. In addition, they can be found in other areas of society where AI is used. However, these issues are often exacerbated by current developments in AI and take on a particular form in education, which in our view justifies a reflection focused on AI in education.

Some ethical and critical issues of AI in education

The ethical and critical issues raised by AI in education are many and have different origins. A first type of problem relates to the sheer amount of data AI requires, which can introduce biases and raises the issue of respect for the privacy of students and school staff (Andrejevic et al., 2020; Perrotta et al., 2020). Krutka et al. (2021) take the example of Google's education suite, which collects data without the free and informed consent of students and school staff (in violation of Google's own policies and of provincial and state policies) and exploits it in opaque ways. The data of students and school staff is therefore used without their knowledge, thereby violating their privacy.

Moreover, AI is mainly produced by private companies rather than educational institutions (Williamson et al., 2020; Selwyn et al., 2020), and studied mainly by researchers in computer science or in science, technology, engineering and mathematics rather than by educational researchers (Zawacki-Richter et al., 2019). This situation generates a second type of ethical and critical problem, related to the expertise and educational representations mobilized by design teams.

Outside of education, several studies have already pointed to the lack of diversity within design teams, resulting in representativeness biases ranging from the under-representation of certain social groups to their discrimination, stigmatization or exclusion. For example, in 2015, the Google Photos algorithm tagged a photo of two Black Americans as “gorillas” because it had not been sufficiently trained to recognize dark-skinned faces (Plenke, 2015).

Finally, the increasing automation of AI implies that it is able to take on a growing share of the educational actions that usually fall to students and school staff (Selwyn, 2019). This raises a different kind of ethical and critical question, concerning teachers' autonomy and professional judgment, as well as students' freedom of choice, depending on the division of labor between them and the AI.

An example is the case of behavioral management systems reported by Livingstone and Sefton-Green (2016). Behavioral management systems allow teachers to document problematic student behavior, which is then automatically compiled and reported to the school administration, with corresponding consequences.

Due to a lack of time in the classroom, some teachers document behavior after class, sometimes without informing the students involved, who may no longer remember the incident, which undermines the principles of consistency and fairness in education.

Preventing the ethical and critical problems of AI: from design to use

From these types of ethical and critical issues, some avenues for reflection and action can be outlined. First, it is worth taking these matters into account from the design phase onward, in order to prevent negative consequences during use as much as possible. We can then ask the following questions: to what extent do design teams integrate educational expertise and representations when developing AI-based technologies? And to what extent are this expertise and these representations representative of the diversity and uniqueness of Quebec's school environments?

A first step to ensure this is for design teams to adopt “user-centric” models (e.g., Labarthe, 2010) to maximize the consideration given to educational expertise and representations, and to keep the educational purpose from being subordinated to economic and technical ones. An additional step consists of adopting and respecting ethical design principles, such as systematically and explicitly informing users when they are interacting with an artificial intelligence system.

In terms of use, raising awareness among students and school staff of the challenges of AI in education means integrating an explicit ethical and critical dimension into technology training. To be complete, this dimension should not be limited to the “good uses” of AI, but should be articulated around understanding the interactions between the design and use of AI on the one hand, and between those uses and their educational and social implications on the other.

The techno-ethical model of Krutka et al. (2019) opens an interesting path for initial and continuing teacher education: to determine whether a given technology is ethical, it proposes an analysis of its ethical, legal, democratic, economic, technological and pedagogical dimensions, guided by questions, along with practical considerations and applications to include in teacher education.

In lieu of a conclusion

The integration of AI into education is relatively recent, and the operationalization of its potential largely remains to come. To give it direction, we believe it necessary to guide it with a proactive consideration of the ethical and critical issues raised by AI, anchored in a reflection on justice. In this regard, ethics training for school personnel deserves to be prioritized, to equip them as well as possible to intervene and interact in a rapidly changing world.

Bibliography

  • Andrejevic, M., and Selwyn, N. (2020). Facial recognition technology in schools: critical questions and concerns. Learning, Media and Technology, 45(2), 115-128. https://doi.org/10.1080/17439884.2020.1686014
  • Becker, B. (2018). Artificial intelligence in education: what is it, where is it now, where is it going? In B. Mooney (Ed.), Ireland's Yearbook of Education (pp. 42-46). Dublin: Education Matters.
  • Krutka, D. G., Heath, M. K., and Staudt Willet, K. B. (2019). Foregrounding technoethics: toward critical perspectives in technology and teacher education. Journal of Technology and Teacher Education, 27(4), 555-574.
  • Krutka, D. G., Smits, R. M., and Willhelm, T. A. (2021). Don't Be Evil: Should We Use Google in Schools? TechTrends. https://doi.org/10.1007/s11528-021-00599-4
  • Labarthe, F. (2010). Design and SHS in the process of user-oriented innovation: what mutual contributions? Breakaways, 2, 14-25.
  • Perrotta, C., and Selwyn, N. (2020). Deep learning goes to school: towards a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251-269. https://doi.org/10.1080/17439884.2020.1686017
  • Plenke, M. (2015). Google just misidentified 2 African Americans in the most racist way. Mic. Retrieved April 8, 2021, from: https://www.mic.com/articles/121555/google-photos-misidentified-african-americans-as-gorillas
  • Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Cambridge: Polity Press.
  • Selwyn, N., and Gašević, D. (2020). The datafication of higher education: discussing its promises and problems. Teaching in Higher Education, 25(4), 527-540. https://doi.org/10.1080/13562517.2019.1689388
  • Williamson, B., and Eynon, R. (2020). Historical threads, missing links and future directions in AI in education. Learning, Media and Technology, 45(3), 223-235. https://doi.org/10.1080/17439884.2020.1798995
  • Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on applications of artificial intelligence in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0

To cite this article

© Authors. This work, available at http://dx.doi.org/10.18162/fp.2021.a230, is distributed under a Creative Commons Attribution 4.0 International license: http://creativecommons.org/licences/by/4.0/deed.fr
