Contributor: Tom Morgan

Considerations for the responsible use of AI tools in teaching

As educators embrace AI tools to enhance teaching and learning experiences, it's paramount to consider the responsible use of such technology. This section introduces key considerations to ensure ethical, equitable, and effective integration of AI tools in educational settings.

From data privacy and security to transparency in algorithmic decision-making, understanding these principles can help foster an environment conducive to positive student outcomes.

The sections below explore key issues to help you think through which tools to select and use, and to uphold integrity, inclusivity, and student welfare while harnessing the potential of AI tools in education.

Transparency

  • AI complicates and challenges existing practices of gathering, using, and deploying personal data. An educator may want to consider what data is being collected by the platform or tool, and how this data may be transformed or redeployed.

    How do the kind of data collected and its use affect your cost-benefit analysis of engaging with the tool or platform?

    Can foregrounding the selection and use of ‘responsible and transparent’ platforms and tools help model best practice, and help students understand appropriate approaches to apply in other contexts?

  • AI tools and platforms interact in new ways with existing understandings of copyright, authorship, and moral rights. Moreover, much of their training data exists in a legal grey area, so selecting tools with clear frameworks around authorship and the use of copyrighted material will model and direct student behaviour and expectations beyond the assessment and the unit.

  • Openness here does not refer to the visibility of the underlying AI model guiding the outputs but to the openness of the organisation/creators and their commitment to transparency in their data sources, training approaches and other structural decisions.

    Does the tool operate within an open source framework or a closed commercial framework?

    Having a clearer picture of the orientation of the provider will help you evaluate other components of the tool, and ideally model practices for students that are closer to academic models of knowledge production than to the closed shop of commercial innovation.

Fairness

  • Following the established logic of app pricing, AI tools are typically offered in combinations of free, freemium, and premium options, each with different levels of access and different cost structures.

    Paid access may grant faster outputs, more powerful models, or use of the service without rate limits.

    Consider which level of access may be required to use the tool effectively for a given educational application, and what strategies may be used to ensure equitable access across a diverse cohort of students.

    For example, can you require everyone to use only the free version and its functionalities? Should everyone purchase a particular level of access that is clearly identified as a manageable cost associated with undertaking the unit? Can the use case be structured so that receiving a faster output via a paid version does not necessarily disadvantage someone using a free version to accomplish the same task?

  • What levels of technical and content knowledge must all students possess in order to use a given tool or platform equitably?

    What support needs to be provided to bridge knowledge gaps across diverse student cohorts?

    Some students may already be literate in the forms, approaches, and understandings of certain tools or models; consider how that may advantage or disadvantage them.

  • There are technical and physical barriers to entry when incorporating some technologies into educational delivery, including specialist hardware and familiarity with software environments.

    Some specific models can be run locally, either on GPUs or on Apple Silicon. This can be a way of bypassing paid services or rate limits, but it does come with technical and hardware barriers. In many ways, engaging with AI models on a local machine or server is preferable, as this controlled environment mitigates some concerns regarding data privacy, training, and so on, but it opens up considerations about technical approach and the resources required to run these models (a minimal sketch appears at the end of this section).

  • Engagement with tools is not without effort: time is required to understand each tool's approach, interface, and conventions. This cognitive load can affect learning experiences and student wellbeing. If learning is structured around multiple platforms and tools across a unit, students will need to retain working knowledge of all of them. That is, the diversity, range, and types of tools employed in a unit should be carefully considered from the perspective of variations in their interfaces, approaches, systems, and conventions.

  • Having engaged with a tool and developed a working knowledge of its processes, a student should reasonably expect to be able to redeploy this knowledge at a later date; that is, coordinating and planning for consistency of tools across units, year levels, courses, and faculties has benefits. Tool selection should consider student experiences in other units, and there may need to be consultation around a set pathway through, and appropriate scaffolding of, specific AI tools across areas of study within courses.

  • AI tools and their underlying models have embedded biases, both visible and hidden. Commercial services may have worked to add guardrails, but biases will still be present, arising from the training data, the training method, the biases of human trainers or the user group, and the intent or purpose of the tool. This means that approaches to cultural safety are paramount, and tool selection should consider a tool's predisposition to generate biased, inflammatory, or discriminatory content.

    Open discussions about bias with students can foster critical engagement with information and AI outputs.
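
To make the local option above concrete, here is a minimal sketch of running a small open model on a local machine. It assumes the Hugging Face transformers library (with accelerate installed for automatic device placement); the model named is illustrative rather than a recommendation, and the prompt is a placeholder to adapt.

    # A minimal sketch: running a small open model locally with the
    # Hugging Face `transformers` library (plus `accelerate` for
    # automatic device placement). The model choice is illustrative only.
    from transformers import pipeline

    # device_map="auto" places the model on an available GPU or Apple
    # Silicon (MPS) backend, falling back to CPU if neither is present.
    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-0.5B-Instruct",  # small open model; swap as needed
        device_map="auto",
    )

    # Prompts never leave the machine, which addresses some of the data
    # privacy and retention concerns raised above.
    prompt = "List three questions students should ask about an AI tool's data practices."
    output = generator(prompt, max_new_tokens=150, do_sample=False)
    print(output[0]["generated_text"])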

Ethical Considerations

  • Engaging with these tools raises issues around both appropriate authorship and the ownership of training material. Tools also prompt questions about how submitted material may be used to further train or refine models, and how platforms plan to retain data for these purposes.

    On a personal level, you may consider how and why you are directing students to engage with tools that have been assembled from stolen, scraped, and copyright-infringing datasets. On a practical level, you may want to consider the durable contributions your students may be making to specific platforms and tools.

  • Above and beyond educational application and value, you may want to consider a broader set of student experiences, interrogating what is embedded or embodied in interacting with the tool. For example, what kinds of experiences are being constructed for the student or user, and how do these contribute to their overall experience? That is, does engagement with the tool replace authentic learning events, dialogues, and interactions?

    How might you position the use of particular tools within the educational journey a student takes across a unit or course?

  • The training and use of AI tools, especially large language models, are incredibly energy intensive and have obvious and increasing impacts on carbon footprints. Engagement with tools should be undertaken with an awareness of these costs, recognising that behaviours and expectations set now will shape broader approaches to the tool and expectations around its practical use and availability.

    Understanding the environmental impacts of these tools may influence our decision-making processes.

Educational Response

  • AI tools are often presented as black boxes: systems that defy clear explanation of their processes yet retain predictive powers and abilities (information in, information out, but no clear understanding of what occurs in between). Efforts to achieve explainable AI are a counterbalance to this black-box nature, though there are questions about a possible false dichotomy between ‘explainability and accuracy.’ Are the AI tools that you choose to use explainable?

    Even if the internal workings of the tool or model are not visible, we should still attend to the other relevant variables shaping outputs, including the training data and the general principles informing the training and deployment of the tool.

  • Mirroring this concern with the internal workings of the tool: what are your own workings and motivations for integrating it? How deliberate and explicit can you be about why you have selected and deployed this tool in the teaching environment? The onus is on the educator to provide context and information around the use and implementation of the particular tool(s).

    In other words, carefully explain why.

  • The current landscape is characterised by startups and emerging tools with rapidly changing terms of use and product scope. With this in mind, how long might you expect a given tool to be available: a semester, a year, or the duration of your students’ course?

    On a practical level, how translatable are the steps, processes, and principles embedded in the use of the tool, and what will happen if it disappears, is discontinued, or changes dramatically in scope, purpose, or function?

  • Given that we often engage with ecosystems of software, does this tool work with a durable set of associated tools that the student could reasonably expect to reuse, such as the Google, Microsoft, or Adobe product families? In selecting a tool, are you building skills that will scaffold and support the student across other semesters and contexts, or are you locking the student into a particular way of working, captive to a particular platform that may or may not be the current industry standard?

  • Universities maintain enterprise agreements and technical support for a select range of software, platforms and hardware. Yet AI tools are emerging at a rapid rate and at various scales.

    In selecting an AI tool or platform, what kinds of technical support (instructional material, problem solving, communities of practice, etc.) exist?

    What kind of additional support will students and staff need to use the tool/platform and how will it be provided?