Principles of Critical AI Literacy

As we continue to navigate the evolving landscape of AI, the writing program recommends that faculty:

Develop clear and transparent policies about AI use in writing classes.

Writ 1150 instructors have the academic freedom to set their own syllabus policies clarifying whether, how, and when students may use AI to assist with writing assignments. Our only programmatic expectation is that instructors include clear AI policies on their syllabi and take time in class to discuss those policies with students. Alternatively, some instructors may choose to collaboratively design an AI policy with their students. We recommend that faculty design policies that emphasize the importance of transparent disclosure of AI use, require the attribution of words or images generated by AI, clarify the important role of human thinking in all writing, and aim to cultivate a climate of trust and respectful dialogue among students and teachers.

Emphasize that AI tools can offer only limited assistance with (not replacement for) human writing and thinking. We remain committed to human writing as a powerful mode of inquiry, learning, and communication that cannot and should not be replaced by machines. At the same time, we recognize that some students and professionals are turning to AI tools to assist them with writing and research tasks. Accordingly, we see value in engaging students in discussing and reflectively using AI tools in critical, rhetorical, and ethical ways to assist with generating ideas, conducting research, revising, and/or editing. Importantly, we believe that any use of AI should be accompanied by substantial human writing and/or conversation – with an emphasis on reflecting critically on how human prompting, rhetorical choice-making, and ethical judgments are fundamental whenever we choose to use AI to assist with writing and research tasks. 

Teach students to critically evaluate the affordances and limitations of various AI tools for writing and research. We are living in a moment in which many AI tools are being developed and refined, and they all have unique limitations and affordances. Instead of simply defaulting to engaging with ChatGPT, we recommend critically discussing and potentially trying out a range of tools. For example, we have found that the AI tools custom-built for scholarly research (such as Elicit) are much better at locating and accurately summarizing peer-reviewed articles than ChatGPT, which tends to hallucinate citations and information. In addition to considering options for using AI tools for assistance, we also think it is important to discuss with students when and why we might choose to refuse to use particular AI tools entirely.

Apply a social justice lens to our AI-related instruction – including considering when and how we might choose to reduce or refuse the use of AI tools that cause harm.

We are concerned about how many current AI large language models are designed in ways that algorithmically reinforce racist, sexist, heteronormative, and ableist biases, rely on unjust labor models, and contribute to catastrophic climate change. We recommend that instructors include readings and discussions on syllabi that educate students about the many pressing ethical questions raised by various AI tools. While some of us are developing ways to combine technological critique with limited reflective use of AI, we also fully support instructors’ academic freedom to refuse the use of any (or all) AI tools that do not align with their values. We would note that refusing to use (or to allow students to use) a tool should not be equated with refusing to discuss the tool with students – any AI refusal will be most pedagogically meaningful if it is explained and used as a starting point for dialogue.

Ensure students have agency in deciding whether or not to input their words into an AI large language model. While current AI tools vary in how they do (or do not) collect user data, we are concerned that many AI tools train their models based on user data in ways that are not readily transparent. We recommend that instructors discuss the privacy policies and settings of any AI tools we demonstrate in class. We also believe students should retain control over when or if their words are input into an AI tool; as such, writing instructors should refrain from directly inputting student writing into AI tools (unless soliciting opt-in volunteers for a class demonstration). Moreover, while some of us integrate reflective, critical AI activities into our classes, we believe that instructors should provide alternative forms of engagement for any students who have a privacy or ethical objection to using a particular AI tool.

Recognize that human feedback on writing is vital for learning. In Writ 1150, students receive intensive feedback from faculty that is designed to guide their revision of writing and support their growth as writers. Ultimately, we conceptualize faculty-generated feedback on writing as an act of meaningful human dialogue with students, and a crucial way we align our work with the Jesuit value of Cura Personalis. Because we recognize the unique scholarly expertise that all faculty have in commenting on writing, we do not use AI tools to generate feedback on or determine grades for student writing. Additionally, learning to give and receive feedback from diverse audiences is a key goal of all Writ 1150 classes. As such, we regularly engage students in providing human-generated feedback to one another. Some Writ 1150 faculty may engage students in voluntary activities in which they prompt AI tools to generate formative feedback about their writing; however, we consider these AI-assisted feedback activities to be supplemental to – rather than a replacement for – human-generated faculty feedback and peer response.

Prioritize student voices in the ongoing conversation about AI in writing instruction. Rejecting simplistic assumptions about how today’s college students use and feel about AI, we commit ourselves to ongoing dialogue with students as we refine our approaches to AI in the coming years. Writ 1150 instructors prioritize student voices about AI in many ways – by engaging students in public writing and media making about AI, by collaboratively developing AI course policies with students, and by gathering student feedback about what kind of AI-related instruction (if any) would be most and least helpful to them in writing classes. We see ourselves as learners as well as teachers in the evolving AI landscape, and we seek to cultivate inquiry-based classes in which we can learn from and with our students about AI (and other pressing social issues).