Principles of Critical AI Literacy

As we continue to navigate the evolving landscape of AI, the writing program recommends that faculty:

Develop clear and transparent policies about AI use in writing classes. Writ 1150 instructors have the academic freedom to set their own syllabus policies clarifying how, when, or if students may use AI to assist with writing assignments. Our only programmatic expectation is that instructors include clear AI policies on their syllabi and take time in class to discuss those policies with students. Alternatively, some instructors may choose to collaboratively design an AI policy with their students. We recommend that faculty consider designing policies that emphasize the importance of transparent disclosure of AI use, require the attribution of words or images generated by AI, clarify the important role of human thinking in all writing, and aim to cultivate a climate of trust and respectful dialogue among students and teachers.

Emphasize that AI tools can offer only limited assistance with (not replacement for) human writing and thinking. We remain committed to human writing as a powerful mode of inquiry, learning, and communication that cannot and should not be replaced by machines. At the same time, we recognize that many (though not all) students and professionals are turning to AI tools to assist them with writing and research tasks. Accordingly, we see value in engaging students in discussing and reflectively using AI tools in critical, rhetorical, and ethical ways to assist with generating ideas, conducting research, revising, and/or editing. Importantly, we believe that any use of AI should be accompanied by substantial human writing and/or conversation – with an emphasis on reflecting critically on how human prompting, rhetorical choice-making, and ethical judgments are fundamental whenever we choose to use AI to assist with writing and research tasks. 

Teach students to critically evaluate the affordances and limitations of various AI tools for writing and research. We are living in a moment in which many AI tools are being developed and refined, and they all have unique affordances and limitations. Instead of simply defaulting to engaging with ChatGPT, we recommend critically discussing and trying out a range of tools. For example, we have found that the AI tools custom-built for scholarly research (such as Elicit) are much better at locating and accurately summarizing peer-reviewed articles than ChatGPT, which tends to hallucinate citations and information.

Apply a social justice lens to our AI-related instruction – including considering when and how we might choose to reduce or (perhaps) refuse the use of AI tools that cause harm. We are concerned about how many current AI large language models are designed in ways that algorithmically reinforce racist, sexist, heteronormative, and ableist biases, rely on unjust labor models, and contribute to catastrophic climate change. We recommend that instructors include readings and discussions on syllabi that educate students about the many pressing ethical questions raised by various AI tools. While many of us are developing ways to complement technological critique with limited reflective use of AI, we also fully support instructors’ academic freedom to refuse the use of any AI tools that do not align with their values. We would note that refusal to use (or allow use) of a tool should not be equated with refusal to discuss the tool with students – as any AI refusal will be most pedagogically meaningful if it’s explained and used as a starting point for dialogue.

Ensure students have agency in deciding whether or not to input their words into an AI large language model. While current AI tools vary in how they do (or do not) collect user data, we are concerned that many AI tools train their models on user data in ways that are not readily transparent. We recommend that instructors discuss the privacy policies and settings of any AI tools they demonstrate in class. We also believe students should retain control over when or if their words are input into an AI tool; as such, writing instructors should refrain from directly inputting student writing into AI tools (unless soliciting opt-in volunteers for a class demonstration). Moreover, while we encourage instructors to consider implementing class activities that invite students to reflectively use AI tools, we recommend that instructors express willingness to provide alternative forms of engagement for any students who have a privacy or ethical objection to using a particular tool.

Prioritize student voices in the ongoing conversation about AI in writing instruction. Rejecting simplistic assumptions about how today’s college students use and feel about AI, we commit ourselves to ongoing dialogue with students as we refine our approaches to AI in the coming years. Writ 1150 instructors prioritize student voices about AI in many ways – by engaging students in public writing and media making about AI, by collaboratively developing AI course policies with students, and by gathering student feedback about what kind of AI-related instruction (if any) would be most and least helpful to them in writing classes. We see ourselves as learners as well as teachers in the evolving AI landscape, and we seek to cultivate inquiry-based classes in which we can learn from and with our students about AI (and other pressing social issues).