Praxis: A Writing Center Journal • Vol. 22, No. 1 (2024)
How the Lack of Cohesion in University AI Policy Poses Challenges to Writing Consultants
Meredith Perkins
Miami University
perkin16@miamioh.edu
Ally Britton-Heitz
Miami University
brittoam@miamioh.edu
Kylie Mullis
Miami University
mulliskm@miamioh.edu
Generative AI writing tools are here and here to stay. Yet, as undergraduate writing consultants, we have observed that many students and faculty in the writing center space are hesitant to accept or adapt to this reality. Instead, "technostress" has led many educators to disavow AI usage entirely, effectively withdrawing from valuable opportunities to demonstrate ethical AI use to students (Kohnke et al. 306). While we personally relate to feelings of "technostress," we also believe that the emergence of AI has given writing centers a profound responsibility to become hubs of ethical AI education and experimentation on our university campuses. Kohnke et al. conclude that one of the most effective ways to resolve technostress among English educators is to create "centralized resources and guidelines" and "collaborative communities of practice" (313). To help begin this conversation within our East Central Writing Center Association, we decided to host a roundtable during the spring 2024 ECWCA Conference that would spark greater conversation about what consultants need in this time of rapid change.
We spent the first part of our roundtable describing the types of generative AI tools students are using. Several of the consultants and directors in attendance had never used AI before, so we felt it was important to begin any conversation about AI by offering examples of what it looks like, what it can do, and, most importantly, what it cannot do. To delineate the differences between what writing consultants and AI writing tools can each provide a student, we then presented the results of a small qualitative study we conducted that compared ChatGPT-3.5 feedback with consultant-generated feedback in an asynchronous consultation.
The most important conversation we hoped our roundtable would spark was about the obstacles university-wide AI policies pose to consultants. At our own institution, Miami University, the academic integrity policy leaves it to the discretion of individual professors to authorize or prohibit AI usage in the classroom ("Academic Integrity: Policy Library"). Through researching policies at peer institutions in Ohio, we concluded that giving professors autonomy over AI usage in their classrooms appears to be the most popular institutional response. We also observed that most policies were created by bureaucratic arms of the university rather than in collaboration with writing centers, which raises questions about how well administrators understand the ways these policies affect writing centers. While we agree that a flexible policy on paper is preferable to a policy that outright bans AI, or to no policy at all, we are concerned that discretionary AI policies have created confusing and incongruous conditions marked by a lack of cohesion in how AI is regulated across university departments.
For consultants, a discretionary AI policy opens the door to a situation in which every new student who enters the writing center has a different set of AI parameters to follow: one student's professor may permit Grammarly but ban other types of AI, another's may ban all AI usage, and a third student may have a writing assignment for which AI usage is not only encouraged but required. Just as writing consultants must be educated to understand and teach different citation styles, they must also learn how to consult with and without AI-generated writing. Our universities' students need writing centers to help them approach writing in both AI-permissive and AI-intolerant contexts.
In our experience, conversations at universities about AI and writing tend to end with the vague notion that we should use AI "ethically," thus avoiding "unethical" practices. However, "ethical" usage is seldom defined or demonstrated, as AI is something that not even its creators fully understand (Heikkilä). As consultants, we believe it is important that writing centers not shy away from this complicated question. If directors do not take the initiative to teach consultants how to consult on assignments that involve AI, consultants will feel unsupported and insecure in their ability to continue supporting writers. Vague conversations about "ethical AI" are not helpful; walking through case studies, developing scripts, and writing a center-level AI protocol are.
Our roundtable was the first time we were able to discuss our questions and ideas about consulting with AI in a larger community of consultants. As part of our presentation, we created three case studies: one in which a student has clearly used AI on an assignment that prohibits it, one in which a student asks how to credit AI on an assignment that permits it, and one in which a student is allowed to use AI to brainstorm project ideas, provided they send their professor screenshots. Consultants spoke openly about the concerns they would have in each scenario and shared plans for how they would navigate each conversation. As a group, we brainstormed best practices for navigating consultations with AI. We agreed that, in general, it is not the consultant's role to memorize every professor's AI policy or to police AI use on behalf of their institution. At the same time, for writing centers to continue being a community of practice for student writers, peer consultants need to be prepared to help all student writers, including those curious about incorporating AI tools into their writing in line with university and professor guidelines.
As consultants, we are trained to use a variety of tools to assist writers (e.g., the Purdue OWL, in-house writing center handouts, and graphic design programs such as Canva and Adobe). From our perspective, AI is simply another writing tool that we can help students learn with. Artificial intelligence cannot replace the experience of meeting in person with a peer consultant, but as it improves, students will have new ways to research archives more efficiently, compute large data sets, and brainstorm. We believe writing centers are uniquely positioned both to model ethical AI usage and to educate students on the inherent biases and privacy concerns that limit the effectiveness of AI. The transition to online writing centers thirty years ago was a time of great technostress, but embracing the internet has allowed writing centers to become more accessible and more widely used. If writing centers take charge in academia's transition to a post-AI era, our communities can help create a culture where AI in writing stays ethical.
What does taking charge look like? At our writing center, all consultants are actively completing small-scale, semester-long research projects. For six weeks, consultants examine high-level issues, exploring different AI-related writing topics to deepen their understanding of how AI applies to writing center work. While our center has long prioritized consultant-led research, encouraging consultants to take the lead on exploring the limitations and opportunities within the new field of AI has helped destigmatize conversations about AI within our center. As consultants who were trained before the creation of ChatGPT, we strongly advocate preparing consultants to work with AI by defining AI best practices in consultant training or professional development sessions. Center directors should help consultants create scripts, run through case studies, and gain a general understanding of AI tools, all of which can go a long way toward improving consultants' confidence. Not every consultant needs to be an AI researcher, but every consultant should be prepared to help a student with an AI writing assignment, because assignments will increasingly feature AI use and requirements.
At the Howe Writing Center, one of our core principles is that new and unfamiliar writing tasks impact a writer's performance. AI is unfamiliar to students and consultants alike, and the lack of cohesive policy at the institutional level only further obscures productive conversations about the future of our field. Academic integrity policies that permit AI greatly expand the scope of a consultant's role, and while these policies need more clarification, writing centers cannot wait until policies are perfect before taking a leadership role in AI education. A writing center's importance on campus is increasing dramatically as students seek help from consultants who can serve as mediators of new technology in a time of intense technostress. To heed this call to action, centers should create communities of practice where all consultants can find support as we navigate both learning and teaching in an era of complex AI academic integrity policies.
WORKS CITED
“Academic Integrity: Policy Library.” Miami University, miamioh.edu/policy-library/students/undergraduate/academic-regulations/academic-integrity.html.
Heikkilä, Melissa. "Nobody Knows How AI Works." MIT Technology Review, 6 Mar. 2024, www.technologyreview.com/2024/03/05/1089449/nobody-knows-how-ai-works/.
Kohnke, Lucas, et al. “Technostress and English Language Teaching in the Age of Generative AI.” Educational Technology & Society, vol. 27, no. 2, 2024, pp. 306–20. JSTOR, https://www.jstor.org/stable/48766177.