Praxis: A Writing Center Journal • Vol. 20, No. 1 (2022)
Self-Initiated Writing Center Visits and Writing Development: A First-Year Writing Assessment
Salena Sampson Anderson
Valparaiso University
salena.anderson@valpo.edu
Abstract
This article analyzes the relationship between writing center use and writing improvements from the first to second semester in first-year university writing assessment data. The study correlates self-initiated writing center use with improvement in several areas, including title, thesis, organizational statement, organization, use of evidence, and clarity. These improvements contrast with those for peers who did not visit the center or who only visited when required. Writing center visits may directly impact assessment results when students visit the center with papers later designated for assessment. However, many assessment samples were not part of a writing center session. Instead, there may be differences between the population of students who self-initiate writing center appointments and those who do not. For instance, students with self-initiated writing center visits were less likely to identify as writers, and their initial assessment results were slightly lower than their peers’. However, by the second semester, their assessment scores generally surpassed those of their more confident peers. These findings suggest that students who self-initiate writing center visits are, as a group, better positioned to achieve increases in writing assessment scores across their first year because of productive writing center sessions and an open mindset for seeking writing support. However, this article also shows how quantitative data from writing program assessment may be leveraged alongside qualitative writing center data to highlight and address inequities, as observed in the case study of a multilingual writer whose assessment results did not feature the same positive changes as those of her peers.
Introduction
Many writing center directors and tutors cherish stories of writers' growth and increasing agency. Reflecting on longitudinal writing research, however, Nancy Sommers (2008) notes that "writing development is not always visible on the page" and that "writing development involves steps both forward and backward" (154). For these reasons and others, measuring writers' growth in the center and elsewhere can present challenges. While the relationship between writer growth and writing improvement is complex, in observing the “incremental” growth of writers and increased writing assessment scores for writing center clients, Luke Niiler leverages assessment data to inform conversations across campus about writing center pedagogy and its outcomes. He argues, “My colleagues outside the discipline don’t quite know what to make of the well-worn adage, ‘it’s the writer, not the writing.’ So with my stats in tow, I can state a variation on that theme: we help writers make ‘better writing,’ and I can explain exactly what I mean” (15). Even as Niiler highlights the importance of writing assessment data in conversations with colleagues across campus, Megan Jewell (2011) argues that writing centers should play a central role in writing program assessment more broadly because of their unique insight into student writing as well as their ability to facilitate collaboration between diverse faculty and administrators in supporting student writers. Jewell provocatively asks, “How might the transformative, dialogic spaces opened up by program assessment be useful not only in terms of their pedagogical benefits, but for their rhetorical value in terms of increasing writing center visibility and bolstering institutional legitimacy?” The present study builds upon Jewell’s concept of centering the writing center in conversations around university writing assessment, exploring how data and insights from the writing center may inform campus-wide longitudinal writing assessment.
This paper presents an IRB-approved study showing a correlation between self-initiated writing center visits and increases in university writing assessment scores between first-year students' first and second semesters. Score increases appear to relate both to positive impacts from writing center sessions and to personal characteristics associated with self-initiated writing center use, such as attitudes toward oneself as a writer. While students who self-initiated writing center sessions were less likely to identify confidently as writers and were more likely to have slightly lower first-semester assessment scores than their peers, by their second semester this same group of writers was more likely to have surpassed their peers’ second-semester writing assessment scores in most areas. As in all populations, however, there is variation. In some cases, this variation may be understood in terms of individual differences; in others, it may be related to institutional access and barriers. A case study involving a multilingual writer’s relatively modest gains between semesters presents an exception to the larger quantitative findings, and related writing center records suggest the need for more attention to supporting multilingual writers' goals. Thus, this study suggests the value of interweaving quantitative writing program assessment data with qualitative writing center assessment data to create a richer picture of student writing and to identify and respond to issues with equity and access in the campus writing community.
Literature review
The present research draws from a body of writing center scholarship showing a relationship between writing center use and writing improvement, including studies by scholars like Luke Niiler, Madeleine Picciotto, Scott Pleasant, and Deno Trakas. It simultaneously draws from a tradition of longitudinal writing assessment, such as the 2017 study by Daniel Oppenheimer and his colleagues, which reveals modest writer growth across a student’s undergraduate career. Considering these veins of research together, one might wonder what types of writing gains are possible over shorter periods when a student truly invests in their writing process and is assisted by the writing center. Answering this type of question within the context of a small university necessitates the use of both university writing assessment results and writing center data – a combination of qualitative and quantitative data. Thus, while the present study is informed by Richard Haswell’s call for more replicable, aggregable, data-supported (RAD) research, this study also draws from Isabelle Thompson’s astute observation that qualitative and quantitative writing center research may be used hand in hand to enrich analysis.
Trends in quantitative and qualitative writing (center) assessment
Quantitative and qualitative writing assessment research affords different opportunities and challenges. While Haswell’s influential 2005 publication “NCTE/CCCC’s Recent War on Scholarship” argues for more RAD research in the field of writing studies, other scholars argue for the benefits of qualitative approaches to writing assessment. Kathleen Cassity speaks to a lack of eagerness to engage in quantitative writing assessment, recognizing “how difficult it is to measure in quantitative terms a holistic aggregate of skills like those required for effective written composition” (63). Krystia Nora similarly argues for the value of qualitative assessment of writing programs, highlighting the difficulty of isolating the different factors that impact writer growth. However, others, such as Rebecca Day Babcock, Terese Thonus, and Neal Lerner, highlight the fact that qualitative and quantitative writing center research may be complementary. Similarly, Isabelle Thompson argues for the value of writing center assessment more generally, while advocating a combined use of both qualitative and quantitative measures in particular. Most recently, in 2021, Scott Pleasant and Deno P. Trakas provide a history of writing center scholars’ appraisal of quantitative and qualitative methods for assessment and argue, like Thompson, that both quantitative and qualitative writing center assessments are useful and serve different purposes.¹ In particular, Pleasant and Trakas highlight their own assessment practices, which involve the use of qualitative data for tutor training and quantitative data for reporting. Though the present study focuses largely on quantitative analysis, it also builds upon Pleasant and Trakas’ work by considering the unique insights afforded by both qualitative and quantitative data.
Writing center use, writing center improvements, and motivation
As quantitative studies have become more frequent in writing center research, several such studies have argued for the use of quantitative data in reporting, have shown correlations between writing center use and increased assessment scores, and have noted the complexity of distinguishing the impact of the writing center (as opposed to other factors). In 2003, Neal Lerner argues for increased attention to quantitative writing center data, emphasizing the importance of this work for accountability both to our institutions and to ourselves. In 2005 in Writing Lab Newsletter, Niiler reports a quantitative relationship between writing center use and improvements in student writing, arguing that “quantitative research is still new to this field,” and making the case for more statistical analysis of writing center impacts. Similarly, Madeleine Picciotto (2010) reports a correlation between writing center use and pass rates on a university-wide assessment that determines exit from basic writing classes and placement into first-year writing. However, she also notes the challenges in interpreting the results, namely that “students who chose to come to the center may have been particularly motivated, and it might have been their motivation — not writing center assistance — that led to improved exam performance.” Pleasant and Trakas, in the quantitative portion of their study, likewise report a correlation between writing center use and writing assessment scores, in this case determined by an assessment designed by the writing center and involving faculty from outside the center. Like Picciotto, they qualify their results: “It is difficult to imagine a study that could demonstrate definitively that writing center intervention is the sole cause of a student’s improvement, but what we have determined is that our students tend to improve their drafts after visiting the writing center.” The types of statistical work that disentangle the impact of the writing center on student writing from other factors, such as motivation, are indeed more complex.
Triangulating data from different sources may provide some perspective. Lerner (2003) attempts such an analysis, arguing that, at least for the data considered in his study, the impact of the writing center may be isolated from the impact of other variables through the use of additional surveys or institutional data, such as SAT scores and survey results from the Learning and Study Strategies Inventory, an instrument that allows students to self-report on factors that may impact their learning “readiness” (69). In the absence of such a survey, however, a factor like motivation can be difficult to disentangle from other factors that may impact writing center use or uptake of writing center feedback. For instance, Jaclyn Wells considers whether requiring writing center visits can be productive for students, recognizing a consistent belief within the center that students who are “motivated” to visit have the most productive sessions. Wells (100-101) goes on to note that of students who completed required sessions in her study, many reported favorable attitudes toward the center and supported required visits, even as some of these same students reported that they would not be able to use the center as regularly due to course load, family obligations, athletics, and other factors. In other words, a variety of factors can impact students’ ability to visit the writing center, and it may be difficult to differentiate fully between motivation and other factors that may have an impact on actual student ability to visit the center or to integrate feedback fully into their writing.
Longitudinal writing assessment and its limitations
Writing growth is notoriously difficult to study, not only, as Sommers notes, because it is sometimes invisible in a given written sample but also because it may be defined and understood in many ways. For instance, Anne Gere, introducing the edited volume Developing Writers in Higher Education: A Longitudinal Study, explains that the project “aimed to avoid a single meaning of writing development, not only because existing research has articulated multiple meanings, but because of the inherent danger of seeing writing in monolithic terms” (2). She goes on to explain how such definitions can become yardsticks by which student writers and instructors are measured, whether or not the metrics accurately capture all aspects of writers’ growth.
Furthermore, such metrics often benefit certain populations of students while disadvantaging others. For instance, Staci Perryman-Clark illuminates how “assessment creates or denies opportunity structures,” highlighting the ways that “[d]ecisions about writing assessment are rooted in racial and linguistic identity,” and thus disproportionately impact writers of color and multilingual writers in negative ways (206). Indeed, in the present essay, we observe a case study of a multilingual writer whose assessment results and writing center records suggest campus writing experiences that differ from those of many of her monolingual peers, including less growth in areas identified by the student as priority areas for writing center feedback. Thus, we see that writing instruction and support may also limit access for some students, even as assessment may do the same. Effective assessment may also seek to redress some of these equity issues to the extent that it is able to document larger patterns in student writing over time as well as ways in which individual writers are (less than fully) supported across their undergraduate experience.
Acknowledging these limitations, previous research attempts to explore factors that appear to be correlated with different aspects of writer growth, measured qualitatively through case studies and quantitatively through rubrics. The extent to which changes in writing assessment scores are observed depends on many factors, ranging from assessment design to length of observation and differences in student populations. For instance, in one pre-test and post-test design involving timed writings before and after students’ first-year writing experience, Lisa Rourke and Xuchen Zhou anticipated statistically significant differences in writing assessment scores assigned by blind reviewers. However, they report a lack of statistically significant findings and recommend instead that “Longitudinal studies, such as those using portfolios, offer assessment opportunities that account for development over time” (279). They note, however, that even such longitudinal studies may have limitations. Sommers (2008), reflecting on some of the limitations of quantitative longitudinal writing assessment, offers several case studies to help explore why some writers progress more than others during their undergraduate careers. Sommers shows that eagerness to enter larger scholarly conversations is a factor motivating writer growth. Comparatively, Oppenheimer et al. (2017) provide a large-scale quantitative analysis of longitudinal writing growth in undergraduates, showing an average improvement of 0.3 on a 4-point scale between matriculation and graduation, with higher scores for women and humanities majors. Thus, while there are limitations with longitudinal writing assessment (and all writing assessment), both qualitative and quantitative studies have suggested some variables that may correlate with longitudinal gains in writing assessment scores in their specific settings.
Methods and materials
The present study was completed at a small, private comprehensive university in the Midwestern United States. The research draws from two existing sources of university writing program data: first-year writing assessment data and writing center records. The purpose of the design is to explore how existing sources of assessment data may be leveraged to provide a more complete picture of writing on our campus, in this case considering any changes between first- and second-semester writing in light of records of writing center use. This research proposal was approved by our institution’s Institutional Review Board and followed all institutional guidelines for the ethical use of human subjects data.
First-Year Writing Assessment
For the first-year writing assessment, at the end of the semester, the writing program gathers writing samples from all students enrolled in Core 110 and Core 115, the university’s first-year writing courses. Students are prompted by email and again in class to submit their samples, and many instructors take time in class for students to upload their samples. From these submissions, the writing program has historically selected a random sample of papers for a blind reading, including submissions from all sections. From an initial sample of 90 students, one student’s sample was excluded as it did not include submissions for both Core 110 and 115, yielding a total sample of paired Core 110 and 115 submissions for 89 students. Identifying information about instructors and students was stripped from the sample, and the papers were numbered with a key matching student names to paper numbers; this key was retained by the writing program and not shared with raters.
The rating process involved both a group norming session and independent rating, similar to that in Oppenheimer et al. (2017). Raters for this assessment included a team of Core and WIC faculty and other stakeholders in the university writing program, for example, the Core director and the first-year writing librarian. For norming, this group of raters met with the University Writing Director, discussed the rubric, and then rated three papers during the group session. After rating this smaller sample, we shared our scores and discussed any discrepancies in our ratings, establishing that our scores were typically within one point of one another. The full collection of writing samples for all 90 students was shared with raters as an anonymized folder of Core 110 essays and an anonymized folder of Core 115 essays. After the norming session, raters read and scored essays independently. All score sheets were shared with the University Writing Director, and for any papers for which scores were neither matching nor adjacent, a third reader from the assessment team provided an additional reading and score.
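To illustrate the third-reader rule concretely, the following is a minimal sketch, not the program's actual workflow, assuming that "adjacent" means within one point on the rubric scale and using invented scores rather than assessment data:

```python
# Hypothetical ratings from two independent readers; not actual assessment data.
rater_a = {"paper_01": 3.0, "paper_02": 2.0, "paper_03": 3.5}
rater_b = {"paper_01": 3.0, "paper_02": 3.5, "paper_03": 3.0}

# Flag any paper whose two scores are neither matching nor adjacent
# (here, "adjacent" is assumed to mean within one point).
needs_third_reader = [
    paper
    for paper, score_a in rater_a.items()
    if abs(score_a - rater_b[paper]) > 1.0
]

print(needs_third_reader)  # ['paper_02']
```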
The writing samples considered for this assessment were the third (and final) papers completed in both Core 110 and Core 115. Both papers were argumentative in nature, and both papers required the use of source texts, with the Core 110 samples featuring three-to-four-page essays and the Core 115 samples featuring six-to-eight-page essays with comparatively more required source use. The assessment data in this study derive from the 2018-2019 assessment, completed in the summer of 2019 as the impacts of COVID and funding changes have delayed more recent university-wide writing assessments of this design.
Writing Center Context, Records, and Analysis
The first-year writing assessment results were then correlated with writing center records. The university writing center offers around 1,600 to 1,800 appointments per year with substantial use from first-year writers. The team of writing consultants consists of ten to twelve consultants, most of whom are undergraduate students; however, the center consistently employs one to three graduate consultants or faculty members who also serve as consultants, primarily for graduate students. Writing center records are created and stored in the mywconline system. This study focuses on writing center records of who used the writing center, when, and whether any of the visits were marked as student-initiated by the writer. These data are taken from client report forms and appointment forms, drawing in part from the work of Rita Malenczyk in her consideration of these essential sources of writing center data and narrative.
After all assessment results were compiled by the university writing program, the key matching student names and writing samples was used to search for each writer in the mywconline system. Information about whether each student had visited the writing center, whether they had ever completed a self-initiated visit (as opposed to a required visit), and whether the consultant indicated working on the third paper in either Core 110 or 115 (i.e., the papers included in the writing assessment sample) was hand-coded in a spreadsheet with the assessment results.
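The coding described above was completed by hand in a spreadsheet; purely as an illustration of the resulting data structure, a sketch of the equivalent join, with hypothetical column names and values, might look like this:

```python
import pandas as pd

# Hypothetical assessment scores keyed by anonymized paper number.
assessment = pd.DataFrame({
    "paper_id": [101, 102, 103],
    "core110_score": [2.5, 2.0, 3.0],
    "core115_score": [2.75, 2.5, 3.0],
})

# Hypothetical hand-coded writing center indicators for the same students.
writing_center = pd.DataFrame({
    "paper_id": [101, 103],
    "self_initiated_visit": [True, False],
    "assessment_paper_in_session": [True, False],
})

# Left join keeps every assessed student; students with no writing center
# record are coded as having no visits.
merged = assessment.merge(writing_center, on="paper_id", how="left")
merged[["self_initiated_visit", "assessment_paper_in_session"]] = (
    merged[["self_initiated_visit", "assessment_paper_in_session"]].fillna(False)
)
print(merged)
```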
The files were disaggregated to create two separate spreadsheets, one of writing assessment scores from students who had completed self-initiated writing center visits and one of scores from students who had not completed any such visits. A nonparametric test, the Wilcoxon signed-rank test, was selected because the data were paired, the sample sizes were relatively small, and the scores could not be assumed to be normally distributed. These tests were used to compare Core 110 and Core 115 assessment scores for each rubric category for the two groups of students, those with self-initiated writing center visits and those without.
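As a sketch of this test for a single rubric category, assuming illustrative paired scores rather than the study's data, the comparison can be run with scipy:

```python
from scipy.stats import wilcoxon

# Illustrative paired scores for one rubric category (e.g., thesis) for one
# group of students; each position is the same student in both semesters.
core110 = [2.0, 2.5, 2.0, 3.0, 2.5, 2.0, 2.5, 3.0]
core115 = [2.5, 3.0, 2.0, 3.0, 3.0, 2.5, 3.0, 3.0]

# Wilcoxon signed-rank test for paired samples; pairs with zero difference
# are dropped under the default zero_method.
statistic, p_value = wilcoxon(core110, core115)
print(statistic, p_value)
```

The same call would be repeated for each rubric category within each group of students.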
Results and discussion
Students with self-initiated writing center visits showed more improvement than their peers. Results revealed statistically significant improvements in title, thesis, organizational statement, organization, evidence, and clarity in second-semester writing samples for students with one or more self-initiated writing center visits (n=31), as seen in table 1 in the appendix. Comparatively, students who did not visit the writing center or who visited only when required (n=58) showed statistically significant improvement between the first and second semester of their first year only in their use of titles, as seen in table 2.
A Closer Look at Student Populations
There may be differences between the student populations that proactively seek out self-initiated writing center sessions (table 1) and those that do not (table 2). For instance, though the differences are not statistically significant, one may note that first-semester assessment scores from students with self-initiated visits are slightly lower than their peers’ scores in five of the categories assessed by the rubric, namely title, thesis, organizational statement, clarity, and tone. For four of the areas (organization, mechanics, evidence, and conventions), the two groups of students’ papers featured nearly identical scores from the first semester. In only one area, style, did the students with self-initiated writing center appointments score more highly than their peers in the first semester. Each of these differences is subtle, typically a tenth of a point or less.
What is most interesting and exciting, however, is that by the second semester, students with self-initiated writing center appointments (who – as a group – were slightly behind their peers) actually achieved higher writing scores in the university writing assessment in almost all areas, including the organizational statement, organization, clarity, style, tone, evidence, and conventions. In the remaining three areas, the two student groups’ scores were similar, with title being the only category in which second-semester scores were not higher for students with self-initiated visits. In the areas of improvement, gains were often two tenths of a point or more. In other words, if students who self-initiated writing center appointments were ever behind their peers, by the end of the first year, their writing assessments earned higher scores than those of their peers.
The subtle differences in writing assessment scores during the first semester appear related to students’ self-conceptions as writers, especially when considering the distinctions between those who confidently identify as writers and those who express reservation, as seen in table 3. What is striking about the data in table 3 is the large proportion of students who do not identify as writers, at least when asked in a writing center profile question. However, there are some statistically significant differences between the two groups of students (χ2 = 8.517, p < .01), most notably the differences in proportions of students who self-identify as writers without any reservation. Confident self-identification as a writer is less common in students with self-initiated writing center use. Also of note is the number of students who only sometimes or rarely identify as a writer, which is more common among this same group. As this is an open field in the writing center profile form, some students in this case simply typed “yes” to the question, “Do you consider yourself a writer?” In the sample of those who did not visit the writing center or who only visited when required, around 24 per cent of the students self-identified as writers without any hedging. Comparatively, only 6.5 per cent of students with self-initiated writing center use similarly identified as writers. However, over 35 per cent of students with self-initiated writing center use qualified their identity as writers in responding to this question, indicating that they “kind of”, “partially”, or “occasionally” considered themselves writers, or, more negatively, that they were “not a good one” or were “not really” a writer. In other words, a surprising majority of both students who visited the writing center voluntarily and those who did not declined to identify as writers. However, writing center non-users and those with only required visits were more likely than their peers to identify confidently as writers, while students who self-initiated writing center visits were more likely to express ambivalence or uncertainty about their identity as writers. These findings suggest that, at least for some students, confidence in one’s identity as a writer may discourage writing center use, perhaps because of the stigma some students attach to visiting the center.
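The group-by-identification comparison can be sketched as a chi-square test on a small contingency table; the counts below are hypothetical, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of students who did or did not identify as writers
# without reservation, split by whether they self-initiated writing center visits.
#                 confident "yes"   other / ambivalent / no
contingency = [
    [ 2, 29],   # students with self-initiated visits
    [14, 44],   # students with no self-initiated visits
]

chi2, p, dof, expected = chi2_contingency(contingency)
print(round(chi2, 3), round(p, 4), dof)
```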
As we consider these two groups of students and their differences in writing assessment scores over time, we must be careful to distinguish between increases in scores that may be directly attributable to the fruits of productive writing center sessions and increases in scores that relate more generally to differences in the student populations. To do so, we may remove the writing samples for which there is writing center documentation indicating that the student worked on that paper at the center. Comparing tables 4 and 5, both of which exclude papers that had been part of a writing center session, we still observe some differences between the assessments for the students who ever self-initiated a writing center visit and those who did not.
In particular, for all categories except style and academic tone, a greater proportion of writing samples from students with self-initiated writing center visits showed score increases between the first and second semester – even when samples for which students directly sought writing center feedback are excluded. This pattern suggests that writing center work alone is not the sole factor in the increases in writing assessment scores between these students’ first and second semester but that individual factors, perhaps related to the students’ attitudes toward themselves as writers or toward their writing, may also impact their writing development.
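A sketch of the comparison underlying tables 4 and 5, again with hypothetical data and column names, would exclude the samples brought to the center and then compare the share of score increases in each group:

```python
import pandas as pd

# Hypothetical rows: one student each, with first- and second-semester scores
# for a single rubric category plus the hand-coded writing center indicators.
df = pd.DataFrame({
    "self_initiated_visit": [True, True, False, False, True, False],
    "assessment_paper_in_session": [True, False, False, False, False, True],
    "core110_organization": [2.0, 2.5, 2.5, 3.0, 2.0, 2.5],
    "core115_organization": [2.5, 3.0, 2.5, 3.0, 2.5, 2.0],
})

# Drop samples the student worked on at the writing center, then compute the
# proportion of remaining samples whose score rose from Core 110 to Core 115.
no_session = df[~df["assessment_paper_in_session"]]
improved = no_session["core115_organization"] > no_session["core110_organization"]
print(improved.groupby(no_session["self_initiated_visit"]).mean())
```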
The Impact of the Writing Center
A subset of students in the sample visited the writing center for either or both essays included in the first-year university writing assessment. This sample may be used to illustrate how writing center data and writing program data may be interwoven to create a richer picture of campus writing. In table 6, organized from the most negative changes in assessment scores to the most positive changes in assessment scores, we observe that most of the negative scores are for samples with required writing center visits. In particular, six of the eight samples with no change or with a negative change between the first- and second-semester writing assessments were written by students who completed required visits to the writing center. Interestingly, three of these required visits were in the students’ second semester. If we were to assume that writing center visits consistently result in students’ uptake of feedback and implementation of positive changes in their writing, we might anticipate that writing center visits in the second semester alone would help to drive higher writing assessment scores in that semester. Of course, we do not know how these writing assessment samples would have scored in their earlier drafts, so we cannot rule out improvement associated with writing center visits in these cases. Still, the fact that the majority of these cases of lower second-semester writing assessment scores come from students with required appointments does suggest a pattern. If we have suspected that there may occasionally be students merely completing requirements in the center without investment in the session, it is possible that some of these samples derive from such sessions. On the other hand, self-initiated writing center visits for a writing assessment sample do not guarantee higher assessment scores than those from a student’s previous semester’s sample. Note, for instance, the paired sample with scores that fell slightly from 2.825 to 2.75, despite the writer’s completion of a self-initiated visit during the second semester and no visit for their first-semester writing sample. This difference is subtle and may just as well represent random chance. The only other student in this sample with a lower second-semester assessment score and a self-initiated writing center visit completed that session during their first semester. In other words, this pattern may be attributable in part to the positive impact of the first-semester writing center session if, for instance, that session helped the student to revise their first-semester sample and thus achieve a higher assessment score in that semester, while the student did not visit the center at all during the second semester.
A corollary to the relatively high frequency of required visits involved in samples with negative assessment results is the comparatively higher frequency of self-initiated visits in samples showing the greatest positive change in assessment scores between semesters. For all samples in the study, the mean change between first and second semester writing assessment scores was a modest 0.11 on a 4-point scale. For students whose samples demonstrated above average increases in scores between their first and second semester writing assessment, six of ten paired samples involved a self-initiated writing center visit for at least one of the samples. Students who visited the writing center for their first and second semester samples were also more common in this category: of the four students who visited the writing center for both their first and second semester writing samples, all four showed positive change in assessment scores between their first and second semester, and three of the four showed above average positive change, including one student who visited both semesters only for required sessions. These findings suggest that sustained writing center use has a positive impact on writing assessment scores, even for some writers in the presence of a visit requirement.
We must also be careful to note that decreased second-semester assessment scores, relative to first-semester assessment scores, for this small number of students after required sessions do not alone argue against the possible value of required sessions. In particular, many of the writers who self-initiated sessions first experienced the center through a required session. In other words, differences in assessment results may not relate to the requirement at all but instead to different writer attitudes, with some writers self-initiating sessions after an initial required visit even as others show no further engagement with the center. Irene Clark (1985), Barbara Gordon (2008), and Rebecca Babcock and Terese Thonus (2012) all argue for the possible advantages of required writing center sessions, especially in the context of introducing students to the center. The findings in this study reinforce this role for required visits. Jaclyn Wells (2016) and Barbara Bell and Robert Stutts (1997) highlight some of the limitations associated with required visits, such as schedule crowding and periodic issues with writer engagement; yet they conclude that the benefits are likely greater than the losses. A similar pattern seems to emerge in the present data.
However, the single – and perhaps unsurprising – pattern that provides unity to the overall picture in table 6 is that samples with higher initial scores were more likely to have comparatively lower second-semester assessment scores, while the inverse was also true. Lower initial assessment scores left ample room for improvement, with many of the greatest changes between first- and second-semester samples deriving from paired samples with a relatively low first-semester score. In particular, across the full dataset for the study, the mean score was 2.38 for first-semester samples and 2.46 for second-semester samples. Six of the seven paired scores that showed negative changes between first and second semester involved an initial score that was above average. Similarly, six of the ten paired samples with above-average changes featured below-average initial scores, including several in the mid-to-high 1 range. Part of this pattern may be explained by students with weaker initial writing samples benefiting from writing center support, often from self-initiated visits but also from some required visits. Thus, the specific samples that involved a writing center visit, considered in concert with the earlier findings, suggest the positive impact of writing center visits on assessment score changes between first and second semester, especially in the presence of self-initiated visits.
A case study and tutor training
As in Sommers’ work, case studies can illuminate stories and exceptions within the quantitative patterns. For instance, one of the students who made regular self-initiated writing center visits in this study was a young multilingual woman, for whom we will use the pseudonym Akuba. Akuba’s writing lagged in areas where her peers’ writing improved, even as it improved in other areas. Only title, tone, and disciplinary conventions improved from her first to second semester. Four areas (thesis, clarity, style, and mechanics) remained the same, and the organizational statement, organization, and use of evidence earned lower assessment scores in her second semester. Interestingly, mywconline writing center appointment records from Akuba’s first year show requests for support with organization, thesis, use of sources, and formatting, while post-session client report forms suggest that consultants only partially addressed her concerns and on occasion instead addressed sentence-level issues, even when other goals were indicated by the writer as her intended focus for the session. By her junior and senior years, Akuba requested support with word choice and sentence structure, and her writing self-regulation and growth were compromised by these early visits. In the areas Akuba astutely identified as priorities for further writing dialogue, she had not consistently received the support she needed and requested. In turn, her goals shifted in response to the goals identified by her writing center consultants. While assessment scores for Akuba’s tone and disciplinary conventions did indeed increase between her first and second semester, the other areas she self-identified as goals for writing support demonstrated no comparable growth.
To engage with these findings, in Spring 2022, in collaboration with a TESOL faculty member and students, the writing center offered all writing center consultants and interns a workshop dedicated to goal setting during consultations with multilingual students. While this subject matter is addressed in the writing center internship course, hearing specific findings from our own center helped to facilitate our discussion of how to listen effectively and support multilingual writers’ writing goals in the center and beyond. Thus, this case study, in providing an exception to the larger quantitative trends, helped to shape a local dialogue around best practices for supporting multilingual writers in our writing center. As Bobbi Olson argues, “All student writers deserve to be heard on their own terms as they try to negotiate and understand the expectations placed on them from without” (5). These assessment data gave us an opportunity to reflect on strategies for listening. Even – and perhaps especially – as we identify areas for continued growth, writing centers must celebrate our ability to leverage qualitative and quantitative writing center and writing assessment data to identify which writers we are supporting best, to identify gaps, and to drive improvements in our practices. This use of assessment data truly is a strength in the center and allows us to build upon and extend our best practices.
Discussion and conclusions
Overall, students who engaged in self-initiated writing center visits showed more improvement in their scores between the first and second semester in a blind university writing assessment compared to their peers. These findings suggest not only the value of the writing center but possibly also that of other factors that inform writer growth and that may distinguish students who more frequently make self-initiated appointments, such as students’ ambivalent self-identification as writers. Ironically, the students who made more self-initiated writing center appointments were also those more likely to express ambivalence about their identity as writers, but in turn they were also the students who showed the most writing growth between the first and second semester. In many areas, these students’ writing in their first year showed improvements of 0.2 to 0.4 on a 4-point scale, with growth in these areas comparable to the average growth observed in Oppenheimer et al. (2017) across four years.
While these findings are encouraging, assessment results are only as useful as the stories they allow us to share, the questions they invite us to ask, and the actions they encourage us to take. The present findings suggest possible benefits (and risks) for students, faculty, and writing centers. Students would benefit from being more aware of the impact of writing center sessions, and of investment in the writing process more generally. For more confident writers, the realization that their writing may not grow as much in the absence of this investment in process could be a motivation to visit the center. For less confident writers, seeing the impact of the center and a student’s own investment in their writing process might likewise encourage use or continued use, while helping to destigmatize the choice of using the center. From a faculty perspective, an important learning objective in a first-year writing class might be that students develop a growth mindset with regard to their writing and writing process. In writing conferences, faculty might talk with students about whether and why they identify as writers. Through candid conversations in conferences, class, and reflective writing, we can help all writers to see that the way to grow is to invest in their process, including engaging with the writing center.
As the writing center seeks to analyze student experiences and writing outcomes, the present findings may also suggest a strategy that allows us to gain more insight from our assessment data. Given the differences in assessment outcomes for students who self-initiate writing center visits compared to those who do not, it may be worthwhile breaking out these student populations in a center’s assessment data, including post-session surveys and writing assessments. Disaggregating the data and adding targeted questions in our surveys may help give a clearer picture of the distinct student needs in the two groups, and – perhaps more importantly – how to encourage students toward self-initiation. Additional research is needed to explore further the exact impact of the writing center on student writing assessment results. For instance, qualitative studies comparing multiple drafts of the same assessment paper may be enriched by writing center records documenting not only the focus of the session but also whether the writer self-initiated the session or visited in response to a requirement. Similarly, discourse analysis of required and self-initiated sessions may reveal differences in conversational turns, writer engagement, and scaffolds for positive revisions.
Scaffolding writer self-regulation may be of specific benefit as writing centers work to promote the kind of writer engagement that motivates a student to visit the center. For instance, in a study of writing center discourse, Isabelle Thompson and Jo Mackiewicz illustrate how tutors use a variety of question types to promote student engagement, and they consider how self-regulation may be relevant in this process. As in the present study, they include writer initiation of sessions as an important component of this self-regulation, arguing that “[b]y taking the initiative to come to the writing center, by collaborating with tutors to set agendas, and by asking questions,” the students in their study were “already exercising some self-regulation.” They also call for additional scaffolding and feedback in the writing center to promote and support this self-regulation (63, 64). Even as writing instructors may design in-class activities that support writers’ efforts to craft specific questions and agenda items for upcoming writing center sessions, writing center tutors in consultations and workshops may likewise help to promote writer-initiated questions during sessions. For instance, through the study of specific center transcripts, as in Thompson and Mackiewicz’s research, tutors may be trained to ask certain types of question sequences and to provide scaffolding and feedback that models and encourages writer questions, self-regulation, and engagement. Writers’ ever-developing skills in self-regulation, fostered by the writing center and their classes, may in turn promote both self-initiated writing center visits and the types of writing growth observed in writing program assessments.
Limitations and Future Research
In exploring these patterns of writer growth, this study seeks to illustrate how writing programs may leverage existing first-year writing and writing center assessment data to be mutually informative. As a result, this study provides analyses based on existing writing assessment data, and it is possible that revisions to the first-year writing rubric or to the mywconline report form prompts might yield deeper insights into the relationship between writing center use and first-year writing outcomes. Ideally, this type of analysis might be used to develop new lines of dialogue, which as Jewell suggests, might position the writing center as central to campus conversations on writing. From these conversations, guided by local questions and data, writing programs and writing centers may decide if and in what ways they might like to revise rubrics or writing center report prompts to gather data that best address their needs.
Given the labor-intensive and expensive nature of this quantitative first-year writing assessment, involving multiple writing colleagues serving as blind readers, and given budgetary changes and changes in faculty availability during the recent pandemic, only one year of data is available for analysis. It is possible that over time, even as there is variation from one campus to the next, one may observe different assessment results. This possibility highlights the importance of similar assessment work connecting writing program assessment more deeply with writing center assessment, completed across a broader array of campus settings within single academic years but also longitudinally. Writing centers are well positioned to get to know student writers individually and over time. Integrating quantitative and qualitative writing center assessment data into the larger writing program assessment provides a richer portrait of student writing and writers. This process can also reveal inequities in writing instruction, assessment, or support as we compare case studies against quantitative findings. It is our responsibility to leverage all available writing assessment data to best understand the writers on our campus and to use assessment to create opportunity and access.
Acknowledgments
I would like to thank Colleen Morrissey for her diligent work in writing program data entry and Kelly Belanger for generously sharing writing program assessment data.
Notes
1. See Pleasant and Trakas (3) for a recent literature review of writing center scholarship comparing quantitative and qualitative methods of assessment.
Works cited
Babcock, Rebecca Day, and Terese Thonus. Researching the Writing Center: Towards an Evidence-Based Practice. Peter Lang, 2018. https://dl.acm.org/doi/abs/10.5555/3208646
Bell, Barbara, and Robert Stutts. "The Road to Hell is Paved with Good Intentions: The Effects of Mandatory Writing Center Visits on Student and Tutor Attitudes." Writing Lab Newsletter, vol. 22, no. 1, 1997, pp. 5-8. https://www.wlnjournal.org/archives/v22/22-1.pdf
Cassity, Kathleen J. "Measuring the Invisible: The Limits of Outcomes-Based Assessment." Writing on the Edge, vol. 25, no. 1, 2014, pp. 62-71. https://www.jstor.org/stable/24871681#metadata_info_tab_contents
Clark, Irene Lurkis. "Leading the Horse: The Writing Center and Required Visits." The Writing Center Journal, vol. 5, no. 2/1, 1985, pp. 31-34. https://www.jstor.org/stable/43441808#metadata_info_tab_contents
Gere, Anne R. Developing Writers in Higher Education: A Longitudinal Study. University of Michigan Press, 2019. https://library.oapen.org/handle/20.500.12657/23990
Gordon, Barbara Lynn. "Requiring First-Year Writing Classes to Visit the Writing Center: Bad Attitudes or Positive Results?" Teaching English in the Two-Year College, vol. 36, no. 2, 2008, pp. 154. https://www.proquest.com/docview/220964832?pq-origsite=gscholar&fromopenview=true
Haswell, Richard H. "NCTE/CCCC’s Recent War on Scholarship." Written Communication, vol. 22, no. 2, 2005, pp. 198-223. https://journals.sagepub.com/doi/abs/10.1177/0741088305275367
Jewell, Megan S. "Sustaining Argument: Centralizing the Role of the Writing Center in Program Assessment." Praxis: A Writing Center Journal, vol. 8, no. 2, 2011. https://repositories.lib.utexas.edu/handle/2152/62569
Lerner, Neal. "Counting Beans and Making Beans Count." Writing Lab Newsletter, vol. 22, no. 1, 1997, pp. 1-3. https://www.wlnjournal.org/archives/v22/22-1.pdf
Lerner, Neal. "Writing Center Assessment: Searching for the ‘Proof’ of Our Effectiveness." M. Pemberton, & J. Kinkead, The Center Will Hold: Critical Perspectives on Writing Center Scholarship (2003): 58-73. https://www.jstor.org/stable/pdf/j.ctt46nxnq.7.pdf
Malenczyk, Rita. “‘I Thought I’d Put That in to Amuse You’: Tutor Reports as Organizational Narrative." The Writing Center Journal, vol. 33, no. 1, 2013, pp. 74-95. https://www.jstor.org/stable/43442404#metadata_info_tab_contents
Niiler, Luke. "The Numbers Speak Again: A Continued Statistical Analysis of Writing Center Outcomes." Writing Lab Newsletter, vol. 29, no. 5, 2005, pp. 13-15. https://www.wlnjournal.org/archives/v29/29.5.pdf
Nora, Krystia. "Moving Forward in Writing Program Assessment Design: Why Postmodern Qualitative Assessment Makes Sense." CEA Forum, vol. 39, no. 2, 2010. https://eric.ed.gov/?id=EJ1083538
Olson, Bobbi. "Rethinking our Work with Multilingual Writers: The Ethics and Responsibility of Language Teaching in the Writing Center." Praxis: A Writing Center Journal, 2013. https://repositories.lib.utexas.edu/handle/2152/62178
Oppenheimer, Daniel, et al. "Improvement of Writing Skills During College: A Multi-year Cross-sectional and Longitudinal Study of Undergraduate Writing Performance." Assessing Writing, vol. 32, 2017, pp. 12-27. https://www.sciencedirect.com/science/article/pii/S1075293516300708
Perryman-Clark, Staci M. "Who We Are(n’t) Assessing: Racializing Language and Writing Assessment in Writing Program Administration." College English, vol. 79, no. 2, 2016, pp. 206-211. https://www.jstor.org/stable/44805919#metadata_info_tab_contents
Picciotto, Madeleine. "Writing Assessment and Writing Center Assessment: Collision, Collusion, Conversation." Praxis: A Writing Center Journal, 2010. https://repositories.lib.utexas.edu/handle/2152/62298
Pleasant, Scott E., and Deno P. Trakas. "Two Approaches to Writing Center Assessment." WLN: A Journal of Writing Center Scholarship, vol. 45, no. 7-8, 2021, pp. 3-11. https://www.wlnjournal.org/archives/v45/45.7-8.pdf
Rourke, Lisa, and Xuchen Zhou. "When Scores Do Not Increase: Notes on Quantitative Approaches to Writing Assessment." Journal of Writing Analytics, vol. 3, 2019, pp. 264-285. https://wac.colostate.edu/docs/jwa/vol3/rourke.pdf
Sommers, Nancy. "The Call of Research: A Longitudinal View of Writing Development." College Composition and Communication, vol. 60, no. 1, 2008, pp. 152-164. https://www.jstor.org/stable/20457050#metadata_info_tab_contents
Thompson, Isabelle. “Writing Center Assessment: Why and a Little How.” The Writing Center Journal, vol. 26, no. 1, 2006, pp. 33–61. https://docs.lib.purdue.edu/wcj/vol26/iss1/7/
Thompson, Isabelle, and Jo Mackiewicz. "Questioning in Writing Center Conferences." The Writing Center Journal, vol. 33, no. 2, 2014, pp. 37-70. https://www.jstor.org/stable/43443371#metadata_info_tab_contents
Wells, Jaclyn. "Why We Resist ‘Leading the Horse’: Required Tutoring, RAD Research, and Our Writing Center Ideals." The Writing Center Journal, 2016, pp. 87-114. https://www.jstor.org/stable/43824058#metadata_info_tab_contents
Appendix A
Table 1: Changes from First to Second Semester: Students with Self-Initiated Writing Center Visits
Table 2: Changes from First to Second Semester: Students with No Self-Initiated Writing Center Visits
Table 3: Self-Identification as Writers in Writing Center Records
Table 4: Students with Self-Initiated Writing Center Visits for Papers Other than the Assessment Samples
Table 5: Students Who Never Self-Initiated Any Writing Center Visits and Who Did Not Complete a Required Visit for the Assessment Sample
Table 6: Writing Center Use for Writing Assessment Samples and Scores