Praxis: A Writing Center Journal • Vol. 10, No. 1 (2012)
WRITING CENTER ASSESSMENT: AN ARGUMENT FOR CHANGE
Kristen Welch
Longwood University
kristen.d.welch@gmail.com
Susan Revels-Parker
Longwood University
leslie.revelsparker@live.longwood.edu
Assessment embodies the potential for change if used to its fullest advantage. For any writing center, assessment can build arguments for additional or continued funding, staffing, or space on campus. It can also provide an explanation (and defense) of the value of the center’s services to the campus community. Yet an assessment must have accurate outcomes and collect accurate data to build these arguments. Inspired by the desire to create a new part-time or full-time position in the writing center for the director, the pilot assessment at Longwood University (a small liberal arts college nestled in central Virginia) was designed to show the center’s effectiveness at helping to improve student writing, at helping to increase students’ confidence about writing, at offering excellent professional development to the tutors and to the graduate assistant, at satisfying the needs of both the students who use the center and the faculty who send them there, and at showing increasing usage as an indication of positive growth (see Appendix). Thus, the pilot assessment was designed to demonstrate the effectiveness of the center in order to ensure its longevity, along with the need for a permanent director so the center could reach more students through consistent leadership. As assessment takes center stage on the “to do” list of writing center directors, it is important to consider several aspects of building a coherent pilot, such as context, audience, data collection, and possibilities for collaboration with faculty and the Office of Assessment and Institutional Research. In this paper, we will explore seven key questions for designing an effective pilot that address each of these areas.
The Process for Creating a Pilot: Heuristics for Getting Started
The seven questions are:
Who should be in charge of the assessment at your institution?
What kinds of challenges does your writing center face?
What are your goals for the assessment?
What’s the mission of the writing center?
What kinds of data do you already collect, and how is it useful for measuring the outcomes you define?
How can you partner with your own Office of Assessment and Institutional Research to establish a plan, collect needed data, and analyze results?
How can you publish the results to help others and to be rewarded for your work? How can you use the assessment process to mentor graduate students or junior faculty on campus?
1. Who should be in charge of assessment at your institution?
Most people would assume the director of the writing center is always in charge, but this is not always the case. In “Assessing the Writing Center: A Qualitative Tale of a Quantitative Study,” Doug Enders describes his experience with allowing someone from the Institutional Research office to conduct the assessment. For one, when the data are purely quantitative, the attitude of the assessor can shape the portrayal of results, since there is no clear way to quantify the effect the writing center has on students’ writing ability, student retention, or even student grades. Most of the published narratives of writing center assessments agree that the relationship between the writing center and its positive effect on students is a weak cause/effect argument without proper contextualization (Bell 2000; Lerner 2003; North 1994). In Enders’s article, he writes that in all of the quantitative studies the IR researcher ran, the correlation between the center’s work and a student’s writing ability, choice to finish a degree at that institution, and/or grades was too weak to have any real meaning (9). If the administration had not already been favorable toward the center, the center’s budget might well have suffered (8).
At Longwood University, the Office of Assessment and Institutional Research (OAIR) chooses to put faculty in charge of assessments in order to draw on faculty experience and expertise in the subject area and to create a logical, usable assessment. So when the administration asks who should be in charge of the assessment, it might consider three things: (1) who has the most at stake, (2) who has the greatest knowledge of and experience with the subject at hand, and (3) who will be responsible for making changes in response to the assessment. It is difficult to convince someone that change is needed when the assessment was poorly designed or when it overlooked major concerns particular to that area. While it may seem like less work to send an OAIR expert to conduct the assessment, doing so creates problems that make the implementation of change less efficient.
2. What kinds of challenges does your writing center face?
At a workshop we offered at the Virginia Assessment Group’s conference in Williamsburg in November of 2011, we gathered answers to this question from the participants. Many faculty members misunderstand the mission of the center. One participant, the chair of an English department, believed the writing center was a “fix-it shop” and wondered how to get better proofreading out of the tutors. North famously defined the writing center as a place that helps with the writing process, based on “the writers it serves,” with a goal to “make better writers, not better writing” (438). He rejected the idea that the writing center was just a first-aid station that did editing and proofreading, and he described the tutor as a “participant-observer” (439), or in other words, as a collaborative teacher. This conception of writing center work was very different from what we heard voiced during the workshop in Williamsburg last fall. Another participant had concerns about online tutoring: a significant number of his university’s programs were fully online, and he worried about serving students who might never visit the campus. Yet another participant had concerns about communicating with faculty about pedagogical differences and areas of convergence. The challenges at Longwood University include a need for additional ESL training for the tutors and a need for more graduate students as tutors. Although we do not have the space in this paper to offer solutions, the point is that acknowledging challenges is an important part of the assessment process, since goals can be set in response to them.
3. In light of these challenges, what are your goals for the assessment? In other words, who is your real audience for the assessment report?
We know that we are being asked to provide evidence of the quality of instruction at all levels of the university in order to maintain accreditation, but for most of us, the real audience consists of those administrators who can provide the monetary resources needed to support the center and its work. In “Sustaining Argument: Centralizing the Role of the Writing Center in Program Assessment,” M. S. Jewell writes:
Importantly…in addition to directly participating in assessment procedures, we have since spearheaded the communication of results to writing program and other campus administrators, and publicized the extent to which outcomes are met to faculty and students through outreach activities such as writing-center sponsored workshops. These activities have led me to reflect on the ways in which not only our own, but other writing centers might take advantage of the institutional discourses generated by program assessment.
She follows this with a beautifully stated question: “How might the transformative, dialogic spaces opened up by program assessment be useful not only in terms of their pedagogical benefits, but for their rhetorical value in terms of increasing writing center visibility and bolstering institutional legitimacy?” Indeed, James Bell lists two goals of an assessment: using the data to improve tutor training and using it to “influence the amount of funding from those who control the budget” (7, 8). We would add that an assessment might also need to address support for faculty teaching writing-intensive courses in order to demonstrate the positive campus-wide impact the center has and to show that the center can be a place to centralize writing-across-the-curriculum initiatives. By working closely with the center designed to support teaching on campus (called CAFÉ at Longwood University, where pedagogical advice and support are offered to faculty), the writing center can help improve writing across campus by instructing faculty in designing clear assignment sheets, in providing appropriate examples of writing in a particular discipline, and in assessing writing by doing more than marking spelling and grammar errors. Whereas Stephen North’s 1994 article about what a writing center should be and do called for embedding writing instruction at all levels of English programs, today the proliferation of centers for faculty support means that partnerships can be formed in new ways.
4. What is the mission of the writing center?
The mission of the center determines the desired outcomes for an assessment. If the writing center at your institution does not already have a mission statement, or has an outdated one, here are some questions to help draft or revise it:
How does (or how can) the mission of the writing center mirror the institution’s mission?
How does the mission support the goals in place for assessment on campus?
How does the mission encompass a desire to aid in student support and retention?
Does the mission include a concern for the surrounding community (i.e., non-students)?
If you already have a draft to work from, does the mission ignore any particular group (e.g., faculty, athletes, graduate students)?
If so, is this done intentionally or should the mission be revised to include one or more groups previously left out?
Neal Lerner suggests we use student and faculty perceptions, institutional expectations, and research on student needs to assess the effectiveness of the writing center (65). So he wants us to ask: Does the mission of the writing center at your institution “fit” the need? Lerner also asks: How do the center’s mission and stated outcomes line up with the national standards that exist? (72). Reviewing the official statements of the International Writing Centers Association (IWCA) and the Council of Writing Program Administrators (WPA) on what a center should be and what “good” writing is can be useful for showing how your center aims for excellence in the teaching and support of good writing (72).
Adapted from Wright State University’s assessment plan, Longwood University’s writing center mission statement currently reads as follows:
The mission of the Writing Center is to serve students, tutors, and faculty as a resource for improving writing across Longwood University's campus. We seek to improve student learning by working on written documents with students at any stage of the writing process. The mission is not to create a perfect paper, but to work on targeted areas of concern with students. We also seek to offer professional development opportunities that help tutors embrace their role as citizen leaders in the Writing Center. We stress a caring, generous demeanor, and we emphasize listening over dominating the session. Finally, we seek to offer faculty on Longwood University's campus support through writing workshops for their students, specialized tutoring, or in other ways.
The mission statement has established a focus for the center, highlighting the fact that we serve tutors and faculty, not just the students who come in. The statement also clarifies what we perceive to be our “job” in the center for faculty who mistakenly believe every student who visits should emerge with an A+ paper. Finally, the mission statement outlines some of the work we do in offering workshops and specialized tutoring that is not immediately apparent to some faculty and administrators on campus.
5. What kinds of data do you already collect, and how is it useful for measuring the outcomes you define?
A common theme in published articles on writing center assessment is that the best an assessment can do is show a “possible causal relationship” between writing center visits and outcomes such as higher GPAs, graduation rates, and better scores on papers written later in the semester than on those written earlier, so it is wise to be cautiously optimistic about whatever data the assessment reveals (Wingate 9). At Longwood University, we have collected data on how many students visit the center for years. We now collect data on the number of visitors and repeat visitors; the classes and professors for which they seek help; the length of each session and the areas addressed in it (recorded in a discussion board folder on Blackboard to which tutors post after each session); student evaluations of the center; feedback on workshops conducted for faculty; and feedback from faculty on areas of improvement they noticed in student writing after a visit to the center. In addition, we ask tutors to offer feedback on their professional development through an interview. Since professional development is a key part of the mission statement, it is vital to gather this information so it can be used to improve tutor training.
6. How can you partner with your own Office of Assessment and Institutional Research to establish a plan, collect needed data, and analyze results?
Once these initial questions are answered, it is useful to have a paradigm for organizing the information you gathered. We chose to model ours on the assessment cycle used at Longwood. The assessment cycle typically consists of:
Establishing a mission statement (a vision for the efficacy of the work, not just a description of the center).
Setting Goals (broad statements based on the mission statement).
Establishing Outcomes and Objectives (concrete goals that can be measured).
Establishing Measures (both direct and indirect).
Setting Targets for Achievement.
Reporting Findings.
Establishing Action Plans based upon those Findings.
Each of these seven pieces is recorded in WEAVE (the online assessment database designed by another Virginia university and adopted for use at Longwood University) and can be used to show a SACS (Southern Association of Colleges and Schools) reviewer a coherent picture of how the writing center has established a cycle of assessment in order to bring about continued improvement. To simplify our work and to connect it with other assessments done on campus, we also decided to use a modified version of the rubric already approved for use in the SCHEV (State Council of Higher Education for Virginia) Written Communication Competency (WCC) assessment conducted over the last two years.
Thus, after the mission statement, we saw that the assessment plan needed to include four major sections: Goals, Outcomes, Measures, and Targets. These sections define what is being collected and how the data will demonstrate the center’s effectiveness. Working with Dr. Linda Townsend, the Assistant Director of OAIR, we defined five areas for the Outcomes, Measures, and Targets sections to organize our articulation of goals and measurements:
Improved Student Writing.
Increased Confidence.
Increased Student Utilization of Services.
Enhanced Professional Development.
Satisfaction with Service.
Keeping these areas consistent across the sections helped to clarify the goals of the plan. The section on “Improved Student Writing” was made consistent with the WCC because it focused on the same four areas of writing: analysis, organization, audience, and mechanics. One other note: a Likert scale was added to the original drafts of the surveys we designed as direct and indirect measures because Dr. Townsend argued that answers to open-ended questions would be difficult to quantify and that the administration responded best to quantitative rather than qualitative data. If we hoped to argue for a full-time director or for additional resources for the center, we would need to convince the administration that the need was real and that it made sense in light of our objective, numerical data. We also included a short consent form on each survey because Mrs. Revels-Parker and Dr. Welch wanted to present this information and to make the results public through this publication; thus, the surveys had to meet IRB (Institutional Review Board) requirements. Without IRB clearance, all of our data would have had to remain unpublished, minimizing the value of our work for ourselves and for other institutions wanting to review the strengths and weaknesses of the pilot assessment.
Finally, a full draft of the assessment plan appears at the end of this article, but it is important to highlight the value of the surveys here. In “The Faculty Survey: Identifying Bridges between the Classroom and the Writing Center,” Masiello and Hayward emphasize the importance of routine faculty surveys to ensure that faculty understand the function of the center. The inaugural survey they sent out revealed a broad range of ideas about the center’s role in improving writing across the campus and helped the center adjust program services and tutor training.
7. How can you publish the results to help others and to be rewarded for your work? How can you use the assessment process to mentor graduate students or junior faculty on campus?
We found that the most important aspect of the pilot assessment was that its organizational structure provided an excellent paradigm for other directors to build upon. In addition, our mistakes were just as interesting and valuable as our successes, so as we drafted this article, we wrote about our disappointments. For example, even though making the assessment plan consistent with the structure of the assessment cycle was helpful, several significant problems emerged after running the pilot this year. First, our attempt at rating papers written in the first part of the semester with a modified version of the rubric used for the Written Communication Competency assessment failed. We set up our plans with five different professors and collected papers from them. However, we were reluctant to ask them to require students to come to the center just to guarantee the success of our pilot. Thus, by November only six students from the five courses had chosen to visit the writing center despite the encouragement of faculty. One faculty member had even required her students to visit the center three times, but they still did not come. When it was clear that this measure would not be useful for the pilot, we contacted the OAIR office to cancel that part of the assessment in order to avoid unnecessary expense.
Second, we provided a form in the center to collect data on how students perceived the help they received. This student evaluation form has been very effective, except that students skip any question they do not feel is relevant to their particular session. They also would sometimes add a “6” to make the point that they were very, very pleased with the help they received. Even so, our initial results for the student evaluations show that 82% of our student clients rate us a “4” or higher on a 5-point scale.
While these are hardly insurmountable problems, a third problem arose with a short survey we sent to faculty members to get feedback on student improvement. Instead of providing useful comments, they sent back the forms with some version of “I don’t know; I teach forty students in this class and don’t read drafts” or “I’m not sure how helpful my feedback will be since one student who visited the center is a strong writer and didn’t improve much and the other was weak and improved some.” Even though 75% of the four faculty members who sent back the survey rate us a “4” or higher, these ratings seem meaningless in light of the comments supplied.
Conclusion
It is important to us to know that we have expended a great deal of time, energy, and money for the benefit of an institution with high academic integrity and standards that are recognized by other institutions, alumni, and prospective students. Assessment allows us to see how one institution compares with another and to set ever higher goals. Pride in our institution is one reason we feel it is important to support and to raise writing performance across campus in measurable and meaningful ways. Working in the writing center provides us with a unique opportunity to share successful writing techniques with our student clients and to offer them access to appropriate writing resources. The writing center assessment is a valuable part of the institutional assessment for accreditation and should receive all of the resources it needs to continue serving students. In the future, the lessons learned from this pilot assessment will be used to craft an even more effective assessment plan, to chart new goals, and to adopt new measures to show that the writing center is and will continue to be one of the most important student support centers on our campus. Hopefully, our experiences will be of help to others as they begin this process for themselves.
Works Cited
“Assessment Plan.” Wright State University. www.wright.edu/assessment/plans/uwc_plan04.doc. Web.
Bell, James H. “When Hard Questions Are Asked: Evaluating Writing Centers.” The Writing Center Journal. 21.1 (Fall/Winter 2000): 7-28. Print.
“Center for Academic Success.” Christopher Newport University. Accessed Nov. 2011. Web.
Jewell, M. S. “Sustaining Argument: Centralizing the Role of the Writing Center in Program Assessment.” Praxis: A Writing Center Journal. 8.2 (Spring 2011). Web.
Lerner, Neal. “Counting Beans and Making Beans Count.” Writing Lab Newsletter 22.1 (1997): 1-3. Print.
- - - . “Writing Center Assessment: Searching for the ‘Proof’ of Our Effectiveness.” The Center Will Hold: Critical Perspectives on Writing Center Scholarship. Ed. Michael A. Pemberton and Joyce Kinkead. Logan, UT: Utah State Univ. Press, 2003. 58-73. Print.
Masiello, Lea and Malcolm Hayward. “The Faculty Survey: Identifying Bridges between the Classroom and the Writing Center.” The Writing Center Journal. 11.2 (1991): 73-81. Print.
North, Stephen M. “The Idea of a Writing Center.” College English. 46.5 (1984): 433-446. Print.
- - -. “Revisiting ‘The Idea of a Writing Center.’" The Writing Center Journal. 15.1 (1994): 7-20. Print.
Wingate, Molly. “Writing Centers as Sites of Academic Culture.” The Writing Center Journal. 21.2 (Spring/Summer 2001): 7-20. Print.
Appendix
Longwood University Writing Center Assessment Plan
1. Mission
The mission of the Writing Center is to serve students, tutors, and faculty as a resource for improving writing across Longwood University's campus. We seek to improve student learning by working on written documents with students at any stage of the writing process. The mission is not to create a perfect paper, but to work on targeted areas of concern with students. We also seek to offer professional development opportunities that help tutors embrace their role as citizen leaders in the Writing Center. We stress a caring, generous demeanor, and we emphasize listening over dominating the session. Finally, we seek to offer faculty on Longwood University's campus support through writing workshops for their students, specialized tutoring, or in other ways.
2. Goals
The University Writing Center will:
provide an accessible, comfortable, collaborative environment for writers of all abilities;
foster the growth and confidence of writers by clarifying and promoting techniques of effective writing;
serve as a writing resource for the university community and beyond through the Writing Center web page, presentations, and workshops;
enhance the academic experiences of writing consultants employed in the Writing Center by encouraging professional development and frequent self-reflection on their roles as tutors and writers.
3. Outcomes
The Writing Center will assess the following outcomes for students who visit the center or who are taught through workshops:
Improved Student Writing
Students will demonstrate improvement in their written communication competency skills in one or more of these four areas: analysis, organization, audience, and mechanics.
Analysis
Identifies, summarizes, and analyzes the topic/problem in the reading with significant clarity and addresses all relevant questions and issues.
Organization
Organizes paragraphs coherently to support the connections of the reading to the field and to articulate new concepts with consistent and skillful use of appropriate, clear transitions and well-developed explanations.
Audience
Demonstrates precision and control over language, examples, and concepts that are appropriate to the topic and/or rhetorical situation.
Mechanics
Uses perfectly correct grammar, spelling, and proper documentation.
Increased Confidence
Students should exhibit more confidence as writers. Also, faculty should see students exhibiting more confidence in their abilities as writers.
Professional Development
The Writing Center will assess the following outcomes for the tutors and graduate assistant employed by the center:
Enhanced academic and professional experiences of writing consultants and a graduate assistant through training and professional development activities.
Satisfaction with Service
The Writing Center will assess the following outcomes for the faculty and students who use the center:
Satisfaction of students and faculty.
Increased Student Utilization of Services
The Writing Center will advertise and will work with faculty to increase student utilization of services.
4. Measures for each outcome
Improved Student Writing
In conjunction with several faculty members, we will collect an early writing assignment and a writing assignment written after students have had the opportunity to visit the Writing Center. Using the rubric in Appendix 1, we will assess differences in the quality of writing. Student evaluations will be used to gauge students’ perceptions of improvement in their writing and to document which learning outcomes described in the “outcomes” section of this document were achieved (See Appendix 2).
Increased Confidence
Student evaluations will be used to gauge students’ perceptions of increased confidence in their writing abilities (See Appendix 2). Faculty evaluations from across the disciplines will be used to gauge faculty perceptions of improvement in student writing and confidence (See Appendix 3).
Professional Development
Exit surveys as well as interviews with graduating tutors and with the graduate assistant will be used as a means of assessing the writing center’s impact on its student employees (See Appendix 4).
Satisfaction with Service
Faculty and student evaluations will be used to gauge the level of satisfaction with the services (See Appendices 2 & 3). Faculty evaluations will be used to gauge the level of satisfaction with workshops and/or specialized tutoring sessions (See Appendix 5).
Increased Utilization of Services
Data on the number of visitors to the center, the number of repeat visitors to the center, and the number of students served through workshops will be collected and compared to data collected in the previous two years in order to demonstrate increased utilization of services (See Appendix 6).
5. Targets
Improved Student Writing
The pilot will establish a baseline for the effects of the Writing Center intervention on student writing.
Increased Confidence
The pilot will establish a baseline for measuring improvement in student confidence about writing.
Increased Student Utilization of Services
The pilot will compare data from Fall 2011 to data collected in the Fall of 2010 and 2009 to show an increase in use of services.
Enhanced Professional Development
The pilot will establish a baseline for measuring the impact of professional development on Writing Center staff.
Satisfaction with Service
The pilot will establish a baseline for measuring satisfaction with service.
Timetable for Assessment
Surveys and data will be collected during the Fall 2011 semester. A pre-assessment will occur on Saturday, November 19th. A random sample of 50 papers collected from five faculty will be assessed by four different faculty members with the rubric in Appendix 1. These papers will be collected before students have been encouraged to visit the Writing Center.
A post-assessment will occur on Saturday, February 11th. A random sample of 50 papers collected from five faculty will be assessed by four different faculty members with the rubric in Appendix 1. These papers will be collected after students have had an opportunity to visit the Writing Center.
A report will be submitted by May of 2012.