Praxis: A Writing Center Journal • Vol 18, No 2 (2021)

“I Believe This is What You Were Trying to Get Across Here”: The Effectiveness of Asynchronous eTutoring Comments

Courtney Buck
Wittenberg University
courtebuck@gmail.com

Jamie Spallino
Wittenberg University
jamie.spallino@gmail.com

Emily Nolan
Wittenberg University
emilydnolan@outlook.com

Abstract

This article discusses our work examining asynchronous eTutoring comments and how we determined whether tutor comments on papers submitted to our writing center were effective. Drawing from the fields of writing center theory, education, and rhetoric and composition, we define effectiveness as a combination of revision and improvement factors (Faigley and Witte; Stay; Bowden). The data consisted of initial and subsequent drafts of student papers submitted for eTutoring sessions, along with the comments a tutor made on each paper. We categorized the comments and corresponding revisions to answer the following questions: which types of comments result in the greatest number of revision changes? And do those comments, according to our definition, align with the types of comments we find to be the most effective? We found that frequency and effectiveness were not the only factors in determining a comment’s importance. We emphasize the necessity of instruction and scaffolding in tutor comments to potentially increase their effectiveness and student understanding.

Introduction

Writing center theorists have spent decades debating the benefits of various tutoring methodologies, exploring, for example, the tensions between directive and nondirective approaches. In his 1991 article, Jeff Brooks argues for a minimalist, hands-off approach that encourages writers to generate solutions to all their own concerns. Linda K. Shamoon and Deborah H. Burns responded in 1995 with an opposite approach, positing that tutors’ knowledge gives them the authority to exert some level of control over the conversation and the writer’s work. Woven through their work, and other similar articles, is the goal of achieving a balance between writer agency and tutor expertise. 

Other research moves beyond the shifting roles of tutor and writer and takes a more detailed approach to tutors’ comments, seeking to categorize and understand what tutors say. Jo Mackiewicz and Isabelle Thompson’s 2014 article divides tutor comments into three strategies that are identified in other disciplines as educationally effective: direct instruction, cognitive scaffolding, and motivational scaffolding. Other works focus on individual types of comments, such as Effie Maclellan on praise or JoAnn B. Johnson on questions.

In a field where the qualities and modalities of tutor feedback are perpetually researched and refined, there appear to be few studies analyzing the effectiveness of this tutor commentary. As writing centers aspire to provide the best feedback for writers, it seems curious that related research has historically focused on the feedback itself, rather than the changes that writers elect to make (or not make) in response. This paucity of research, however, is understandable since tutors rarely have access to writers’ follow-up work after a session—we only have access to half the conversation.

Perhaps the tools to better understand students’ roles in a session reside in research done outside of the writing center field; researchers have turned to students’ consecutive drafts of written assignments in order to study interaction with instructor commentary. In his 1995 case study “Tracing Authoritative and Internally Persuasive Discourses: A Case Study of Response, Revision, and Disciplinary Enculturation,” Paul Prior analyzed the “relationships among writing, response, and learning in disciplinary settings” to ultimately discern “who is talking in these texts” (291). By studying a professor’s comments and a graduate student’s reciprocal revisions, Prior discovered that “the professor's response involved extensive rewriting and that the student routinely incorporated the rewritten text into subsequent drafts” (288). Though these findings seem to support students’ tendency to make comment-based revisions, the limited nature of this case study makes drawing linear conclusions challenging. 

Darsie Bowden’s 2018 article “Comments on Student Papers: Student Perspectives” expands on Prior’s foundation. One focus of her study was the students’ choice to exercise their agency as writers: which comments did they ignore, and why? To understand the way students work through instructor commentary, students in her research study were interviewed about comments they received on both the rough and final drafts of an assignment. For the first interview, the students were asked about their comprehension of the instructor commentary and how they planned to revise. For the second interview, the same students were asked to analyze how the revisions they chose to make, as well as the ones they did not, ultimately influenced the final product and grade. She found that confusion, concern about grades, writers’ past experiences, and more played a role in writers’ choices to revise or not revise.

As writing centers continue to explore student responses to comments, they have also been amassing a growing body of work addressing eTutoring. This literature maps the chronological progression from Barbara Monroe, David Coogan, and Michael Mattison and Andrea Ascuena describing the practices implemented therein to Beth Hewett debating the benefits and drawbacks of synchronous and asynchronous eTutoring sessions. Though the two modalities are often discussed as mutually exclusive or radically different, they share a student-centered model with emphasis on scaffolding, instruction, and a focus on student growth.

In her 2010 book The Online Writing Center: A Guide for Teachers and Tutors, Hewett provides a description of current eTutoring practices as well as her recommendations for their continuing development. One of her main points is the independence of online writing center work from its face-to-face counterpart: “Future theoretical and pedagogical research needs to consider the online conference-based instructional environment as one that requires its own theories and practices—attentive to, but distinctive from, contemporary writing instruction theory and practice developed for traditional settings” (Hewett 162). She also identifies the two categories of eTutoring, synchronous and asynchronous, and details their unique qualities, strengths, and challenges. To extend her discussion of eTutoring as a unique format, Hewett calls for research to develop and analyze eTutoring practices: “In addition to answers for questions about efficacy, educators need solid descriptions and analyses of online instructional commentary: how instructors talk online, so to speak, and how students revise in response” (159).

By examining a stored cache of re-submitted eTutoring papers, our work analyzes the types of revisions tutors ask for in asynchronous eTutoring sessions and the types of changes their comments elicit. To study those resulting revisions, we look to Jessica Williams, Byron Stay, and Bowden’s work. We also draw from Lester Faigley and Stephen Witte in the field of rhetoric and composition for their 1981 taxonomy of revision changes that allows us to compare the revisions requested by the tutor to the corresponding changes in the student’s draft. Our research falls into the intersection of these fields, leading us to define effectiveness as a combination of revision and improvement factors: Which comments elicited change? How did those changes compare to the tutor’s initial request? Did they improve the paper?

In this article, we discuss our work examining asynchronous eTutoring and determining whether tutor comments on papers emailed to our writing center were effective. We explain the process by which our writing center conducts eTutoring sessions and highlight the role of Microsoft Word. We describe our IRB-approved institutional quantitative study, a project that analyzed papers resubmitted to our writing center over the last eight years. We provide findings from our two-year study that reveal the effectiveness of the eTutoring comments made by past and current tutors. We use our findings to answer the following questions: Which types of comments result in the greatest number of revision changes? Do those comments, according to our definition, align with the types of comments we find to be the most effective? We discuss the separation of asynchronous and synchronous eTutoring by exploring the impact of reader response comments. Finally, we investigate the potential of studying student satisfaction in the hopes of discerning if students coming to writing centers are satisfied with the types of comments that tutors most often make.

Background

Our research targeted the comments made by tutors¹ at a small liberal arts university in the Midwest during asynchronous eTutoring sessions. Our center runs through WCONLINE, an online calendar where our students can schedule either a synchronous session—face-to-face or online—or an asynchronous eTutoring session. When a student schedules an eTutoring session, they must choose an hour-long time slot and attach their paper to the session on the calendar, preferably as a Microsoft Word document so the tutor can make comments directly on the document. 

Tutors typically use the same commenting style and format for each eTutoring session, which they learn during a semester-long tutor education course. During this course, instruction on eTutoring emphasizes the importance of asking questions, prompting thought, and avoiding overcorrecting. Additionally, tutors are trained to prioritize Higher Order Concerns (HOCs), which include elements such as thesis, organization, and development; Later/Lower Order Concerns (LOCs) are elements such as grammar and sentence structure. The commenting format used by tutors is based primarily on the structure described in Monroe’s 1998 article. The sections of each eTutoring session include a front note, side comments, and an end note (Figures 1 and 2).

In the front note, a tutor introduces themself to the writer, thanks them for sending in their paper, and acknowledges the concerns the writer identified on the online appointment form. In the side comments throughout the text, the tutor leaves suggestions, questions, and other comments intended to prompt the writer to revise the paper; these side comments were the target of our research. In the end note, the tutor summarizes their comments and potentially makes new ones; here the tutor also thanks the writer again for using the Writing Center and encourages them to schedule another session. The tutor can additionally use Track Changes, a function in Microsoft Word under the Review tab, to make small, in-text changes that the writer can either accept or reject.

Once the tutor has finished responding, the documents are saved on the Writing Center’s shared drive and attached to the eTutoring session on WCONLINE. Therefore, a record of first drafts, tutor comments, and second drafts becomes available if a student resubmits the same paper for another eTutoring session. Our complete sample set consisted of 46 student papers that fit those criteria.²

Methods

To analyze the tutors’ side comments from our sample set, we used Faigley and Witte’s taxonomy of revision changes (Figure 3). The taxonomy was created to provide distinct categories for analyzing revisions made by writers with different levels of expertise. Because the taxonomy was not designed for writing center work, it was originally applied solely to writer moves. We chose this taxonomy over others created specifically for writing centers because it allowed us to assess the revision a tutor’s comment requested and directly compare that to the student’s corresponding revision, if any, in their next draft. Faigley and Witte’s taxonomy was designed, in their own words, to be a “system for analyzing the effects of revision changes on meaning” (401). This taxonomy helped us trace the conversational momentum and impulse to revise present—perhaps unexpectedly—in asynchronous eTutoring sessions, rather than focusing solely on tutor moves. The distinctions between the types of revisions also helped us consider the knowledge and skills required to make each one, which may have implications for the student’s decision to revise and the quality of revision they ultimately make.

According to this taxonomy, there are two main types of revision changes: surface and text-base. As the name implies, surface changes do not alter the meaning of a sentence or text. There are two subcategories under surface changes: formal and meaning-preserving. Formal changes include spelling, tense, and format revisions. As in the example in Table 1, a formal comment might suggest that the student switch their punctuation marks in order to be grammatically accurate, as “punctuation” is listed under formal changes. The other subcategory, meaning-preserving, is defined as a change that does not alter the meaning of a sentence. A meaning-preserving comment might suggest that a student vary their sentence structure (Table 1). As this alteration would only modify the syntax, the meaning of the sentence would remain the same. 

On the other side of the taxonomy are the text-base changes, revisions that do alter the meaning of a sentence, paragraph, or paper. Under text-base changes are two subcategories: microstructure and macrostructure. Microstructure changes alter the meaning of a sentence or short passage but do not change the summary of the entire text. A tutor’s microstructure comment might recommend that a student rework a passage by drawing on a different selection from a cited article. In the example shown in Table 1, the tutor asks the student to change the details supporting one of their points, causing a small modification in the meaning of the paragraph but not altering the meaning of the paper as a whole. Macrostructure changes, on the other hand, do alter the meaning of the entire text. A macrostructure comment might advise a student to alter their thesis, as changing the thesis would affect a written summary of the paper (Table 1).³

Once we selected the taxonomy, we spent a few weeks norming comments to ensure accuracy and consistency within our group. Each of us worked on the same paper individually, attempting to place each tutor comment into a category on the taxonomy. Once we completed this step, we met to discuss why we had placed the comments into certain categories. This process helped us understand the taxonomy itself and our usage of it.

As our research progressed, we found that several of the tutor comments did not fit into the four categories of Faigley and Witte’s taxonomy, so we decided to add two categories of our own (Figure 4). We added praise, expressions of approval or encouragement, which are straightforward and relatively easy to identify. Predominantly, praise comments consist of a tutor expressing their approval of an idea the writer mentioned (Table 1). We also added sayback, a term coined by Peter Elbow and Pat Belanoff in Sharing and Responding. They describe the writer’s perspective on a sayback comment as follows: “Please say back to me in your own words what you hear me getting at in my piece, but say it in a somewhat questioning or tentative way—as an invitation for me to reply with my own restatement of what you’ve said” (Elbow and Belanoff 8). Thus, the tutor’s summary of what they believe the writer is trying to communicate is an open invitation—not an outright request—to revise without a specific desired result. What is important to note about these two additional categories is that, unlike the four main categories from Faigley and Witte, neither praise nor sayback comments explicitly request a change from the writer. 

Often the side comments corresponded to one category. However, a side comment could encompass more than one of these categories; if that was the case, we split the side comment into parts A, B, and so on. For example, if a comment led with praise and then asked for a formal change, we split it into two parts: one for praise and one for a formal change. This was often the case when praise was used to offset a request for revision. We decided to also categorize comments in the front notes and end notes if they included a suggestion that was not mentioned in any of the side comments, but this did not occur very often. Overall, we looked at 736 side comments, 2 comments from front notes, and 31 comments from end notes; with split comments counted as separate parts, this yielded a total of 844 analyzed comments.
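
To make this working taxonomy concrete, the sketch below models it as a small data structure in Python. This is an illustrative reconstruction rather than software we used in the study; all of the names (CommentType, CommentPart, REQUESTS_CHANGE) are hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class CommentType(Enum):
        # Faigley and Witte's four categories, each of which requests revision
        FORMAL = "formal"                          # surface: spelling, tense, punctuation, format
        MEANING_PRESERVING = "meaning-preserving"  # surface: rewording that keeps meaning intact
        MICROSTRUCTURE = "microstructure"          # text-base: alters the meaning of a sentence or passage
        MACROSTRUCTURE = "macrostructure"          # text-base: alters the meaning of the whole text
        # Our two additions, which do not explicitly request revision
        PRAISE = "praise"
        SAYBACK = "sayback"

    # The four categories that explicitly ask the writer to change something
    REQUESTS_CHANGE = {
        CommentType.FORMAL,
        CommentType.MEANING_PRESERVING,
        CommentType.MICROSTRUCTURE,
        CommentType.MACROSTRUCTURE,
    }

    @dataclass
    class CommentPart:
        """One categorized unit; a single side comment may split into parts A, B, and so on."""
        comment_id: str        # e.g., a hypothetical label such as "Z2-14A"
        category: CommentType
        text: str

Under this scheme, a side comment that leads with praise and then requests a formal change becomes two CommentPart records, one tagged PRAISE and one tagged FORMAL.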

After categorizing the tutor comments, we looked to the writers’ revisions, if any. Because asynchronous sessions do not allow students to give verbal feedback, the evidence of their response to a tutor’s comment lies in the changes they make to the text. We categorized writers’ revisions in response to tutor comments using the same taxonomy as the comments themselves. Thus, we could directly analyze any interaction between the comment and its corresponding revision.

Since we wanted to gauge not only the quantity but also the quality of those revisions, we turned to Stay’s 1983 article “When Re-Writing Succeeds: An Analysis of Student Revisions.” Stay used Faigley and Witte’s taxonomy as we did. To identify the quality of revisions, he created a Taxonomy of Quality Changes, using a plus sign to indicate that the revision was an improvement, a minus sign to indicate a regression, and a zero to indicate no significant change in quality. We drew from this taxonomy, categorizing each revision the writer made as higher than, lower than, or equal to the revision the tutor recommended or implied in their comment.
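
Stay’s three-way judgment maps naturally onto a signed scale. Continuing the hypothetical Python sketch above:

    from enum import IntEnum

    class Quality(IntEnum):
        """Stay's Taxonomy of Quality Changes as we adapted it: the student's
        revision relative to the revision the tutor recommended or implied."""
        LOWER = -1   # Stay's minus sign: a regression in quality
        EQUAL = 0    # Stay's zero: no significant change in quality
        HIGHER = 1   # Stay's plus sign: an improvement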

We then drew from Williams’s 2004 study, “Tutoring and Revision: Second Language Writers in the Writing Center.” As she discovered by having teachers grade pre- and post-session drafts of student papers, changes in a draft do not always lead to an overall improvement in the paper. We implemented a similar process, comparing the student drafts before and after revision to determine whether the writer’s changes improved the paper’s content, coherence, or flow. This was a yes or no binary based on our judgments as tutors.

Ultimately, our combination of these factors allowed us to categorize the changes we saw between drafts, and our research questions developed accordingly (a schematic sketch of how we combined these factors follows the list):

  1. Which comments elicited change?

  2. How did those changes compare to the tutor’s initial request?

  3. Did the changes improve the paper?

  4. Determined as a combination of those three factors, were the comments effective?
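
Schematically, the fourth question reduces to a conjunction of the first three. The sketch below, which builds on the hypothetical structures above, records the outcome of each comment and tests our definition of effectiveness:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Outcome:
        revised: bool                      # question 1: did the comment elicit change?
        quality: Optional[Quality] = None  # question 2: revision vs. request (None if no revision)
        improved: bool = False             # question 3: did the change improve the paper?

    def is_effective(comment: CommentPart, outcome: Outcome) -> bool:
        """Question 4: a comment is effective if it elicited a change, the change
        met or exceeded the request, and the change improved the paper."""
        if comment.category not in REQUESTS_CHANGE:
            raise ValueError("effectiveness is undefined for praise and sayback comments")
        return (outcome.revised
                and outcome.quality is not None
                and outcome.quality >= Quality.EQUAL
                and outcome.improved)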

Data

The drafts we studied ranged in length from one page to over 20 pages and varied in content from concert reviews for an introductory music class to a senior thesis on religion. Because there was significant variation in the length and types of the drafts we studied, the total number of tutor comments on each individual draft varied greatly as well, ranging from 4 to 52. The median number of comments per draft was 18, and the mean was 18.8 comments. 

We then broke the comments down into groups according to the six categories in our working taxonomy to determine which types of comments the tutors made most frequently (Figure 5). The largest proportion of comments—approximately 34.2%—requested microstructure changes, with the runner-up being formal changes at approximately 20.6% of the tutor comments. The two categories we added to Faigley and Witte’s taxonomy, sayback and praise, added up to about 20% of the comments, cementing our belief in their importance and relevance to our work. 

Of the tutor comments, 53.3% directly elicited revisions—which may seem low, but keep in mind that only four out of the six categories of comments explicitly requested changes. Calculated without the two additional categories, sayback and praise, 67.1% of tutor comments that requested change elicited it. Using Stay’s Taxonomy of Quality Changes, we compared the change requested (either explicitly or implicitly) by a tutor’s comment to the student’s corresponding revision on the second draft; we discovered that a vast majority of those revisions—78.4% of them, to be exact—were equal in quality to the change the tutor requested (Figure 6). In terms of students’ changes improving the draft, we found that if a revision was made, 91% of the time it improved the paper.
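
These headline percentages are simple proportions over the categorized records. Continuing the hypothetical sketch, the two response rates might be computed as follows:

    def response_rates(records: list[tuple[CommentPart, Outcome]]) -> dict[str, float]:
        """Proportion of comments that elicited revision: over all comments, and
        restricted to the four categories that explicitly request change."""
        revised_all = sum(1 for _, outcome in records if outcome.revised)
        requesting = [(c, o) for c, o in records if c.category in REQUESTS_CHANGE]
        revised_requesting = sum(1 for _, o in requesting if o.revised)
        return {
            "all_comments": revised_all / len(records),                      # reported: 53.3%
            "change_requesting_only": revised_requesting / len(requesting),  # reported: 67.1%
        }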

To move toward our goal of discovering what makes a good tutor comment, we classified a comment as “effective” when it did all of the following: elicited a change, led to a revision equal to or higher than its request, and resulted in an improvement in the paper. The following comment example demonstrates this classification system in practice. In this example, the tutor addressed the following transition sentence between paragraphs in a student writer’s literary analysis: “In the relationship between Violet and Titus, Violet is able to not only understand emotions, but she is also able to describe them, while Titus is not sure of what his feelings are and how to express them” (student Z2, draft 3). The tutor commented, “For a transition here, consider bringing up the clash between the relationship. There is a clear difference between them and between their relationship with their fathers,” explaining how this transition sentence could be an opportunity to identify how the characters’ upbringings factor into their differing emotional intelligences. The tutor also used Track Changes to suggest that the student replace “and” at the end of the sentence with “nor.” The student revised the sentence as follows: “The different upbringings that Violet and Titus have reflect the hardships they face in communication and understanding each other through out [sic] their relationship.” In this revision, the student not only incorporates the tutor’s suggestion about upbringings, but also expands the thought to include the repercussions of the characters’ varying communication skills—a change we categorized as higher than the one requested by the tutor’s comment. The student’s revision clarifies and expands their transition sentence, giving the reader important information not previously considered; thus, we also determined that it improved the paper. In this case, the tutor’s comment elicited a change higher than its suggested outcome and an improvement in the paper, fulfilling all our requirements for effectiveness.

We were curious to calculate an overall percentage of how many tutor comments fit our definition of effectiveness. To create the most accurate calculations, we only analyzed an effectiveness rate for the four comment types from Faigley and Witte’s taxonomy—formal, meaning-preserving, microstructure, and macrostructure—because they are the four comment types that directly request revisions. Thus, this calculation does not include sayback or praise comments. Based on these criteria, our calculations show that 55.39% of comments that directly asked for change were effective.

Effectiveness by Comment Type

To delve deeper into our data, we analyzed the results elicited by each category of comments to find out which type is most effective. We calculated the percentage of comments that fit our definition of effectiveness for each of the four comment types that requested revision. The rate of effectiveness was highest for formal comments and lowest for macrostructure comments (Figure 7a). Interestingly, those effectiveness rates do not align very closely with how often each type of comment is made (Figure 7b); this discrepancy led us to consider the implications of effectiveness and frequency of comments from both the students’ and tutors’ perspectives. For example, microstructure comments are the most commonly made, but they are only the third most effective comment type. It seems counterintuitive that the comments appearing most frequently are not eliciting change most frequently, but, as we discuss below, the effectiveness rates make sense once the difficulty of each requested revision is considered.

When we placed the response rates and effectiveness percentages for each comment category side by side, we observed a correlation between the two (Figure 8). It appears that, if a writer chooses to revise based on a comment, they are also likely to follow through by meeting or surpassing the desired level of revision and improving their paper. So, the most important step in achieving an effective comment—one which elicits a change, leads to a revision equal to or higher than its request, and results in an improvement in the paper—appears to be getting the student to engage with the tutor’s comment. Since 91% of student revisions resulted in paper improvement, once they engage, the revision will most likely be effective overall.

Though that pattern is consistent for all four types of tutor comments requesting change, the magnitude of the trend is not. By comparing the percentage of comments that elicited revision to the percentage we categorized as effective within each of those categories, we noticed that the gap between those two percentages increases with the “difficulty” of the revision. For example, formal comments elicited revision 78.3% of the time and effective revision 72.8% of the time—a difference of only about five percentage points—while macrostructure comments had a gap of over 20 percentage points between revision (59% of macrostructure comments) and effectiveness (38% of macrostructure comments). Thus, the student’s revision is less likely to be effective as the requested revision becomes more involved.
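
The widening gap can be read directly off the reported rates; the rates for meaning-preserving and microstructure comments appear in Figure 8. A minimal illustration using the figures cited above:

    # Revision and effectiveness rates as reported in this section.
    reported_rates = {
        "formal":         {"revision": 0.783, "effective": 0.728},
        "macrostructure": {"revision": 0.59,  "effective": 0.38},
    }

    for category, rates in reported_rates.items():
        gap = rates["revision"] - rates["effective"]
        print(f"{category}: {gap:.1%} gap between revision and effectiveness")
    # formal: 5.5% gap; macrostructure: 21.0% gap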

The exception is sayback, a category of tutor comment that does not request change and therefore is not compatible with our definition of effectiveness; we could only determine a percentage of comments that elicited revision. There were no instances of praise comments requesting or eliciting change, so they are not included in this analysis.

Discussion

With this data, we are able to answer two of our research questions: which comments do tutors make, and which do students respond to? The high percentage of microstructure comments shows that tutors commonly recommend that students make substantive, meaning-based changes to their drafts. Students do not respond to those comments most frequently, however; instead, they most often take up formal comments, making changes in spelling, punctuation, and usage. Yet student revisions, when they occur, are equal to or higher than the tutor’s request 84.6% of the time and improve the draft as a whole 91% of the time. Based on this data, student revision in response to tutor commentary is typically of a high quality. As we explore possible factors influencing the comment and revision patterns demonstrated in our data, we return to these statistics as evidence of student thought and effort.

Sayback Comments

The category of sayback comments is not part of Faigley and Witte’s taxonomy; we added it to our working taxonomy as we noticed tutors using these reader-response comments during eTutoring sessions. They have the lowest response rates of any comment type, but the fact that they elicited revision at all merits further investigation. Because they are reader-response statements, they do not request change, so we did not expect them to prompt any. However, our data analysis shows that 34.9% of sayback comments elicited revisions in the papers. And, a few of those revisions were some of the largest in our data set, like several paragraphs added to a draft by Student O. 

In one case, a sentence written by Student R originally read: “By providing this quote, West shows that Hamilton, as well as the other founding fathers believed that every gender, race, and social class have the same liberties.” The tutor made the following sayback comment: “It isn’t limited to these distinctions either. Hamilton is explaining that it is the entirety of the human race that has this equality. Good.” Because this comment was delivered in an asynchronous session, its tone is open to many possible interpretations. Based on Elbow and Belanoff’s definition, we interpreted this comment as sayback, for it is an invitation to the writer to revise but not a direct request. We divided this comment into two parts in our analysis, classifying the first two sentences as sayback and the last as praise. Though the tutor’s comment did not ask for revision and actually praises the student’s work as it was initially submitted, the student revised in response to the comment. The corresponding sentence in the resubmitted draft read: “By providing this quote, West reveals that Hamilton, as well as the other Founding Fathers, believed that the entire human race, including every gender, race, and social class, have [sic] the same liberties.” Examples such as this one helped us develop our theory that sayback comments help writers understand how their ideas come across to readers, often prompting them to state outright themes that had only been implied.

As stated earlier, we could not calculate a rate of effectiveness for sayback comments because our definition of “effective” depends on the comment requesting a change, but these comments and the results we saw from them still support a theme that spans much of writing center literature: the idea of responding as a reader. This idea has been explored more with regard to face-to-face sessions, in the context of cognitive scaffolding, “where tutors give students opportunities to figure out what to do on their own” by enabling their skill development, and motivational scaffolding, which “influences students’ effort, persistence [...], and their active participation and engagement” (Mackiewicz and Thompson 56, 63). Our work examines whether the principles of cognitive scaffolding apply to asynchronous eTutoring sessions as well. We assume that writers see sayback comments as evidence of the tutor’s interest in their paper and as an opportunity to make sure their point is being properly conveyed or to extend their thought and analysis based on a tutor’s prompting questions. And, because sayback comments do not directly ask for change, any revisions made as a result are presumably evidence of student thought and effort. The student’s deliberate choice to make changes when none were requested seems to demonstrate investment in the process of writing and revising. Since we do not have access to a student’s thoughts during the revision process, this is the only category of comment in which we can argue that the student is not automatically copying the tutor’s ideas and robotically making the requested changes simply to improve the product.

Student and Tutor Communication

There are a few possibilities as to why the gap between the percentage of comments eliciting revision and the percentage of effective comments widens as revision “difficulty” increases. One possibility is the amount of text addressed by a tutor comment. If a comment focuses on a larger amount of text, as many macrostructure revisions do, the writer may feel overwhelmed or frustrated by the amount that they are being asked to revise. Or perhaps the gap can be attributed to the abstractness of a requested revision. Dealing with concepts such as the thesis statement (which would likely constitute a macrostructure revision) may be more difficult for a writer than making a straightforward punctuation change. More generally, a macrostructure revision simply requires more work than a formal revision. If a writer perceives that more work is necessary in order to make the requested revision, they may be less likely to make it—and follow through with it—for a variety of reasons.

This gap between tutor comments and student revisions may be linked with the concept of HOCs and LOCs. HOCs would likely necessitate macrostructure revisions, while LOCs would likely necessitate microstructure, meaning-preserving, or formal revisions. These three categories—formal, meaning-preserving, and microstructure—had higher effectiveness rates than macrostructure comments, and the gap between revision and effectiveness was smaller as well. Based on this information, it is possible that writers focus more on LOCs while tutors emphasize HOCs. Because of the nature of asynchronous eTutoring, the tutor and writer cannot have a conversation setting the agenda for the upcoming session, which perhaps explains these differences in focus.

This deficit could also indicate a lack of knowledge or training on how to make these more difficult organization- and content-based revisions. In that case, student writers may not be reaching the goals tutors envision because they do not have the tools to meet those goals. This could also imply a lack of scaffolding from tutors to help students make those revisions. In asynchronous eTutoring sessions, the tutor has little, if any, understanding of the writer’s abilities, so perhaps they are more likely to assume that the writer will know how to make the requested revision instead of explaining it. During synchronous sessions, tutors constantly adjust their instructive approaches in response to the student’s interaction with their feedback. For example, if it becomes apparent to a tutor that a student is struggling to engage with a revision suggestion, the tutor may begin using more direct instruction. Asynchronous sessions do not allow tutors to make such adjustments because they are unable to see how the students are responding to their comments.

Our goal at the beginning of this study was to eventually improve our writing center’s practice by encouraging tutors to make more of the most effective comment type, but given this data, we now wonder if that is the best option. Though formal comments request the easiest revisions and are also the most effective, they are not the only type of comment important for improving student papers. Perhaps, instead of changing the types of comments tutors make, we can encourage them to increase the effectiveness of all their comments by including more instruction and scaffolding so that students will be better equipped to make the changes tutors request of them.

Non-Tutor Factors

In addition to elements involving tutors and students, lack of revision could also be attributed to a number of unrelated concerns, meaning that a “low” level of effectiveness does not necessarily reflect poorly on the tutors. Confusion over what a comment is asking for, disagreement with the tutor’s request, and lack of knowledge about how to make a certain revision could all contribute to changes not being made. In her study, Bowden found that confusion was one of the largest factors contributing to writers’ lack of revision. In addition, sometimes writers may not have the time or desire to make the suggested revision. For example, we discovered a trend in which writers would simply delete sentences that the tutor had commented on instead of revising them in a productive way. While these were revisions, they often resulted in the removal of important information from the paper, so we categorized them as “lower” in Stay’s taxonomy, making the comments ineffective. However, that these revisions do not align with the tutors’ suggested changes does not negate their possible contribution to the paper.

Confusion may not pertain solely to what a tutor’s comment is asking for. Email sessions also depend on technology; in our writing center, we use Microsoft Word. Though Word is not particularly confusing, certain features can be. Turning on Track Changes can be mildly difficult, and writers must have “All Markup” selected within Track Changes in order to see comments on the paper. If students do not have this setting enabled, they may be unable to see the tutor’s comments—this is what we believe happened with Student A’s drafts. Student A made no revisions between the first and second drafts and did not accept the Track Changes, presumably because they could not see the tutor’s comments. We do not believe this is a frequently occurring issue, but it is still one worth addressing for writing centers that offer eTutoring. One potential solution would be to create an instructional video for students that demonstrates how to use Word.

Presumptions

Examining the assumptions, abilities, and technological knowledge tutors and students bring to sessions led us to consider the presumptions we brought to this research project regarding effective tutoring practices. In writing centers, we strive to help writers and want to provide them with the best services possible. Because tutors are trained, they and the writers implicitly agree that tutors have a more in-depth knowledge of writing and that their input will bring improvement; otherwise, why would writers use the writing center? Tutors’ in-depth knowledge, though, should not translate into the perception that they absolutely, always know best. Throughout our research, however, we worked under the assumption that tutors’ knowledge and expertise produce comments that are going to help, so writers should make the revisions tutors suggest. But these ideas are just as we said: assumptions. When we were categorizing comments, we did find a few where we felt the tutor misunderstood the writer’s intention. Therefore, we understand that tutors are not infallible. The work belongs to the writers, and it is ultimately up to them to decide what the best version of it is. Still, under our definition of effectiveness, we had to work under the previously mentioned assumption that it is beneficial for the writer to make the suggested revisions.

The ideal percentage of effective tutor comments was one of our presumptions. As previously stated, 55.39% of the tutor comments that requested change were effective. We recall feeling disheartened when we first calculated that figure—as tutors in training at the time, we held high hopes and expectations for our work, and it was difficult to be faced with empirical evidence that only about half of the eTutoring comments would ultimately fit our definition of effectiveness. However, the fact that students use more than half of tutors’ comments to create revisions that improve their papers is still quite impressive. Though paper improvement is certainly not the only goal of writing centers, it is a helpful way to measure their success through the effectiveness of tutor comments. Another positive aspect of this data is that it demonstrates that writers are thinking through the changes they want to make to their drafts. They are not simply making all the revisions requested by tutors, but rather using their agency and determining which revisions will help create a better version of their work.

Limitations and Further Research

As with any research, there were limitations to our study. Our study took place at a small liberal arts institution in the Midwest, so its student population may differ from those of other colleges or universities. This factor also affects the population of tutors whose comments were analyzed. Beyond the fact that tutors’ individual identities cannot be replicated, the perspectives and ideologies tutors gained from our semester-long tutor education course may have led them to develop commenting styles or types different from those found at other institutions. We took the tutor education course at the same time that we began categorizing comments and completed our categorizations during our first three months as tutors. As a result, our perspectives on tutors’ comments and our categorizations may have developed throughout the process, reflecting our growth and experience.

The student tutors and writers were also affected by a number of unquantifiable factors, such as their experience with the type of paper at hand, their relationship with the professor who assigned the paper, the time remaining until the paper was due, and their motivation to revise. Finally, our study was limited by a small sample size. Because our sample included 46 sets of student drafts, we could analyze the patterns we saw but cannot guarantee generalizability. We encourage other writing centers to replicate this study to create a larger data pool, which would allow more accurate inferences about tutor efficacy in writing centers as a whole.

Another limitation was being able to examine only part of the picture. First, we did not know what motivated any of the writers in our study to use our writing center, and different motivations may play a role in writers’ revisions. Also, while we were able to see tutors’ comments and writers’ revisions in response, we had no way of knowing what either party was thinking beyond the comments and revisions they made. We have no explanation for why writers chose to revise or not revise, why they chose to revise the way they did, or what their thoughts were about tutors’ comments. These gaps regarding thought processes leave much room for speculation, but it would be helpful to actually know writers’ reasoning. Focus groups, interviews, and surveys such as those used in Bowden’s study, along with conversations with both writers and tutors to gauge the success of sessions, are all avenues for future research that would allow us to better understand asynchronous eTutoring and the revision process.

Additionally, sayback comments are asynchronous tutoring’s closest counterpart to the non-directive, conversational tutoring style commonly used in synchronous sessions. Since, as Coogan found, back-and-forth conversation cannot occur within asynchronous email tutoring, provoking student thought along the same lines may be the next best strategy. Therefore, our analysis of sayback comments introduces some new questions: If these comments are our strongest evidence of student thought, should we be encouraging tutors to use them more often? And, if we increase emphasis on the comments that are the closest corollary to face-to-face discussion, what does that imply about the differences between synchronous and asynchronous sessions and the validity of each?

To expand on this idea of synchronous versus asynchronous tutoring, it is important to recognize that there is debate in the writing center field about this matter, as emphasized by James Inman and Donna Sewell, Hewett and Christa Ehmann, and Hewett. Some believe that asynchronous tutoring should be a sort of mirror: tutors should strive to replicate synchronous tutoring and use the same practices. Others believe asynchronous tutoring should be its own genre entirely, with tutoring methods specific to asynchronous sessions. While views have changed in recent years, some writing centers have resisted doing any online tutoring at all, preferring to offer only face-to-face synchronous tutoring. However, as a result of the COVID-19 pandemic, the shift to online sessions became a necessity for some centers that had never offered them before, suggesting that this research and other similar studies are perhaps more relevant now than ever before.

Conclusion

Through our study of asynchronous eTutoring comments, we observed the patterns of our tutors’ current commenting practice as well as possible explanations for student writers’ revisions in response to that feedback. Based on our data, tutors are most likely to recommend sentence-level, meaning-altering (microstructure) changes. Students, on the other hand, most commonly make effective revisions—similar to the tutor’s request and improving their paper—in response to comments asking for technical, surface-level changes that do not alter the meaning of the paper (formal). We speculated that students may respond to these comments more often because they involve little time or effort and are likely to have a correct response. Because student revisions become less effective as the perceived difficulty of the requested change increases, lack of prior knowledge or tutor scaffolding about how to make certain revisions may play a role as well. However, some students also exceeded our expectations, making revisions in response to reader-response, or sayback, comments that did not request a specific change at all, creating a “conversational” atmosphere despite the asynchronicity of the sessions we examined.

This unexpected effect of sayback comments led us to examine the ways asynchronous eTutoring sessions appear to adopt aspects of synchronous sessions. The comfortable, conversational style of sayback comments led us to debate whether writing centers should restructure their online sessions to emphasize these resemblances to in-person sessions, or whether the distinction between the two should be maintained. Initially, our results led us to value the separation between asynchronous and synchronous sessions, as eTutoring sessions present unique opportunities: interaction free of location constraints, the application of technology, and convenience. With eTutoring, the writer also has the advantage of a written record to refer back to. However, as indicated in our limitations section, asynchronous sessions are not flawless. The loss of social interaction, confusion over intentions, and technology failures are all aspects of asynchronous eTutoring sessions that are difficult to remedy.

A next step in the research process might be examining the possibilities and disadvantages of online synchronous sessions. Prompted by the transition to remote instruction due to the COVID-19 outbreak, our writing center recently added synchronous online sessions to our WCONLINE calendar. Since the introduction of this session type, students have scheduled more eTutoring sessions than synchronous online sessions. However, it is quite possible that this contrast in student preference is influenced by the newness of the option, and our writing center staff remains interested in the long-term effects of these sessions. While our research did not examine online synchronous sessions, further research could look at the ways tutors and writers can have effective conversations in that format.

In writing center practice, tutors aspire to provide writers with the tools to improve not only a single paper but many papers in the future. Ideally, when a writer addresses a tutor comment, they are able to understand the purpose of that revision in that situation. While this engagement is significant, and might even deem the tutor’s comment effective, the greater success of a comment would be if the writer were able to independently identify a similar call for revision in a future paper. This independent identification would suggest that a tutor’s comment led to an improvement in the student’s writing process. Similarly, it may be helpful to extend this aspiration for holistic improvement to tutors as well. Further research might examine whether tutors can allow a successful tutoring moment to foster improvement in their own tutoring process. While this was not the focus of our research, our research design offers one avenue for such an exploration.

In order to examine the effectiveness of tutor comments, we used an electronic cache of re-submitted papers. If tutors were given the opportunity to work habitually with a student over a period of time, or with multiple drafts of a paper from the same student, they would be granted access, as we were, to the results of their sessions. Thus, it may be possible to examine whether tutors adjust their tutoring styles in asynchronous sessions in response to a student’s engagement with their comments. Our data set could allow for some of this investigation, as a few of the papers were resubmitted several times. It would be interesting to see, for example, whether a student’s tendency to address few macrostructure comments prompted a tutor to make fewer macrostructure comments and more of another type—or whether the tutor continued to make macrostructure comments but included more explanation or options for revision. Such adaptation would indicate that the tutor is adjusting their tutoring style to better suit the student’s needs. Further studies could first analyze the extent and ways that tutors alter their instructive approaches while working repeatedly with a student, and then examine how these tutors apply their revised tutoring models in future sessions with different students.

Ultimately, by exploring a stored electronic cache of re-submitted papers, we were able to examine the practice and effectiveness of eTutoring commentary. This examination provided insight into both the types of comments tutors in our center made and the reciprocal student revision rates. As our research focused on the classification of comments and the resulting revisions, further studies might analyze student satisfaction with tutor comments with the aim of raising awareness of tutor efficacy. Comparing the calculated effectiveness percentage to a calculated satisfaction rate could reveal how students feel about the types and quality of the comments made during eTutoring sessions. As the growth of writers remains the mission of writing center practice, finding ways to maximize both the effectiveness of eTutoring comments and the satisfaction of students can help our field respond to its current and future needs.

Notes

  1. Our university’s writing center employs an average of 25 undergraduate tutors, called “advisors,” who must complete a semester-long training course in writing center theory and practice before they begin working.

  2. Because our research involved papers written by past and current students, we gained IRB approval (number 025-201819) by having our mentor replace student names with letters of the alphabet to ensure confidentiality. As a few writers submitted drafts for more than one set of papers, numbers were given for additional sets from the same student.

  3. These four categories—formal, meaning-preserving, microstructure, and macrostructure—are umbrella categories that are further broken down into different types. For this research project, we elected to examine only the four main categories for simplicity and to get a broader look at our data. Further research could analyze the subcategories to glean any possible patterns within each of the four larger types.

Works Cited

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” The Journal of Writing Assessment, vol. 11, no. 1, 2018, http://journalofwritingassessment.org/article.php?article=121. Accessed 8 July 2020.

Brooks, Jeff. “Minimalist Tutoring: Making the Student Do All the Work.” The Writing Lab Newsletter, vol. 15, no. 6, 1991, pp. 1-4.

Coogan, David. “Email ‘Tutoring’ as Collaborative Writing.” Wiring the Writing Center, edited by Eric Hobson, Utah State University Press, 1998, pp. 25-43.

Elbow, Peter, and Pat Belanoff. Sharing and Responding. McGraw-Hill, 1989.

Faigley, Lester, and Stephen Witte. “Analyzing Revision.” College Composition and Communication, vol. 32, no. 4, 1981, pp. 400-414.

Hewett, Beth. The Online Writing Center: A Guide for Teachers and Tutors. Bedford/St. Martin’s, 2010.

Hewett, Beth, and Christa Ehmann. Preparing Educators for Online Writing Instruction: Principles and Processes. National Council of Teachers of English, 2004.

Inman, James, and Donna Sewell, editors. Taking Flight with OWLs: Examining Electronic Writing Center Work. Routledge, 2000.

Johnson, JoAnn B. “Reevaluation of the Question as a Teaching Tool.” Dynamics of the Writing Conference: Social and Cognitive Interaction, edited by Thomas Flynn and Mary King, National Council of Teachers of English, 1993, pp. 34-40.

Mackiewicz, Jo, and Isabelle Thompson. “Instruction, Cognitive Scaffolding, and Motivational Scaffolding in Writing Center Tutoring.” Composition Studies, vol. 42, no. 1, 2014, pp. 54-78.

Maclellan, Effie. “Academic Achievement: The Role of Praise in Motivating Students.” Active Learning in Higher Education, vol. 6, no. 3, 2005, pp. 194-206.

Mattison, Michael, and Andrea Ascuena. “(Re)Wiring Ourselves: The Electrical and Pedagogical Evolution of a Writing Center.” Computers and Composition, 2006.

Monroe, Barbara. “The Look and Feel of the OWL Conference.” Wiring the Writing Center, edited by Eric Hobson, Utah State University Press, 1998, pp. 3-24.

Prior, Paul. “Tracing Authoritative and Internally Persuasive Discourses: A Case Study of Response, Revision, and Disciplinary Enculturation.” Research in the Teaching of English, vol. 29, no. 3, 1995, pp. 288-325.

Shamoon, Linda K., and Deborah H. Burns. “A Critique of Pure Tutoring.” The Writing Center Journal, vol. 15, no. 2, 1995, pp. 134-151.

Stay, Byron. “When Re-Writing Succeeds: An Analysis of Student Revisions.” The Writing Center Journal, vol. 4, no. 1, 1983, pp. 15-28.

Williams, Jessica. “Tutoring and Revision: Second Language Writers in the Writing Center.” Journal of Second Language Writing, vol. 13, no. 3, 2004, pp. 173-201.

Appendix

Figure 1

Example of eTutoring Session Format: Front Note, Side Comments, and Track Changes

Figure 2

Example of eTutoring Session Format: End Note

Figure 3

Faigley and Witte’s Taxonomy of Revision Changes

Table 1

Examples of Each Comment Type in the Revised Version of Faigley and Witte’s Taxonomy of Revision Changes

Figure 4

Working Taxonomy Used to Categorize Tutor Comments and Corresponding Student Revisions

Source: Lester Faigley and Stephen Witte, “Analyzing Revision,” College Composition and Communication, vol. 32, no. 4, 1981, p. 403.

Figure 5

Distribution of Tutor Comments by Type Based on Modified Working Taxonomy

Figure 6

Distribution of Student Revisions Compared to Tutor Comments Based on Stay’s Taxonomy of Quality Changes 

Table 2

Examples of Comments and Revisions Categorized According to Stay’s Taxonomy of Quality Changes

Figure 7a

Effectiveness by Comment Type

Figure 7b

Effective Comments Shown Proportionally Within Comment Frequency Distribution

Figure 8

Distribution of Tutor Comments that Elicited Revision and Comments that Were Effective