Praxis: A Writing Center Journal • Vol. 14, No. 2 (2017)
CINDERELLA’S SLIPPER: RESEARCH, QUASI RESEARCH, RAD RESEARCH, SMALL SCALE EVALUATIONS AND THE SEARCH FOR THE RIGHT FIT
Kathryn Raign
University of North Texas
kathryn.raign@unt.edu
Abstract
In this article, I provide an overview of what our field recognizes as the most useful taxonomies of research. Based on this overview, I argue, via specific examples of published research, that we sometimes conflate simplicity with the simplistic. I conclude by offering an example of a quasi-experiment based on data I have collected that investigates statistical correlations between five factors: a student's satisfaction with space, tutor's knowledge, tutor's ability to share knowledge, student's likeliness to return to the center, and student's likeliness to recommend the center. Center directors have the most control over internal factors: who is hired as a tutor, what criteria are used to hire, and how tutors are trained. However, my data shows that these internal factors have less influence on students' perceptions of the center than external factors such as space. Finally, if space is the most important factor in determining students' perceptions of the center, and center directors have little or no influence over that factor, are directors being unfairly evaluated when their administration looks at their ability to retain current center users and bring in more? I conclude by exhorting directors of other centers to share their own data, so that we can all learn from each other's experiences.
Let me begin with a confession. I became a member of the community of composition and writing center scholars in the ‘90s. I learned from one of our field’s most respected historical researchers, Win Horner. I engaged in both theoretical and narrative inquiry. I was comfortable subjecting my ideas to the dialectical analysis of my colleagues. I spoke deconstruction.
I would not have been comfortable crunching numbers, using Excel, or running statistical analyses and kappa tests. I would not have understood what you meant if you told me that N=213. Decades later, I still have a grudge against tables that contain more numbers than words, and I only learned to use Excel because it was a required aspect of managing my writing center's budget. I know (in theory) what a kappa test is but doubt I could run one. I am in awe of my colleagues who have highly developed skills in statistics, and I am frightened that I will be left behind if I can't learn to speak "data."
As the cry for "more research" continues to ring throughout the journals I know and love, I am shaken to my core because my field may not have a place for me much longer. What is a social constructionist to do? How do I transition from humanistic scholarship to empirical research? For me, the transition required that I take two steps. First, I needed to explore the full range of research methods being used in the field of writing center studies, and second, I needed to take what I learned from that exploration and make a leap of faith into empirical research, which I accomplished by conducting a quasi-experiment in which I used statistical analysis to look for correlations between five factors: a student's satisfaction with space, a tutor's knowledge, a tutor's ability to share knowledge, a student's likeliness to return to the center, and a student's likeliness to recommend the center. I discovered that strong correlations do exist between several of the factors studied, though, surprisingly, the strongest correlations exist between external factors—space and the likelihoods that a student will return to the center or recommend it to a friend.
Let me begin by doing what I’m comfortable doing: providing an historical overview of the research conundrum.
Rebecca Day Babcock and Terese Thonus describe the field of writing center study as a burgeoning one typified by scholarship that is “largely artistic or humanistic, rather than scientific, in a field where both perspectives can and must inform our practice” (3). “Both assessment and research should be based on empirical data, be they qualitative or quantitative, including narratives, numbers and anything noticed” (Babcock and Thonus 4). This empirical data can then be used to determine when we have “achieved success” in the arena of academic tutoring. However, as we all know, there is nothing “simple” about evaluating writing center success and merely answering the call to research will not solve my original problem—how do I transition from humanistic scholarship to empirical research (Babcock and Thonus 145)?
Sarah Liggett, Kerri Jordan, and Steve Price offer some illumination, providing a taxonomy of research methods that will “help readers understand the variety of methodological opportunities available to them” (55). However, while I find the availability of a methodological research toolbox comforting, it does neither me nor the many other researchers in my position of transition any good if we are unable to use the tools within it. So, in order to help myself and others understand how (and perhaps if) to use these tools to make the transition to empirical research, in this article, I
Provide a brief overview of what our field recognizes as the most useful taxonomies of research
Argue via specific examples of published research that we sometimes conflate simplicity—what Garr Reynolds describes as “an intelligent desire for clarity,” with the simplistic—“dumbed down to the point of being deceptive or misleading” (103)
Offer an example of a quasi-experiment based on data I have collected, with meta-discourse to explain the methodology I chose, why, and whether I made an effective choice.
While I undertake this journey for selfish reasons—I do not want to be left behind as my field continues to grow and transform—I hope that if you feel ill-equipped for the future, you will benefit as well.
Overview of Research Methodologies
Different scholars use different rationales to identify the various research methodologies currently being used, so I have synthesized the dominant arguments into the list below. I have taken my terminology of the different research methods from the work of the following scholars:
In 2005, Richard Haswell provided an in-depth definition and description of RAD (replicable, aggregable, data-supported) research. He then examined NCTE/CCCC's publication rate of RAD research over the preceding twenty years, found a severe decline, and questioned whether this trend "will lead to the eventual disappearance of college composition as a legitimate field of study" (Haswell 215, 218).
In 2011, Sarah Liggett, Kerri Jordan, and Steve Price offered what they called a “taxonomy of methodologies to understand how knowledge is—and can be—made in the complex context of writing centers;” they included practitioner inquiry, conceptual inquiry, and empirical inquiry (51, 68-73).
In 2012, Dana Driscoll and Sherry Wynn Perdue reviewed the publication rate of RAD research in The Writing Center Journal from 1980 to 2009. Of the 270 articles they coded, 16.5% met their criteria for RAD research.
In 2013, Ryan K. Boettger and Chris Lam looked at 137 articles published in the five primary technical communication journals over the preceding twenty years and determined that experiments made up only 6.7% of the total articles (286). They suggested two reasons for this scarcity of experimental research: a lack of training and a lack of departmental support stemming from a failure to understand the nature of experimental research (Boettger and Lam 288).
These authors have identified the following categories of research.
Experiments
Liggett, Jordan, and Price explain that researchers test hypotheses by controlling variables in a specific context and using statistical analyses to measure results (71). I offer Driscoll and Perdue’s “Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009” as an example of an experiment. It qualifies as an experiment for the following reasons:
Tests hypotheses. “How much research has been published in WCJ? How has this changed over time? How much research published in WCJ is RAD (replicable, aggregable, data-supported) research? How has this changed over time? How do WCJ articles score in individual areas of the RAD Research Rubric? What are the most common methods of inquiry, types of research, and number of participants for empirical research studies published in WCJ?” (Driscoll and Perdue 17-18).
Manipulates at least one independent variable within a group of randomly assigned subjects in a controlled environment. Their group included all WCJ articles written from 1980-2009. They believed that reading every article created a richer dataset. They did not include articles from any other sources, concluding that WCJ is representative of the writing center research field (Driscoll and Perdue 18). The group is controlled because it is limited to WCJ publications.
Next, they developed an "article category rubric," which they used to place each article in a type: "theoretical, practical, research, program description, historical, review, professional, reflection, position statement, and interview" (Driscoll and Perdue 19). Each reader independently read and coded each article and assigned it a type—this coding constitutes the manipulation of the independent variable. In this case, each type is a variable, so there are 10 variables. Of the 270 articles they read, they identified 91 as research articles.
They then used a seven-category rubric to determine which articles could be identified as RAD research. In each category, the highest point value was 3 and the lowest was 0, and only articles that earned at least 10 points were considered RAD research (Driscoll and Perdue 20).
Uses statistical analyses to measure results. They entered the data into spreadsheets and PASW 18 (statistical analysis software, better known as SPSS) for the analysis: "Descriptive statistics, including frequencies, means and medians were calculated" (Driscoll and Perdue 24).
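For readers who, like me, find the phrase "descriptive statistics" more intimidating than the work it names, here is a minimal sketch in Python of the kind of tallying Driscoll and Perdue describe; the article types and rubric scores below are invented for illustration and are not their data.

```python
# A hypothetical illustration of descriptive statistics (frequencies, means,
# medians) for coded articles. The article types and rubric scores are
# invented; only the coding scheme and the 10-point RAD threshold come from
# Driscoll and Perdue's description.
import pandas as pd

articles = pd.DataFrame({
    "type": ["research", "theoretical", "practical", "research",
             "historical", "review", "research", "reflection"],
    "rad_score": [12, 0, 3, 9, 1, 2, 15, 0],   # hypothetical rubric totals
})

print(articles["type"].value_counts())     # frequencies by article type
print(articles["rad_score"].mean())        # mean rubric score
print(articles["rad_score"].median())      # median rubric score
print((articles["rad_score"] >= 10).sum(), "articles meet the 10-point RAD threshold")
```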
In many cases, I think the complexity of some scholars' experiments serves to confuse rather than clarify the points they are trying to make—more on this later.
Quasi-Experiments
Quasi-experiments “consist of already established groups and occur in natural settings, such as a classroom or workplace;” “…Researchers must establish between-group equality” (Boettger and Lam 272-273). A writing center director could conduct a quasi-experiment using undergraduate and graduate tutors as established groups in a natural setting—the writing center where they work. Quasi-experiments allow for an assumption that every member of the group is equal. So, a researcher could survey students who worked with both undergraduate and graduate tutors in order to determine if and why undergraduate or graduate tutors are more highly evaluated by students.
Quasi-experiments, which use both quantitative and qualitative research methods (statistical analysis and surveys), provide scholars with a method they need to explore complex questions (e.g. Do graduate tutors get higher student satisfaction scores because they are better tutors, or because students perceive that they are better tutors since they are in graduate school?). While true experiments test "a hypothesis by manipulating at least one independent [variable] within a group of randomly assigned subjects in a controlled environment," quasi-experiments use already established groups that exist in an authentic setting, like a classroom or a workplace (Boettger and Lam 272). This makes the results more valid because what has been measured or observed is how participants would naturally react. It is the authenticity of collecting data in the natural setting of the writing center that makes quasi-experiments an effective choice for writing center research. Because quasi-experiments allow researchers to assume that groups are initially equal (all students who use the center comprise a group), they can apply their results to other groups in similar contexts. Therefore, if A was true of group B in your center, it is likely to be true of a comparable group C in my center. In other words, undergraduate tutors are a group, graduate tutors are another group, and all students who visit the center are a group. No other factors would be considered. This "broad stroke" approach allows us to draw conclusions at the level of our discipline rather than our individual centers.
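To make the comparison concrete, here is a minimal sketch in Python of the kind of between-group analysis just described; the satisfaction scores are invented, and the choice of an independent-samples t-test is my own illustrative assumption rather than a required feature of a quasi-experiment.

```python
# Hypothetical sketch: comparing two already-established groups (students
# tutored by undergraduates vs. by graduate students) on a satisfaction rating.
# The ratings are invented for demonstration only.
from scipy import stats

undergrad_group = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]   # hypothetical ratings
graduate_group  = [5, 4, 5, 5, 4, 5, 4, 4, 5, 5]

# An independent-samples t-test asks whether the difference between the two
# group means is larger than chance alone would explain.
t_stat, p_value = stats.ttest_ind(undergrad_group, graduate_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```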
Small Scale Evaluations
As Peter Carino and Doug Enders note, "A small-scale evaluation can be conducted via survey and arrayed in tabular form to present clear data" (84). James H. Bell describes the small-scale evaluation as "…a series of carefully limited evaluations which, pieced together after a few years, create a fairly comprehensive picture" (16). Because of writing centers' limited resources, small-scale evaluations focus on one primary factor at a time, which makes them manageable; data can be built as new issues are explored, resulting in a broad-scale understanding of a particular writing center over time. Finally, those of us conducting small-scale evaluations should publish our designs and results in a form conducive to replication by other scholars to allow for the creation of comparable data collections (Bell 17). Small-scale evaluations offer a solid starting point for scholars, such as myself, transitioning into data-driven research, which is why, as I will explain later, I chose to conduct a small-scale evaluation to collect data for my quasi-experiment.
RAD Experiments
The term Replicable, Aggregable, Data-supported (RAD) experiment serves as an umbrella term for each of the research methodologies we have discussed. Experiments, quasi-experiments, and small-scale evaluations can be considered RAD research if the study is carefully designed and can be reproduced in other writing centers with the likelihood of getting the same results, which makes this method extremely useful to our field.
RAD research is intended to help scholars begin conversations rather than end them. It is not intended to suggest "a positivistic epistemology whereby 'truth' is out there to be discovered" (Carino and Enders 95). It does attempt to support writing center lore by answering the questions, "How do we know this? Why does it work?" (Driscoll and Perdue 12). Carino and Enders' project, which will be discussed next, is an example of RAD research. Carino and Enders chose to conduct an experiment, a choice that raises the question, are we trying to force our research into molds that don't fit? Are we buckling under the weight of academic peer pressure? Are we forgetting that the selection of a research method "is just like any other rhetorical decision; it should fit the audience, purpose, and the project" (Driscoll and Perdue 13)?
We are all aware that our field is being scrutinized due to a lack of experimental research, and it is the pressure I have felt to give up ethnographic research in favor of experimental research that motivated this paper. Garr Reynolds acknowledges this pressure when he suggests professionals are so terrified of being described as “lightweight” that they adopt a “when in doubt add more” philosophy (103). Is this fear resulting in the writing of complex experiments for the sake of complexity, rather than for the sake of the methodology’s effectiveness?
Case in Point
Carino and Enders conduct what they term a correlative study of student satisfaction as it is affected by the number of times a student uses the writing center—what would be categorized as an experiment (see the discussion of experiments above) because it manipulates at least one variable within a group of randomly assigned subjects in a controlled environment. In this case, the number of student visits constituted the independent variable and the responses to survey questions constituted the dependent variable. Carino and Enders wanted to determine if they could support the hypothesis that the more a student uses the writing center, the more he or she likes it. They argue that they chose to conduct quantitative research because they felt taking their data to a statistician would result in a "more sophisticated reading of it" (86). However, throughout the study, both writers admit to a lack of comfort with what they considered the "positivist" nature of statistics, preferring a more postmodern approach that put them at odds with positivist terms such as "findings" and "conclusions," which they feel suggest a "truth" waiting to be discovered (Carino and Enders 86, 95, 96). First, they begin their study by announcing that "[they] do not believe numbers are necessarily a more reliable way to measure complex realities" (Carino and Enders 85). Second, recognizing that they had no background in statistics, they relied on others to crunch their numbers. Finally, they slip out from under their study altogether, and manage to find the answer they were apparently seeking all along:
Ultimately, we find ourselves answering our research question deconstructively, positing a ‘yes’ based on one way of reading the data and undoing it with a ‘no’ based on another, or placing the two answers side by side to say ‘yes’ and ‘no.’ To those who would use statistics in the belief they are definite, this move would likely be condemned as the kind of semantic legerdemain that literary types enjoy. (Carino and Enders 100-101)
Why did they use this approach if they have both philosophical and epistemological doubts about the efficacy of using empirical research? The answer can be found in their conclusion: “[they] have a ‘data driven assessment’ at hand when the Dean comes knocking” (Carino and Enders 102).
Perhaps a better rhetorical fit for their purpose (the dean might come knocking) would be a small-scale evaluation "that can be conducted via survey and arrayed in tabular form to present clear data to administrators interested in writing center effectiveness" (Carino and Enders 84). Although Bell's study does not make use of statistics, it is still RAD research, which does not require the use of statistics. Carino and Enders showed that they "(actually the statisticians)" could conduct statistical analysis (86). However, the question of whether they should have remains. Although I first looked to this article as an example of a research model I could emulate, I instead found an example of why I should avoid the trap of choosing a research methodology based on peer pressure rather than its appropriateness to my purpose.
Choosing a Research Methodology
As I hope I have already established, I began this paper as a means of making the transition from ethnographic to empirical research. Understanding the methods from which I can choose is a necessary start, but next, I must ask the appropriate questions grounded in rhetorical theory to choose my research path:
What do I want to know?
Why do I want to know it?
How will I go about investigating it?
How will I tell if I’ve found it? (Cuseo 1).
What Did I Want to Know?
After 9 years in the same location, my writing center moved. I was lucky enough to be asked what I wanted in my new space—a question I researched carefully. As a result, the center’s new space (bright, open, modern) was very different from the old space (small, dark, traditional), and I wanted to know if the new space would affect students’ satisfaction with the center. To answer this question, I had to rule out other factors that might also affect students’ satisfaction, such as the tutors’ knowledge and ability to share it.
Why Did I Want to Know It?
Writing centers are typically squeezed in wherever the administration can find room and outfitted with furniture and equipment that no one else wants. I wanted to know if space affects students’ satisfaction with the center in order to be able to argue that a center’s success is at least partially based on the space it occupies.
How Will I Go About Investigating It?
I decided that the most effective method of gathering information was a paper survey given to students at the end of their tutorials. I chose a paper survey rather than an online survey because a study conducted by Duncan D. Nulty showed that "response rates to online surveys of teaching and courses are nearly always very much lower than those obtained when using on-paper surveys" (5).
How Will I Tell If I’ve Found It?
I will know I have found an answer to my question if I can show that students who gave the center’s space a low rating and gave their tutors high ratings said they would not return to the center or recommend it to their friends.
After answering these questions, I came to several conclusions. First, I needed to collect my data within the natural setting of the writing center because it was the effectiveness of the space itself that interested me. Second, I had a natural group of students I could survey—any student who came to the center. Because each student worked in the same space, regardless of his or her needs, I could consider their answers on the survey equal.
After referring back to my earlier list of methodologies, I realized that my study meets the qualifications for RAD research, and I could use a small-scale evaluation to collect data for a quasi-experiment. In this study, I use the data I collected, both quantitative and qualitative, to answer the following questions:
Does whether a tutor is considered knowledgeable affect whether a student will return to the center?
Does whether the tutor shares his or her knowledge effectively affect whether a student will return to the center?
Does whether a tutor is knowledgeable affect whether a student will recommend the center?
Does whether a tutor shares his or her knowledge effectively affect whether a student recommends the center?
Does a student’s satisfaction with the space affect whether a student will return?
Does the student’s satisfaction with the space affect whether he or she will recommend the center?
Is it true that the lower the student's satisfaction with the space, the less likely he or she is to return?
Next, I discuss the methodology that I developed.
Methodology
I used the following procedure to survey students. The writing center at the University of North Texas (UNT) is used by many types of students, all of whom were surveyed:
developmental writers
students taking Composition 1 or 2
students taking all levels of technical writing
any enrolled undergraduate
any enrolled graduate student
At the end of each tutorial, the worker at the front desk hands a student a copy of the five-question survey and asks him or her to fill it out and place it in the box on the desk. Once a week, the box is emptied and tutors sort the surveys. Below is a copy of the survey:
The survey is deliberately simple because respondents are "less likely to answer if a question is too long or they do not understand how they should answer" (Oracle 2). A study conducted by Pete Cape and Keith Phillips showed that "if researchers work to keep surveys shorter, it will not only help to ensure response quality, but it will also make for more motivated and responsive respondents" (10). The type of answer is equally important. "A good question asks for just one piece of information and doesn't have any additional questions embedded within it" (Oracle 2). The use of matrix-style questions—grouping questions that employ the same answer choices, which can make it easier to respond—also increases the number of responses (Oracle 2). Finally, answers must be clearly labeled to allow respondents to give accurate answers. If a survey asks respondents to rank their satisfaction on a scale of 1 to 3 but does not clearly assign meaning to those numbers (e.g., which is highly satisfied, 1 or 3?), the answers "will be worthless" (Oracle 4).
Over the course of the semester, the center tutored approximately 1,964 students—N=1,964, the total number of students who used the center. I say approximately because this number does not include students who attended workshops or every student who came during our walk-in hours (5-9, Monday through Thursday), when we do not have a student worker at the front desk ensuring that every student is checked in. Of the 1,964 students sampled, 310 (16%) responded to the survey.
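For transparency, the 16% figure is simply the number of completed surveys divided by the number of students tutored:

\[
\frac{310}{1964} \approx 0.158 \approx 16\%
\]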
Quantitative (Objective) Data
Next, I include the data that I pulled from the surveys. After placing my data in spreadsheets, I could begin looking for the correlations that would answer my questions. On the advice of a friend who is an expert in statistics, I used the following correlation calculator: Pearson Correlation Coefficient Calculator (http://www.socscistatistics.com/tests/pearson). After I received my results, I placed them in a piece of statistics software called SPSS to double-check my data—again on the advice of my friend. The table that follows was produced by SPSS.
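For other directors who want to run the same check without specialized software, here is a minimal sketch in Python of the correlation step; the ratings below are invented for illustration, and scipy's pearsonr function stands in for the online calculator and SPSS I actually used.

```python
# Hypothetical sketch of the correlation step. Each list holds one survey
# factor's ratings, paired by respondent; the values are invented, not the
# 310 responses analyzed in this study.
from scipy.stats import pearsonr

satisfaction_with_space = [3, 2, 3, 1, 3, 2, 3, 3, 1, 2]
likeliness_to_return    = [3, 2, 3, 2, 3, 3, 3, 3, 1, 2]

r, p = pearsonr(satisfaction_with_space, likeliness_to_return)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A p-value below .01 or .05 corresponds to what the table reports as a
# correlation significant at the 1% or 5% level.
```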
Discussion of Data
As the table demonstrates, five correlations were significant at the 1% level and three were significant at the 5% level. Four of these correlations did not include satisfaction with space:
Likeliness to recommend/Likeliness to return
Ability to share knowledge/Likeliness to recommend
Ability to share knowledge/Tutor’s knowledge
Ability to share knowledge/Likeliness to recommend
Four were based on satisfaction with space:
Satisfaction with space/Likeliness to return
Satisfaction with space/Likeliness to recommend
Tutor’s knowledge/Satisfaction with space
Ability to share knowledge/Satisfaction with space
Though space-related correlations are smaller than the others, they are still significant.
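For readers wondering how a smaller correlation can still clear the significance bar, the standard test for a Pearson correlation converts r and the number of respondents N into a t statistic; this is the textbook formula, not anything specific to my data:

\[
t = \frac{r\sqrt{N-2}}{\sqrt{1-r^{2}}}
\]

With N in the hundreds, as in this survey, even a modest r produces a large t, which is why the space-related correlations can remain significant despite being smaller than the tutor-related ones.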
The non-space-related correlations come as little surprise. We would expect students who would recommend the lab to return to the lab, just as we would expect students to assume that tutors who can share knowledge must have knowledge, and that a writing center where tutors can share knowledge is a center worth recommending. These correlations are reassuring because they demonstrate that so much of the work of the center director—hiring, training, and supervising tutors—is not wasted. Just as importantly, this data provides writing center directors with tangible evidence they can give the administration to prove that their work is increasing the number of students likely to use the center, either because they will return or because they will recommend the center to a friend.
On the other hand, the correlations based on satisfaction with space are disturbing. Why disturbing? In most cases, center directors have the most control over internal factors: who is hired as a tutor, what criteria are used to hire, and how tutors are trained. However, in my experience, center directors are not often able to choose either their physical location or the equipment within that location. Too often, centers are shoved into any unused corner or classroom available. The space isn't chosen for its effectiveness; it is chosen simply because it is available. When center directors are given choices, they are often superficial ones: What color would you like us to paint the walls? Where should we place the furniture? Rarely do center directors have the opportunity to make the space decisions that might truly increase a student's satisfaction—sound barriers, comfortable furniture, natural lighting, access to food, drink, and bathrooms. According to Nancy Van Note Chism, corporations, hospitals, and institutes of learning are reconsidering the importance of everything from furniture and lighting to the availability of restrooms and food (10).
If space is an important factor in determining whether students return to a center, recommend that center, or believe that the tutors in that center have the ability to effectively share a body of knowledge, and center directors have little or no influence over that factor, are directors being unfairly evaluated when their administrations look at their ability to retain current center users and bring in more?
By being denied the option to choose and design a center’s space based on both empirical research and best practices, are center directors being set up for failure? No one would expect a researcher in the STEM areas to conduct research in a space that wasn’t properly designed and equipped for maximum results, yet center directors, center tutors, and the students who use those centers are expected to work in substandard locations every day.
This data underscores the importance of the physical space in which writing centers are housed and reiterates the need for writing centers to be designed intentionally, not squeezed in wherever the administration can find room and outfitted with furniture and equipment that no one else wants. Although the written comments I received will be the focus of another paper, I do want to touch on what I found because space is a crucial factor.
Qualitative (Subjective) Data—Student Comments
The numbers tell a good story, but they don’t tell the only story. The students’ comments add another layer of support.
Of the students who filled out the survey, 37% included a comment. Of the 106 total comments (N=106), 12 (11%) were negative. On the surface, this seems to be cause for concern; however, of the 12 negative comments,
33% (4 students) requested more space
33% (4 students) requested more time
8% (1 student) observed that the center was unprepared for the class size of a workshop (more than 40 students attended)
8% (1 student) suggested that tutorials would be enriched if more than one tutor participated
17% (2 students) suggested that one tutor gets off-topic
40% of all comments were directly related to the issue of space.
Writing centers have traditionally existed in what Russell Carpenter describes as the “peripheral spaces within our institutions,” a problem as students “receive strong messages of what [their] learning experience is likely to be” based on the space in which it occurs (xxv, 5). If the messages predict something “interesting” or “exciting,” they are more likely to choose to engage (Carpenter 5).
I hope that this small-scale study will have two effects. First, I hope you are empowered to conduct your own research. Each of you is in possession of a treasure trove of important information, both statistical and anecdotal. Sharing this data with your administration is the first step in validating the work your center does and securing the resources you need to amplify its success.
Second, I hope you are empowered to make some noise—"…noise should be expected and recognized for what it is: an attempt to alert others" (Boquet 6). Let your organization's decision-makers know that you will not be satisfied with the broom closet. You, your tutors, and your students deserve more, and you have the data to prove it.
Conclusion
I have learned an important lesson. Numbers do tell a story, and with the correct tools, statistical analysis is possible and provides a powerful resource. Without using statistical analysis to identify correlations in my raw data, I could not have answered my research questions, and I would not have a body of meaningful data with which to begin an important conversation with my university's administration—one in which I hope to demonstrate that the writing center would benefit from an improved location.
I still believe that the very nature of the writing center requires us to respect and honor the work that writing centers epitomize—talking, thinking, writing, laughing, sharing. Those of us who are called to work in writing centers share a secret—writing centers are places of mystery, magic, and multiplicity. To succeed, and to help our students succeed, we must not only rely upon our familiar tools but also seek out new ones.
Works Cited
Babcock, Rebecca Day, and Terese Thonus. Researching the Writing Center. Peter Lang, 2012.
Bell, James H. "When Hard Questions Are Asked: Evaluating Writing Centers." The Writing Center Journal, vol. 21, no. 1, 2000, pp. 7-28.
Boettger, Ryan K., and Chris Lam. "An Overview of Experimental and Quasi-experimental Research in Technical Communication Journals (1992-2011)." IEEE Transactions on Professional Communication, vol. 56, no. 4, 2013, pp. 272-293.
Boquet, Elizabeth H. Noise from the Writing Center. Utah State University Press, 2002.
Cape, Peter, and Keith Phillips. "Questionnaire Length, Fatigue Effects and Response Quality Revisited." Apr. 2015, https://www.surveysampling.com/site/assets/files/1586/questionnaire-length-and-fatiigue-effects-the-latest-thinking-and-practical-solutions.pdf.
Carino, Peter and Doug Enders. "Does Frequency of Visits to the Writing Center Increase Student Satisfaction? A Statistical Correlation Study—or Story." The Writing Center Journal, vol. 22, no. 1, 2001, pp. 82-193.
Carpenter, Russell G. "Preface." Cases on Higher Education Spaces: Innovation, Collaboration, and Technology, edited by Russell G. Carpenter, Information Science Reference, 2013, pp. xxiii-xxxv.
Chism, Nancy Van Note. "A Tale of Two Classrooms." New Directions for Teaching and Learning, 2002, pp. 5-12.
Cuseo, Joseph. "Assessment of the First-Year Experience: Six Significant Questions." Esource for College Transitions, 6 Nov. 2000, http://sc.edu/fye/esource/. Accessed 5 Apr. 2015.
Driscoll, Dana, and Sherry Wynn Perdue. "Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009." The Writing Center Journal, vol. 32, no. 1, 2012, pp. 11-39.
Haswell, Richard H. "NCTE/CCCC's Recent War on Scholarship." Written Communication, vol. 22, no. 2, 2005, pp. 198-223.
Johanek, Cindy. Composing Research: A Contextualist Paradigm for Rhetoric and Composition. Utah State University Press, 2000.
Johnson, R. Burke, and Anthony J. Onwuegbuzie. "Mixed Methods Research: A Research Paradigm Whose Time Has Come." Educational Researcher, vol. 33, no. 7, 2004, pp. 14-26.
Liggett, Sarah, Kerri Jordan, and Steve Price. "Mapping Knowledge-making in Writing Center Research: A Taxonomy of Methodologies." The Writing Center Journal, vol. 31, no. 2, 2011, pp. 50-88.
Nulty, Duncan D. "The Adequacy of Response Rates to Online and Paper Surveys." Assessment & Evaluation in Higher Education, 2008, www.uaf.edu/files/uafgov/fsadmin-nulty5-19-10.pdf.
Oracle. "Best Practices for Improving Survey Participation." 2012, www.oracle.com/us/products/applications/best-practices-improve-survey-1583708.pdf. Accessed 15 Apr. 2015.
Reynolds, Garr. Presentation Zen. New Riders, 2012.