
Online Full-Time Faculty’s Perceptions of Ideal Evaluation Processes

Meredith DeCosta [meredith.decosta@gcu.edu], Emily Bergquist [emily.bergquist@gcu.edu], Rick Holbeck [rick.holbeck@gcu.edu], Scott Greenberger [scott.greenberger@gcu.edu], Courtney McGinnis, Kevin Reidhead, Grand Canyon University, United States of America

Abstract

Post-secondary institutions around the world use various methods to evaluate the teaching performance of faculty members. Effective evaluations identify areas of instructional strength, provide faculty with opportunities for growth, and allow for reflective inquiry. While there is an extensive body of research related to the evaluation of faculty in traditional settings, there have been few studies examining online faculty members’ perceptions of evaluation processes. The present study involved dissemination of an e-survey to online full-time faculty at a large university in the Southwest United States, as well as qualitative content analysis of the survey data. Findings suggest that online full-time faculty expressed interest in improving as instructors, distinct from modality, and preferred descriptive, qualitative, and holistic feedback rather than quantitative or punitive feedback. Further, participants articulated a desire to be evaluated by those with content-specific knowledge rather than teaching expertise in the online environment. This study has implications for online distance learning administrators and other stakeholders involved in online faculty evaluation. Additional research is needed to continue to establish a baseline for how online faculty members conceptualize ideal evaluation processes.

Keywords: online learning, evaluation, online faculty, faculty evaluation, content analysis, full-time faculty

Online Full-Time Faculty’s Perceptions of Ideal Evaluation Processes

Post-secondary institutions around the world use various methods to evaluate the teaching performance of faculty members. Effective evaluations identify areas of instructional strength, provide faculty with opportunities for growth, and allow for reflective inquiry. MacMillan, Mitchell, and Manarin (2010) contended that extensive evaluation mechanisms not only improve day-to-day teaching practices for individual instructors but also serve as the first step toward informed teaching and scholarship. Further, effective evaluations of faculty include systematic assessment and reflective critique by several stakeholders, including peers, the faculty members themselves, administrators, and specialists (Wellein, Ragucci & Lapointe, 2009).

While there is an extensive body of research related to the evaluation of faculty in traditional settings, there have been fewer studies examining online faculty members’ self-reported perceptions of evaluation processes (Berk, 2013). Indeed, 86.6% of colleges and universities now offer online courses (Allen & Seaman, 2013); however, the instruments used to evaluate online teaching, most of which have been extrapolated from traditional settings, have been questioned by scholars (Berk, 2013; Hathorn & Hathorn, 2010; Rothman et al., 2011). Despite broad acceptance that effective evaluation tools are needed, research to date has suggested that faculty evaluation systems have been largely insufficient (Arreola, 1979, 1986, 1995, 2000a, 2000b; Arreola et al., 2001; Berk, 2013). This is most evident in online programs, where evaluation tools are often drawn from traditional programs despite the arguably unique skills required for teaching online (Berk, 2013; Hathorn & Hathorn, 2010; Rothman et al., 2011).

Research is needed to develop a baseline for what online full-time faculty members conceptualize as an ideal process for their evaluations. Baran, Correia, and Thompson (2011) contended that institutions of higher education should consider “teachers as adult learners who continuously transform their meaning of structures related to online teaching through a continuous process of critical reflection and action” (p. 421). If faculty can and should be active participants in how they are evaluated, then research is needed to reveal how they idealize, conceptualize, and envision the processes most helpful to their work as online instructors. Online education continues to grow: 6.7 million students in the United States alone are enrolled in at least one online course (Allen & Seaman, 2013). As such, a deeper, more thorough understanding not only of faculty evaluation generally, but of online faculty evaluation specifically, is necessary, particularly with respect to online full-time faculty, a growing phenomenon in higher education.

This paper outlines a qualitative study of one university’s online full-time faculty. Through collection and analysis of survey data, findings are offered that illuminate how online full-time faculty conceptualize the ways in which their teaching performance should be evaluated. The theoretical grounding and related literature, setting, participants, methods, findings, analysis, and discussion follow. The goal of this study was to address a needed area of research on online full-time faculty’s perceptions of evaluation by offering a window into the practices associated with evaluating faculty in the online modality.

Theoretical Grounding

This study was rooted in Lave and Wenger’s (1991) and Wenger’s (1998, 2000) theory of communities of practice. Communities of practice have three characteristics in common: domain, community, and practice. More specifically, communities of practice are formed by individuals who engage in a process of collective learning in a shared domain. These communities are not bound by place or time and can cross modality, setting, and locus. For instance, online full-time faculty members who teach primarily for a single university or college, and who share the goal and practice of teaching in a post-secondary setting and e-modality, can constitute a community of practice. This is particularly true of online full-time faculty within an open and collaborative environment, such as the participants and setting included in this study.

Within the theory of communities of practice, the competence and experience of individuals help to generate learning and innovation (Wenger, 2000). Universities and colleges benefit from the social learning that can emerge from communities of practice (Smith, 2003, 2009). These communities enhance the learning of a group while also enabling individuals to take collective responsibility for managing the knowledge needed to succeed. As such, there is a direct link between learning and performance (Wenger, 2012). If faculty members learn and grow together as practitioners, the assumption within this theory is that their performance in the online classroom will improve. In essence, the theory of communities of practice suggests that faculty members can actively share tips and best practices, engage with colleagues, and ultimately leverage knowledge with one another’s assistance (Lave & Wenger, 1991).

Because there is a need to assess the effectiveness of such communities, evaluation tools can be developed and utilized within communities of practice. Evaluations can be an integral element of the community-making process if they are designed and disseminated to enhance classroom instruction and improve collective knowledge. Stakeholders, including faculty and their supervisors, can collect evaluation information to support decision-making processes and enhance creative production (Wenger, Trayner & de Laat, 2011). Communities can reflect on their work and use results to understand the value of their activities and interactions (Wenger et al., 2011). At their core, evaluations for online full-time faculty should attempt to assess teacher effectiveness. If evaluations are used for this purpose, they can assist in enhancing the domain, community, and practice of a group of faculty. Furthermore, the survival and success of a community of practice, like that of online full-time faculty, can be directly related to the knowing and learning that occurs within these social systems (Wenger, 2012).

Review of Relevant Literature

Faculty Evaluations

Faculty evaluations are discussed frequently in higher education and serve several purposes. Evaluations of faculty are generally intended to improve and assess the teaching and learning that occur in classrooms. A range of methodologies for evaluating faculty is considered acceptable and encouraged, including administrative, self, and peer evaluations (Braskamp, Brandenburg, & Ory, 1984; Braskamp, 2000; Wellein et al., 2009). Recent trends have moved away from a single measure of teaching effectiveness towards holistic approaches to faculty evaluation. Such approaches include activities within and beyond the classroom, including involvement in learning communities, personal character, collaboration, reflection, professionalism, and growth potential (Braskamp, 2000; Glassick, Huber, & Maeroff, 1997; Light & Cox, 2001; Mandernach et al., 2005; Ramsden, 2003; Schön, 1983; Tagg, 2003). Shifts have also occurred in departments where teaching is no longer seen solely as the individual teacher’s classroom-based exchange with students; it also includes a scholarly approach to teaching that goes beyond the traditional view of research and publication and extends to cross-disciplinary collaborations, professional development, reflective practice, instructional growth, and even community work (Boyer, 1990, 1996; Glassick et al., 1997). As such, universities and colleges are encouraged to develop comprehensive evaluation systems that allow for reflection and critical inquiry, not just measurement, reward, and reprimand (Berk, 2006, 2014; Boyer, 1990, 1996; Glassick et al., 1997).

Online Faculty Evaluations

There are a growing number of papers outlining best practices, competencies, and principles associated with online learning (see Arreola, 2000a; Burke, 2005; Levy, 2003; Roblyer & Ekhaml, 2000; Sunal et al., 2003; Tobin, 2004); however, there have been few studies examining the evaluation of online faculty (see Hixon et al., 2011; Mandernach et al., 2005; Rockwell, Furgason & Marx, 2000) and even fewer studies exploring online faculty members’ perceptions of evaluation processes (see Schulte, 2014; Mandernach et al., 2005). Due to the rapid growth of online education, existing evaluation scales, specifically those used in traditional settings, have been questioned (Berk, 2013; Hathorn & Hathorn, 2010; Rothman et al., 2011). The root of the concern with using evaluation tools designed for traditional settings has largely revolved around the relevance, accuracy, and effectiveness of existing evaluation scales (Berk, 2013; Harrington & Reasons, 2005; Loveland, 2007). Studies on the evaluation of faculty suggest that there are key differences within the online environment that may affect measurement tools and processes (Berk, 2013; Tallent-Runnels et al., 2006). Berk (2013) captured these concerns by arguing for comprehensive assessments of online faculty that are specific to the electronic classroom space. Further research into online faculty’s perceptions of current and ideal evaluation processes is needed to provide insight into how best to structure assessment processes.

Context

The university where the study took place had a relatively atypical online faculty model. One hundred sixty-nine of its faculty served as online full-time faculty members. Location, work requirements, and faculty supervision differentiated this model. The model included undergraduate, master’s, and doctoral faculty members teaching online courses in a program with rolling enrolment. Instructors facilitated approximately four courses at a time. While their courses were delivered electronically, faculty members held office hours eight hours a day, Monday through Friday, in an office building with other online full-time faculty members, as well as student counsellors and support staff. During office hours, faculty viewed documents, assessed student work, logged phone calls, and engaged with students in the learning management system. They were expected to communicate with traditional faculty, deans, curriculum developers, and student counsellors. Instructors were encouraged to participate in professional development opportunities both online and face-to-face, as well as scholarly activities, including research and publication. Faculty members reported to a supervisor and director who conducted informal weekly and quarterly reviews, as well as a formal annual review.

When this study occurred in 2014, the department relied on direct supervisor evaluation of faculty, with an analysis of at least one course per content area per quarter. Depending on the supervisor, this quarterly review also frequently referenced numeric and descriptive data from students’ end-of-course surveys. The quarterly review process under examination in this study was a convention used by supervisors to formatively assess teacher performance. The document included 25 criteria related to the areas of participation, engagement, and facilitation; grading and feedback; classroom management; and personal development and relationships. During the review of courses each quarter, supervisors rated faculty members as “met,” “partially met,” or “did not meet” for all 25 criteria. The supervisor was expected to offer documentation to supplement the rating. Finally, the quarterly review included an overall ranking at the conclusion of the document, where the online full-time faculty member was ranked as “exceptional,” “good,” or “needs improvement.”

The research team established a single goal: to collect feedback from online full-time faculty members regarding how they were evaluated and to use these data to improve the university’s online full-time faculty quarterly evaluation processes. The two-part research question, specifically related to the data collected and analyzed in this paper, asked, “If faculty members could envision the ideal process to evaluate their teaching, what might that process look like? How frequently would faculty members be evaluated?”

Participants

In the first quarter of 2014, all 169 online full-time faculty members at a large university in the Southwest United States were invited via email to participate in a survey. One hundred eighteen of the 169 faculty participated, a response rate of 69.8%. The response rate may have been influenced by the small-scale pilot study administered prior to the larger survey sent to all faculty.

Of the 118 faculty who responded to the survey, 41.53% had been teaching at the university level for 2-5 years, and 44.07% had been online full-time faculty members at this university for 2-5 years (none had been in this position for more than five years because the position was not created until 2010). The study participants included faculty from the education, arts and sciences, theology, business, and doctoral colleges teaching undergraduate, master’s, and doctoral level courses. The researchers opted not to collect further demographic information on categories like racial and ethnic identity, gender, age, or religion because of their close knowledge of the participants, ultimately ensuring anonymity and reducing the potential for researcher bias.

The research team included six stakeholders directly invested in the development of the university’s online full-time faculty department and the impact of teaching on student learning in the online environment. The six researchers included directors, supervisors, and faculty. Three of the researchers were part of the administrative team that directly evaluated faculty each quarter. To exercise transparency, researchers expressly shared with faculty members the following:

  1. the process was for research purposes and improvement initiatives for the online full-time faculty department,
  2. the survey was anonymous,
  3. the research team included those who were directly involved in faculty quarterly evaluation, and
  4. researchers would not be able to establish the identities of those involved.

Methods

Researchers disseminated a small-scale pilot survey via email to a stratified random sample of 44 online full-time faculty members drawn from each college at the university. Random.org assisted in the selection of this stratified random sample. Survey Monkey, a web-based survey service, was used to administer the survey instrument. The survey was primarily qualitative in nature, asking open-ended questions. The pilot survey distributed prior to the large-scale study helped identify concerns with the survey instrument. Results from the pilot prompted the researchers to clarify the phrasing of one question and to include, for reference, the current quarterly evaluation document that supervisors use to evaluate faculty.
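
To make the selection procedure concrete, the sketch below shows one way such a stratified random draw could be reproduced in Python. It is an illustration only: the actual selection used Random.org, and the college names and per-college counts are hypothetical stand-ins rather than the study’s figures.

  import random

  # Hypothetical roster grouped by college; the study drew names with
  # Random.org, and these counts are illustrative only.
  faculty_by_college = {
      "education": ["edu_%02d" % i for i in range(40)],
      "arts_and_sciences": ["as_%02d" % i for i in range(45)],
      "theology": ["theo_%02d" % i for i in range(25)],
      "business": ["bus_%02d" % i for i in range(34)],
      "doctoral": ["doc_%02d" % i for i in range(25)],
  }

  def stratified_sample(groups, total_n):
      """Allocate the sample roughly proportionally across strata,
      then draw without replacement within each stratum."""
      population = sum(len(members) for members in groups.values())
      sample = []
      for college, members in groups.items():
          n = round(total_n * len(members) / population)
          sample.extend(random.sample(members, min(n, len(members))))
      return sample

  pilot_invitees = stratified_sample(faculty_by_college, total_n=44)
  print(len(pilot_invitees), pilot_invitees[:3])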

The revised follow-up survey was sent via email to all 169 online full-time faculty members. Again, Survey Monkey was used. Faculty members were informed by researchers that their participation was voluntary and anonymous, and that future evaluations or job status would not be influenced by their responses to the survey. Further, faculty members were not required to answer every question on the survey. Participants completed the survey in approximately 20 minutes and had two weeks to respond before the link was closed.

The survey asked 11 descriptive questions regarding online teaching and the evaluation processes of online instructors. The instrument was divided into three sections, including: (1) perceptions of the roles of online faculty, (2) perceptions of teaching evaluations, and (3) perceptions of current evaluation processes for online full-time faculty. The second section was the focus of analysis in this paper and included an item related to the ideal or most beneficial types of evaluation and their frequency.

Data were analyzed qualitatively and objectively through the content analysis method (Berelson, 1952; Carney, 1972; Holsti, 1968, 1969; Krippendorff & Bock, 2008) to gain insight into the current evaluation processes for online full-time faculty. This method was selected because it is systematic, orderly, and purposeful (Berelson, 1952; Holsti, 1968, 1969), and it allows for objective coding of descriptive survey data.

The team reviewed the responses of the 118 full-time online faculty members to the 11 descriptive survey questions, highlighting and annotating each unit of analysis relevant to the research question. Units of analysis included descriptive words, phrases, and sentences. After the initial analysis, similar units were combined. These units were then collapsed systematically and repeatedly into larger categories based on similar content or redundancies. Next, key words or phrases from the units were extracted, resulting in a set of codes or categories for each descriptive question. The process continued until all relevant units were grouped or re-grouped with similar units and labelled with a code (Krippendorff & Bock, 2008). The team then identified robust themes, or the most prominent codes, by counting the frequency of instances (Krippendorff & Bock, 2008).

Researchers analyzed survey responses independently to develop codes with as little bias as possible, focusing on the words, phrases, and sentences written by participants. Researchers shared their codes with each other through a five-hour coding session designed to ensure reliability (Miles & Huberman, 1994; Neuendorf, 2002). The workshop afforded researchers the opportunity to identify points of conflict or communion in the coding process, to move codes into new categories, to alter the language of categories if needed, and to agree upon robust codes. One instance of conflict occurred when a researcher identified a code in her private coding session; however, after the coding session, it was determined that the label was not specific enough for what the other researchers had discovered. As a result, the group developed a new label to more accurately describe the phenomenon. A series of robust or prominent codes materialized from the coding session. Codes were labelled robust based on number of occurrences in survey data. Codes with more than five units were included in the findings and analysis below. Participants were not required to respond to every survey question.
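
The final tallying step lends itself to a brief sketch. The snippet below is a minimal illustration rather than the team’s actual procedure: it shows how unit-to-code assignments can be counted and how codes with more than five units would be flagged as robust. The code labels and counts are hypothetical paraphrases of the study’s categories.

  from collections import Counter

  # Hypothetical unit-to-code assignments; in the study, each highlighted
  # word, phrase, or sentence was labelled with an agreed-upon code.
  coded_units = (
      ["growth_or_improvement"] * 6
      + ["content_expert_evaluator"] * 3
      + ["differentiated_delivery"] * 2
  )

  ROBUST_THRESHOLD = 5  # codes with more than five units were reported as robust

  code_counts = Counter(coded_units)
  robust_codes = {code: n for code, n in code_counts.items() if n > ROBUST_THRESHOLD}

  # Report robust codes from most to least frequent.
  for code, n in sorted(robust_codes.items(), key=lambda kv: kv[1], reverse=True):
      print(f"{code}: {n} units")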

Findings

The robust codes for one survey question are explicated in the findings section below. The question stated, “If you could envision the ideal process to evaluate your teaching, what might that process look like? How frequently would you be evaluated?” This paper focuses on one survey question because the responses illuminate the preferences, conceptualizations, and idealizations of online full-time faculty, which is needed to establish a baseline for this model of online education and for the evaluation processes used therein.

Findings show that the most robust code was evaluations should focus on growth or improvement of the instructor and students. There were 33 units in this code. When describing the ideal process, faculty expressed comments such as, “I have a desire for growth,” “less task-y or checklist-y,” “more qualitative and personal,” “given specifics on how to improve,” “evaluate of use of higher order thinking,” “focus on growth of employee,” “promote ongoing growth,” “show areas of growth,” “qualitative rather than quantitative,” and “challenge critical thinking and deeper thinking.”

Findings show that the second most robust code was administrators should select evaluators that can effectively evaluate courses. There were 14 units in this code. When describing the ideal process, faculty expressed comments such as, “Supervisors may not have the training or experience in my specific field to provide adequate assessment,” “evaluated by a subject matter expert,” “faculty to meet with one another to share best practices,” “evaluators who know the content to evaluate a class,” “evaluators should know the content to peruse a class,” and “someone who is capable of instructing my content should evaluate me.”

Findings show that the third most robust code was to differentiate evaluation delivery and timeline. There were five units in this code. When describing the ideal process, faculty expressed comments such as, “Individualized, one on one, face to face,” “according to course content and student load (not uniform),” “evaluated individually rather than a blanket style for everyone,” and “both formal and informal evaluations.”

Sixty-three participants noted a preferred frequency in their responses. Twenty-six (41%) preferred biannual evaluations to quarterly evaluations, 20 (32%) preferred annual supervisor evaluations to quarterly evaluations, and 17 (27%) preferred the current quarterly supervisor evaluation model.
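
For transparency, the percentages reported here and the response rate reported in the Participants section follow from simple arithmetic on the stated counts; the short sketch below reproduces that arithmetic.

  # The counts come directly from the paper; the arithmetic simply
  # reproduces the reported percentages.
  respondents, invited = 118, 169
  print(f"Response rate: {respondents / invited:.1%}")  # 69.8%

  frequency_preferences = {"biannual": 26, "annual": 20, "quarterly": 17}
  noted_frequency = sum(frequency_preferences.values())  # 63 participants
  for label, count in frequency_preferences.items():
      print(f"{label}: {count} ({count / noted_frequency:.0%})")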

Analysis

To foreground the analysis of this question, the term “ideal” was not defined in the survey. The researchers made this rhetorical move intentionally. The goal was to have faculty express, of their own accord, their sentiments regarding what constitutes an ideal process, without importing the researchers’ preconceived notions of the concept. It is evident from the participants’ open communication on the survey, as well as the comparisons they made between the ideal process and the current process, that they were able to conceptualize and verbalize their own versions of “ideal.”

The most significant finding from this survey question suggests that online full-time faculty believed qualitative, personal feedback focused on improvement, not focused on a “checklist,” was ideal. Those who envisioned a new system expressed their “ideal” process in contrast to the system currently in place to evaluate their teaching and articulated a desire for qualitative, holistic, and inquiry-based feedback. The online full-time faculty who participated in this survey did not distinguish between online and traditional instructors. For instance, participants noted a desire to be evaluated on “critical thinking,” “higher order thinking,” and “areas of growth,” which are qualities not related to modality. In fact, no faculty argued for an evaluation that included online-specific characteristics like strong forum facilitation techniques, integration of technology, classroom organization, or visibility in the classroom. This suggests that online full-time faculty at this university, as a community of practice, were interested in growing their general knowledge of teaching practices, but either did not know enough about techniques unique to teaching online, did not want to be evaluated on these techniques, or did not consider these techniques important or distinct from techniques like engaging students in critical inquiry.

Faculty members noted their interest in being reviewed by a peer or supervisor with subject matter expertise and the ability to share best practices within a particular content area. Faculty represented five different colleges at the university and wanted to be evaluated by those who belonged not only to their college but also to their specific content area. Compellingly, faculty did not emphasize that the person should be experienced in online education. Rather, they were more concerned that the supervisor had subject matter expertise, knowledge in the specific content area, and an understanding of the needs of that content and its curriculum. Faculty’s comments suggest less emphasis on being evaluated by someone with expertise in e-learning and more focus on expertise in a given subject. Specifically, faculty expressed a preference for being reviewed by a peer with subject matter knowledge rather than a supervisor without content knowledge.

Furthermore, the part of the survey question regarding the frequency with which evaluations should occur was included based upon prior feedback and concerns that study participants had raised in informal conversations with supervisors. In their responses, faculty emphasized that the current quarterly system of evaluation was not ideal. Biannual and annual evaluations were preferred, while quarterly evaluations, the current model, were considered too frequent. The “ideal” frequency of evaluations was tied to the faculty’s emphasis on improvement. Rather than an evaluation that measures what a faculty member did or did not do in the classroom, respondents argued for a coaching/mentoring form of evaluation that allowed time for growth and improvement.

Discussion

Prior to the survey, the researchers’ perception was that online full-time faculty at this university were part of a distinct community of practice (Lave & Wenger, 1991; Wenger, 1998, 2000) and, therefore, as online instructors, would envision the ideal process as one that would cultivate their efforts as instructors in an e-environment. Previous research has advanced the notion that online learning and evaluations of online faculty are unique and therefore require distinct characteristics and qualities (Berk, 2013; Tallent-Runnels et al., 2006), a uniqueness the researchers presumed online full-time faculty would recognize and desire in their evaluations. The researchers hypothesized that online full-time faculty, many of whom came from traditional settings, would want to improve as instructors in the online environment and, thus, would want to be evaluated on these criteria.

Data from the present study, however, suggest otherwise. Key to the current investigation, online full-time faculty in this study were interested in improving generally as instructors and wanted to be evaluated by those with content knowledge. The motivation, investment, and commitment of online full-time faculty, particularly those working in close proximity, were different from those of other faculty populations (Mueller, Mandernach & Sanderson, 2013). This does not mean that they conceived of their roles or evaluation as online instructors as unimportant. It does mean, however, that this population preferred evaluations focused on content and teaching practices. Scholarly communities present in time and space might build a network focused on collective growth (Mueller et al., 2013). This community of practice (Lave & Wenger, 1991; Wenger, 1998, 2000) argued for evaluations that were descriptive, qualitative, holistic, and supportive rather than driven by quantitative or punitive measures. The modality in this case was not insignificant to these faculty members, just not as significant as being evaluated on “ongoing growth.” Online full-time faculty in this study did not appear to conflate e-modality with pedagogy, suggesting that the mode of delivery and what and how they taught were distinct in their minds (Moore & Kearsley, 2012). Furthermore, faculty believed that holistic reviews with an emphasis on content knowledge outweighed other factors.

These findings offer implications for online distance learning administrators, supervisors, and other associated stakeholders in similar environments who are attempting to establish criteria and processes for evaluating online faculty. Based on the findings from this study, stakeholders may consider qualitative, holistic feedback provided by subject matter experts, specifically peers, rather than supervisor evaluations emphasizing explicitly quantifiable measures. These findings can also be used by online distance learning administrators, supervisors, and stakeholders when creating evaluations. This study suggests that online faculty members are primarily interested in quality growth and improvement related to content and pedagogy and less interested in quantity (e.g. number of forum posts or number of messages sent to students). As such, evaluations should be devised to include specific areas of opportunity in faculty members’ content instruction, as well as areas of success that can be replicated and refined. Per this study, these criteria should be preferred to evaluations focused primarily on job expectations that can be quantified. Further, this study implies that faculty input is needed when evaluations are developed. Rather than assuming what priorities matter to faculty, their input, along with the input of other stakeholders, can ensure that the evaluation aligns with the environment and establishes buy-in with all stakeholders.

While this study provides qualitative insight into what online full-time faculty members conceptualize as an ideal process for their evaluation, additional research is needed. A quantitative study could explore which specific criteria in an evaluation are most important to online full-time faculty. This would help expand or counter the argument made in the present study that faculty idealized general pedagogy and content knowledge over online-specific teaching techniques. Notably, although participants in this study preferred content-focused evaluations, this does not mean that they did not want at least some of their online practices evaluated. It is clear from personal encounters with study participants and the larger survey data that they have expectations of their teaching and professional growth in online teaching and learning. Participants’ responses and focus on content may be related to the setting and experiences of online full-time faculty in this study. Additional exploration of online full-time faculty perceptions regarding faculty evaluations may assist in uncovering a deeper and more thorough understanding of evaluations specific to the skills and pedagogies used in online teaching and learning. Continued examination of the population explored within this study over time may reveal an evolution of perception and a shift in focus. As online education continues to grow and new faculty models continue to develop (Allen & Seaman, 2013), research is needed to explore ideal evaluation processes, as well as perceptions of current evaluation practices.

There continue to be opportunities for growth and greater understanding in how faculty are evaluated at universities and colleges (Arreola, 1979, 1986, 1995, 2000a, 2000b; Arreola, Aleamoni & Theall, 2001; Berk, 2013; Hathorn & Hathorn, 2010; Rothman et al., 2011), and the body of literature could benefit from a more in-depth analysis of online full-time faculty as a community of practice (Lave & Wenger, 1991; Wenger, 1998, 2000). In addition, further examination is needed to develop a clearer understanding of online faculty perceptions of online pedagogy and how these skills should be assessed and evaluated. Nevertheless, this study is one small but important step toward understanding new teaching environments for online faculty. This study illuminates the importance of instructional growth and content for online full-time faculty, as well as their preferences for how online faculty should be evaluated. Further, the study emphasizes the need to collect data from faculty and involve faculty in their evaluation processes. The more the field understands the needs and visions of online faculty, the more likely it will be that evaluations can be developed to improve the quality of online learning.

References

  1. Allen, I.E. and Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. Babson Survey Research Group, Pearson, Sloan-C. Retrieved from http://www.onlinelearningsurvey.com/reports/changingcourse.pdf
  2. Arreola, R.A. (1979). Strategy for developing a comprehensive faculty evaluation system. In Engineering Education, 12, (pp. 239-244).
  3. Arreola, R.A. (1986). Evaluating the dimensions of teaching. In Instructional Evaluation, 8(2), (pp. 4-14).
  4. Arreola, R.A. (1995). Developing a comprehensive faculty evaluation system. Bolton, MA: Anker Publishing Company.
  5. Arreola, R.A. (2000a). Developing a comprehensive faculty evaluation (2nd ed.). Bolton, MA: Anker Publishing Company.
  6. Arreola, R.A. (2000b). Interview. In The Department Chair, 11(2), (pp. 4-5).
  7. Arreola, R.A.; Aleamoni, L.A.; Theall, M. (2001). College teaching as meta-profession: Reconceptualizing the scholarship of teaching and learning. Paper presented at the 9th Annual American AAHE Conference on Faculty Roles and Rewards, Tampa, FL.
  8. Baran, E.; Correia, A. and Thompson, A. (2011). Transforming online teaching practice: Critical analysis of the literature on the roles and competencies of online teachers. In Distance Education, 32(3), (pp. 421-439). doi:10.1080/01587919.2011.610293
  9. Beebe, R.; Vonderwell, S.; Boboc, M. (2010). Emerging Patterns in Transferring Assessment Practices from F2f to Online Environments. In Electronic Journal of E-Learning, 8(1), (pp. 1-12).
  10. Berelson, B. (1952). Content analysis in communications research. Glencoe, IL: Free Press.
  11. Berk, R.A. (2006). Thirteen strategies to measure college teaching: A consumer’s guide to rating scale construction, assessment, and decision making for faculty, administrators, and clinicians. Sterling, VA: Stylus Publishing.
  12. Berk, R.A. (2013). Face-to-face versus online course evaluations: A “consumer's guide” to seven strategies. In Journal of Online Teaching and Learning, 9(1), (pp. 140-148).
  13. Berk, R.A. (2014). Should student outcomes be used to evaluate teaching? In Journal of Faculty Development, 28(2), (pp. 87-96).
  14. Boyer, E.L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.
  15. Boyer, E.L. (1996). The scholarship of engagement. In Journal of Public Service and Outreach, 1, (pp. 11-20).
  16. Braskamp, L.A. (2000). Toward a more holistic approach to assessing faculty as teachers. In K. Ryan (ed.), Evaluating teaching in higher education: A vision for the future. New directions for teaching and learning, Number 83. San Francisco: Jossey-Bass.
  17. Braskamp, L.A.; Brandenburg, D.C. and Ory, J.C. (1984). Evaluating teaching effectiveness. Beverly Hills, CA: Sage.
  18. Burke, L.A. (2005). Transitioning to online course offerings: Tactical and strategic considerations. In Journal of Interactive Online Learning, 4(2), (pp. 94-107).
  19. Carney, T.F. (1972). Content analysis. Winnipeg: University of Manitoba Press.
  20. Glassick, C.E.; Huber, M.T. and Maeroff, G.I. (1997). Scholarship assessed: Evaluation of the professoriate. San Francisco: Jossey-Bass.
  21. Harrington, C.F. and Reasons, S.G. (2005). Online student evaluation of teaching for distance education: A perfect match? In The Journal of Educators Online, 2(1), (pp. 1-12). Retrieved from http://www.thejeo.com/ReasonsFinal.pdf
  22. Hathorn, L. and Hathorn, J. (2010). Evaluation of online course websites: Is teaching online a tug-of-war? In Journal of Educational Computing Research, 42(2), (pp. 197-217). doi:10.2190/EC.42.2.d
  23. Hixon, E.; Barczyk, C.; Buckenmeyer, J.; Feldman, L. (2011). Mentoring university faculty to become high quality online educators: A program evaluation. In Online Journal of Distance Learning Administration, 14(5).
  24. Holsti, O.R. (1968). Content analysis. In G. Lindzey & E. Aaronson (eds.), The handbook of social psychology. Reading, MA: Addison-Wesley.
  25. Holsti, O.R. (1969). Content analysis for the social sciences and humanities. Reading, MA: Addison-Wesley.
  26. Krippendorff, K.H. and Bock, M.A. (2008). The content analysis reader. Thousand Oaks, CA: Sage.
  27. Lave, J. and Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.
  28. Levy, S. (2003). Six factors to consider when planning online distance learning programs in higher education. In Online Journal of Distance Learning Education, 6(1).
  29. Light, G. and Cox, R. (2001). Learning and teaching in higher education: The reflective professional. London: Paul Chapman Publishing.
  30. Loveland, K.A. (2007). Student evaluation of teaching (SET) in web-based classes: Preliminary findings and a call for further research. In The Journal of Educators Online, 4(2), (pp. 1-18). Retrieved from http://www.thejeo.com/Volume4Number2/Loveland Final.pdf
  31. MacMillan, M.; Mitchell, M. and Manarin, K. (2010). Evaluating teaching as the first step to SoTL. Paper presented at SoTL Commons Conference, Statesboro, GA, 2010, March 1.
  32. Mandernach, J.B.; Donnelli, E.; Dailey, A.; Schulte, M. (2005). A faculty evaluation model for online instructors: Mentoring and evaluation in the online classroom. In Online Journal of Distance Learning Administration, 8(3).
  33. Miles, M.B. and Huberman, A.M. (1994). Qualitative data analysis. Thousand Oaks, CA: Sage.
  34. Moore, M. and Kearsley, G. (2012). Distance education: A systems view of online learning. Belmont, CA: Wadsworth CENGAGE.
  35. Mueller, B.; Mandernach, B.J.; Sanderson, K. (2013). Adjunct versus full-time faculty: Comparison of student outcomes in the online classroom. In Journal of Online Teaching and Learning, 9(3), (pp. 341-352).
  36. Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.
  37. Ramsden, P. (2003). Learning to teach in higher education. New York: Routledge.
  38. Roblyer, M.D. and Ekhaml, L.E. (2000). How interactive are YOUR distance courses? A rubric for assessing interaction in distance learning. In Online Journal of Distance Learning Administration, 3(2).
  39. Rockwell, K.; Furgason, J.; Marx, D.B. (2000). Research and evaluation needs for distance education: A Delphi study. In Online Journal of Distance Learning Administration, 3(3).
  40. Rothman, T.; Romeo, L.; Brennan, M.; Mitchell, D. (2011). Criteria for assessing student satisfaction with online courses. In International Journal for e-Learning Security, 1(1-2), (pp. 27-32). Retrieved from http://www.infonomics-society.org/IJeLS/Criteria for Assessing Student Satisfaction with Online Courses.pdf
  41. Schön, D.A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
  42. Schulte, M. (2014). Faculty Perceptions on the Benefits of Instructor Evaluation for Improved Online Facilitation. In the Proceedings of TCC Online Conference, 2014, (pp. 98-110). Retrieved from: http://etec.hawaii.edu/proceedings/2014/Schulte.pdf
  43. Smith, M. K. (2003, 2009). Jean Lave, Etienne Wenger and communities of practice. In the encyclopedia of informal education. Available online at http://www.infed.org/biblio/communities_of_practice.htm
  44. Sunal, D.W.; Sunal, C.S.; Odell, M.R. and Sundberg, C.A. (2003). Research-supported best practices for developing online learning. In The Journal of Interactive Online Learning, 2(1).
  45. Tagg, J. (2003). The learning paradigm college. Bolton, MA: Anker Publishing Company.
  46. Tallent-Runnels, M.K.; Thomas, J.A.; Lan, W.Y.; Cooper, S.; Ahern, T.C.; Shaw, S.M.; Liu, X. (2006). Teaching courses online: A review of the research. In Review of Educational Research, 76(1), (pp. 93-135). doi:10.3102/00346543076001093
  47. Tobin, T.J. (2004). Best practices for administrative evaluation of online faculty. In Online Journal of Distance Learning Administration, 7(2).
  48. Wellein, M.G.; Ragucci, K.R. and Lapointe, M. (2009). A peer review process for classroom teaching. In American Journal of Pharmaceutical Education, 73(5), (pp. 1-7).
  49. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge: Cambridge University Press.
  50. Wenger, E. (2000). Communities of practice and social learning systems. In Organization, 7(2), (pp. 225-246).
  51. Wenger, E. (2012). Communities of practice: A brief introduction. Retrieved from http://wenger-trayner.com/theory/
  52. Wenger, E.; Trayner, B. and de Laat, M. (2011) Promoting and assessing value creation in communities and networks: A conceptual framework. Raport 18, Ruud de Moor Centrum, Open Universiteit. Retrieved from http://wenger-trayner.com/wp-content/uploads/2011/12/11-04-Wenger_Trayner_DeLaat_Value_creation.pdf
