
Developing a New Generation MOOC (ngMOOC): A Design-Based Implementation Research Project with Cognitive Architecture and Student Feedback in Mind

Geoff Woolcott [geoff.woolcott@scu.edu.au], School of Education, Carolyn Seton [carolyn.seton@scu.edu.au], School of Business and Tourism, Raina Mason [raina.mason@scu.edu.au], School of Business and Tourism, Southern Cross University, Australia, Ouhao Chen [ouhao.chen@nie.edu.sg], National Institute of Education, Nanyang Technological University, Singapore, Warren Lake [warren.lake@scu.edu.au], School of Environment, Science and Engineering, Christos Markopoulos [christos.markopoulos@scu.edu.au], School of Education, William Boyd [william.boyd@scu.edu.au], School of Environment, Science and Engineering, Southern Cross University [https://www.scu.edu.au], Australia

Abstract

This paper describes a design-based implementation research (DBIR) approach to the development and trialling of a new generation massive open online course (ngMOOC) situated in an instructional setting of undergraduate mathematics at a regional Australian university. This process is underscored by two important innovations: (a) a basis in a well-established human cognitive architecture in terms of cognitive load theory; and (b) point-of-contact feedback based in a well-tested online system dedicated to enhancing the learning process. Analysis of preliminary trials suggests that the DBIR approach to the ngMOOC construction and development supports theoretical standpoints that argue for an understanding of how design for optimal learning can utilise conditions, such as differing online or blended educational contexts, in order to be effective and scalable. The ngMOOC development described in this paper marks the adoption of a cognitive architecture in conjunction with feedback systems, offering the groundwork for use of adaptive systems that cater for learner expertise. This approach seems especially useful in constructing and developing online learning that is self-paced and curriculum-based.

Abstract in German

Dieses Dokument beschreibt einen designbasierten Implementierungsforschungsansatz (DBIR-Ansatz) für die Entwicklung und Erprobung eines offenen Online-Kurses der neuen Generation (ngMOOC – new generation massive open online course), der im Lehrkontext der Bachelormathematik an einer regionalen australischen Universität angesiedelt ist. Dieser Prozess wird durch zwei wichtige Neuerungen unterstrichen: (a) eine Grundlage in einer etablierten menschlichen kognitiven Architektur im Sinne der kognitiven Belastungstheorie; und (b) Feedback am Kontaktpunkt, das auf einem bewährten Online-System basiert, welches der Verbesserung des Lernprozesses gewidmet ist. Die Analyse von Vorversuchen legt nahe, dass der DBIR-Ansatz für die Konstruktion und Entwicklung des ngMOOC theoretische Positionen unterstützt, die ein Verständnis dafür fordern, wie ein Design für optimales Lernen Bedingungen wie unterschiedliche Online- oder gemischte Bildungskontexte nutzen kann, um effektiv und skalierbar zu sein. Die in diesem Dokument beschriebene ngMOOC-Entwicklung markiert die Einführung einer kognitiven Architektur in Verbindung mit Feedback-Systemen und bietet die Grundlage für den Einsatz adaptiver Systeme, die auf die Expertise der Lernenden eingehen. Dieser Ansatz erscheint besonders nützlich für die Konstruktion und Entwicklung von Online-Lernen, das im eigenen Tempo erfolgt und auf Lehrplänen basiert.

Keywords: massive open online course (MOOC), design-based implementation research (DBIR), online learning, cognitive load theory, point-of-contact feedback.

Introduction

Massive open online courses (MOOCs) have become increasingly popular in the modern educational world, providing opportunities for learners to develop and test their own learning networks in online environments (Bozkurt, Akgün-Özbek, & Zawacki-Richter, 2017; Liyanagunawardena, Adams, & Williams, 2013; Waldrop, 2013). MOOCs were originally created through efforts to provide collaborative interactions for online learners around shared open educational resources, as massive connectivist courses, or cMOOCs (Mackness, Waite, Roberts, & Lovegrove, 2013; Siemens, 2008; 2013; Steffens et al., 2015). Course-based MOOCs (or xMOOCs) emerged soon after as a non-collaborative variation, providing structured and sequenced open-access courses for individuals who wished to complete them; a formal qualification was sometimes available as an optional extra (Hew, 2015; McAuley, Stewart, Siemens, & Cormier, 2015). Although the two MOOC types currently appear divergent, a wider, political perspective may eventually see a blurring of the distinction between the two (e.g., see discussion in Knox, 2016).

The divergence between collaborative (cMOOC) and course-based (xMOOC) offerings presents learning design and development challenges (Rodriguez, 2012), and further challenges have emerged in the rapidly developing MOOC world. The quality of MOOC offerings has been called into question with regard to learning and cognition, with calls for research into their instructional design (McAndrew & Scanlon, 2013; Moe, 2015; Siemens, 2013) and, in particular, instructional design based in human cognitive architecture (Chen, Woolcott, & Sweller, 2017). Most MOOCs do not base their organisation on current knowledge of human cognition, despite MOOCs appearing to have originated within the learning sciences (Clarà & Barberà, 2014). Many design and development challenges are not related to consideration of cognitive processes, but rather to other factors, such as motivation, participation and study time (Champaign et al., 2014; El-Hmoudova, 2014; Hew, 2015; Petronzi & Hadi, 2016; Zheng, Rosson, Shih, & Carroll, 2015). Nevertheless, a significant confounding issue, particularly in xMOOCs, is the lack of input from learners in design and development (Freitas & Parades, 2018; Stone et al., 2017). Recent MOOCs have tried to resolve this issue, and such learner input can successfully reframe MOOC construction by, for example, scaling problem-based learning (Verstegen, Spruijt, Dolmans, & van Merriënboer, 2016).

This article reviews how some of these challenges may be addressed by outlining the design and development of a new generation MOOC (ngMOOC) as a focal problem of practice for application primarily in an instructional setting related to undergraduate mathematics. Design and development of this ngMOOC was guided by a design-based implementation research (DBIR) approach (Penuel, Fishman, Cheng, & Sabelli, 2016) that emphasised two significant foundations.

  1. Cognitive load theory, a well-established theory founded on human cognitive architecture (Bruer, 2016).
  2. Point-of-contact student feedback, based in a well-tested online system dedicated to enhancing the learning process (Lake, Boyd, Boyd, & Hellmundt, 2017).

The design and development journey through two trial application phases drew on four established principles of design-based implementation research (Fishman et al., 2013; p.136):

  • a focus on persistent problems of practice from multiple stakeholders’ perspectives;
  • a commitment to iterative, collaborative design;
  • a concern with developing theory and knowledge related to both classroom learning and implementation through systematic inquiry; and,
  • a concern with developing capacity for sustaining change in systems.

Study Context

A persistent problem in undergraduate mathematics

In higher education worldwide an increasing number of graduates do not have the requisite mathematics knowledge and skills that the modern industrial workforce requires; graduates present with a wide range of tertiary-level competencies in mathematics and related areas (Bureau of Labor Statistics, 2011; Deloitte Report, 2012; Hanushek & Woessmann, 2010; Holdren & Lander, 2012; Kuenzi, 2008; Office of the Chief Scientist, 2012). Part of the problem relates to courses that have no pre-requisite or assumed quantitative skills—students from diverse backgrounds are entering their tertiary mathematics studies with vastly differing competencies and struggling to successfully complete even preliminary mathematics subjects (Bressoud, 2014; Croft, Harrison, & Robinson, 2009; King & Cattlin, 2015; Peters, 2013). There is, therefore, on-going discussion, globally, regarding the need for a rethink and redesign of mathematics teaching and learning at university and college levels in order to cater for the weak mathematics foundation of some university students (Burdman, 2015; Groen et al., 2015; Hill, Rowan, & Ball, 2005; Lawson, Croft, & Waller 2012; Okimoto & Heck, 2015; Woolcott, Chamberlain, Whannell, & Galligan, 2018a).

The “Bite size maths: Building mathematics capability of low SES students in regional/remote Australia” project (“Bite size maths”) described in this paper is one response to this global situation. The various educational and social settings in Australia—such as regional and non-cosmopolitan universities—reflect the global challenges of student retention and progression in mathematics or academic numeracy (Galligan, 2013; Kennedy, Lyons, & Quinn, 2014; Lyons et al., 2006). “Bite size maths” is dedicated to providing online resources to support undergraduate mathematics and academic numeracy, and its aim is to establish the foundations for a change in the way that online education is offered at university. Its initial focus is on six universities located in regional Australia that offer online and/or blended education across multiple campuses. These universities, like many educational institutions in regional areas, all have a substantial proportion of students who have weak mathematics backgrounds or who completed schooling more than 10 years before commencing higher education (Australian Academy of Science, 2016; Lyons et al., 2006). In addition, large numbers of students are from mid- to low-socioeconomic status (SES), first-in-family, non-English speaking or Indigenous backgrounds, all known risk factors for university-based learning (Burnheim & Harvey, 2016).

The new generation MOOC (ngMOOC) was designed and developed within the “Bite size maths” project as a response to such challenges, focussing on the numeracy required for mathematical competencies in a range of programs such as business, nursing and education. The ngMOOC takes into account the lack of pre-requisite or assumed quantitative skills among entering students, a lack that translates into persistently high levels of mathematics attrition from first-year subjects (Australian Academy of Science, 2016; Galligan, 2013; King & Cattlin, 2015). The ngMOOC provides a resource for optimising outcomes for students who are not prepared for the level of quantitative skills needed in their university studies (Australian Academy of Science, 2016; Galligan, 2013; Woolcott et al., 2018a). An important differentiation of this ngMOOC from previous MOOCs is that the latter are designed either around undergraduate mathematics courses, essentially as xMOOCs (e.g., Daza, Makriyannis, & Rovira Riera, 2013), or around curriculum integration and faculty-student interaction, more in line with cMOOCs (e.g., Bralić & Divjak, 2018).

Design-Based Implementation Research (DBIR)

The overall project, and the ngMOOC within it, is framed in terms of the principles of design-based implementation research (DBIR). DBIR has emerged within the learning sciences as a combination of design-based research, modelled around design and testing of innovation within learning contexts, and implementation research which is allied with implementation of innovations (Fishman et al., 2013; Penuel et al., 2016). The four principles of DBIR (Fishman et al., 2013) (listed above) take up the issue of collaborative research and practice that involves multiple stakeholders, in a process that aims to design, test and implement innovations through iterative functionality. Penuel et al. (2016) record the success of this approach through a range of programs, projects, and partnerships that serve to integrate the four principles, rather than consider each on its own, as an effective way of providing evidence-based research and implementation: “And what makes the development of DBIR worthwhile as an endeavour is that it expands methods available for developing evidence related to the implementation, efficacy, and scaling of innovations.” (Penuel et al., 2016; p.145).

Implementing innovation across various educational settings and levels has become a critical feature of the DBIR trajectory. For example, Penuel, Coburn, and Gallagher (2013) report on stakeholders negotiating problems of practice in collaborative endeavours among researchers, district leaders, and teachers. While these practices may not always be labelled as DBIR, they can be described in terms of the four principles. Woolcott et al. (2017b, 2017c), for example, in a four-year project designed to improve pre-service teachers’ confidence and competence in teaching science and mathematics, describe a similar process of co-creation based on persistent problems, iterative collaborative design and development, and theory and knowledge development related to classroom learning and implementation, as well as a view to systemic change (Scott, Woolcott, Keast, & Chamberlain, 2018). A multi-level systemic overview is an important consideration in such projects, rather than a locked-in focus only on student learning (Means & Anderson, 2013), as is a view towards co-creation “that can allow for institutions and students to work together to improve the student experience and enhance students’ ability to act as partners” (Dollinger, Lodge, & Coates, 2018; p.1).

Human cognition and student feedback

The ngMOOC draws together two comprehensive research fields, human cognitive architecture and point-of-contact feedback, each well established in its own right but rarely combined in a single learning context.

Human cognitive architecture and cognitive load theory

Human cognitive architecture is concerned with the organisation of the structures, functions and processes that allow each person to learn, think and solve problems associated with the biologically secondary knowledge that is central to instructional design, rather than the biologically primary knowledge obtained naturally and effortlessly without instruction (Geary, 2012). A key feature of human cognitive architecture is described in cognitive load theory as comprising a limited working memory, which can only deal with a small amount of new information at a time, and a long-term memory, which can hold an unlimited number of elements (schemas) on a relatively permanent basis (Sweller, Ayres, & Kalyuga, 2011).

Research on human cognitive architecture over the last two decades has sought to better understand what aspects support problem solving and learning, noting that human cognitive architecture and effective instructional design are inseparably intertwined (Sweller et al., 2011). Sweller’s cognitive load theory has become one of the most cited learning theories in contemporary educational design (Bruer, 2016) and is crucial to the success of all forms of computer-based instruction (Chen et al., 2017). Cognitive load theory provides a set of guidelines for instructional design that are predicated on an understanding of human cognition. Comprehensive testing of these principles has given rise to a set of identified cognitive load effects that can be applied in a number of different learning modalities to improve learning (Sweller et al., 2011, and see also Kalyuga, Chandler, & Sweller, 2000; Paas & van Merriënboer, 1994; van Merriënboer & De Croock, 1992).

The ngMOOC discussed in this paper was designed with the principles of cognitive load theory in mind. Three cognitive load effects, in particular, form the basis of the MOOC construction: (a) the worked example effect; (b) the modality effect; and (c) the problem completion effect (Sweller et al., 2011). The ngMOOC draws on reports that the use of video podcasts for learning appears to have a positive effect on student performance (Kay & Kletskin, 2012). Importantly, since these reports leave open the question of the nature of adequate design for interactive podcasts to be effective for student learning (Chen et al., 2017), the ngMOOC uses cognitive load theory as a conceptual basis for podcast construction and use.

Point-of-contact feedback and student learning

Survey questionnaires have been used extensively to improve teaching and learning (Richardson, 2005) but, with few exceptions, students completing questionnaires are not given immediate or meaningful feedback on either the information they provide or the survey results (Lake et al., 2017; Watson, 2003); when feedback is given, it is not always provided in a timely manner (Brookhart, 2008). Parikh, McReelis, and Hodges (2001) and Watson (2003) have argued that point-of-contact feedback is an essential component of student learning that also allows educators to make changes to content to better accommodate student needs. Point-of-contact feedback serves to let students know about different learning approaches, providing guidance on which may be most appropriate in particular contexts, and allows feedback from the students on how well the instructional design has facilitated their learning.

Over many years, Biggs (e.g., 1987, 1999) has developed, tested and refined a robust tool, the Study Process Questionnaire (SPQ), that measures learning approaches, motivation and strategy. More recently, an updated version of the questionnaire (Biggs, Kember, & Leung, 2001) has been adapted as point-of-contact feedback to measure deep and surface learning approaches in undergraduate education contexts (Lake et al., 2017). The use of the SPQ in this context is based on a well-established strand of pedagogical research that not only differentiates between deep and surface approaches to learning, but also demonstrates the superiority of deep learning (Diseth, 2003; Entwistle, Tait, & McCune, 2000). The adaptation of point-of-contact feedback has progressed to focus on the immediate needs of students and has now been used in a variety of learning contexts; feedback for students about their learning approaches, motivations and strategies has been successfully embedded in online course delivery for undergraduate students (Lake et al., 2017).

Method

The “Bite size maths” project was iterative, undertaken in two phases: a Phase 1 pilot program in 2016 and Phase 2 as a follow-on program through 2019. The two phases were examined as embedded case studies (as opposed to a multi-case study approach; see e.g., Yin, 2013) using mixed-methods approaches, presenting an opportunity for case comparison within the DBIR context. Project partners—24 university mathematics and education experts across the six study universities—co-created a baseline data set via a review of national database statistics on disadvantage in regional education, as well as through surveys and semi-structured interviews (Woolcott et al., 2017a). Several face-to-face meetings of experts at the trial university provided valuable feedback about responses to Phase 1; this was integral to the collaborative co-creation, construction and evaluation of the ngMOOC in Phase 2. Student feedback was also utilised in both phases to inform collaborative decision-making.

Phase 1: Co-creation, development and evaluation of five online modules

This pilot phase was conducted on an online learning system within a one-semester introductory mathematics subject at a single university. Volunteer participants from within the subject cohort were randomly assigned to either a treatment group or a control group, both of which participated in five trial modules (Figure 1) designed for online delivery as an optional resource in the introductory subject. Each module was made up of five sections, each containing two short-duration interactive online podcasts and each dedicated to a particular aspect of the topic.


Figure 1. Protocol used for the trial modules within the online learning system. (Used with permission Woolcott, 2017.)

The module design was sequenced so that each of the first four sections built knowledge and skills such that a participant would then be able to complete the tasks in the final section. Each section was structured around the worked example effect (Sweller & Cooper, 1985) and designed so that a participant could complete a pattern of two pairs of worked-example and problem-solving tasks, and then try out (and reinforce) their newly acquired knowledge in a post-test included at the end of each section.

Participants in the treatment group received two pairs of worked-example and problem-solving tasks in each section, a total of ten pairs across the five sections of each module. Participants in the control group were presented with two pairs of problem-solving tasks that were identical to the problems used for the treatment group, but with no worked examples provided; there was, therefore, also a total of ten pairs of problems across the five sections of each module for the control group. Equal durations were allowed for each of the matching problems in the podcasts, so that the treatment and control groups were given the same amount of time for the same problem, whether or not a worked example was presented.

There was a post-test of six multiple-choice questions at the end of each section and, therefore, a total of five post-tests for each module. These questions were based on the problems presented in the sections and provided an opportunity for participants to practise, as well as providing for assessment of learning. The order of the multiple-choice questions in each test was randomised by the online learning system, as was the order of the answer options. The post-tests provided 30 multiple-choice test results for each participant in each module, allowing a determination of learning effectiveness.
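As a sketch of how this kind of randomisation is typically implemented (the data structure, function name and six-question layout below are illustrative only, not the actual online learning system's code):

```python
import random

def randomise_post_test(questions, seed=None):
    """Return a shuffled copy of a post-test: both the order of the
    questions and the order of the answer options within each question
    are randomised, mirroring the online system's randomisation feature."""
    rng = random.Random(seed)  # seedable, so a trial can be reproduced
    shuffled = []
    for q in rng.sample(questions, k=len(questions)):  # shuffle question order
        options = q["options"][:]
        rng.shuffle(options)                           # shuffle answer options
        shuffled.append({"stem": q["stem"],
                         "options": options,
                         "answer": q["answer"]})       # correct answer tracked by value
    return shuffled

# A hypothetical six-question post-test for one section
post_test = [{"stem": f"Question {i}", "options": ["A", "B", "C", "D"], "answer": "A"}
             for i in range(1, 7)]
trial_version = randomise_post_test(post_test, seed=1)
```

Tracking the correct answer by its value, rather than by position, means the scoring logic is unaffected by the shuffling.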

In order to measure cognitive load, a subjective rating survey was placed after each pair of tasks in each section for both treatment and control groups (see Table 1). An online comment box was included at the end of each module, with no limit on word space.

Table 1:  The subjective rating survey for cognitive load


Module design engaged iteration in a pattern typical of DBIR (Penuel et al., 2016), as illustrated in Figure 2.


Figure 2. Iteration pattern of module design

Data were collected on:

  • the number of participants who attempted modules/sections;
  • the number of attempts at modules/sections completion;
  • the results of online cognitive load surveys in each section;
  • results of the post-tests; and,
  • feedback from open comment boxes.

The use of the online learning system also allowed the researchers to collect data from a participant’s initial attempts and ignore data from subsequent repeated attempts, since the modules were designed for interactivity and repeated use. Data for all students in the introductory subject cohort were available from an online multiple-choice test at the start of the teaching session and a face-to-face written exam at the end, serving as an overall pre-test and post-test for students who completed a number of modules on different subject topics.

Phase 2: Co-creation, development and evaluation of the ngMOOC

Data analysis from the Phase 1 pilot informed the development and subsequent construction of the Phase 2 ngMOOC. This MOOC comprised 20 interactive modules for use together or independently on a web hosting service via online subscription. An overall aim was to be able to embed single modules, as interactive online podcasts, or embed the entire 20 modules as the ngMOOC—a novel learning approach in university mathematics programs. These 20 modules continued the Phase 1 focus on cognitive load theory’s design principles, with the addition of the completion effect in the form of “faded” worked examples (Figure 3). These faded worked examples provide the start of the working, but remove (fade out) one or more of the worked solution steps (Renkl, Atkinson, & Große, 2004).
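As an illustration only (this equation is not drawn from the ngMOOC materials), a faded worked example for a simple linear equation might present the first solution step and fade the second, leaving the learner to supply it:

```latex
% Hypothetical faded worked example: step 1 is shown, step 2 is faded
\begin{align*}
  \text{Solve } 2x + 6 &= 14 \\
  2x &= 14 - 6 = 8 && \text{(worked step, shown)} \\
  x  &= \underline{\hspace{1.5cm}} && \text{(faded step, completed by the learner)}
\end{align*}
```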

In the ngMOOC modules, looped pathways combined with online rapid assessment allowed the introduction of several enhanced features based on feedback from Phase 1. These included:

  • the addition of point of contact (POC) feedback, including the study process questionnaire (SPQ);
  • the addition of both module and section pre-tests;
  • a repeat option for students with post-test score of less than 2;
  • the addition of faded worked examples as a ‘second loop’ option for students with post-test scores from 2 to 5 (to reduce cognitive load and support solution path formation during subsequent learning);
  • an out option for students with high expertise (pre-test scores of 10 for a module and 6 for a section) enabling those with high levels of expertise to skip modules or sections; and,
  • the choice to do modules in any order.
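The pathway options above can be summarised as routing logic. The sketch below uses only the thresholds stated (section post-tests out of 6, module pre-tests out of 10); the function name and the assumption that a full post-test score proceeds to the next section or module are illustrative, not taken from the ngMOOC implementation:

```python
def module_pathway(pre_test_score, post_test_score=None):
    """Route a learner through one ngMOOC module based on test scores.

    pre_test_score:  module pre-test result out of 10
    post_test_score: section post-test result out of 6 (None before any attempt)
    """
    if pre_test_score == 10:              # high expertise: 'out option', skip ahead
        return "skip module"
    if post_test_score is None:
        return "attempt module"
    if post_test_score < 2:               # repeat the standard section
        return "repeat section"
    if post_test_score <= 5:              # second loop with faded worked examples
        return "faded worked examples"
    return "proceed to next section/module"   # assumed behaviour at full marks
```

Making the routing a pure function of the scores keeps it easy to test and to embed in a looped online delivery system.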

Additional changes were the adoption of print, rather than handwriting (onscreen), as a preferred presentation mode, the continuation of animation in delivery, and no duration limits within modules/sections.


Figure 3. An outline of the common structure of each module of the ngMOOC, showing learning pathways and the placement of the point-of-contact feedback (POC), including the study process questionnaire (SPQ). (Used with permission Woolcott, 2017.)

Results and Discussion

In this section, the experience of running Phases 1 and 2 of this project is used to demonstrate the importance of the four DBIR principles (Fishman et al., 2013) in informing and guiding the development of the ngMOOC.

Principle 1: Collaboration around persistent problems of practice and multiple stakeholder perspectives

Despite recognition of the need for a well-developed mathematical skill set in the contemporary world, Australia differs from many countries, including the USA and China, in that mathematics is not a requirement for high school graduation and university admission. Consequently, there is a persistent problem of practice: relatively few students are choosing career pathways that require some study of mathematics, or students do not have the appropriate mathematics for particular disciplines of study (Chubb et al., 2012; Finkel, 2018). Additionally, over the past 15 years, there has been a decline in the number of students studying mathematics at high school (Mack & Walsh, 2013). Boyd, Foster, Smith, and Boyd (2014) have also shown that, for many university students struggling with introductory mathematics, low perceptions of their capacity for success in mathematics study can result in increased anxiety which, in turn, can set in motion a cycle of self-fulfilling failure.

Australia has responded proactively to this persistent problem with a number of funded research initiatives, including the “Bite size maths” project described here, designed to make the mathematical sciences an attractive choice for study at both high school and university levels (Office of the Chief Scientist, 2016). Collaboration within the ngMOOC project was built around multiple stakeholder perspectives, involving contributions to design and development from all 24 university partners, as well as feedback from these partners and from students who completed modules in Phase 1 and Phase 2. Many of the team involved in “Bite size maths” have a history of successful co-creation of resources with university or school students and staff as collaborative researchers and practitioners (Woolcott et al., 2017b; 2017c), and the project team was able to leverage these relationships in opening opportunities within the ngMOOC part of the project (Woolcott et al., 2017c).

Analysis of the in-depth interviews and feedback from project partners supported the view that many students are ill prepared when they enter preparatory and introductory mathematics programs. While students experience difficulties with algebra, fractions, graphs, logarithms and unit conversions, there is a more fundamental barrier to student success in introductory undergraduate mathematics. Quite simply, many are not au fait with the language and conventions of mathematics, and this impedes their learning. The interviewees were positive about embedding opportunities for students to keep practicing until they had mastery of a particular concept—a goal of the broader “Bite size maths” project—but feedback from interviewees stressed the importance of ensuring that such modules be interactive and complement existing unit structure, and of keeping students on-task until completion of any such modules attempted.

Several of the project team undertook a meta-analysis of research on first year undergraduate mathematics attrition and the mechanisms through which this problem is being addressed (Lake et al., 2017). They determined that the most helpful research identified gaps in student mathematical knowledge, providing insights into how to best identify at-risk students, and suggested ways to assist these students. Interventions to support students struggling with introductory mathematics might be loosely grouped under two categories—those that involve mentoring and building student motivation, and those that focus on the learning content itself. These approaches are not mutually exclusive and many successful interventions have drawn on both (e.g., see Croft et al., 2009).

The “Bite size maths” team, therefore, acted in full awareness of the need for all stakeholders to have a say in how the project progressed, including via feedback at various stages of design, development and trialling. In essence, a complexity thinking framework (Scott et al., 2018) was adopted in the ngMOOC, where multiple stakeholders in the university staff and student community were made aware of project goals as effective working boundary conditions. These goals, and timelines for goal delivery, were determined in early group meetings by a core group of university partners, based on the government funding agreement and partner objectives localised to the needs of their own university environment, that is, the needs of their undergraduate mathematics learning environments.

As a result of this approach, the design and development of the Phase 1 trial emerged from the collaborative workings of the core group and their interactions with other stakeholders as recorded in meeting notes, journals, minutes and workshop records. The decision to conduct the Phase 1 trial at a single university was agreed upon, for example, by collaboration teams as a way of staying within the timeframe of the project goals. The core group was able to identify development strategies that enabled delivery of project goals by drawing on expertise from previous collaborations, including in data management, videography, computer-based learning interventions as online podcasts, design of instructional materials and instructional design and pedagogy.

A core group of project participants was able to coordinate a further focus on the persistent problem by using feedback from Phase 1 in the scaling up and refinement processes undertaken in Phase 2 (Table 2).

Table 2:  Feedback from Phase 1 and its accommodation in Phase 2.


Principle 2: A commitment to iterative, collaborative design

“Collaborative design research often focuses on the development and testing of usable tools for improving teaching and learning in specific subject matter domains and settings.” (Penuel et al., 2016; p.8)

In the early design and development of the modules in Phase 1, stakeholders decided that an iterative design would deliver the teaching and learning value of the modules efficiently, given a limited budget. The ngMOOC and module structures were also iterated internally, for construction efficiency and to allow for incremental learning capability. Collecting evidence during and after a process iteration is typical of the DBIR process; in this case, however, it also included the enhanced capability of feedback from the online system itself. As Means and Anderson (2013; p.21) report, “When the learning session includes digital interaction, the digital learning system can collect data automatically, and those data can be combined with the knowledge collected by practitioners or researchers in the offline world for a more complete picture.”

Iterative design is not necessarily a good option for MOOCs, since they are generally offered in irregular patterns and directed at interventions, making design-based approaches difficult to entertain for repeated offerings (Gašević, Kovanović, Joksimović, & Siemens, 2014). In the ngMOOC, however, there were two iterations within the design and development process itself: the Phase 2 iteration was based on feedback from stakeholders to improve the practice trialled in the Phase 1 iteration. In both phases, therefore, designers and learners engaged in co-designed and co-created research-practice partnerships, involving people in the design of their own learning (Penuel, Roschelle, & Shechtman, 2007; Penuel, Allen, Coburn, & Farrell, 2015). This is fully consistent with an application of DBIR in the MOOC context, in which “research can be conceived as a form of mutual capacity building at scale” (DiSalvo, 2017; p.29).

Within iterations, the collaborative partnerships drew on considerable in-kind support in terms of commitment to group meetings, review of processes and materials, and semi-structured interviews and surveys. This allowed the budget to be directed to script writing and online production with a few dedicated staff. Module design was taken back to partners at workshops and focus groups to ensure that the modular process trialled in Phase 1 could be scaled up as the ngMOOC of Phase 2. This stakeholder feedback also ensured that the content of the online modules was fit for purpose and graduated for the incremental learning necessary for long-term memory gains (Hew, 2015).

The commitment to collaborative design was enhanced by a common desire, a shared collaborative intentionality (Mesoudi, 2016), to provide a resource that could be accessed online at any time within a given subject offering. The shared intentionality was moderated by feedback from students who undertook modules within trials, providing a broader collaboration that included end-users as well as university partners and the ngMOOC construction team.

Principle 3: A concern with developing theory and knowledge related to both learning and implementation through systematic inquiry

The ngMOOC was designed and developed with a definite strategy in mind. That strategy combined two features of shared open educational resources and online learning systems previously underutilised in MOOCs: a basis in human cognitive architecture (via the principles and effects of cognitive load theory); and the use of point-of-contact feedback for MOOC end users. The project was, therefore, based in two fields, each having well developed theoretical and knowledge dimensions, but with both fields yet to be applied together. This combined application required a systematic two-part approach: (a) to first determine the effectiveness of the key cognitive load effects being implemented, primarily the worked example effect; and (b) to then include the point-of-contact feedback in combination with these and other cognitive load effects, with a focus also on the problem completion effect.

Given such a systematic inquiry focus, the project strategy was to first trial five modules to ascertain how an application of the worked example effect in interactive online podcasts could inform both cognitive load theory and module design and development (Chen et al., 2017). Analysis of Phase 1 trials favoured the worked example condition, although there were insufficient data to establish a significant treatment effect (due to confounding of zero scores and non-completions in post-tests). For example, Table 3 shows means and standard deviations from the analysis of total post-test scores for each participant in each section of two of the modules, used to test for significant differences between groups using analysis of variance. The repeated measures analysis was conducted on the 10 tests for each of the modules, with worked examples versus problems as the independent variable.

A repeated measures one-way ANOVA produced F(1, 32) = 6.91, p = .013 (less than .05), indicating a significant difference between the experimental (high guidance) and control (low guidance) groups (with non-completions treated as zero scores). Mean values for the high guidance group were always significantly higher than for the low guidance group (MSe = 5.48), although the effect size was relatively small, ηp2 = .178. These results indicate that using worked examples to structure an online learning environment may be superior to a problem-based learning environment.
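For readers unfamiliar with the mechanics of the F test reported above, the between-groups comparison can be sketched in a few lines of Python. The scores, group sizes and function below are illustrative inventions, not the study’s data or analysis code:

```python
# Minimal one-way ANOVA (two groups) in pure Python.
# The scores below are invented for illustration; they are NOT the study data.

def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                                   # number of groups
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups sum of squares: weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: squared deviations from each group's mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical per-student mean post-test scores for each condition
high_guidance = [7.5, 8.0, 6.5, 9.0, 7.0, 8.5]   # worked examples
low_guidance = [5.0, 6.0, 4.5, 7.0, 5.5, 6.5]    # problem solving

f, dfb, dfw = one_way_anova(high_guidance, low_guidance)
print(f"F({dfb}, {dfw}) = {f:.2f}")   # → F(1, 10) = 13.71
```

A larger F indicates that the between-groups variance dominates the within-groups variance; the study’s repeated measures design additionally accounts for the same participants being tested across the 10 tests, which this simplified between-groups sketch does not model.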

Table 3:  Means (standard deviations) values for Tests 1 to 10 in Modules 1 and 2.


Students who completed the Phase 1 trials considered the modules to be of value in learning, as the following anonymous online comments attest:

“A great way to study. It focused on one aspect of a single topic which I liked.”

“Yes this was beneficial for my learning, thank you.”

“I’m using Bite Size maths as revision for the exam. the repetitive nature of this test is helping me really master this topic.”

Even though analysis of the Likert scale data obtained in Phase 1 from the cognitive load surveys did not yield significant results when subjected to ANOVA, control group student comments supported the preference for worked examples. Typical comments included:

“The videos needed to show the working out. A step by step guide would of been appreciated.”

“It would be better if it provided step by step instructions for questions as I got stuck on a few.”

“Somewhat helpful but does not explain how to arrive at the correct answer.”

Module design and development in Phase 2 was informed by the high dropout rates in Phase 1, typical of MOOCs, which suggested that larger samples were needed in the subsequent Phase 2 trials. This was reinforced by pre-test and post-test results in the subject itself (rather than the modules), which were inconclusive in determining the effect of the limited content coverage of the five trial modules compared with the total subject content coverage. Feedback from these trials also indicated that system delivery needed to be fully automated and accessible through a public internet portal, rather than an internal university learning management system.

Both the SPQ and POC had been trialled at the study university and have been shown to enhance the student experience while providing researchers with data about student learning approaches that had not previously been used by university teachers (Lake et al., 2017). In Phase 2, students were required to complete the SPQ prior to the pre-test at the beginning of each module and again at the end, providing comparative data on whether student motives (deep or surface) had altered during the module. Students were provided with immediate feedback in the form of responses determined by their level of agreement. Table 4 provides an example of the feedback provided for Question 7 in the SPQ (deep motive).
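The point-of-contact mechanism described above can be sketched as a simple mapping from a Likert response to an immediate feedback message. The item text, scale labels and feedback messages below are hypothetical placeholders, not the actual SPQ item or the feedback used in the study:

```python
# Sketch of point-of-contact feedback keyed to a Likert response.
# Question text and feedback messages are PLACEHOLDERS, not the study's SPQ content.

QUESTION = "Studying gives me a feeling of personal satisfaction."  # hypothetical item

# Feedback varies with the student's level of agreement
# (1 = rarely true of me ... 5 = almost always true of me).
FEEDBACK = {
    1: "Consider linking study topics to your own goals to build interest.",
    2: "Try reflecting on what you gained after each study session.",
    3: "You sometimes enjoy studying; notice which topics engage you most.",
    4: "Your studying is often satisfying, a sign of a deep motive.",
    5: "Studying is highly satisfying for you, consistent with deep learning.",
}

def point_of_contact_feedback(response: int) -> str:
    """Return immediate feedback for a 1-5 Likert response."""
    if response not in FEEDBACK:
        raise ValueError("Likert response must be an integer from 1 to 5")
    return FEEDBACK[response]

print(point_of_contact_feedback(4))
```

The design choice is that feedback is delivered at the moment of response rather than after the survey closes, which is what distinguishes point-of-contact feedback from conventional end-of-survey reporting.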

Table 4:  Example of question and responses in the study process questionnaire (SPQ)


Principle 4: A concern with developing capacity for sustaining change in systems

Sustainable system change was not an obvious goal in initial planning or in Phase 1. This phase did, however, rely on team members who, as individual implementers, brought significant skills and expert knowledge to the project. The skills and expertise included discipline content, awareness of regional student education, professional expertise, mentoring ability and project management. Optimising the efficacy of such hybrid teams is important (Lake et al., 2018; Scott et al., 2018; Woolcott & Chamberlain, 2018).

In the “Bite size maths” project, therefore, people worked together on common goals, building on a network of prior relationships, as well as drawing upon the elements of cohesion and mutual respect available from newer team members, key attributes of interdisciplinary teams (Lakhani, Benzies, & Hayden, 2012). As a result, there were clear flows of communication, and systematic and structured approaches were mentored by experienced researchers who understood their own capabilities and those of their research partners. Relationships during Phase 1 and the subsequent developmental processes in Phase 2 can be considered within a continuum of cooperation, coordination and collaboration of people working together (Table 5).

It can be argued that any system change (in bold in Table 5), whether sustained or not, requires a high-trust relationship, evidenced by stable relations within the team structure and thicker communication flows as the project develops. In Phase 1, the team began in a medium-trust relationship, where several team members already had experience of working together and were prepared to accommodate and understand the nature of the decision-making processes, communication and working preferences of known co-creators. As the project moved into Phase 2, the feedback and feed-forward to stakeholders about module function, and the role of theory, design and implementation in this function, produced evidence of such a high-trust relationship. There was an increase in tactical information sharing based on interdependent goals as the team became more committed to systems change, the main intention of the project. There was also a move towards reconfiguring the team structure to allow for pooled resources and accountability to the collaborative network first and foremost.

Table 5:  An integrated view of cooperation, coordination and collaboration in research project networks. (Adapted from Keast and Mandell (2014), used with permission.)


Developing capacity for sustaining change in systems, therefore, may require training to build skills of individual implementers, but this can be achieved within a project provided that there is an awareness of which category in Table 5 is the goal of research and implementation networks. An additional and/or alternative perspective can be taken that sees individuals as part of a systems ecology, where support is directed towards excellence and quality through relationship networks at various levels and dimensions (Woolcott et al., 2018b).

Conclusion

This article elaborates an approach to designing an ngMOOC that expands the potential for universities to provide educational outcomes based in learners’ needs while remaining within a prescribed curriculum. The ngMOOC development outlined here is a beginning for the adoption of a cognitive architecture in conjunction with feedback systems, offering the groundwork for the use of adaptive systems that cater for learner expertise. Within this development context, DBIR offers a framework that seems especially useful in guiding the construction and development of online learning that is self-paced and curriculum-based, while at the same time facilitating the scaling up of programs developed within the framework (Penuel, Fishman, Cheng, & Sabelli, 2011). The ngMOOC was designed to improve system-level outcomes, and DBIR offers a design process that is not top-down but rather is owned by all stakeholders, including local actors who contribute feedback to the design process. The DBIR approach is scalable even if there are limitations on practitioner knowledge: another group whose practitioner knowledge differed from that of the “Bite size maths” team could develop a scalable project in a different way using DBIR.

References

  1. Australian Academy of Science (2016). The mathematical sciences in Australia: A vision for 2025. Canberra, Australia: Australian Academy of Science.
  2. Bali, M. (2014). MOOC pedagogy: gleaning good practice from existing MOOCs. Journal of Online Learning and Teaching, 10, 44-56.
  3. Biggs, J. (1987). The study process questionnaire (SPQ): Manual. British Journal of Educational Psychology, 68, 395-407.
  4. Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 18, 57-75.
  5. Biggs, J., Kember, D., & Leung, D. Y. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133-149.
  6. Boyd, W., Foster, A., Smith, J., & Boyd, W. E. (2014). Feeling good about teaching mathematics: Addressing anxiety amongst pre-service teachers. Creative Education, 5, 207-217.
  7. Bozkurt, A., Akgün-Özbek, E., & Zawacki-Richter, O. (2017). Trends and patterns in Massive Open Online Courses: Review and content analysis of research on MOOCs (2008-2015). The International Review of Research in Open and Distributed Learning, 18(5). doi.org/10.19173/irrodl.v18i5.3080.
  8. Bralić, A., & Divjak, B. (2018). Integrating MOOCs in traditionally taught courses: achieving learning outcomes with blended learning. International Journal of Educational Technology in Higher Education, 15(1). doi: 10.1186/s41239-017-0085-7.
  9. Bressoud, D. M. (2014). Attracting and retaining students to complete two-and four-year undergraduate degrees in STEM: The role of undergraduate mathematics education. Commissioned paper prepared for the Committee on Barriers and Opportunities in Completing 2-Year and 4-Year STEM Degrees. Washington, DC: National Academy of Sciences.
  10. Brookhart, S. M. (2008). How to give effective feedback to your students. Alexandria, VA: Association of Supervision and Curriculum Development.
  11. Bruer, J. T. (2016). Where Is Educational Neuroscience? Educational Neuroscience, 1, 1-13.
  12. Burdman, P. (2015). Degrees of freedom: Diversifying math requirements for college readiness and graduation. Oakland, CA: Learning Works and Policy Analysis for California Education.
  13. Bureau of Labor Statistics (2011). 2010–11 Occupational Outlook Handbook. Retrieved from https://www.bls.gov/ooh/home.htm
  14. Burnheim, C., & Harvey, A. (2016). Far from the studying crowd? Regional and remote students in higher education. In A. Harvey, C. Burnheim & M. Brett (Eds.), Student equity in Australian higher education (pp. 143-162). Singapore: Springer.
  15. Champaign, J., Colvin, K. F., Liu, A., Fredericks, C., Seaton, D., & Pritchard, D. E. (2014). Correlating skill and improvement in 2 MOOCs with a student’s time on tasks. Proceedings of the first ACM conference on Learning @ scale conference, ACM, New York, USA, 11-20.
  16. Chen, O., Woolcott, G., & Sweller, J. (2017). Using cognitive load theory to structure MOOCs and other computer-based learning. Journal of Computer Assisted Learning, 33(4), 293-305. doi:10.1111/jcal.12188
  17. Chubb, I., Findlay, C., Du, L., Burmester, B., & Kusa, L. (2012). Mathematics, engineering and science in the national interest. Canberra, Australia: Office of the Chief Scientist.
  18. Clarà, M., & Barberà, E. (2014). Three problems with the connectivist conception of learning. Journal of Computer Assisted Learning, 30, 197-206.
  19. Croft, A., Harrison, M., & Robinson, C. (2009). Recruitment and retention of students–an integrated and holistic vision of mathematics support. International Journal of Mathematical Education in Science and Technology, 40, 109-125.
  20. Daza, V., Makriyannis, N., & Rovira Riera, C. (2013). MOOC attack: closing the gap between pre-university and university mathematics. Open Learning: The Journal of Open, Distance and e-Learning, 28(3), 227-238.
  21. Deloitte Report (2012). Measuring the economic benefits of mathematical science research in the UK. London, UK: Deloitte MCS Ltd.
  22. DiSalvo, C. (2017). Viewing participatory design from the learning sciences and the field of design. In B. DiSalvo, J. Yip, E. Bonsignore & C. DiSalvo (Eds.), Participatory design for learning: Perspectives from practice and research (pp. 28-42). New York, NY: Routledge.
  23. Diseth, A. (2003). Personality and approaches to learning as predictors of academic achievement. European Journal of Personality, 17, 143-155.
  24. Dollinger, M., Lodge, J., & Coates, H. (2018). Co-creation in higher education: Towards a conceptual model. Journal of Marketing for Higher Education, 28(2), 1-22. doi: 10.1080/08841241.2018.1466756.
  25. Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53, 109-132.
  26. El-Hmoudova, D. (2014). MOOCs motivation and communication in the cyber learning environment. Procedia-Social and Behavioral Sciences, 131, 29-34.
  27. Entwistle, N., Tait, H., & McCune, V. (2000). Patterns of response to an approaches to studying inventory across contrasting groups and contexts. European Journal of Psychology of Education, 15, 33-48.
  28. Finkel, A. (2018). Winning the game of Faculty. Universities Australia Higher Education Conference Dinner Address, Parliament House, Canberra, Wednesday 28th February 2018. Retrieved from http://www.chiefscientist.gov.au/wp-content/uploads/Universities-Australia-dinner-address.pdf
  29. Fishman, B. J., Penuel, W. R., Allen, A. R., Cheng, B. H., & Sabelli, N. O. R. A. (2013). Design-based implementation research: An emerging model for transforming the relationship of research and practice. National Society for the Study of Education, 112, 136-156.
  30. Freitas, A., & Paredes, J. (2018). Understanding the faculty perspectives influencing their innovative practices in MOOCs/SPOCs: a case study. International Journal of Educational Technology in Higher Education, 15(1). doi: 10.1186/s41239-017-0086-6.
  31. Galligan, L. (2013). A systematic approach to embedding academic numeracy at university. Higher Education Research & Development, 32, 734-747.
  32. Gašević, D., Kovanović, V., Joksimović, S., & Siemens, G. (2014). Where is research on massive open online courses headed? A data analysis of the MOOC Research Initiative. The International Review of Research in Open and Distributed Learning, 15(5).
  33. Geary, D. C. (2012). Evolutionary educational psychology. In K. Harris, S. Graham, & T. Urdan (Eds.), APA Educational Psychology Handbook (Vol. 1, pp. 597-621). Washington, D.C.: American Psychological Association.
  34. Groen, L., Coupland, M., Langtry, T., Memar, J., Moore, B., & Stanley, J. (2015). The mathematics problem and mastery learning for first-year, undergraduate STEM students. International Journal of Learning, Teaching and Educational Research, 11, 141-160.
  35. Hanushek, E., & Woessmann, L. (2010). The high cost of low educational performance: The long-run economic impact of improving PISA outcomes. Paris, France: OECD Publishing.
  36. Hew, K. F. (2015). Promoting engagement in online courses: What strategies can we learn from three highly rated MOOCs. British Journal of Educational Technology, 47, 320-341.
  37. Hill, H. C., Rowan, B., & Ball, D. L. (2005). Effects of teachers’ mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42, 371-406.
  38. Holdren, J., & Lander, E. (2012). Report to the President – Engage to excel: Producing one million additional college graduates with degrees in Science, Technology, Engineering, and Mathematics. Washington, DC: President’s Council of Advisors on Science and Technology.
  39. Kalyuga, S., Chandler, P., & Sweller, J. (2000). Incorporating learner experience into the design of multimedia instruction. Journal of Educational Psychology, 92, 126-136.
  40. Kay, R., & Kletskin, I. (2012). Evaluating the use of problem-based video podcasts to teach mathematics in higher education. Computers & Education, 59, 619-627.
  41. Keast, R., & Mandell, M. (2014). The collaborative push: Moving beyond rhetoric and gaining evidence. Journal of Management and Governance, 18, 9-28.
  42. Kennedy, J. P., Lyons, T., & Quinn, F. (2014). The continuing decline of science and mathematics enrolments in Australian high schools. Teaching Science, 60, 34-46.
  43. King, D., & Cattlin, J. (2015). The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia. International Journal of Mathematical Education in Science and Technology, 46(7), 1032-1045.
  44. Knox, J. (2016). Posthumanism and the massive open online course. New York, NY: Routledge.
  45. Kuenzi, J. (2008). Science, technology, engineering, and mathematics (STEM) education: Background, federal policy, and legislative action. Retrieved from http://www.fas.org/sgp/crs/misc/RL33434.pdf
  46. Lake, W., Boyd, W., Boyd, W., & Hellmundt, S. (2017). Just another student survey? Point of contact survey feedback enhances the student experience and lets researchers gather data. Australian Journal of Adult Learning, 57, 82-104.
  47. Lake, W., Wallin, M., Boyd, W. E., Woolcott, G., Boyd, W., Foster, A., & Markopoulos, C. (2018). Optimising the efficacy of hybrid academic teams: Lessons from a systematic review process. Australian Universities’ Review, 60(1), 16-24.
  48. Lake, W., Wallin, M., Woolcott, G., Boyd, W. E., Foster, A., Markopoulos, C., & Boyd, W. (2017). Applying an alternative mathematics pedagogy for students with weak mathematics: Meta-analysis of alternative pedagogies. International Journal of Mathematical Education in Science and Technology, 48, 215-228.
  49. Lakhani, J., Benzies, K., & Hayden, K. A. (2012). Attributes of interdisciplinary research teams: A comprehensive review of the literature. Clinical & Investigative Medicine, 35(5), E260-E265.
  50. Lawson, D., Croft, T., & Waller, D. (2012). Mathematics support past, present and future. Proceedings of the International Conference on Innovation, Practice and Research in Engineering Education, 18-20. Loughborough University, UK: Centre for Engineering and Design Education.
  51. Liyanagunawardena, T. R., Adams, A. A., & Williams, S. A. (2013). MOOCs: A systematic study of the published literature 2008-2012. The International Review of Research in Open and Distributed Learning, 14, 202-227.
  52. Lyons, T., Cooksey, R., Panizzon, D., Parnell, A., & Pegg, J. (2006). Science, ICT and mathematics education in rural and regional Australia: The SiMERR national survey. A research report prepared for the Department of Education, Science and Training. Armidale, Australia: National Centre of Science, ICT and Mathematics Education for Rural and Regional Australia, and the University of New England.
  53. Mack, J., & Walsh, B. (2013). Mathematics and science combinations NSW HSC 2001-2011 by gender. Technical paper. Retrieved from http://www.maths.usyd.edu.au/u/SMS/MWW2013.pdf
  54. Mackness, J., Waite, M., Roberts, G., & Lovegrove, E. (2013). Learning in a small, task–oriented, connectivist MOOC: Pedagogical issues and implications for higher education. The International Review of Research in Open and Distributed Learning, 14(4). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1548/2636?utm_source
  55. Mason, M. (2008). Complexity theory and the philosophy of education. Educational Philosophy and Theory, 40(1), 4-18.
  56. McAndrew, P., & Scanlon, E. (2013). Open learning at a distance: Lessons for struggling MOOCs. Science, 342(6165), 1450-1451.
  57. McAuley, A., Stewart, B., Siemens, G., & Cormier, D. (2015). The MOOC model for digital practice (2010). Retrieved from http://www.elearnspace.org/Articles/MOOC_Final.pdf
  58. Means, B., & Anderson, K. (2013). Expanding evidence approaches for learning in a digital world. Washington, DC: US Department of Education Office of Educational Technology.
  59. van Merriënboer, J. J., & De Croock, M. B. (1992). Strategies for computer-based programming instruction: Program completion vs. program generation. Journal of Educational Computing Research, 8, 365-394.
  60. Mesoudi, A. (2016). Cultural evolution: Integrating psychology, evolution and culture. Current Opinion in Psychology, 7, 17-22.
  61. Moe, R. (2015). MOOCs as a Canary: A Critical Look at the Rise of EdTech. Proceedings of the E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (Vol. 2015, No. 1, pp. 1037-1042). Hawaii: Association for the Advancement of Computing in Education (AACE), Chesapeake, VA
  62. Office of the Chief Scientist (2012). Mathematics, engineering and science in the national interest. Canberra, Australia: Office of the Chief Scientist.
  63. Office of the Chief Scientist (2016). Australia’s STEM Workforce: Science, Technology, Engineering and Mathematics. Canberra, Australia: Australian Government.
  64. Okimoto, H., & Heck, R. (2015). Examining the impact of redesigned developmental math courses in community colleges. Community College Journal of Research and Practice, 39, 633-646.
  65. Paas, F. G., & van Merriënboer, J. J. (1994). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86, 122-133.
  66. Parikh, A., McReelis, K., & Hodges, B. (2001). Student feedback in problem based learning: A survey of 103 final year students across five Ontario medical schools. Medical Education, 35, 632-636.
  67. Penney, C. G. (1989). Modality effects and the structure of short-term verbal memory. Memory & Cognition, 17, 398-422.
  68. Penuel, W. R., Allen, A. R., Coburn, C. E., & Farrell, C. (2015). Conceptualizing research–practice partnerships as joint work at boundaries. Journal of Education for Students Placed at Risk (JESPAR), 20, 182-197.
  69. Penuel, W. R., Coburn, C. E., & Gallagher, D. J. (2013). Negotiating problems of practice in research-practice design partnerships. Yearbook of the National Society for the Study of Education, 112, 237-255.
  70. Penuel, W. R., Fishman, B. J., Cheng, B. H., & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational researcher, 40, 331-337.
  71. Penuel, W. R., Fishman, B. J., Cheng, B. H., & Sabelli, N. (2016). Organizing research and development at the intersection of learning, implementation, and design. Annual Review of Policy Design, 4, 1-10.
  72. Penuel, W. R., Roschelle, J., & Shechtman, N. (2007). Designing formative assessment software with teachers: An analysis of the co-design process. Research and Practice in Technology Enhanced Learning, 2, 51-74.
  73. Peters, M. L. (2013). Examining the relationships among classroom climate, self-efficacy, and achievement in undergraduate mathematics: A multi-level analysis. International Journal of Science and Mathematics Education, 11, 459-480.
  74. Petronzi, D., & Hadi, M. (2016). Exploring the factors associated with MOOC engagement, retention and the wider benefits for learners. European Journal of Open, Distance and E-learning, 19(2), 112-129.
  75. Renkl, A., Atkinson, R. K., & Große, C. S. (2004). How fading worked solution steps works–a cognitive load perspective. Instructional Science, 32, 59-82.
  76. Richardson, J. T. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30, 387-415.
  77. Rodriguez, C. O. (2012). MOOCs and the AI-Stanford like courses: Two successful and distinct course formats for massive open online courses. European Journal of Open, Distance and E-Learning, 15(2). Retrieved from http://www.eurodl.org/materials/contrib/2012/Rodriguez.pdf
  78. Scott, A., Woolcott, G., Keast, R., & Chamberlain, D. (2018). Sustainability of collaborative networks in higher education research projects: Why complexity? Why now? Public Management Review, 20(7), 1068-1087.
  79. Siemens, G. (2008). MOOC or mega-connectivism course. Retrieved from http://ltc.umanitoba.ca/connectivism/?p=53
  80. Siemens, G. (2013). Massive open online courses: Innovation in education. Open Educational Resources: Innovation, Research and Practice, 5, 5-15.
  81. Steffens, K., Bannan, B., Dalgarno, B., Bartolomé, A. R., Esteve-González, V., & Cela-Ranilla, J. M. (2015). Recent developments in technology- enhanced learning: A critical assessment. RUSC: Universities and Knowledge Society Journal, 12(2). 73-86.
  82. Stone, M. L., Kent, K. M., Roscoe, R. D., Corley, K. M., Allen, L. K., & McNamara, D. S. (2017). The design implementation framework. In R. D. Roscoe, S. D. Craig & I. Douglas (Eds.), End-User considerations in educational technology design (pp. 76-98). Hershey, PA: IGI Global.
  83. Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. New York, NY: Springer.
  84. Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2, 59-89.
  85. Verstegen, D. M., Spruijt, A., Dolmans, D., & van Merriënboer, J. J. (2016). Problem-based learning in a MOOC – Exploring an innovative instructional design at a large scale. Proceedings of the 8th International Conference on Computer Supported Education – Volume 2, CSEDU, 369-377. doi: 10.5220/0005757003690377.
  86. Waldrop, M. M. (2013). Campus 2.0. Nature, 495(7440), 160-163.
  87. Watson, S. (2003). Closing the feedback loop: Ensuring effective action from student feedback. Tertiary Education and Management, 9, 145.
  88. Woolcott, G., & Chamberlain, D. (2018). Measuring a university-community collaboration using social network analysis. International Journal of Learning and Change, 11(1), 18.
  89. Woolcott, G., Chamberlain, D., Whannell, R., & Galligan, L. (2018a). Examining undergraduate student retention in mathematics using network analysis and relative risk. International Journal of Mathematical Education in Science and Technology, 50(3). doi: 10.1080/0020739X.2018.1520932.
  90. Woolcott, G., Keast, R., Tsasis, P., Charles, M., Farr-Wharton, B., Kivits, R., & Chamberlain, D. (2018b). A network connectivity framework for person-centred service models. Panel paper presented at the 22nd Annual International Research Society for Public Management Conference, IRSPM2018, University of Edinburgh Business School, Edinburgh, UK, 11-13 April 2018.
  91. Woolcott, G., Mason, R., Markopoulos, C., Boyd, W., Chen, O., Seton, C., Lake, W., Whannell, R., Foster, A., Galligan, L., Marshman, M., Schmalz, J., & Sultanova, N. (2017a). Bite size maths—Building mathematics low SES student capability in regional/remote Australia. Final Report 2017 for the Higher Education Participation and Partnerships Programme (HEPPP) 2015 National Priorities Pool, Australian Government Department of Education and Training. Canberra, Australia: Australian Government.
  92. Woolcott, G., Scott, A., Norton, M., Whannell, R., Galligan, L., Marshman, M., Pfeiffer, L., & Wines, C. (2017b). It’s part of my life: Engaging university and community to enhance science and mathematics education. Final report for Enhancing the Training of Mathematics and Science Teachers. Canberra, Australia: Department of Education and Training.
  93. Woolcott, G., Scott, A., Norton, M., Whannell, R., Galligan, L., Marshman, M., Pfeiffer, L., & Wines, C. (2017c). The Enhancement-Lesson-Reflection process: A resource manual for science and mathematics learning and teaching. Companion Report to the Final report: It’s part of my life: Engaging university and community to enhance science and mathematics education. Canberra, Australia: Department of Education and Training.
  94. Yin, R. K. (2013). Case study research: Design and methods. Thousand Oaks, CA: Sage.
  95. Zheng, S., Rosson, M. B., Shih, P. C., & Carroll, J. M. (2015). Understanding student motivation, behaviors and perceptions in MOOCs. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, 1882-1895. ACM.
