Effectiveness of Personalised Learning Paths on Students Learning Experiences in an e-Learning Environment

Mohammad Issack Santally, Virtual Centre for Innovative Learning Technologies, University of Mauritius, Mauritius,
Alain Senteni, School of e-Education, Hamdan Bin Mohammed e-University, United Arab Emirates


Personalisation of e-learning environments is an interesting research area in which the learning experience is generally believed to improve when learners' personal learning preferences are taken into account. One such preference classification is the V-A-K instrument, which categorises learners as visual, auditory or kinaesthetic. This research describes the outcomes of an experiment in which second-year university students were exposed to a unit redesigned to fit the V-A-K learning styles. No performance improvement was found when the students used this personalised learning environment; on the contrary, the statistical evidence surprisingly showed that they underperformed in general, both with respect to their previous performances and with respect to their performance in a different unit of the same course that served as a control. The personalisation framework used an adaptive method to generate learning paths for each student, and this method was found to perform satisfactorily in its selection process. The findings of this research add to the existing body of discourse and consolidate the view that learning styles, as determined by self-assessment instruments, do not necessarily improve performance. On the other hand, they offer an interesting observation about e-learning environments and the use of multimedia: an instructional design method that strikes a sound balance in the use of different elements can indeed be of universal application, with each and every learner finding his or her space in it. Indeed, working towards more flexibility and adaptability of the environment might be a better approach than working on the adaptivity of the environment.

Keywords: Personalisation, Learning Objects, Learning Styles, Online Learning, Adaptation, Multimedia Learning, VAK Instrument


The issue of incorporating learning styles into the design of instruction first arose in traditional classroom settings. With the emergence of web-based instruction, a number of researchers focused on extending the concept to learners engaged in online learning. Most of these experiments were limited in scope, focused on one particular style, and were based on rigid ‘if-then-else’ statements, limiting the flexibility of the system and ignoring the finding, established by previous researchers, that learning styles change over time.

The real issue in designing personalised learning environments is therefore not really about measuring a student's preferred learning style on a perfectly accurate scale. What matters is to obtain an initial profile of the learner, as accurate as possible, and to have appropriate content designed to match learners' preferences. No content will perfectly match every learner's preferences, nor will a student with a given learning preference fail to achieve the intended learning outcomes solely because the learning content or teaching method did not meet those preferences. The aim, therefore, is to enhance the student's learning experience by providing, as far as possible, content matching his or her preferences. The “iterative analysis of learner interaction and feedback” method proposed by Santally (2009) helps to address both the issue of changing learning styles and that of subjective completion of self-report instruments. It can improve learner profiling in online learning environments and provide adequate grounds for determining any adjustments needed in learning content profiling.

The origins of this research date to 2003, when a survey of students' learning and cognitive styles (also referred to as learning preferences) was carried out at the University of Mauritius (Santally, 2003). Since then, a number of evolutions have taken place in the e-learning framework at the University (Santally, 2005; Santally et al., 2004), and the one-size-fits-all character of web-based learning has been criticised by a number of authors as pedagogically limited. These arguments form the rationale behind this research, which focuses on the personalisation of web-based learning environments, with particular emphasis on learning preferences relating mainly to the psychological traits of learners.

A review of the literature revealed a number of studies on the effects of designing materials or learning activities that take learners' learning styles into account. There is, however, no widespread consensus on the effects of these factors on student learning, and most of these studies focus on one particular learning preference or construct. There have also been a number of criticisms of the validity of many learning styles constructs; the instruments of Honey and Mumford (1986) and of Kolb (1984), for instance, have drawn considerable criticism. Nevertheless, a careful analysis of the different constructs in these instruments yields clues as to how learning materials can be instructionally prepared to meet learners' needs more appropriately. A further complication with the concept of personalisation is that many factors and variables, if taken into consideration, can offer varying degrees of personalisation: psychological traits are one possibility; performance, visual preferences, spatial cues, metaphors and cultural issues are others. Psychological traits themselves encompass a number of variables, such as learning styles, cognitive styles and controls, emotional intelligence and levels of motivation. This points to the need for a generic personalisation framework that makes it easy to choose variables for personalisation without having to redesign a whole new system for each of them. Such a framework can also serve as a research-enabling platform in this area. A first conceptual model of the framework was drafted (Santally & Senteni, 2005a), describing three possible adaptation models; the framework was subsequently refined and specified in more detail (Santally & Senteni, 2005b).
The framework has now been used in the field, in a real educational setting, through a prototype course designed and implemented with students to address the research questions formulated.

Personalisation of web-based learning environments

Aptitude treatment interaction (ATI) research developed as a way to find the best methods of instruction for a given student population. Peck (1983) states that ATI research correlates teaching methods with measures of student aptitude, finding that students may respond differently to a particular method depending on variables such as intelligence, learning style, or personality. Cronbach and Snow (1977) suggested matching instruction to traits at two levels: macro-adaptations, which match treatments to different classes of students, and micro-adaptations of treatments on a lesson-by-lesson, student-by-student basis. Macro-adaptation implies a multiple-method approach to individualisation, the design of alternate treatments that engage different groups of students through different forms of information processing, whereas micro-adaptations adapt the tasks and forms of instruction to meet more specific learner needs and abilities (Jonassen & Grabowski, 1993). Adaptivity in hypermedia systems to personalise the user's experience with the system is not a new concept, and Brusilovsky (2001) describes the main types of adaptation that exist in web-based hypermedia systems, namely content, navigation and layout; in the adaptive hypermedia literature, content and navigation adaptation are referred to respectively as adaptive presentation and adaptive navigation support. Rumetshofer and Wöß (2003), on the other hand, postulate that in learning systems adaptivity needs to cover more than what Brusilovsky (2001) proposes for web-based hypermedia systems, and propose what they call adaptation to psychological factors, that is, factors such as cognitive styles, learning preferences and strategies. Cristea (2004) highlights the importance of connecting adaptive educational hypermedia with cognitive/learning styles at a higher level of authoring.
Hong and Kinshuk (2004) developed a mechanism to model students' learning styles and present matching content (including content, format, media type, etc.) to individual students, based on the Felder-Silverman learning style theory. They use a pre-course questionnaire to determine a student's learning style, or the student may choose the default style; material is then provided according to that style. There was, however, no reported evidence of any improved performance or learning experience of the learners using that system.

Learning styles

The terms learning styles and cognitive styles are often used interchangeably in the literature. Jonassen and Grabowski (1993) distinguish between them by explaining that learning style instruments are typically self-report instruments, whereas cognitive style instruments require the learner to perform some task that is then measured as a trait or preference. It is postulated that during a period in which an individual has strong style preferences, that person will achieve most easily when taught with strategies and resources that complement those preferences (Dunn, 1996). However, McLoughlin (1999) points to empirical findings showing that learning styles can either hinder or enhance academic performance in several respects. In this regard, Dunn (1996) argues that teaching through learning styles is not enough and stresses the need for better assessment principles.

Terrell and Dringus (2000) investigated the effects of learning styles on student success in an online learning environment, tracking 98 Masters-level students in an information science programme using the Kolb Learning Style Inventory. While they found that a majority of students can succeed in an online learning environment regardless of learning style, they also found that drop-out was higher among students whose learning style fell into the accommodator category.

Another study, by Ross and Schulz (1999), based on the Gregorc Style Delineator, revealed that learning styles significantly affected learning outcomes and that abstract random learners may do poorly with some forms of computer-aided instruction (CAI). By definition within the Gregorc Style Delineator, abstract random learners tend to be “non-linear, multidimensional, and prefer active, free and colourful environments. They thrive on building relationships with others and dislike extremely structured assignments.”

Butler and Pinto-Zipp (2005) conducted an experiment similar to that of Ross and Schulz (1999), but with mature learners and in an online learning environment rather than a traditional CAI setting. The feedback obtained suggested that for mature students who are practising professionals, studying for a programme in line with their career goals, the real effect of learning styles cannot be established in a cause-and-effect way, because their drive to complete the course may stem from intrinsic or extrinsic motivation factors. The study also revealed that a significant number of online learners had developed a dual learning style. Butler and Pinto-Zipp (2005) further argue that today's learners are more flexible, stretching their learning styles to accommodate a variety of instructional methods or simply transcending their preferred methods. Furthermore, Hall and Moseley (2005) argue that translating specific ideas about learning styles into teaching and learning strategies depends critically on the extent to which these learning styles have been reliably and validly measured.

Learning styles have also been criticised by a number of authors. The model of Kolb (1984) has often come under criticism, and there seems to be a need for a more reliable and valid instrument for measuring learning styles (Kinshuk, 1996). The construct of the Learning Style Inventory (LSI) has been found unsatisfactory by different authors (Freedman & Stumpf, 1978; Wilson, 1986), while the face validity of the LSI itself was not well accepted by managers (Kinshuk, 1996).

Zwanenberg et al. (2000) investigated the psychometric properties of learning styles instruments such as the ILS (Index of Learning Styles) and the LSQ (Learning Style Questionnaire), argued that these instruments have poor psychometric properties, and questioned their reliability after obtaining unsatisfactory results in their experiment. Veenman et al. (2003) demonstrated the limitations of self-assessment reports in determining learning styles and proposed think-aloud techniques as a more reliable approach. It must be pointed out, however, that the think-aloud technique can pose practical limitations when the number of learners is large and when they are geographically dispersed.

The research questions

The research questions addressed in this research are two-fold. The first part relates to the extent to which students' learning experience is enhanced through personalisation of learning with respect to their individual learning preferences and psychological traits. For the purposes of this research, the experiment is limited to the V-A-K instrument (Barbe & Milone, 1980) to populate the student model; consequently, the adaptation algorithm takes only this attribute of the learner into account in the choice of learning objects. The second part is to assess the framework both qualitatively and quantitatively.

Question related to the students’ personalized learning experience

To what extent (positive or negative) was students' individual learning experience (in terms of understanding and performance) affected by the personalised approach with respect to individual psychological traits, more precisely the V-A-K cognitive style instrument?

Question related to the perceived usefulness of the adaptation method & framework

How efficient was the learning object selection mechanism with respect to each learner’s own perception of the usefulness of each learning object presented to the learner?

The Experiment

Data Subjects

The data subjects were second-year university students enrolled in an online programme in Web and Multimedia Development; 66 students participated in this real-world experiment. The students were studying the online module “Interaction Design”, which was used as the case study for a number of reasons, both subjective and objective. First, the subject offered a good blend of the different types of knowledge (declarative, conceptual and procedural) to be acquired by the student. Furthermore, the first author developed this course and had worked as an instructional designer for two years, which made the instructional design of the learning objects simpler. The topic chosen was “User-Centred Design”, the third topic of the course, which the students covered over one month in independent study mode. The students were typically aged 20-22 years. The same group of students had followed the first two units of the course, in which content was delivered using the one-size-fits-all approach.

Data collection for the group under observation

Step 1

The students were asked to fill in information about their learning preferences through the V-A-K instrument. The main concern at this stage was the element of bias and subjectivity from the students while filling in the questionnaires, taking into account their relatively young age (level of maturity).

Step 2

An individual profile was created for each student and this was stored in the database of the learning environment. Automated data collection was done by the system in-built tracking tools concerning navigational paths of the students, their answers to questions, and their ratings of each learning object presented to them. Data about the learning objects that were proposed by the system to each learner were also recorded.

Observations were made of students' behaviours and state of mind while using the system. The qualitative data gathered here would be useful in supporting claims arising from the quantitative analysis of data.

Step 3

The level of understanding of the students was measured using an online test containing a mix of MCQs and open-ended questions.

Step 4

Data Collection for the control assessment

The marks scored in the test for units 1 and 2 were kept as control data, while the marks scored on unit 3 (the experimental unit) were kept separately. The MCQ part was marked automatically by the system, while the tutor marked the open-ended questions.

Data Analysis

The data gathered were used for both quantitative and qualitative analysis, each supplementing and complementing the other where necessary. Data analysis and evaluation of the implementation of the framework followed a three-phase process.

Phase 1

In this phase, the instructional design process was critically assessed. The problems encountered while designing the material, and other practical constraints experienced with the framework, were noted. This phase was mainly a qualitative evaluation, using observation techniques (personal and peer) and reflective practices on the activity system that was put in place.

Phase 2

The framework proposed a fuzzy algorithm to select the most appropriate learning object for a particular student based on his or her individual profile and on the parameters taken into account in the personalisation process. Since the students elaborated their own profiles, an element of bias and subjectivity was present; the material presented to them might therefore not have been truly appropriate, as this depended on the accuracy of the data gathered to elaborate their profiles. One assumption at this level was that the expert judgments of the pedagogical designers were reliable with respect to the ratings they gave to the learning objects. To cater for possibly subjective student judgments about their learning preferences, the students were able to rate the perceived usefulness of each learning object presented to them. This phase mainly consisted of an evaluation of the degree of reliability of the instructional design process, the personalisation algorithm, and the course delivery process in general.
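Purely as an illustration, a profile-based selection of this kind can be sketched as a weighted match between the student's V-A-K profile and the expert-assigned tags of each candidate learning object. The dimension names, weights, values and scoring rule below are assumptions for demonstration only, not the framework's actual fuzzy algorithm.

```python
# Illustrative sketch of profile-based learning object selection.
# All names and values here are hypothetical, not the framework's
# actual fuzzy algorithm.

DIMENSIONS = ("visual", "auditory", "kinaesthetic")

def score(profile, tags):
    """Weighted match between a student's V-A-K profile and an
    object's expert-assigned V-A-K tags (all values in [0, 1])."""
    return sum(profile[d] * tags[d] for d in DIMENSIONS)

def select_object(profile, candidates):
    """Select the candidate representation whose tags best match the profile."""
    return max(candidates, key=lambda c: score(profile, c["tags"]))

# Hypothetical student profile and two candidate representations
student = {"visual": 0.7, "auditory": 0.2, "kinaesthetic": 0.1}
candidates = [
    {"name": "T+A",   "tags": {"visual": 0.3, "auditory": 0.9, "kinaesthetic": 0.1}},
    {"name": "T+G+I", "tags": {"visual": 0.9, "auditory": 0.1, "kinaesthetic": 0.7}},
]
best = select_object(student, candidates)  # the visually tagged object wins here
```

Under this simple rule, a predominantly visual student receives the more visually tagged representation; a genuinely fuzzy variant would replace the weighted sum with fuzzy membership functions over the same profile and tag values.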

Phase 3

The main hypothesis tested in the quantitative phase of the study is defined as follows:

H0: Selecting and delivering learning materials on the basis of students' own ratings with respect to the V-A-K instrument has no significant effect on their performance.

For this purpose, an analysis of variance was carried out on the performance of the students in the experimental unit (unit 3) as compared to the control units (1 & 2). The next step of this phase was an in-depth analysis of the qualitative data obtained: students' feedback and perceptions were analysed and contrasted with the statistical findings concerning the hypothesis.

Implementation of the personalisation framework

The personalisation framework proposed by Santally and Senteni (2005a) was implemented as a block for the MOODLE e-learning platform in order to answer the research questions established above. MOODLE is the official e-learning platform used for the online programmes at the University of Mauritius, and the personalisation block was implemented on the University's instance, which hosts version 1.9 of MOODLE. The MOODLE personalisation block was deployed in a second-year online module at the University of Mauritius, and for the purpose of the experiment a new course was instantiated for the unit under experimentation.

The steps for the initialisation of the personalisation block in MOODLE are as follows:

Step 1

Enable the personalisation system in the course settings. This can be done by the lecturer, who can choose either of the two possible adaptation algorithms for the system.

Step 2

Add the ‘Individual Learning Path’ block in the course. The block works differently for a student and for a lecturer/teacher.

The student can

  1. take the Learning Style Questionnaire;
  2. view the learning objects (their generated learning path);
  3. evaluate each learning object that is presented to them.

The above steps can only be carried out in order. This means that a student who has not taken the learning style questionnaire will not have a learning path generated, and a student who has not viewed a learning object obviously cannot evaluate it.
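This precedence rule can be sketched as a simple ordered-steps check. The step names and the gating function below are illustrative assumptions, not the block's actual implementation.

```python
# Illustrative sketch of the precedence rule: a student may only view
# their learning path after completing the questionnaire, and may only
# evaluate an object after viewing the path. Step names are assumptions.

STEPS = ["questionnaire", "view_path", "evaluate"]

def allowed(completed, requested):
    """A step is allowed only if every preceding step has been completed."""
    idx = STEPS.index(requested)
    return all(step in completed for step in STEPS[:idx])

allowed(set(), "view_path")              # False: questionnaire not yet taken
allowed({"questionnaire"}, "view_path")  # True
allowed({"questionnaire"}, "evaluate")   # False: nothing viewed yet
```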

This precedence constraint has been implemented to ensure maximum integrity of the system and reliability of the experiment. The lecturer/teacher, on the other hand, has access to the following functionalities when the block is enabled.

The main functionalities are

  • Customize students’ questionnaire about their learning styles         
    In this functionality the lecturer/teacher can select the learning/cognitive styles he or she wants to use for the personalisation process and he or she can author the relevant learning style questionnaires.
  • Tag and order the learning objects    
    This is a very important functionality in the system. For each learning object present for a particular topic, the lecturer tags it with values reflecting how well it is perceived to match each of the styles in question. The learning objects are then sequenced to build the proposed lesson.
  • Customize evaluation of learning objects      
    In this functionality, the lecturer/teacher can create an evaluation questionnaire with any number of questions/options he or she wants. The student will have access to this questionnaire for the evaluation of each learning object that has been preselected by the system.
  • View statistics about learning styles of students and evaluation of learning objects
    In this functionality the lecturer/teacher gets access to the individual entries by students about their learning style preferences, and the feedback they have given for the individual learning objects that were presented to them.
  • View learning path of students           
    Once a learning path for a particular student is generated the lecturer/teacher can have access to the individual set of learning objects that were selected by the system for each student.

Step 3

The course author/lecturer/teacher enters all the course resources (learning objects) into the e-learning platform. For each unit, there may be one or more learning resources, from which the system will choose one for each student after applying the learning object selection algorithm.

Step 4

The course author/lecturer/teacher enters values for each element of the learning style for each learning object and sequences the learning objects for the course.

Step 5

The student logs into the system and carries out sub-steps 1-3 of Step 2 above.

Multiple content representations

For this experiment, a learning unit on ‘User-Centred Design’ was developed using a multiple content representations technique. This means that the same content was developed in a variety of formats that would suit the different learning preferences of the learners involved in the learning process.

The different formats and content representations

The original unit on the MOODLE e-learning platform contains 14 learning objects, each typically a single HTML page containing mainly text-based elements with a few illustrations as needed. For the experiment, given that the V-A-K learning style instrument would be used, it was decided to use a content authoring methodology relying on combinations of text (T), audio (A), audio transcript (AT), interaction (I) and graphics (G). Each learning object was therefore reproduced in 8 different representations of the same content, resulting in a total of 112 resources (14 × 8) for the unit. The 8 representations are as follows:

{T, T+A, T+A+I, T+A+AT, T+I, T+G+I, T+G+A, T+G+A+I}
T (Text) Only

The ‘T only’ learning resource displays on-screen text only. The ‘User-Centred Design’ unit is displayed in a book-like style, giving the learner the autonomy to scroll up and down while reading. The learner therefore only has to go through the textual paragraphs, with a minimum of interaction, using the vertical scroll bar displayed on the screen.

T+A (Text + Audio)

The ‘T + A’ learning resource displays on-screen text synchronised with the audio: while the audio is playing, the text segments are displayed one after the other to hold the learner's attention. In this way, the learner can better follow what is being heard and displayed, and make the connection between the two.

T+A+I (Text + Audio + Interaction)

The ‘T + A + I’ learning resource includes text, audio and interaction elements and has been designed to prompt learners to interact with it. Learners are prompted through both textual and audio instructions on how to proceed with the learning resource.

T+A+AT (Text + Audio + Audio Transcript)

The ‘T + A + AT’ learning resource includes text, audio and audio transcript elements. Here, the audio transcript, found on the left side of the screen, narrates the instructional audio playing in the background. Depending on their learning preferences, some learners might prefer to listen to the audio, others to read the transcript after hearing the audio or simultaneously with it, and still others to skip the audio altogether and simply read the transcript.

T+I (Text + Interaction)

The ‘T + I’ learning resource involves text and interaction elements only. The learners are presented with textual material and are prompted, through textual instructions, on how to interact with the learning material in order to proceed.

T+G+I (Text + Graphics + Interaction)

The ‘T + G + I’ learning resource consists of both textual and graphical information together with some elements of interactivity. Instructions are given to learners through text and graphic images, and learners have to interact with the learning material to proceed to the following screens.

T+G+A (Text + Graphics + Audio)

The ‘T + G + A’ learning resource includes textual, graphical and auditory information. It also provides some minimal interactivity, whereby the learner can pause or move backward and forward through his or her reading. The audio is synchronised with the on-screen visuals so as to avoid imposing unnecessary cognitive load on the learner's memory.

T+G+A+I (Text + Graphics + Audio + Interaction)

The ‘T + G + A + I’ learning resource includes text, graphics, synchronised audio and interaction elements. Both textual and auditory instructions are given to learners to prompt them on how to interact with the learning material.

The tagging process

Once the learning resources are uploaded to the e-learning platform, the tagging process involves adding metadata values to each learning object with respect to the learning styles, as required by the adaptation algorithm. In this experiment, it consisted of adding a value for each component of the V-A-K instrument. This has been described in Chapter 4, where the algorithm is explained in detail.

Given that the development of interactive learning materials is a team process involving subject matter experts, instructional designers, educational technologists and e-learning developers, two possible approaches could be adopted for tagging the learning materials.

The first approach consists of taking the individual tags assigned by each team member and averaging them to obtain a fair value; the second is a collective tagging process in which a consensus is reached, after discussion, for each learning object. For this experiment, the second approach was adopted, given the relatively low number of learning objects: a team of four persons met and agreed on the values to be attributed to the learning objects. This process inevitably involves some element of bias and subjective opinion; however, the fact that the team consisted of four members from different but interconnected fields, with considerable experience, helped make the tagging more accurate and objective. The tagging process was also not too challenging in the context of this experiment, given that the multiple content representation model chosen deliberately differentiated the resources with respect to the V-A-K model.

The evaluation questionnaire

After viewing each learning resource forming part of their generated learning path, each student would complete an evaluation sheet containing three simple questions. This evaluation sheet was designed in the context of the current experiment to allow further understanding and analysis of certain grey areas, so as to better interpret the results. For each learning object, students were given the following three items on which to provide feedback:

  • Overall rating of this resource (1 = lowest; 5 = highest). Base your rating on these criteria: understanding, time spent on the resource, your personal learning preference, and the need to look at other resources.
  • Which element listed below do you think, from your experience with the resource, is its main strength?
    • The content itself
    • The multimedia elements it contains
    • The ease of understanding
    • The time spent on the resource
    • None
  • Which element listed below do you think, from your experience with the resource, is its main weakness?
    • The complexity of the presentation of the content
    • The lack of multimedia elements
    • Ease of understanding
    • The time spent on the resource
    • Had to look for additional internet resources

Results and findings

The control units 1 and 2 of the course were each held over a period of one month, and the experimental unit on ‘User-Centred Design’ was available online for one month, from 13 September 2012 to 13 October 2012. A total of 66 students participated in the experiment, accounting for about 90 percent of the class population. The first session was a one-hour face-to-face session in which the students were briefed in detail on how to proceed with the unit. The last session was a controlled computer-based test conducted after 13 October 2012. The class test data gathered completed the data collection phase for the experiment.

Performances in the class test

The overall average performance of the group in the class test was 43.7 %, while the group’s average cumulative point average (CPA) over their first year was 50.3 %. The first observation from the experiment was that the overall class test result reflected the overall academic level and ability of the batch, i.e. that of an average group, with a Pearson correlation coefficient of 0.77.

However, a t-test at the 95 % level of significance revealed that the average performance of the group in the class test was significantly lower than the group’s overall average CPA. When the average mark for units 1 & 2 (46.97 %) was compared to the average mark for the experimental unit 3 (39.1 %) using a t-test at the same level of significance, the performance in unit 3 was found to be significantly poorer than in the first two units. Furthermore, their performance in unit 3 was significantly poorer than their overall CPA from the previous year. On the other hand, an interesting finding was that their overall average performance in the first two units was not significantly different from the overall CPA average.

The group was then broken into two categories, namely high achievers (CPA > 55) and low achievers (CPA <= 50), and their performances in the test were further analysed. The average mark scored by the low achievers was 33.8 %, while their average CPA was 45.1 %. The t-test revealed that the two performances were significantly different: the low achievers scored lower in the class test than their CPA would suggest. When their test marks for the first two units and unit 3 were separated and analysed, the low achievers were found to have performed significantly better in the first two units than in the experimental unit. The analysis further revealed no significant difference between their marks in units 1 & 2 and their CPA. With respect to the high achievers, the t-test revealed that their average performance in the class test (42.8 %) was significantly lower than their combined average CPA (61.4 %). However, there was no significant difference when their average marks for units 1 & 2 were compared with their CPA. Finally, the statistical analyses revealed a high degree of correlation (0.7) between performance in the test and CPA; the same observation was made when performance in the experimental unit alone was correlated with the overall CPA.

The personalisation framework and the algorithm

To try to explain the above findings further, an evaluation of the personalisation and an analysis of the students’ feedback were carried out. The first element examined was the system’s learning object selection process. The data about each learning path generated for each student was compiled, and the table below illustrates the frequency of selection of each type of learning object by the system, compared to the learners’ preferred learning object choices.

From the table below, we find significant discrepancies between the frequency of system selections and preferred student choices for the following:

  • T+A selections by the system amount to 29.5 % while selections by student choice amount to only 7.3 %
  • T+I selections by the system amount to 26.2 % while selections by student choice amount to 7.4 %
  • T+A+AT selections by the system amount to 5.6 % while selections by student choice amount to 22 %
  • T+G+A+I selections by the system amount to 10 % while selections by student choice amount to 21 %
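Tallies of this kind can be produced directly from the selection logs. A minimal sketch, with invented log entries standing in for the real data:

```python
# Count each learning-object type in the system-generated paths and in the
# students' own choices, and express the counts as percentage shares.
from collections import Counter

def shares(selections):
    """Percentage share of each learning-object type in a selection log."""
    counts = Counter(selections)
    total = len(selections)
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

system_log = ["T+A", "T+I", "T+A", "T+G+A+I", "T+I", "T+A+AT"]     # hypothetical
student_log = ["T+A+AT", "T+G+A+I", "T+A+AT", "T+I", "T+A+AT", "T+G+A+I"]

print(shares(system_log))
print(shares(student_log))
```

Comparing the two dictionaries type by type yields exactly the kind of discrepancy listed in the bullet points above.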

Table 1

Frequency of system selections versus preferred student choices for each learning-object type (T, T+A, T+I, T+G+A, T+A+AT, T+G+A+I, etc.). (The cell values of this table could not be recovered from the source; the main discrepancies are listed above.)

The V-A-K survey taken by the students in the class classified 68 % of them as visual learners, 17 % as auditory learners and 9 % as kinaesthetic learners.

Table 2

                                               Visual   Auditory   Kinaesthetic
 % of Learners with this preference              68        17           9
 % System Selections containing this element      ?       54.3          ?
 % Student Selections containing this element     ?        69           ?

(Cells marked ? could not be recovered from the source; the remaining values are those reported in the text.)
Given that text is a common element of all resources, for the purposes of compiling the above table the text-only (T) resources were not counted in any of the computations for rows 2 and 3. A first observation is that the high percentage of visual learners, the relatively low percentage of system selections of visual objects and the relatively high choice of visual objects by students together reveal a difference between students’ preferred learning styles, their preferred learning objects and the system’s choice of learning objects. This might suggest that the algorithm did not work as expected or that the tagging process was not really accurate. However, looking further at the auditory column reveals that while 17 % of the learners were classified as auditory, 54.3 % of system selections and 69 % of student selections contained an audio component. The same observation is made for the kinaesthetic column. These observations tend to consolidate the hypothesis that learners’ reports of their own learning preferences through standardised instruments might be highly subjective and inconsistent.

Looking further at the algorithm, another classification was worked out. The system’s individual selection was compared to each individual student choice, to check in what percentage of cases the system’s choice:

  1. matched the student’s choice directly;
  2. was appropriate with respect to the student’s choice;
  3. completely mismatched the student’s choice.

The table below illustrates the results of the analysis.

Table 3

                    Direct Match   Appropriate   Mismatch
 % of occurrence         ?              ?           31

(Cells marked ? could not be recovered from the source; the 31 % mismatch figure is the one reported in the text.)
The above data strongly suggest that overall the algorithm did work, and that the students’ own evaluations of their preferences in the V-A-K instrument were quite reliable, thanks to the fuzzy nature of the algorithm (Santally and Senteni, 2005a). While the table reporting their preferred learning styles provides a holistic, singular classification of each student (either visual, auditory or kinaesthetic), the algorithm instead worked on the extent to which a student is visual, auditory or kinaesthetic, and chose the ‘most appropriate’ learning object accordingly.
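The published algorithm is described in Santally and Senteni (2005a); purely as an illustration of the general idea of fuzzy matching (the profiles, weights and distance measure below are assumptions, not the actual method), a selection step might look like:

```python
# Fuzzy selection sketch: the learner has graded V-A-K weights rather than a
# single crisp style, and the system picks the learning object whose tag
# profile lies closest to that profile. Profiles and tags are invented.

def distance(profile, tags):
    """Sum of absolute differences across the V-A-K dimensions."""
    return sum(abs(profile[d] - tags[d]) for d in profile)

def select(profile, objects):
    """Return the name of the object whose tags best fit the profile."""
    return min(objects, key=lambda name: distance(profile, objects[name]))

learner = {"visual": 0.7, "auditory": 0.2, "kinaesthetic": 0.4}  # hypothetical
objects = {
    "T+A":     {"visual": 0.1, "auditory": 0.9, "kinaesthetic": 0.0},
    "T+G":     {"visual": 0.8, "auditory": 0.0, "kinaesthetic": 0.1},
    "T+G+A+I": {"visual": 0.7, "auditory": 0.5, "kinaesthetic": 0.6},
}
print(select(learner, objects))  # "T+G+A+I": the closest overall profile
```

Because the learner above is predominantly visual yet partly kinaesthetic, the closest match is a mixed resource rather than the purely visual one, which is the graceful-degradation behaviour described in the text.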

Students’ feedback on learning objects

When the students’ feedback on the learning objects was analysed, it was found that, on average across the 14 learning objects of unit 3, 83 % of the feedback classified the material as pedagogically correct and sufficient. 65 % of the feedback received on the main weaknesses of the T+I resources mentioned the lack of multimedia and the high amount of time spent on the material. This weakness is most obvious in those resources that did not contain a graphic component. Incidentally, among the 31 % of mismatched learning objects, in 50 % of the cases of mismatch the system had selected the T+I resource.

Further analysis was needed here to understand the behaviour of the system in relation to the excessive selection of T+I resources. The first element of analysis was to look at the feedback given for each of the T+I resources that fell in the mismatched category, and the second was to look at all learning objects that fell in that category. The probe was extended to each individual entry classifying the material as ‘not OK’, to see whether there was a causal relationship specifically with the mismatched category. It was found that 35 % of the 108 evaluations of T+I learning objects classified the material as good, while 37 % classified it as OK, meaning that students claimed they grasped the essential elements although they had to make more effort. Only 25 % of the evaluations claimed that the material was not sufficient. The analysis further revealed a weak association (0.18) between negative feedback and the category (mismatched, appropriate, direct match). This suggests that negative or positive feedback on a learning object was not necessarily linked to the learning-styles-based selection, but rather depended on other elements more likely related to other preferences of the learner.


The aim of this research was three-fold. The first element was to propose a personalisation framework based on a simple fuzzy algorithm that processes learning preferences obtained mainly through self-assessment instruments. The main strength of the framework is that it is generic and can therefore be applied independently in different personalisation contexts. In this experiment it was applied to learning styles, with particular reference to the V-A-K instrument, which classifies students into one of three categories, namely visual, auditory and kinaesthetic.

From Table 2, it was clear that there is a discrepancy between the small number of students falling in the ‘kinaesthetic’ category and the high percentage of students whose preferred learning objects contained a ‘kinaesthetic’ or ‘interaction’ component. The same students were asked about their learning preferences through a self-assessment instrument and were then asked to choose their preferred learning objects from a pool of available resources. This confirms the critique directed by different authors at the face validity of such instruments. On the other hand, it was surprisingly found that overall the algorithm worked well, and only 31 % of the total selections could be classified as mismatches.

This illustrates the ability of the system to degrade gracefully through its fuzzy selection mechanism for matching appropriate resources to learner profiles. The reliability of the algorithm in making an appropriate selection also depends on the accuracy of the learning object tagging process, which was carried out by the instructional design team. The strength of the personalisation framework lies in the combination of fuzzy tagging by the instructional design team and selection by the algorithm.

The second aim of the research was to investigate whether successful adaptation to learning styles (one or more styles), in other words to psychological traits of learners, could improve their performance and the overall learning process. In terms of performance the results were not conclusive. It was even found that overall the students performed less well, and that actual performance was mainly linked to the overall ability of the student as characterised by previous performances in traditional e-learning environments.

While overall the students found the materials adequate and meeting their expectations, and the learning path generated for each student was found to be reasonable, there is a need to probe the reasons why the students performed less well in the experimental unit. With respect to the current experiment, a few factors could explain this.

The first is directly related to the test. The test comprised two sections, one focusing on units 1 and 2 and the other on unit 3, and the students were given 1.5 hours for both. From observation it was noticed that many students spent more time on the first section and presumably rushed to answer the second. Furthermore, 50 % of the students revealed that they had prepared the unit well but felt short of time in the test.

The second reason is that the students were, in theory, exposed to each unit over a period of one month. In practice, however, they had access to unit 1 for three months, to unit 2 for two months and to unit 3 for one month. Furthermore, each unit builds on the previous ones, resulting in a consolidation of the learning process. It is also a trend among these students not to take class tests too rigorously, so preparing only the first two units could be seen as just enough for many of them. 84 % of the students claimed they had revised all the units, but 45 % agreed that they had prepared the first two units better than the experimental unit.

The third reason might be related to learning habits acquired over time, an explanation that might be linked to Dunn’s theory that styles change over time (Dunn, 1996). These students, aged 21-22, most of them working students in their second year of university, have acquired a degree of maturity as learners. For one full academic year they were trained and pushed towards adopting a new online learning culture in which most of the materials presented to them were in text-based and text-intensive formats. The statistical data and analyses lead us to believe that those students ultimately adapted to become efficient learners in the environment presented to them over the past year. As a result, they performed better in units 1 and 2, to which they were accustomed, than in the experimental unit, which presented a new style of material. This explanation is further consolidated by the fact that in unit 3 the system selected only a low percentage of text-only learning objects.

Limitations of the study and future work

This study was limited to the V-A-K instrument; future experiments should include other learning and cognitive styles as well. Longer, sustained observation periods and widespread use of the personalisation framework across different courses will definitely help in the evaluation of the technique.

Another limitation of the study was that there was no means to ensure that the students spent the same, or the required, amount of time on the resources presented to them. However, the idea behind the project was to test the personalisation framework in a real-world setting, so the experiment was conducted in a semi-controlled manner. The results could have been different had the experiment been carried out under closed, controlled laboratory conditions.

In this experiment the focus was on multiple representations of the same content to fit the preferred modality of the learner. The experiment has to be pushed further to focus on multiple representations of different content that nevertheless fits the proposed curriculum and desired learning outcomes.

Finally, this research has opened up a new possibility in the area of personalisation of learning environments. This research started about a decade ago, when adaptive intelligent systems were still under investigation. The web has since evolved from Web 1.0 to Web 2.0, where learners and teachers took on important roles as co-creators and consumers of knowledge, while in the era of Web 3.0 the concept of the intelligent semantic web has surfaced. One significant implementation barrier of the technique in this research was the enormous effort that seemed to be needed, first for developing multiple content representations, and second for tagging the learning objects. It would be costly to develop the learning objects and then time-consuming to tag them appropriately based on the personalisation model.

In the current era of Web 3.0, both of the overheads described above can be significantly reduced. The world-wide web is flooded with Open Educational Resources, and repositories keep growing everywhere. Most content need not be developed or redeveloped: a simple search on the web will reveal multiple representations of content in terms of modality, level of study, type of learning approach and other attributes. Regarding the tagging process, the metadata of such resources can easily be extended, and instead of one teacher having to tag resources, many of them may already contain a significant amount of metadata relevant to the personalisation we want to achieve. Once the algorithm is applied, a personalised learning path can easily be generated for any learner in a course, with the system automatically looking for content in a pre-selected list of repositories. This will definitely constitute an area for future investigation.
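As a speculative sketch of this scenario (the repository records, field names and selection rule below are invented for illustration, not an existing system), a metadata-driven path generator might look like:

```python
# Filter a pre-selected pool of openly licensed resources by their metadata
# and assemble a personalised path, one resource per topic. Records are
# hypothetical stand-ins for OER repository metadata.

def build_path(topics, pool, preferred_modality):
    """For each topic, pick a resource matching the preferred modality,
    falling back to any resource on that topic (assumes every topic
    has at least one resource in the pool)."""
    path = []
    for topic in topics:
        candidates = [r for r in pool if r["topic"] == topic]
        matching = [r for r in candidates if preferred_modality in r["modalities"]]
        chosen = (matching or candidates)[0]
        path.append(chosen["title"])
    return path

pool = [  # invented OER metadata records
    {"title": "UCD intro (video)", "topic": "intro", "modalities": ["visual", "auditory"]},
    {"title": "UCD intro (text)", "topic": "intro", "modalities": ["visual"]},
    {"title": "Personas podcast", "topic": "personas", "modalities": ["auditory"]},
]
print(build_path(["intro", "personas"], pool, "auditory"))
```

The fallback branch mirrors the graceful degradation of the original framework: when no resource matches the preferred modality, the learner still receives content on the topic rather than a gap in the path.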


The application of learning styles theories in courseware design and development continues to be an area with potential for further research, especially with respect to the personalisation of web-based learning environments. The findings of this research remain inconclusive as to whether performance improves when individual learning paths are generated for learners based on their preferred modality with respect to the V-A-K styles. It seems that multimedia resources, when pedagogically well designed, overshadow the V-A-K modalities and become universally pervasive in fitting the learning preferences of each and every learner. Finally, there exists a set of reasons that can explain good and bad performances in controlled examinations, and these are not necessarily directly linked to the dissemination of information through a learner’s preferred modalities. The success of the learner is also determined by aspects such as commitment to the course, thorough revision for a class test and, above all, spending the time needed on a course. Irrespective of the type of content presented to them, the learners found it sufficient and were overall satisfied with the educational process.


  1. Barbe, W. and Milone, M. (1980). Modality. Instructor, 89(6), (pp. 44-46).
  2. Brusilovsky, P. (2001). Adaptive Hypermedia. User Modeling and User-Adapted Interaction, 11, (pp. 87-110).
  3. Butler, T. and Pinto-Zipp, G. (2005). Students’ learning styles and their preferences for online instructional methods. Journal of Educational Technology Systems, 34(2), (pp. 199-221).
  4. Cristea, A. (2004). Adaptive and Adaptable Educational Hypermedia: Where Are We Now and Where Are We Going? Proceedings of Web-based Education, Feb 16-18, Innsbruck, Austria.
  5. Cronbach, L. and Snow, R. (1977). Aptitudes and Instructional Methods: A Handbook for Research on Interactions. New York: Irvington.
  6. Dunn, R. (1996). How to implement and supervise a learning style program. USA: ASCD.
  7. Freedman, R.D. and Stumpf, S.A. (1978). What can one learn from the Learning Style Inventory? In Kinshuk (1996). Computer-Aided Learning for Entry-Level Accountancy Students. PhD Thesis. De Montfort University, United Kingdom.
  8. Hall, E. and Moseley, D. (2005). Is there a role for learning styles in personalised education and training? International Journal of Lifelong Education, 24(3), (pp. 243-255).
  9. Honey, P. and Mumford, A. (1986). Using your learning styles. Maidenhead: Honey Publications.
  10. Jonassen, D.H. and Grabowski, B. (1993). Individual differences and instruction. New York: Allen & Bacon.
  11. Kinshuk (1996). Computer-aided learning for entry-level accountancy students. PhD Thesis. De Montfort University, United Kingdom.
  12. Kolb, D. (1984). Experiential Learning. Prentice-Hall, Englewood Cliffs, NJ.
  13. McLoughlin, C. (1999). The implications of research literature on learning styles for the design of instructional material. Australian Journal of Educational Technology, 15(3), (pp. 222-241).
  14. Peck, M.L. (1983). Aptitude treatment interaction research has educational value. Proceedings of selected research paper presentations at the 1983 convention of the Association for Educational Communications and Technology. (pp. 564-622). New Orleans: Association for Educational Communications and Technology.
  15. Ross, J. and Schulz, R. (1999). Can computer-aided instruction accommodate all learners equally? British Journal of Educational Technology, 30(1), (pp. 5-24).
  16. Rumetshofer, H. and Wöß, W. (2003). XML-based Adaptation Framework for Psychological-driven E-learning Systems. Educational Technology & Society, 6(4).
  17. Santally, M. (2003). Individual instruction and distance learning: application of learning and cognitive styles. Malaysian Journal of Distance Education, 5(2), (pp. 15-26).
  18. Santally, M.; Govinda, M. and Senteni, A. (2004). Reusable learning object aggregation for e-learning courseware development at the University of Mauritius. International Journal of Instructional Technology and Distance Learning, 1(7).
  19. Santally, M. (2005). From face-to-face classrooms to innovative computer-mediated pedagogies: Observations from the field. Journal of Interactive Online Learning, 3(4).
  20. Santally, M. and Senteni, A. (2005a). Adaptation models for personalization in web-based learning environments. Malaysian Online Journal of Instructional Technology, 2(1).
  21. Santally, M. and Senteni, A. (2005b). A Learning Object Approach to Personalised Web-Based Instruction. European Journal of Open, Distance and e-Learning, 2005/I.
  22. Santally, M. (2009). Informing the Design of Personalized Learning Environments through Iterative Analysis of Learner Interaction and Feedback. International Journal of Instructional Technology and Distance Learning.
  23. Terrell, S. and Dringus, L. (2000). An investigation of the effect of learning style on student success in an online learning environment. Journal of Educational Technology Systems, 28(3), (pp. 231-238).
  24. Wilson, D.K. (1986). An investigation of the properties of Kolb’s learning style inventory. In Kinshuk (1996). Computer-Aided Learning for Entry-Level Accountancy Students. PhD Thesis. De Montfort University, United Kingdom.
  25. Zwanenberg, V.N.; Wilkinson, L.J. and Anderson, A. (2000). Felder and Silverman’s index of learning styles and Honey and Mumford’s learning styles questionnaire: how do they compare and do they predict academic performance? Educational Psychology, 20(3), (pp. 365-380).


