Learning Technology
A publication of the IEEE Computer Society's Technical Committee on Learning Technology (TCLT)
Volume 12, Issue 4 | ISSN 1438-0625 | October 2010
Special Theme Section: Pervasive Learning and Usage of Sensors in Technology Enhanced Learning
·       Prompting Lifecycle-Oriented Learning of Ubicomp Applications Leveraging Distributed Wisdom
·       An Approach for Modelling Pervasive Learning Scenarios
·       Coupling Pedagogical Scenarios and Location-based Services for Learning
·       Using Sensors as an Alternative to Start-up Lectures in Ubiquitous Environments
·       Location Awareness for Pervasive Learning
·       From Delphi to Simulations: How Network Conditions Affect Learning
·       Web 2.0-Based E-Learning: Applying Social Informatics for Tertiary Teaching
Welcome to
the October 2010 issue of the Learning Technology newsletter.
Due to the advances in mobile technologies, learning can take place at any time and any place, allowing learners to learn in authentic environments, outside the classroom, through their mobile devices. By using sensors and small computers embedded in learning objects, the current learning context as well as the surrounding learning objects can be identified and taken into account in order to provide learners with context-based learning experiences, adjusting information, resources and activities to the learners' current context and situation. This issue introduces papers which describe research on pervasive learning and the use of sensors for providing context-based learning experiences, and present prototype systems and tools that facilitate pervasive learning.
Guo et al. discuss the concept of distributed wisdom (building on crowd wisdom) and outline a prototype which aims to support lifecycle-oriented learning in a ubiquitous computing context. Malek et al. present a new modelling language and an authoring tool for modelling and generating pervasive, context-aware and adaptive learning scenarios which can be transformed into IMS-LD. Kaddouci et al. describe a series of prototypes which provide positioning and points of interest (POI) management, and explain how these prototypes are used for investigating different pedagogical scenarios for pervasive learning. Arantes et al. describe the DiGaE CSCL environment together with a use case that illustrates its use in a ubiquitous learning setting. Finally, O'Grady reviews electronic positioning devices and technologies, and discusses their implications for outdoor (and indoor) learning activities which are based on location services.
The issue also includes a section with regular articles (i.e., articles that are not related to the special theme on pervasive learning). In this section, Moebs and McManis describe a study that investigates the impact of network quality of service on multimedia learning.
We
sincerely hope that this issue will help in keeping you abreast of the current
research and developments in Pervasive Learning and the Usage of Sensors in
TEL, as well as advanced learning technologies in general. We would also like to take this opportunity to invite you to contribute your own work on technology enhanced learning (e.g., work in progress, project reports, case studies, and event announcements) to this newsletter if you are involved in research on and/or implementation of any aspect of advanced learning technologies. For more
details, please refer to the author guidelines at http://www.ieeetclt.org/content/authors-guidelines.
Deadline for submission of articles: 15
December, 2010
Special theme of the next issue: Semantic
Web Technologies for Technology Enhanced Learning
Articles
that are not in the area of the special theme are most welcome as well and will
be published in the regular article section!
Editors
Sabine Graf
sabineg@athabascau.ca
Charalampos Karagiannidis
karagian@uth.gr
Special Theme Section: Pervasive Learning and Usage of Sensors in Technology Enhanced Learning
Ubiquitous computing (ubicomp) is extending the computing domain from desktop computers to sensor-augmented smart objects (e.g., smart furniture, smart cups). By analyzing the information sensed from smart objects, ubicomp applications can detect changes in the ambient context and adapt their behavior to assist users. Compared to desktop applications, ubicomp applications are more deeply and widely embedded in our daily lives, which requires more complex knowledge of user requirement understanding, heterogeneous sensor data processing, application/device administration, and hardware/software failure handling. An application that cannot adapt to its users' needs may simply annoy them, while complex operations for controlling or tailoring application behavior will hinder the adoption of ubicomp applications. Therefore, finding a method that can accommodate ever-changing user needs while lowering user cost becomes a substantial challenge in the ubicomp domain.
The lifecycle of a typical ubicomp application can be divided into three main stages: development, distribution, and maintenance. Existing systems focus merely on the issues involved in one of these stages. For example, a number of toolkits have been developed to allow rapid development of ubicomp applications by developers [1] or end users [2]. Some zero-configuration tools have also been built to lower the maintenance cost for end users [3]. There is still no research work addressing the three stages as a whole, let alone the interrelationships among them. For example, the gap between developers and end users in experience sharing has not been filled in those studies.
In his book [4], Surowiecki puts forward the concept of "crowd wisdom", which is defined as "the aggregation of information in groups, resulting in decisions that are often better than could have been made by any single member of the group". Here, crowd wisdom is used for decision making. We are inspired by this definition and extend it to "distributed wisdom" in the ubicomp domain. Distributed wisdom has three key elements (see Table 1). The first two elements emphasize the diversity of the various participants and human groups (e.g., professional developers, novice programmers, and average users) in the lifecycle of ubicomp applications. The diversity is reflected in two aspects, wisdom/quality and learning needs, as summarized in Table 2. A closer analysis of the "diversity" in Table 2 reveals the complementarity among people (inter-group and intra-group) in their qualities and learning needs. For example, the application-creation skills of professional developers can yield application templates for novice programmers to learn from, and average users' ratings of applications can guide other average users in software foraging. The last element, "aggregation", indicates the core of our proposal: leveraging the aggregated power of "distributed" wisdom to augment mutual learning and knowledge/experience transfer during the lifecycle of ubicomp applications.
Table 1: Three key elements of distributed wisdom
·       Diversity (knowledge distribution): each user has their own specialized knowledge, skills, and needs.
·       Decentralization: people are able to specialize and draw on local knowledge.
·       Aggregation: mechanisms/tools exist to facilitate knowledge transfer and information sharing among people.
Table 2: Distributed wisdom and learning needs over ubicomp user groups
·       Sensor makers and professional developers (high-level). Wisdom/quality: sensor knowledge, application creation, programming languages, knowledge of error handling. Learning needs: application requirements; understanding requirements and gathering feedback to improve the system.
·       Novice programmers (middle-level). Wisdom/quality: simple toolkits, modification of open-source software (co-design). Learning needs: learning to create applications from examples (sample code).
·       Average users (low-level). Wisdom/quality: user experience and evaluation (user ratings), problem/requirement owners, rich domain knowledge. Learning needs: which software to choose; knowledge of system control/configuration; error-handling methods.
As shown in Fig. 1, the aggregation of distributed wisdom makes CoLL (our approach) a continuously evolving socio-technical system by (1) providing a set of tools to support different degrees of ubicomp design and use activities, (2) empowering end users to engage in co-design activities without restricting them to existing systems, and (3) promoting knowledge transfer and mutual learning among people. We have developed a prototype platform to demonstrate CoLL (some user interfaces are shown in Fig. 2), which empowers lifecycle-oriented learning through a series of activities. For instance, in Fig. 2, User-A from Family-A can "create" a rule-based meta-game (a template with several configurable slots) and then "publish" it to a social website. User-B finds this "high-rating" application through a "foraging" activity and "co-designs" it according to his domestic settings and preferences via a graphical interface (e.g., changing the slot values in a rule, altering the app behavior by exploring domestic resources). If he has questions (e.g., about system failures) or new needs to improve the application, he can "contact" the developer for help. If he finds it an interesting game, he can "recommend" it to his friends. A semantic sensing infrastructure has been used to develop the prototype, and some programming activities have been tested in our past study [5]. We extend this to the whole lifecycle of ubicomp apps and propose a unified approach to formulate it.
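To make the "create / publish / forage / co-design" cycle more concrete, the sketch below shows what a rule-based meta-game template with configurable slots might look like in plain code; the class names, slots and example game are illustrative assumptions, not the actual CoLL prototype API.

# Illustrative sketch only: a rule-based meta-game "template with configurable
# slots" in the spirit of CoLL. Class names and slot structure are assumptions,
# not the actual prototype API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Slot:
    name: str
    default: str
    value: Optional[str] = None      # filled in by an end user during co-design

@dataclass
class MetaGame:
    title: str
    rule: str                        # e.g., "IF <sensor> detects <condition> THEN <action>"
    slots: List[Slot] = field(default_factory=list)
    ratings: List[int] = field(default_factory=list)   # average-user feedback used for foraging

    def co_design(self, **slot_values: str) -> None:
        """An end user fills the slot values via a graphical interface."""
        for slot in self.slots:
            slot.value = slot_values.get(slot.name, slot.default)

    def instantiate(self) -> str:
        """Produce the concrete rule for one household."""
        rule = self.rule
        for slot in self.slots:
            rule = rule.replace(f"<{slot.name}>", slot.value or slot.default)
        return rule

# User-A (developer side) creates and publishes a template ...
game = MetaGame(
    title="Tidy-up race",
    rule="IF <sensor> detects <condition> THEN <action>",
    slots=[Slot("sensor", "smart cup"), Slot("condition", "left on the sofa"),
           Slot("action", "award 10 points to whoever puts it away")],
)
# ... User-B (average user) co-designs it for his own domestic settings.
game.co_design(sensor="smart toy box", condition="toys left on the floor")
print(game.instantiate())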
To conclude, we believe that the aggregation of distributed wisdom can bridge the cognitive gap between developers and users in the ubicomp domain, and promote knowledge transfer and mutual learning among them. We will conduct further experiments to quantitatively measure the performance of the CoLL approach.
Fig. 1. The collaborative lifecycle-oriented learning
(CoLL) approach for ubicomp applications.
Fig. 2. A game scenario for CoLL.
References
[1] Salber, D., Dey, A. K., Abowd, G. D. (1999). The Context Toolkit: Aiding the development of context-enabled applications. In: Proc. of CHI'99, pp. 434-441.
[2] Dey, A. K., Sohn, T., Streng, S., Kodama, J. (2006). iCAP: Interactive prototyping of context-aware applications. In: Proc. of Pervasive 2006, pp. 254-271.
[3] Lupu, E., et al. (2008). AMUSE: Autonomic management of ubiquitous e-Health systems. Journal of Concurrency and Computation: Practice and Experience, 20(3), 277-295.
[4] Surowiecki, J. (2004). The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday.
[5] Guo, B., Zhang, D., Imai, M. (2010). Towards a Cooperative Programming Framework for Context-Aware Applications. Personal and Ubiquitous Computing, Springer (DOI 10.1007/s00779-010-0329-1).
Bin Guo, Institut TELECOM SudParis, guobin.keio@gmail.com
Daqing Zhang, Institut TELECOM SudParis, Daqing.zhang@it-sudparis.eu
Michita Imai, michita@ayu.ics.keio.ac.jp
This paper presents the results of our innovative approach, which aims at supporting pedagogical designers and teachers in modelling, generating and simulating pervasive, context-aware and adaptive learning scenarios. Its core element is an Educational Modelling Language called CAAML (Context-aware Adaptive Activities Modeling Language). An authoring tool called ContAct-Me was developed based on the CAAML language.
Introduction
Several studies claim advantages of using wireless, mobile and pervasive technologies to enhance learning processes [1], but a review of existing Educational Modeling Languages and their authoring tools shows that none of them supports concepts related to pervasive learning when dealing with the modeling of learning activities [2]. We therefore propose a model-driven approach that supports pedagogical designers and teachers in modelling, generating and simulating innovative context-aware, adaptive and pervasive learning scenarios and activities. This can add pedagogical value to learning processes rather than merely providing a mobile version of existing e-learning activities. Indeed, this new learning philosophy fosters learner autonomy, motivation and challenge through experimentation with various learning scenarios indoors and outdoors. Additionally, it helps to improve interaction and collaboration through collaborative and challenging learning activities taking place in different locations and at various stages.
In order to apply and test this approach, an authoring tool called ContAct-Me (CONText and ACTivity Adaptive Modeler for Malleable Learning Environments) has been created. It transforms models represented in the CAAML language into models represented in the IMS-LD standard, to ensure interoperability of the designed activities across different learning platforms. ContAct-Me is based on the CAAML (Context-aware Adaptive Activities Modeling Language) language, which takes into account the concepts of context and co-adaptivity defined in previous works [3] [4].
CAAML: A Visual Educational Modeling Language
for Pervasive Learning
In this section, we present the CAAML language through a description of its meta-model. To define its elements, we based our approach on activity theory, a philosophical framework used to conceptualize human activities [5]. There are two main reasons for using activity theory. On the one hand, it provides a simple and standard form for describing human activity. On the other hand, it takes into account the concepts of tool, community, rules and division of labour, which are important in a context-aware collaborative learning environment (cf. Figure 1). The CAAML meta-model describes a learning scenario as a composition of several phases. Each phase includes role-parts (activities and their relevant contexts), as shown in Figure 2. The context can be:
·       Static: does not change during the interaction (e.g., season, student's name);
·       Dynamic: changes during the interaction (e.g., noise level, temperature). A dynamic contextual element can be acquired directly through embedded environmental sensors or mobile device sensors.
Figure 1. Applying Activity Theory for Pervasive Learning
In previous work [3], we proposed an innovative approach based on co-adaptivity, or bijective adaptation, between context and learning activities within pervasive learning environments. The CAAML meta-model accordingly defines two classes of "co-adaptivity rules": rules for adapting the context to the activity and rules for adapting the activity to the context. A rule is evaluated against the context to trigger the appropriate co-adaptivity actions. The CAAML meta-model also defines components related to pervasive learning environments, such as "Smart objects" and "Sensors".
Figure 2 The CAAML Meta-model
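As a rough illustration of the meta-model concepts described above (a scenario composed of phases, role-parts with their contexts, and co-adaptivity rules), the sketch below expresses them in plain code; the class and attribute names are our own shorthand for this illustration, not CAAML syntax.

# Minimal sketch of the CAAML meta-model concepts; names are illustrative
# shorthand, not the CAAML language itself.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Context:
    name: str
    dynamic: bool = False          # static (season, student name) vs dynamic (noise, temperature)
    source: str = "none"           # e.g., "embedded environmental sensor", "mobile device sensor"
    value: object = None

@dataclass
class Activity:
    title: str
    description: str = ""

@dataclass
class RolePart:                    # an activity together with its relevant contexts
    role: str
    activity: Activity
    contexts: List[Context] = field(default_factory=list)

@dataclass
class CoAdaptivityRule:
    # Two classes of rules: context -> activity adaptation, or activity -> context adaptation.
    direction: str                 # "context_to_activity" or "activity_to_context"
    condition: Callable[[Context], bool]
    action: str                    # description of the adaptation to trigger

@dataclass
class Phase:
    name: str
    role_parts: List[RolePart] = field(default_factory=list)

@dataclass
class LearningScenario:            # a scenario is a composition of phases
    title: str
    phases: List[Phase] = field(default_factory=list)
    rules: List[CoAdaptivityRule] = field(default_factory=list)

# Example: if the noise level (dynamic context from an embedded sensor) is high,
# adapt the activity by switching to an individual reading task.
noise = Context("noise_level", dynamic=True, source="embedded environmental sensor", value=78)
rule = CoAdaptivityRule("context_to_activity",
                        condition=lambda c: c.name == "noise_level" and c.value > 70,
                        action="replace group discussion with individual reading")
if rule.condition(noise):
    print("Trigger:", rule.action)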
ContAct-Me
Architecture
ContAct-Me is an authoring tool based on the CAAML language and built through a model-driven development (MDD) approach. It aims at supporting pedagogical designers in modelling and simulating context-aware adaptive learning scenarios using user-friendly interfaces.
Figure 3. ContAct-Me Architecture
The architecture of ContAct-Me includes three interrelated modules (as shown in Figure 3):
The graphical Modeler: Through this module, the
pedagogical designer can:
·
Model
context-aware activities;
·
Define
pervasive learning environment components and resources (mobile devices, smart
objects, sensors, mobile services...);
·
Model
Co-adaptivity rules.
The CAAML/IMS-LD model transformation module: In order to ensure interoperability of the designed activities across different learning platforms, this module transforms models represented in the CAAML language into executable models represented in IMS-LD. This is done in such a way that the IMS-LD complexity is hidden behind concepts related to context-awareness (see the sketch after this list for a simplified illustration of the mapping).
The Simulator of
pervasive learning scenarios module: This module allows:
·
The
CAAML model-driven generation of mobile user interfaces;
·
The simulation of the execution of the scenario at run time (execution of the co-adaptivity between the context and the application).
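The exact IMS-LD serialization produced by ContAct-Me is not shown in this article; purely to give a feel for the kind of mapping the transformation module performs, the hedged sketch below turns a CAAML-style co-adaptivity rule into a simplified IMS-LD-like condition. The element names are chosen for readability and are not guaranteed to be schema-valid IMS-LD.

# Illustrative only: mapping one CAAML-style rule to a simplified, IMS-LD-like
# <if>/<then> condition. Element names are simplified for readability.
from dataclasses import dataclass

@dataclass
class CaamlRule:
    context_property: str     # e.g., a context element exposed as an LD property
    operator: str             # e.g., "greater-than"
    threshold: str
    shown_activity: str       # activity to show when the rule fires

def to_imsld_condition(rule: CaamlRule) -> str:
    """Produce a simplified IMS-LD-style condition for one CAAML rule."""
    return (
        '<condition>\n'
        f'  <if><{rule.operator}>'
        f'<property-ref ref="{rule.context_property}"/>'
        f'<property-value>{rule.threshold}</property-value>'
        f'</{rule.operator}></if>\n'
        f'  <then><show><activity-ref ref="{rule.shown_activity}"/></show></then>\n'
        '</condition>'
    )

print(to_imsld_condition(
    CaamlRule("noise_level", "greater-than", "70", "individual-reading")))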
Conclusion
ContAct-Me is an authoring tool based on the CAAML language, an Educational Modeling Language that aims at supporting pedagogical designers in modelling, generating and simulating innovative context-aware, adaptive and pervasive learning scenarios.
The proposed approach was tested and evaluated by pedagogical designers, who appreciated modelling scenarios with user-friendly graphical interfaces, the automatic generation of mobile interfaces, and the simulation of the modelled scenario, which shows the interactions between the pervasive environment and the application.
References
[1] Derycke, A., Chevrin, V., Rouillard, J. Intermédiations Multicanales et Multimodales pour l'e-Formation : l'Architecture du Projet Ubi-Learn. Actes de la Conférence EIAH 2005, Montpellier, 25-27 mai, ATIEF, INRP (eds), pp. 407-412.
[2] Nodenot, T. Scénarisation pédagogique et modèles conceptuels d'un EIAH : Que peuvent apporter les langages visuels ? Revue Internationale des Technologies en Pédagogie Universitaire / International Journal of Technologies in Higher Education, numéro spécial "Scénariser l'apprentissage, une activité de modélisation", Volume 4, numéro 7, December 2007.
[3] Malek, J., Laroussi, M., Derycke, A., Ben Ghezala, H., "A Context-Aware Approach for Modeling Bijective Adaptations between Context and Activity in a Mobile and Collaborative Learning", International Journal of Interactive Mobile Technologies (iJIM), Vol. 2, No. 1, January 2008.
[4] Malek, J., Laroussi, M., Derycke, A., "Model-Driven Development of Context-aware Adaptive Learning Systems", The 10th IEEE International Conference on Advanced Learning Technologies, July 5-7, 2010.
[5] Uden, L., Activity theory for designing mobile learning, Int. J. Mobile Learning and Organisation, Vol. 1, No. 1, pp. 81-102, 2007.
Jihen Malek
RIADI-ENSI Laboratory and NOCE-LIFL Laboratory
Mona Laroussi
NOCE-LIFL Laboratory
Alain Derycke
NOCE-LIFL Laboratory
Henda Ben Ghezala
RIADI-ENSI Laboratory
Introduction
Pedagogical scenarios popularized by the IMS-LD standard offer a way to describe learning activities and their organization. This language is both a means to elaborate and share pedagogical designs and a support for their execution within Learning Management Systems (Peter et al., 2007). However, with the progress of wireless network and sensor technologies, mobile devices are becoming a link between the physical space and the digital one. Hence, they have become a learning tool which permits authentic and situated activities (Traxler, 2009). Mobile learning activities can be spontaneous and context-driven or based on a careful design of the learning space (either at the physical (Rogers et al., 2004) or at the digital level (Facer et al., 2004)). In this research work, we want to assess the benefit of using pedagogical scenarios for the design of mobile learning activities and to see how location-based services can support these activities.
Scenario and user interfaces
We have developed prototypes which provide positioning and points of interest (POI) management and which associate a set of activities with the POIs based on a pedagogical scenario execution engine (Peter et al., 2007). The prototypes' user interfaces are developed on top of the Android platform and tested on HTC smartphones. The interaction with the physical world relies on two mechanisms:
·       The phone GPS supports positioning of the user and the POIs.
·       The camera is used to scan QR codes. This provides both a way to locate the user precisely and a trigger to access the data and activities linked to the place.
We have two alternative prototypes for the experimentation. One is based on Google Maps, which shows the position of the user and the POIs so that s/he can go from one point to the other. The other relies on the Augmented Reality browser Wikitude (Wikitude) (see Figure 1), which enables the user to see POIs on top of the camera view. A third application is used to scan QR codes and to retrieve the activities for the current location (see Figure 2).
These prototypes are used in a scenario in which new students discover the university campus. Usually, the university services available to students (health care, job & traineeship search…) are presented during an introductory speech at the beginning of the year, and a booklet is given to them. However, students do not give full attention to this rather long presentation. Based on that, we have defined a route visiting the main services. At each service, they have to find information (see Figure 2) by asking people or looking at the available documentation. The order of the activities (and hence the route) is defined in a workflow-based language. QR code scanning triggers the scheduling of the activities.
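As an illustration of the idea (in simplified form, not the actual workflow language used by the scenario engine), such a route could be described declaratively and the QR scans checked against it; the POIs, QR identifiers and tasks below are made up for this sketch.

# Simplified, illustrative description of the campus-discovery route.
# The real prototypes rely on a workflow-based language and the scenario
# engine of Peter et al. (2007); names and QR identifiers here are made up.
from typing import Optional, Set

campus_route = [
    {"poi": "Health care service",
     "qr_code": "QR-HEALTH-01",
     "activity": "Find the opening hours and note the emergency number."},
    {"poi": "Job & traineeship office",
     "qr_code": "QR-JOBS-02",
     "activity": "Ask which documents are needed to apply for a traineeship."},
    {"poi": "Library",
     "qr_code": "QR-LIB-03",
     "activity": "Locate the self-service loan machines."},
]

def next_activity(scanned_code: str, completed: Set[str]) -> Optional[dict]:
    """Scanning a QR code triggers the scheduling of the matching activity,
    respecting the order of the route (out-of-order scans are rejected)."""
    for step in campus_route:
        if step["qr_code"] in completed:
            continue
        return step if step["qr_code"] == scanned_code else None
    return None

print(next_activity("QR-HEALTH-01", completed=set()))          # first stop -> its activity
print(next_activity("QR-LIB-03", completed={"QR-HEALTH-01"}))  # skips a stop -> None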
Figure 1 - Wikitude augmented reality interface
Figure 2 - Location related activities to perform
System architecture
Our system
is composed of three main elements:
·
The
pedagogical scenario engine provides available activities for a user according
to the scenario (Peter et al., 2007).
·
The
task manager instantiates the scenario for a user and keeps track of the user
context while interacting with the scenario engine.
·
The
user application runs on the Android smartphone. Upon scanning a QR code, it retrieves the corresponding tasks from the task manager.
Figure 3 - Components of the prototypes
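A hedged sketch of how these three components might interact follows; the method names and data shapes are assumptions made for illustration, not the actual implementation.

# Illustrative interaction between the three components of Figure 3.
# Interfaces, method names and data shapes are assumptions for this sketch.
from typing import Dict, List

class ScenarioEngine:
    """Provides the activities available to a user according to the scenario."""
    def available_activities(self, user_id: str) -> List[dict]:
        return [
            {"id": "act-1", "title": "Find the opening hours", "qr": "QR-HEALTH-01"},
            {"id": "act-2", "title": "Ask about traineeship applications", "qr": "QR-JOBS-02"},
        ]

class TaskManager:
    """Instantiates the scenario for a user and keeps track of the user context."""
    def __init__(self, engine: ScenarioEngine):
        self.engine = engine
        self.context: Dict[str, dict] = {}    # user_id -> last known context

    def on_qr_scan(self, user_id: str, qr_payload: str) -> List[dict]:
        """Called (e.g., over HTTP) by the Android application after the camera
        decodes a QR code; returns the tasks attached to that location."""
        self.context[user_id] = {"last_qr": qr_payload}
        return [a for a in self.engine.available_activities(user_id)
                if a["qr"] == qr_payload]

manager = TaskManager(ScenarioEngine())
print(manager.on_qr_scan("student-42", "QR-HEALTH-01"))   # -> [{'id': 'act-1', ...}]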
User evaluation and
outlook
For the evaluation, we use the task manager to support the scenario and three modalities to help navigation around the campus: using a paper map, using Google Maps, and using Wikitude. The last two modalities will help us show the usefulness of the navigation applications, and we will be able to compare the friendliness of the two interfaces for the task of finding POIs and navigating from one to another. Each user completes a questionnaire prior to the experiment and another after the activity, so that we can evaluate their familiarity with the technologies, their current practices, the user interfaces, the design of the activities, as well as their knowledge of the university campus. We are just beginning the trials, with two students using Google Maps and two students using Wikitude, so it is far too early to draw any conclusions. The students were quite used to touch interfaces and tools like Google Maps, so it was not difficult for them to use the prototypes. Their main problems were due to occasionally inconsistent GPS positioning, which is not something we can improve. Otherwise, they were positive about the scenario as guidance for discovering the campus and its services. However, we observed that the tasks to fulfill should be more precise and that we should plan a summary activity at the end, based on the information collected at each place. This is because the students who did the experiment gained a good knowledge of the campus geography, but their memory of the details of the services offered was not as good. These first observations will be complemented as the experiment continues, and we will refine the scenario according to the results.
References
Facer, K., Joiner, R., Stanton, D., Reid, J.,
Peter, Y., Le Pallec, X. and Vantroys, T. (2007). Pedagogical scenario modelling, deployment, execution and evolution. In Claus Pahl, editor, Architecture Solutions for E-Learning Systems. Information Science Reference, ISBN: 978-1-59904-633-4.
Traxler, J. (2009). Current State of
Wikitude, http://www.wikitude.org, last visited 20 September 2010.
Sarra Kaddouci
USTL, LIFL, France
Sarra.Kaddouci@ed.univ-lille1.fr
Yvan Peter
USTL, LIFL, France
Yvan.Peter@univ-lille1.fr
Thomas Vantroys
USTL, LIFL, France
Thomas.Vantroys@univ-lille1.fr
Philippe Laporte
USTL, LIFL, France
Philippe.Laporte@univ-lille1.fr
Introduction
Technology can support various forms of collaborative teaching and learning activities [1]. DiGaE (Distributed Gathering Environment) is a software tool which, when associated with instrumented environments, allows users to participate in distributed meetings. Given that DiGaE supports collaborative meetings, one of its main use-case scenarios is related to learning activities. DiGaE combines a set of tools designed to support learning activities in synchronous sessions: the Whiteboard, Conference and Chat tools. Considering the many alternatives for using an instrumented environment with several tools, it is important to offer alternatives not only for configuring a given synchronous session but also for starting up the session: DiGaE employs a session concept for the former [2] and RFID sensors for the latter.
DiGaE-Room and DiGaE-Home
In order to provide communication support during a distributed lecture, a DiGaE-Room environment is equipped with a video camera, an audio capturing system, an electronic whiteboard, and an RFID reader. A teacher uses the DiGaE session tool to prepare a DiGaE session in advance so as to configure the use of the other software tools, allowing the exchange and capture of audio and video (Conference tool), the exchange of slides and pen-based interaction (Whiteboard tool), and the exchange of text (Chat tool). The DiGaE session tool also allows the identification of the participants and of the meeting (e.g., title, description, and start and ending times). The information exchanged during a session is captured and used to automatically generate multimedia documents for review.
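As a minimal sketch of the kind of configuration such a session could carry (the field names below are assumptions made for illustration, not the actual format of the DiGaE session tool described in [2]):

# Illustrative session descriptor for a DiGaE lecture; field names are assumptions.
session = {
    "title": "Distributed lecture on ubiquitous computing",
    "description": "Teacher in the DiGaE-Room, students joining via DiGaE-Home",
    "start": "2010-10-20T14:00",
    "end": "2010-10-20T16:00",
    "participants": ["teacher-rfid-0042", "remote-student-1", "remote-student-2"],
    "tools": {
        "conference": {"audio": True, "video": True},
        "whiteboard": {"slides": "lecture-03.pdf", "pen_input": True},
        "chat": {"enabled": True},
    },
    "capture": True,   # captured exchanges feed the automatically generated multimedia documents
}
print(session["title"], "uses:", ", ".join(session["tools"]))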
Figure
1 – DiGaE instrumented room with local teacher and remote students.
One of the
uses of the DiGaE-Room in an e-learning
scenario is illustrated in Figure 1. The teacher is located in a DiGaE instrumented room, and the
students are located in a remote environment. The environment is instrumented
with an RFID reader, an electronic whiteboard, a camera and a TV set. The
teacher interacts with remote students using applications which start up
automatically once she touches the RFID reader with the identification card.
Applications which may start up automatically include the Whiteboard tool, the
Chat tool, and the Conference tool. Remote students participate in the lecture
using the DiGaE-Home tool, shown in
Figure 2.
Figure 2 –Remote students using DiGaE-Home tool with Whiteboard
tool, text-based Chat tool and videoconferencing Conference tool.
Web-based start up
The DiGaE tools run in a web portal[1] built on top of the
In order to start up an environment such as the one depicted in Figure 1 using a web portal, the teacher would need to log in to the several computers associated with the various tools (e.g., one computer each for the Whiteboard, Chat, and audio/video Conference tools) before each lecture. Similarly, each remote student would have to log in and start up each tool separately.
Sensor-based startup
To support the scenario presented in Figure 1, the computers in the DiGaE Room run a set of software agents which control the automatic startup of each software tool on the appropriate machine. As a result, to start a lecture a teacher has to (a) schedule the class with the DiGaE Session tool [2], (b) enter the DiGaE instrumented room, and (c) swipe her RFID card on the reader. All configured tools are then automatically started on their corresponding machines, and the DiGaE Room environment is ready for the lecture.
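The agents themselves are not detailed in this article; as a hedged sketch of the idea, an agent running on one DiGaE-Room machine could react to a card swipe roughly as follows (the session lookup and launch commands are illustrative assumptions, not DiGaE implementation details).

# Illustrative sketch of a DiGaE-Room startup agent; the event source, the
# session lookup call and the launch commands are assumptions for this example.

MY_TOOL = "whiteboard"                      # the one tool this machine hosts
TOOL_COMMANDS = {
    "whiteboard": ["digae-whiteboard"],
    "conference": ["digae-conference", "--audio", "--video"],
    "chat": ["digae-chat"],
}

def lookup_session(card_id: str) -> dict:
    """Placeholder: ask the DiGaE Session service which lecture the teacher
    identified by this RFID card has scheduled for the current time."""
    return {"title": "Week 3 lecture", "tools": ["whiteboard", "conference", "chat"]}

def on_rfid_swipe(card_id: str) -> None:
    """Each machine's agent reacts to the same card swipe and starts only the
    tool it is responsible for, so the whole room comes up from one swipe."""
    session = lookup_session(card_id)
    if MY_TOOL in session["tools"]:
        # A real agent would launch the tool here, e.g. with subprocess.Popen().
        print("starting", MY_TOOL, "for", session["title"], "->", TOOL_COMMANDS[MY_TOOL])

on_rfid_swipe("teacher-rfid-0042")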
Shortcut-based startup
In order to enter the DiGaE-Home tool as illustrated in Figure 2, we provide users with the following alternative: by using the DiGaE-Session tool they can create a DiGaE personal shortcut. This requires users to provide their login information (with an extra password) and to download a software shortcut to the desktop of the computer used to participate in the lecture. To enter a lecture, students execute their personal shortcut (clicking on the shortcut and providing the extra password); as a result, the student is automatically placed in the lecture scheduled for that time of day, using the set of software tools selected by the teacher.
Conclusion
Our aim in building the several DiGaE tools was to facilitate the use of distributed environments by teachers and students. In a distributed class, users usually enter a web portal at a predefined date and time, provide their login and password, remember which workspace they should enter, and possibly which tool or tools to use. With DiGaE, the teacher only has to schedule the lecture, providing some configuration information [2]. As a result, session-based applications can use that configuration information to take several actions, such as automatically logging users into the lecture.
In this article, we have shown scenarios in which ubiquitous computing makes it easier for teachers and students to participate in distributed lectures. We have used this infrastructure in several meetings involving remote peers.
References
[1]
Tambouris,
E. et al. Collaborative learning through advanced Web2.0 practices. IEEE
Learning Technology Newsletter. 2010; v.12, n.3, p.13-16.
[2]
Arantes,
F.L.; Moraes, C.R.; Silva, S.H.P.; Fortes, R.P.M.; Pimentel, M.G.C. Where and
with whom do you wanna meet? Session-based collaborative work. Proceedings
of WebMedia'10, v.1, p.123-130,
Nucleus of
Informatics Applied to Education
Campinas, Brazil
farantes@unicamp.br
Universidade de
São Paulo, São Carlos, Brazil
claudiarm@icmc.usp.br
Universidade de
São Paulo, São Carlos, Brazil
mgp@icmc.usp.br
Electronic
aids for navigation represent one of the success stories of consumer
electronics in recent years. Sensors for determining location have been
integrated into a range of products from automobiles to watches, and a range of
services have been developed that harness such sensors. Thus, educators now
have an unparalleled opportunity to help students move outside their classrooms and laboratories, thereby enabling learning to take place in a range
of alternative but relevant situations. However, depending on the discipline in
question, a broad understanding of issues relating to location accuracy is
necessary if full advantage is to be gained from the proliferation of
electronic positioning devices on the market.
Introduction
Many
disciplines demand extracurricular work that invariably takes place outside the
classroom. In many cases, this occurs under the guise of field work, and
usually involves groups of students and their mentors visiting some geographic
location where issues raised in class can be illustrated in a relevant
environment. This approach has stood the test of time, and is consistent with
experiential [1] and situated learning [2]. However, by equipping students with
cheap navigation devices, responsibility for learning can be transferred to the
student in many instances, allowing them conduct field work on their own
initiative. Most students are equipped with mobile phones, many of which are
augmented with a position-sensing mechanism, and include a selection of
software packages that can take advantage of position. To gain a deeper
understanding of positioning, two key categories are now considered - satellite
and terrestrial.
Satellite Systems
Satellites
are the key enabling technology for most of the navigation services available
at present [3]. The Global Positioning System (GPS) is predominant, and most of
the position sensors available on the market use this technology. Position errors of 20 meters or more may be expected on average. This is sufficient for many purposes, but in some circumstances greater accuracy may be needed. In this case, Satellite Based Augmentation Systems (SBAS) may be harnessed. Examples include the Wide Area Augmentation System (WAAS) in North America and the European Geostationary Navigation Overlay Service (EGNOS) in Europe.
learning perspective, learning situations that require accurate positioning of
the student demand the use of GPS, ideally augmented with an SBAS technology.
Consider the case of a field study in geology, for example. Broad
characteristics of a rock formation may be visible, or indeed only appreciated,
when a certain distance from the rock outcrop itself. Thus a standard GPS
position should be adequate. However, in cases where the instructor wishes to
focus student attention on some minute aspect that will only be visible when
physically in its immediate vicinity, GPS augmented with an SBAS will be
required. A key issue here is that the student must be able to get to the point of interest with a minimum of difficulty. Thus it behoves the instructor to visit the area in question beforehand, to consider it from a student perspective, and to weigh up the preferred positioning technology. The scenario outlined here is applicable in a number of disciplines.
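To make the accuracy trade-off concrete, the toy check below compares the expected positional error of a technology against the distance from which a feature can be observed; the error figures are rough orders of magnitude chosen for illustration, not measured values.

# Toy example: is a positioning technology accurate enough for a given task?
# Expected errors are rough, illustrative orders of magnitude only.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

EXPECTED_ERROR_M = {"gps": 20.0, "gps+sbas": 3.0, "cell-id": 500.0}

def adequate(technology: str, required_viewing_distance_m: float) -> bool:
    """True if the expected error still leaves the student close enough
    to observe the feature of interest."""
    return EXPECTED_ERROR_M[technology] <= required_viewing_distance_m

# A minute feature on a rock outcrop must be viewed from within ~5 m:
print(adequate("gps", 5.0))         # False -> plain GPS is not enough
print(adequate("gps+sbas", 5.0))    # True  -> SBAS-augmented GPS suffices
# A large-scale landscape feature visible from ~1 km away:
print(adequate("cell-id", 1000.0))  # True  -> the error is disguised by scale

# haversine_m() could be used to guide the student towards the outcrop:
print(round(haversine_m(53.3498, -6.2603, 53.3501, -6.2600), 1), "m to go")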
Terrestrial Systems
A range of
techniques for determining location using terrestrial wireless technologies [4]
are in operation. The most common, however, have been deployed by cellular
network operators in response to regulations concerning emergency call
management. One popular technique, Cell-ID, involves associating a subscriber
with the Base Station that routes their calls. The radius of the area served by
the Base Station will determine position accuracy. A key difficulty with
techniques based on cellular network technologies is that the accuracy varies,
and this variation cannot be quantified by the subscriber. This has significant
implications when being harnessed in mobile learning scenarios. However, the
difficulties are not insurmountable if the limitations are understood, and the
learning issues in question are tolerant of a relatively large positional
error. In the study of physical geography, it may be desired to direct
students' attention to large scale features or select features of a landscape.
Obviously, a large scale object can be observed from a large distance and from
a variety of viewpoints, although some may be preferable from an instructional
viewpoint. However, the scale of the objects under investigation will frequently disguise the inherent positional error of the technique being harnessed. Thus learning is not compromised.
A Note on Indoor
Scenarios
Though
field work is usually synonymous with the outdoors, this need not always be the
case. However, none of the techniques described previously will operate satisfactorily in indoor environments, due to interference with the signals. A number
of dedicated solutions for indoor environments exist, for example, Ubisense
[5]. However, these are rarely deployed, primarily due to expense. Museums and
art galleries are obvious places of learning, but technologies for guiding
visitors are rarely deployed. Thus conventional approaches must be adopted, and
given the relatively small scale nature of buildings, this is not a major
problem and should not hinder learning.
Concluding Remarks
Cheap
navigation, tracking and positioning systems are now commonplace. This
development offers training professionals an unprecedented opportunity to
incorporate mobile learning strategies into their curricula. Where the subject
matter allows it, instructors can encourage and easily facilitate learning in environments where students can gain a greater benefit from, and understanding of, the topic under consideration.
References
[1]
Kolb,
D. (1984). Experiential learning: Experience as the Source of Learning and
Development.
[2]
Lave,
J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral
Participation.
[3]
Prasad
R. and Ruggieri, M. (2005). Applied Satellite Navigation Using GPS, GALILEO,
and Augmentation Systems, Artech House,
[4]
Bensky,
A. (2007). Wireless Positioning Technologies and Applications. Artech House,
Inc.
[5] Ubisense - http://www.ubisense.net
Michael O'Grady
michael.j.ogrady@ucd.ie
Introduction
Web-based
e-learning is not only supplementing traditional classroom teaching, but also providing
educational access to diverse populations in potentially remote locations that
in the past would not have been reachable. Despite these benefits, there are
well-identified issues with student retention due to lack of social interaction
and difficulties encountered by students when interacting with the
technological medium [1]. To compensate for these difficulties there is an
imperative to make the course material as attractive as possible, an effort which
frequently equates to providing media-rich content. However, this material
places a greater demand on network resources possibly leading to Network
Quality of Service (QoS) problems and in turn student frustration with the
delivery of material.
In the e-learning
sphere, QoS concerns have in the past either been ignored or resulted in the
use of only the least demanding resources. If the differences in network
quality are ignored, multimedia e-learning can get very tedious. Every resource
with slightly higher demands on network resources will be burdened with long
start-up delays. If on the other hand e-learning utilizes only the least
demanding resources, usually a combination of text and images, the missing mix
of different media can negatively affect the motivation of the learner [2]. Considering the challenge of student retention in
e-learning, both approaches must be seen as problematic.
We present
expert opinion on the impact of QoS in e-learning as garnered from a Delphi
study surveying experts in the field of multimedia e-learning and contrast this
with the findings of a simulation study looking at the impact of QoS for a
number of typical e-learning scenarios. In this context we postulate that the
QoS has a strong impact on the student experience in e-learning. Considering
that it is widely accepted that any improvement in speed of connection is
compensated by an even bigger increase in demand [3], this impact is likely to continue despite the
ever-increasing delivery capabilities of the network.
The
Network Simulations
The
simulations were done with the NS-2 network simulator [6]. An application model was built which represents the
behavior of a multimedia e-learning application. The simulations consider two
media mix (MM) profiles for a dial-up as well as a DSL2 connection. The
profiles are characterized by percentages for the different types of media (see
Table
1) typically found in e-learning scenarios.
Table 1: Media Mix Profiles
Profile | Text+images | Audio | Video
MM1     | 80%         | 10%   | 10%
MM2     | 40%         | 30%   | 30%
MM1 represents a traditional profile, consisting mainly of text and
images, resulting in lower demands on bandwidth. MM2 includes a higher
percentage of audio and video and therefore has higher demands on bandwidth.
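The reported delays come from NS-2 simulations of a full application model, including the number of concurrent sessions. Purely as a back-of-envelope illustration of why the two profiles and connections differ so much (and not a reproduction of those simulations), one can compare raw transfer times for a single learning unit under each media mix, using assumed content sizes and nominal link rates.

# Back-of-envelope illustration only; the article's results come from NS-2
# simulations, not from this arithmetic. Sizes and link rates are assumed.

MEDIA_MIX = {            # share of content by media type (Table 1)
    "MM1": {"text+images": 0.8, "audio": 0.1, "video": 0.1},
    "MM2": {"text+images": 0.4, "audio": 0.3, "video": 0.3},
}
SIZE_MBIT = {"text+images": 4.0, "audio": 40.0, "video": 200.0}  # assumed unit sizes
LINK_MBPS = {"dial-up": 0.056, "DSL2": 8.0}                      # assumed nominal rates

def transfer_time_s(profile: str, link: str) -> float:
    """Raw transfer time (seconds) for one learning unit, ignoring protocol
    overhead, congestion and concurrent sessions that NS-2 does model."""
    mbits = sum(share * SIZE_MBIT[media] for media, share in MEDIA_MIX[profile].items())
    return mbits / LINK_MBPS[link]

for profile in MEDIA_MIX:
    for link in LINK_MBPS:
        print(f"{profile} over {link}: ~{transfer_time_s(profile, link):.0f} s")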
The Delphi Study
We consider
three of the 17 hypotheses related to the desirability of multimedia in
e-learning and the impact of QoS on learning (see Table 2). As we can see from HA, there was a high level of
agreement that multimedia improved learning, and further that it was an
important factor (ranking 5 out of 17). HB and HC were the two hypotheses concerning the impact of QoS on learning; these were among the three lowest-ranked hypotheses in the study.
Table 2: Selected Results from the Delphi Study (percentage agreement / no opinion / disagreement)
·       HA (final rank 5): Learning materials providing a mix of different media lead to improved learning results. 84% / 8% / 8%
·       HB (final rank 15): Using selected still images rather than streaming video can increase learning if the auditory narration quality of the original video is maintained. 68% / 8% / 24%
·       HC (final rank 17): A clear increase in the resolution of videos and images leads to an increase in learning. 20% / 36% / 44%
So, the
experts remain unconvinced about the role of QoS in e-learning. Perhaps the
following simple simulation results might change their minds. The simulation results
show significant start-up delays for both profiles and the different network
conditions (see Figure
4). Start-up delays are lowest for DSL2 and MM1 – the
least demanding profile and the connection with the most generous bandwidth
conditions. Nevertheless MM1/DSL2 shows start-up delays of up to 50 sec,
depending on the number of sessions. And even though DSL2 can accommodate
bandwidth demands much better than the dial-up connection, start-up delays
begin at 50 sec for MM2 and can go up to almost 140 sec.
Figure
4: Simulation Results
Conclusions
The results
of the
Acknowledgements
This work
is supported by Science Foundation Ireland (SFI) Research Frontiers Project
CMSF 696. The authors wish to thank Dr. Seung-bum Lee for his valuable comments
and contribution to this work.
References
[1]
Waycott,
J., Bennett, S., Kennedy, G., Dalgarno, B., & Gray, K. (2010). Digital
divides? Student and staff perceptions of information and communication
technologies. Computers & Education, 54(4), 1202-1211.
[2]
Taran,
C. (2005). Motivation Techniques in eLearning, IEEE Computer Society, pp.
617-619.
[3]
Mackenzie,
D. (2010) Who Has the Fastest Internet?, IEEE Spectrum, 8.10, 52.
[4]
Delphi
Hypotheses, http://specialtrees.net/wordpress/?p=33
[5]
Moebs,
S. A. (2008). A learner, is a learner, is a user, is a customer: QoS-based
experience-aware adaptation. In Proceeding of the 16th ACM international
conference on Multimedia (pp. 1035-1038).
[6]
Network
Simulator - NS-2, http://www.isi.edu/nsnam/ns/
[7]
Palmer,
J. W. (2002). Web Site Usability, Design, and Performance Metrics. Information
Systems Research, 13(2), 151-167. doi:10.1287/isre.13.2.151.88
Sabine Moebs
sabine@eeng.dcu.ie
Jennifer McManis
mcmanisj@eeng.dcu.ie
ISBN: 978-1-60566-294-7; 483 pages; July 2010
Published
by IGI Global under the imprint Information Science Reference
(formerly Idea Group Reference)
Edited by: Mark J. W. Lee and Catherine McLoughlin
Educational
communities today are rapidly increasing their interest in Web 2.0 and
e-learning advancements for the enhancement of teaching practices. Web
2.0-Based E-Learning: Applying Social Informatics for Tertiary Teaching
provides a useful and valuable reference to the latest advances in the area of
educational technology and e-learning, with an emphasis on the use of social
software tools such as blogs, wikis, podcasts, and social networking sites for
teaching, learning, and assessment. This innovative book offers an excellent
resource for any practitioner, researcher, or academician with an interest in
the use of the Web for providing meaningful learning experiences.
Table of Contents
·
Foreword (by Prof. John G. Hedberg)
·
Preface
·
Acknowledgment
Section 1: Emerging
Paradigms and Innovative Theories in Web-Based Tertiary Teaching and Learning
1.
Back
to the Future: Tracing the Roots and Learning Affordances of Social Software
2.
Understanding
Web 2.0 and its Implications for E-Learning
3.
Pedagogy
2.0: Critical Challenges and Responses to Web 2.0 and Social Software in
Tertiary Teaching
4.
Learner-Generated
Contexts: A Framework to Support the Effective Use of Technology for Learning
5.
Considering
Students’ Perspectives on Personal and Distributed Learning Environments in
Course Design
Section 2: Towards Best
Practice: Case Studies and Exemplars of Web 2.0-Based Tertiary Teaching and
Learning
6.
Personal
Knowledge Management Skills in Web 2.0-Based Learning
7.
Teaching
and Learning Information Technology through the Lens of Web 2.0
8.
University
Students’ Self-Motivated Blogging and Development of Study Skills and Research
Skills
9.
Using
Wikis in Teacher Education: Student-Generated Content as Support in
Professional Learning
10.
Mobile
2.0: Crossing the Border into Formal Learning?
11.
Meeting
at the Wiki: The New Arena for Collaborative Writing in Foreign Language
Courses
12.
Podcasting
in Distance Learning: True Pedagogical Innovation or Just More of the Same?
13.
Using
Web 2.0 Tools to Enhance the Student Experience in Non-Teaching Areas of the
University
14.
“You
Can Lead the Horse to Water, but ... ”: Aligning Learning and Teaching in a Web
2.0 Context and Beyond
15.
Facebook
or Faceblock: Cautionary Tales Exploring the Rise of Social Networking within
Tertiary Education
16.
Catering
to the Needs of the “Digital Natives” or Educating the “Net Generation”?
17.
Activating
Assessment for Learning: Are We on the Way with Web 2.0?
Section 3: Web 2.0 and
Beyond: Current Implications and Future Directions for Web-Based Tertiary Teaching
and Learning
18.
Dancing
with Postmodernity: Web 2.0+ as a New Epistemic Learning Space
19.
Web
2.0 and Professional Development of Academic Staff
20.
When
the Future Finally Arrives: Web 2.0 Becomes Web 3.0
21.
Stepping
over the Edge: The Implications of New Technologies for Education in the Web
2.0 Era
List of Contributors
· Jon Akass, Media Citizens Ltd
· Cameron Barnes
· Tony Bates, Tony Bates Associates
· Maria Elisabetta Cigognini
· Wilma Clark
· Lisa Cluett
· Gráinne Conole, The Open University
· John Cook
· Matt Crosslin
· Nada Dabbagh
· Peter Day
· Lone Dirckinck-Holmfeld
· Peter Duffy
· Nigel Ecclesfield
· Palitha Edirisingha
· Henk Eijkman
· Idoia Elola
· Mark Frydenberg
· Fred Garnett
· Tom Hamilton
· Henk Huijser
· Chris Jones, The Open University
· Lucinda Kerawalla, The Open University
· Agnes Kukulska-Hulme, The Open University
· Mark J. W. Lee
· Rosemary Luckin
· Catherine McLoughlin
· Shailey Minocha, The Open University
· Ana Oskoz
· Kai Pata
· Maria Chiara Pettenati
· John Pettit, The Open University
· Rick Reo
· Judy Robertson
· Thomas Ryberg
· Michael Sankey
· Judy Skene
· Kairit Tammets
· Belinda Tynan
· Terje Väljataga
· Steve Wheeler
· Denise Whitelock, The Open University
· Andrew Whitworth
For more information about Web 2.0-Based E-Learning: Applying Social Informatics
for Tertiary Teaching, please visit
http://igi-global.com/Bookstore/TitleDetails.aspx?TitleId=40272
On this
site you will be able to read the full text of the Preface of the book, which
provides a detailed introduction and thematic overview of the various chapters.
You can also download the first chapter of the publication in PDF format for
free.