
1. Introduction

Usability is without doubt a critical aspect of any software application, as it raises the chances that the application will actually be used. This has been recognized long ago by experts in the development of Computer Supported Learning applications. In [2] the authors state that “the evaluation of the use of educational software must take account of usability as well as learning and, crucially, the integration of issues concerned with usability and learning”. In [1] the authors state that it is important to perform usability testing to guide the development and design process of an e-learning product or tool [3]. According to [1], there are two important related concepts for measuring the appropriateness of a computer-based learning product or tool. The first one is usability, which “is the degree to which users can easily and efficiently use an e-learning product or tool to satisfy the goals and needs of learners” [4]. The second one is user perception, which “is defined as the process by which individuals select, organize, and recognize their sensory impressions in order to interpret and understand their learning environment”. A well-designed computer-based learning tool will raise motivation, which increases the probability that users have a successful learning experience.

One of the recurrent topics when developing computer-supported collaborative learning tools has been supporting activities for improving the reading skills of young children [11]. However, although statistics show that illiteracy rates all over the world have sharply declined in the last years [5], the readers’ skills for understanding what they read seem to remain below acceptable levels [6] [7] [8] [9] [10]. For this reason, the authors of the present work developed in the past a tool implementing a computer-supported learning activity to train the reading comprehension of middle school students. For this activity, students had to read a text and identify the key words for understanding its message by marking them. However, although this is a very natural gesture when reading a paper text and highlighting words with a marker, or even when reading a document in docx or pdf format with the most popular software, it turned out to be a difficult one when using tablets.

The authors of the present work have also developed a tool which takes this direction (for secondary school learners) and was based on the strategy of training reading comprehension by highlighting the words inside a piece of text which represent the key idea it exposes [10n]. However, this experience has shown that the approach of using constructed responses has its disadvantages, especially when applied to large courses. Some of them are the following:

- It is complicated for students to develop a constructed response using mobile computer systems, mainly because of HCI problems related to the screen’s size and interaction mode.
- It is difficult for the teacher to evaluate all the answers developed by the students.
- It is difficult to monitor the degree of progress of the students’ work: how many have responded, and whether they have responded well.
- It is difficult to manage the interactions inside the classroom in order to achieve learning.
- It is difficult for the teacher to give appropriate feedback to each student based on their answers.

Due to these difficulties, we came to the idea that a learning activity based on questions that are answered by selecting among multiple alternatives would make the whole process much easier, since the correctness of the answers is easy to validate. In fact, according to [18n], the advantages of multiple-choice tests are:


- They are easy and cheap to apply.
- They are reliable in their application.
- Since they are standardized, they can be applied in the same way to all students.
- They are perceived as fair and objective evaluation instruments.

However, this author [18n] joins others [19n] [20n] in expressing some criticisms of this kind of evaluation test:

- Students can just "hit" the correct answer without having real knowledge.
- Usually, students do not receive specific feedback, but rather global result scores.
- The examiner might not be an expert in the subject of the examination, and can decide to take questions from the test without any expert criteria.
- The SAT, the most famous multiple-choice test, explains only 20% of the performance at university.

These criticisms are related to the fact that a constructed response is supposed to require more complex skills from the student, and therefore allows the student to perform a more elaborate learning activity.

However, in [16n] and [17n] the authors show that there is an equivalence between constructed responses and multiple-choice responses. The justification is that, for these authors, one way to answer a multiple-choice question is that the student first builds a response and then verifies it against the possible alternative responses. In [21n] the authors argue that in a multiple-choice test students should not only mark the alternative they consider correct but also justify it.

In Chile there is a single admission test for almost all universities, whether private or public, called PSU (University Selection Test). An important part of it consists of reading comprehension, measured by multiple-choice questions. This evaluation is summative, that is, it is meant to measure what the students know. Based on [23n], we can state that it is possible to convert an activity with summative evaluation into a formative one if the evaluation is used as feedback for the student to reflect on and reformulate their original answers. This learning technique is also applied in other contexts, like sports [24n].

Given the usability problems that we had with the first version of this system (described in [10n]), in this work we report a complete study of the usability and the perceived usefulness that the students found in the system.

To that end, we developed a computer-supported learning activity in which students have to read texts taken from past versions of the PSU. This is an important aspect of the learning activity, since the PSU is designed so that all problems are pedagogically validated. This allows us to use these problems knowing that they are well designed, and to concentrate on other aspects of the learning design.

The research question addressed by this work is whether it is possible to develop an application which supports reading comprehension learning and has good usability. In other words, we would like to achieve the same pedagogical goals pursued in [10n], but with an application which can be used successfully in the context of a real class with a relatively large number of students (50).

The rest of the paper is organized as follows: Section 2 reviews the literature on usability attributes for mobile applications in learning environments and the metrics used to measure them; Section 3 describes the tool; we then report the results of the usability and pedagogical usability evaluation, and finally present the conclusions.


2. Usability Attributes for Mobile Applications in learning environments

In the field of education assisted by applications running on mobile devices, usability studies are typically applied when the applications are used for individual learning, collaborative learning, information exchange, or communication support among students (Kukulska-Hulme, 2007). During the early 2000s, mobile devices and wireless networks posed a number of significant challenges for examining the usability of mobile applications, including the mobile context, multimodality, connectivity, small screen size, different display resolutions, limited processing capability and power, and restrictive data entry methods; therefore, usability evaluations were primarily oriented to these aspects (Zhang & Adipat, 2005). Nevertheless, nowadays the aspects mentioned before have improved considerably regarding connectivity speed, screen resolution, processing power and battery capacity. Also, the human-computer interfaces of mobile devices as well as their interaction paradigms have been standardized and are thus easier to understand and use by more people. In other words, mobile devices have reached a maturity level which allows us to evaluate the usability of their software almost without considering the hardware limitations (Harrison, Flood, & Duce, 2013).

A new generation of location-aware mobile devices offers further possibilities for educational services and educational media matched to the learner's context and interests (Sharples, Taylor, & Vavoula, 2010; Zurita & Nussbaum, 2004). These possibilities can be measured through usability tests according to the purpose of the application that has been built, and on the basis of metrics.

Usability attributes are the features that are used to measure the quality of applications. Based on the ISO 9241 standard, human–computer interaction handbooks, and existing usability studies on mobile applications, the following generic usability attributes can be distinguished:

a) Learnability focuses on how easily users can finish a task the first time they use an application and how quickly users can improve their performance levels (i.e., ease of use).
b) Efficiency is defined as how fast users can accomplish a task while using an application.
c) Memorability refers to the level of ease with which users can recall how to use an application after discontinuing its use for some time.
d) User satisfaction reflects the attitude of users toward using a mobile application.
e) Effectiveness is defined as the completeness and accuracy with which users achieve certain goals.
f) Simplicity is the degree of comfort with which users find a way to accomplish tasks.
g) Comprehensibility, sometimes used interchangeably with the term readability, measures how easily users can understand the content presented on mobile devices.
h) Learning performance measures the learning effectiveness of users in mobile education (using mobile applications to facilitate learning or communication with other learners or instructors).

2.1. Metrics to measure Usability Attributes

ISO/IEC 9126-1 defines usability as “the capability of the software product to be understood, learned, used and be attractive to the user, when used under specified conditions”. This definition is primarily concerned with a software product; however, it can be applied to mobile learning software, taking into consideration features specific to mobile phones and eLearning aspects (Ivanc, Vasiu, & Onita, 2012).

There are standardized questionnaires that have been specifically designed to assess participants’ perceived usability and satisfaction with products and systems. The benefits of using standardized questionnaires are: a) Quantification: standardized measurements allow practitioners to report results in finer detail than they could by using only personal judgment; b) Scientific generalization: standardization is key to generalizing a finding from a sample to the greater population; c) Communication: it is easier for researchers to communicate findings when referring to standardized metrics; d) Quick comparisons: by using standardized questionnaires, it is easy to compare different design iterations throughout the development process.

The ISO/IEC 9126-4 metrics recommend that usability measures should include: a) Effectiveness, the accuracy and completeness with which users achieve specified goals; b) Efficiency, the resources expended in relation to the accuracy and completeness with which users achieve goals; and c) Satisfaction, the comfort and acceptability of use.
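As an illustration of how such measures are often operationalized in practice (a minimal sketch of ours, with assumed function names and simplified formulas; these are not taken verbatim from the standard, nor are they the measures used in this study), effectiveness can be computed as a task completion rate and efficiency as a time-based ratio:

```python
# Sketch of common operationalizations of effectiveness and efficiency.
# Function names, signatures and formulas are our own simplification.

def effectiveness(tasks_completed: int, tasks_attempted: int) -> float:
    """Completion rate: share of the specified goals that users achieved."""
    return tasks_completed / tasks_attempted

def time_based_efficiency(tasks_completed: int, total_time_seconds: float) -> float:
    """Goals achieved per minute of user time (resource expended = time)."""
    return tasks_completed / (total_time_seconds / 60)

print(effectiveness(9, 10))              # 0.9  -> 90% of tasks completed
print(time_based_efficiency(9, 1200.0))  # 0.45 -> completed tasks per minute
```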

However, how these should actually be measured is very often left to the discretion of the evaluator. In our case, we consider that the satisfaction metric is the most important one to measure at the current stage of evaluation of the prototype, together with learning performance.

User satisfaction is measured through standardized satisfaction questionnaires which can be administered after the usability test session, in order to measure participants’ impression of the overall ease of use of the system being tested. There are standardized questionnaires with different numbers of questions, for example:

a) System Usability Scale (SUS) - 10 questions (Brooke, 1996);
b) Post-Study System Usability Questionnaire (PSSUQ) - 16 questions (Lewis, 2002);
c) Questionnaire for User Interaction Satisfaction (QUIS) - 24 questions (Chin, Diehl, & Norman, 1988; Harper & Norman, 1993);
d) Software Usability Measurement Inventory (SUMI) - 50 questions (Kirakowski, 1994).

Sauro (2011) recommends using SUS to measure user satisfaction with software, hardware and mobile devices, while the SUPR-Q should be used for measuring test-level satisfaction with websites. SUS is also favored because it has been found to give very accurate results. Moreover, it consists of a very easy scale that is simple to administer to participants, thus making it ideal for use with small sample sizes. According to (Brooke, 1996), SUS is perhaps the most popular standardized usability questionnaire, accounting for approximately 43% of unpublished usability studies. It is a 10-item questionnaire on a 1-to-5 Likert scale designed to measure users’ perceived usability of a product or system, where 1 means strongly disagree and 5 fully agree. The SUS is highly reliable (.91) and is entirely free. It needs little time to answer, is simple to score and is easily comparable with other instruments. It is recommended to apply it once users have worked with the application (Sauro, 2011).

To score the SUS, subtract 1 from the scale position on all odd-numbered items, and subtract the scale position from 5 on all even-numbered items; then multiply the sum of all item contributions by 2.5 to get an overall SUS score that ranges from 0 to 100. In this way the individual item scores are transformed and summarized into a final grade on a scale of 0 to 100.
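The following minimal sketch illustrates this scoring rule (it is our own illustration, not the analysis code used in the study; the function name and the example answers are hypothetical):

```python
# Sketch of the SUS scoring rule described above.
# `responses` holds the 10 answers of one participant, ordered as items 1..10,
# each on the 1-5 Likert scale.

def sus_score(responses):
    """Return the overall SUS score (0-100) for one participant."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 answers, each between 1 and 5")
    total = 0
    for item, answer in enumerate(responses, start=1):
        if item % 2 == 1:
            total += answer - 1   # odd (positively worded) items
        else:
            total += 5 - answer   # even (negatively worded) items
    return total * 2.5            # 10 items * max 4 points each * 2.5 = 100

# Hypothetical participant giving fairly positive answers:
print(sus_score([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```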

The SUS questionnaire:

1. I think that I would like to use this system frequently

2. I found the system unnecessarily complex

3. I thought the system was easy to use

4. I think that I would need the support of a technical person to be able to use this system


5. I found the various functions in this system were well integrated

6. I thought there was too much inconsistency in this system

7. I would imagine that most people would learn to use this system very quickly

8. I found the system very cumbersome to use

9. I felt very confident using the system

10. I needed to learn a lot of things before I could get going with this system

2.2. Usability in the context of teaching and learning

Usability has to be considered in a different manner when it is being evaluated in the context of teaching and learning, and the concept of pedagogical usability can be helpful when considering the close relationship between usability and pedagogical design (Ivanc et al., 2012).

Usability is regarded as a determining factor in the success of an educational platform, and it should be given special attention also when the platform is accessed on mobile devices that were not designed for educational purposes. Based on the fact that the device’s and the UI’s usability issues influence the user experience in different ways (Hussain & Kutar, 2012), and on the fact that mobile learning is usually considered an extension of eLearning with similar characteristics, (Ivanc et al., 2012) consider that the usability of a mobile learning platform has to be regarded from four different perspectives, which are presented in Table 1 and described in the following paragraphs.

Pedagogical usability is the analysis of the way an educational application (tools, content, tasks and interface) supports students in their learning process within various learning contexts according to learning objectives. It should be especially concerned with educational aspects such as the learning process, purposes of learning, user’s needs, the learning experience, learning content and learning outcomes (Hussain & Kutar, 2012).

According to (Ivanc et al., 2012), when developing or improving a mobile learning application it is not sufficient to ensure that people can use it; they must want to use it (Khomokhoana, 2011).

Some relevant pedagogical usability metrics, together with a description of what each of them measures, are provided in Table 2. Our research on pedagogical usability testing needs to go further and establish practical methods for evaluating the mobile learning platform from the pedagogical point of view.

Usability of the Educational Content. There could be usability issues regarding the content delivered by the mLearning application or web user interface. The format of electronic learning content is not always compatible with most mobile browsers, scripts and plug-ins are usually not supported, and the ability to display information in various multimedia formats is limited. Integrated graphics and animations should be provided in compatible mobile formats, as should audio and video files. Placing large images, videos, PDF or MS Office files, and similar resources directly on the front of course pages should be avoided; instead, it is recommended to link to images and videos via activities, resources and pages. These and similar multimedia content compatibility issues must be considered when structuring the mobile learning content.

As a result of the usability testing of the content/courses provided by the mobile web interface we intend to elaborate a guideline for designing, developing and especially organizing content in a Moodle course for best use on mobile devices.


Usability of the Mobile Web Interface. In this case, the mobile web interface strictly refers to the elements and structure of the mobile-friendly interface. The usability testing can be similar to that of any other mobile web page or application and, unlike the other perspectives, it can be evaluated using heuristic methods. The goals of the usability testing in this case are: discovering navigation issues, improving the positioning and use of the menu elements, discovering bugs, verifying the compatibility and interoperability of the UI elements on different devices, and assuring efficiency, learnability and satisfaction in using the system and completing the tasks.

2.2.1. Metrics to measure Pedagogical Usability, Usability of the Educational Content and Usability of the Mobile Web Interface

In our case we use three perspectives: Pedagogical Usability, Usability of the Educational Content and Usability of the Mobile Web Interface.

To this end, we propose a set of questions for the Pedagogical Usability metrics of Table 2, and other questions for the Usability of the Educational Content and the Usability of the Mobile Web Interface, all of them formulated according to the application to be tested and its purposes:

Pedagogical usability metrics and their questions

Instruction
11. Reviewing the answers at the end of the activity (individual, anonymous and group) allows me to become aware of my learning process.
12. The instructions for using the application are clear, and therefore the teacher's intervention is not necessary in order to carry out the learning activity.

Learning content relevance
13. The text to be read presented in the application makes it easier for me to answer the multiple-choice questions correctly.
14. The text to be read and the questions to be answered are organized in a clear and coherent way, so that they support me in the learning activity.

Learning content structure
15. The different steps to follow in the application allow me to answer the multiple-choice questions effectively.
16. Being able to review my answers again, and having the possibility of changing them together with their comment and/or justification, facilitates the achievement of my learning.

Tasks
17. Being able to respond once more to a wrong answer given jointly supports my learning.


18. Generating comments and/or justifications for my answers to the questions allows me to be sure of the answers I am selecting.

Learner variables

Collaborative learning
19. Seeing anonymously the answers and the comments/justifications of my classmates makes it easier for me to be sure of my answer, or to correct it.
20. Exchanging points of view, opinions and comments face-to-face with other classmates supports my learning.

Ease of use
21. The application clearly supports me in what I have to do at each stage of the learning activity.
22. Being able to review my answers again, and having the possibility of changing them together with their comment and/or justification, facilitates the achievement of my learning.

Learner control
26. The questions to be answered and the comments to be made have an adequate format.
27. Navigation between the different stages of the application is simple.
28. Indicating the degree of certainty is useful for expressing my level of confidence in the correctness of an answer.
29. Indicating the degree of certainty is directly related to the comment associated with my answer.

Motivation
23. Using the application motivates me to carry out the learning activities.
31. Mention, in detail, the functionality or functionalities of the application that you liked the most.

Usability of the educational content

The format of electronic learning content
24. The learning activity supported by the application has an adequate design.

Documents, texts
25. The texts presented have an adequate format.
26. The questions to be answered and the comments to be made have an adequate format.


Compatibility issues of the content
30. In as much detail as possible, mention what changes you would make to the application.

3. Tool Description

During the experimental evaluation of the RedCoApp application [1], which provides computer support for a collaborative activity fostering reading comprehension through the use of key words, the following usability problems were found.

After reading one or two texts individually, the students had to mark, for each of these texts, the three most relevant key words associated with answering the triggering question or purpose of the reading comprehension task of the educational activity.

To mark a key word on the iPad, RedCoApp relies on the standard text-selection action of the iOS mobile operating system: the user taps the word to be selected on the touch screen and then adjusts two handles that mark the beginning (a vertical line with a small knob at the top) and the end (a vertical line with a small knob at the bottom) of the selection. These handles are superimposed on the tapped word, which also appears highlighted in a color, with the options "copy, look up, share ..." shown above it.

This process is not difficult, except when the wrong word has been marked by mistake. In that case, in order to correct the error, it is necessary to start the process again by tapping the correct word on the screen, after letting at least about 2 seconds pass. This is where the first usability problem appears: "1. after selecting an incorrect word, the student tries to mark another word that is the correct one, but does so almost instantly, so the incorrect word marked before remains selected while the student insistently presses the other word without managing to correct the error".

The second usability problem comes from handling the beginning and end of the selection, which implies dragging the lines with the knobs at the top and bottom that initially marked the selected word, so that other contiguous words can be added to the left (beginning of the selection) or to the right (end of the selection). This process can generate the second usability problem: "2. if the user does not have enough motor skill to grab the line with the knob at the top or bottom in order to extend the selection to the left or to the right, the selection of the initial word is modified and the process leaves words or parts of words selected that were not intended".

The third usability problem arises when "3. trying to extend the selection to the left (beginning of the selection), if in addition to moving the line to the left the user moves it slightly below the selected word, only the last character of the initially selected word remains marked".

The above usability problems occur with users who are not used to iOS and who require prior training to overcome the difficulties mentioned. Of the 45 students who used RedCoApp, aged between 16 and 18, all have mobile phones, mostly with Android, and only 25% had iPhones with iOS.

This introduced a significant delay in the activity, since the students had to deal with the three usability problems mentioned above. It must also be considered that the students repeat the process of marking three key words for each text, also in the anonymous collaborative re-elaboration stage (when they saw the results of others) and in the non-anonymous collaborative re-elaboration stage.

Use of multiple choice

RESULTS: USABILITY / UTILITY PERCEPTION – PEDAGOGICAL (Gustavo)

Test used: SUS (standardized questionnaire of satisfaction with the use of the application)

Number of participants who answered the test: 78


Average score on the SUS test for the 78 participants: 80.641 (the maximum obtainable score is 100)

Frequency distribution of the scores obtained on the SUS test:

Score   Frequency
50      2
55      2
60      1
65      5
70      11
75      7
80      10
85      12
90      9
95      9
100     10
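As a small illustration of how such a summary can be produced (a sketch only; the scores listed below are made-up placeholders rather than the study's data, and the binning to multiples of 5 is our assumption about how the table above was built), per-participant SUS scores can be averaged and tabulated as follows:

```python
from collections import Counter

# Illustrative only: `scores` would hold the 78 per-participant SUS scores
# computed with a function like sus_score above; these values are placeholders.
scores = [85.0, 72.5, 90.0, 100.0]

mean_score = sum(scores) / len(scores)

# Assumed binning: round each score to the nearest multiple of 5 (ties upward),
# since raw SUS scores are multiples of 2.5 but the table reports multiples of 5.
bins = Counter(5 * int((s + 2.5) // 5) for s in scores)

print(f"Average SUS score: {mean_score:.3f}")
for grade in sorted(bins):
    print(grade, bins[grade])
```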

Frequency distribution of the responses obtained for pedagogical usability, according to the 1-to-5 Likert scale, where 1 is Strongly Disagree, 2 Disagree, 3 Neither Disagree nor Agree, 4 Agree, and 5 Strongly Agree.


[Figure: horizontal bar chart of the response frequency distribution per questionnaire item (even-numbered items 2 to 30 of the SUS and pedagogical/content usability questionnaires); x-axis: number of responses (0-70); legend: Likert scale 1 to 5.]

CONCLUSIONS (Nelson)


Bibliography

[1] Gould, D. J., Terrell, M. A., & Fleming, J. (2008). A usability study of users' perceptions toward a multimedia computer‐assisted learning tool for neuroanatomy. Anatomical sciences education, 1(4), 175-183.

[2] Squires, D., & Preece, J. (1996). Usability and learning: evaluating the potential of educational software. Computers & Education, 27(1), 15-22.

[3] Koohang, A., & Du Plessis, J. (2004). Architecting usability properties in the e-learning instructional design process. International Journal on ELearning, 3(3), 38.

[4] Koohang, A. (2004). A study of users’ perceptions toward e-learning courseware usability. International Journal on e-learning, 3(2), 10-17.

[5] Van Staden, S., & Zimmerman, L. (2017). Evidence from the Progress in International Reading Literacy Study (PIRLS) and How Teachers and Their Practice Can Benefit. In Monitoring the Quality of Education in Schools (pp. 123-138). SensePublishers.

[6] Valdés, M. (2013). ¿Leen en forma voluntaria y recreativa los niños que logran un buen nivel de Comprensión Lectora? Ocnos: Revista de estudios sobre lectura, (10).

[7] Andrades Cornejo, J. (2015). Variables de gestión y rendimiento académico: un análisis de regresión lineal de los resultados SIMCE 2013 de matemática de cuarto año básico.

[8] Mullis, I. V., Martin, M. O., Foy, P., & Drucker, K. T. (2012). PIRLS 2011 International Results in Reading. International Association for the Evaluation of Educational Achievement. Herengracht 487, Amsterdam, 1017 BT, The Netherlands.

[9] Slavin, R. E., Cheung, A., Groff, C., & Lake, C. (2008). Effective reading programs for middle and high schools: A best‐evidence synthesis. Reading Research Quarterly, 43(3), 290-322.

[10] Wolters, C. A., Barnes, M. A., Kulesz, P. A., York, M., & Francis, D. J. (2017). Examining a motivational treatment and its impact on adolescents' reading comprehension and fluency. The Journal of Educational Research, 110(1), 98-109.

[11] Cheung, A. C., & Slavin, R. E. (2013). Effects of educational technology applications on reading outcomes for struggling readers: A best‐evidence synthesis. Reading Research Quarterly, 48(3), 277-299.

[10n] Gustavo Zurita, Nelson Baloian, Oscar Jerez, Sergio Peñafiel. Practice of Skills for Reading Comprehension in Large Classrooms by Using a Mobile Collaborative Support and Microblogging. In Proc. 23rd Collaboration Researchers' International Workshop on Groupware (CRIWG), pp. 81-94, Aug 2017. Saskatoon, Canada. Springer-Verlag, Berlin/Heidelberg, Germany. Lecture Notes in Computer Science vol. 10391

[18n] Gunderman, R. B., & Ladowski, J. M. (2013). Inherent Limitations of Multiple-Choice Testing. Academic Radiology, 20(10), 1319–1321.

[19n] DiBattista, D., Sinnige-Egger, J.-A., & Fortuna, G. (2014). The “None of the Above” Option in Multiple-Choice Testing: An Experimental Study. The Journal of Experimental Education, 82(2), 168–183.

[20n] Park, J. (2010). Constructive multiple-choice testing system. British Journal of Educational Technology, 41(6), 1054–1064.


[21n] Marsh, E. J., Lozito, J. P., Umanath, S., Bjork, E. L., & Bjork, R. A. (2012). Using verification feedback to correct errors made on a multiple-choice test. Memory, 20(6), 645–653.

[22n] Laal, M., & Ghodsi, S. M. (2012). Benefits of collaborative learning. Procedia-Social and Behavioral Sciences, 31, 486-490.

[23n] Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the classroom. Theory into practice, 55(2), 153-159.

[24n] Mouratidis, A., Lens, W., & Vansteenkiste, M. (2010). How you provide corrective feedback makes a difference: the motivating role of communicating in an autonomy-supporting way. Journal of Sport & Exercise Psychology, 32(5), 619–637.

Brooke, J. (1996). SUS-A quick and dirty usability scale. Usability evaluation in industry, 189(194), 4-7.

Chin, J. P., Diehl, V. A., & Norman, K. L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. Paper presented at the Proceedings of the SIGCHI conference on Human factors in computing systems.

Harper, B. D., & Norman, K. L. (1993). Improving user satisfaction: The questionnaire for user interaction satisfaction version 5.5. Paper presented at the Proceedings of the 1st Annual Mid-Atlantic Human Factors Conference.

Harrison, R., Flood, D., & Duce, D. (2013). Usability of mobile applications: literature review and rationale for a new usability model. Journal of Interaction Science, 1(1), 1.

Hussain, A., & Kutar, M. (2012). Apps vs devices: Can the usability of mobile apps be decoupled from the device? IJCSI International Journal of Computer Science, 9(2012), 3.

Ivanc, D., Vasiu, R., & Onita, M. (2012). Usability evaluation of a LMS mobile web interface. Paper presented at the International Conference on Information and Software Technologies.

Khomokhoana, P. J. (2011). Using mobile learning applications to encourage active classroom participation: Technical and pedagogical considerations. University of the Free State.

Kirakowski, J. (1994). Background notes on the SUMI questionnaire. Human Factors Research Group, University College Cork, Ireland.

Kukulska-Hulme, A. (2007). Mobile usability in educational contexts: what have we learnt? The International Review of Research in Open and Distributed Learning, 8(2).

Lewis, J. R. (2002). Psychometric evaluation of the PSSUQ using data from five years of usability studies. International journal of human-computer interaction, 14(3-4), 463-488.

Sauro, J. (2011). Measuring Usability with the System Usability Scale (SUS). https://measuringu.com/sus/.

Sharples, M., Taylor, J., & Vavoula, G. (2010). A theory of learning for the mobile age. In Medienbildung in neuen Kulturräumen (pp. 87-99). Springer.

Zhang, D., & Adipat, B. (2005). Challenges, methodologies, and issues in the usability testing of mobile applications. International journal of human-computer interaction, 18(3), 293-308.


Zurita, G., & Nussbaum, M. (2004). Computer supported collaborative learning using wirelessly interconnected handheld computers. Computers & education, 42(3), 289-314.