Evaluating Language Learning Spaces: Developing Formative Evaluation Procedures to Enable Growth and Innovation

Katherine Thornton, Otemon Gakuin University, Japan

Thornton, K. (2016). Evaluating language learning spaces: Developing formative evaluation procedures to enable growth and innovation. Studies in Self-Access Learning Journal, 7(4), 394-397. https://doi.org/10.37237/070407


Despite the frequent presence of language learning spaces (LLSs) at institutions across the world since the 1990s, there is still no consensus on how to evaluate such centres. With the exception of Morrison (2005), there are few well-documented approaches or frameworks for those wishing to conduct an evaluation to draw on. The very nature of a self-access centre, with its fluid population of users pursuing a diverse variety of learning goals, makes the task considerably more challenging than a course evaluation, which usually has clearly defined objectives and a fixed group of participants. This problem has been recognised since self-access first emerged as a field in its own right (Riley, 1996), yet recent surveys reveal a similar picture today (Gardner & Miller, 2015; Reinders & Lazaro, 2008).

Setting the Focus for Evaluation

Before attempting an evaluation, it is first necessary to determine what is to be evaluated. LLSs are established for many different reasons and, in many contexts, not all stakeholders share the same vision of the centre or of what constitutes a successful programme.

While front-line staff who work in the LLS may emphasise qualitative aspects such as the development of autonomous learning skills, administrators may be more concerned with quantitative measures such as the number of users, or users’ language proficiency gains as determined by standardised tests. Some institutions, in contexts where LLSs are less common, may have established a centre in part to attract prospective students to that school over others, and therefore measure its success by its ability to raise admissions numbers.

There may also be pressure on a centre to show ever-increasing usage, with little understanding of what this growth in user numbers actually means. Too much emphasis on the “headcount” aspect of an LLS evaluation can create pressure to fill the space with users by any means necessary, at the expense of initiatives to improve learning gains and the effectiveness of the services offered. Similarly, an evaluation which deals only with qualitative measures can, unless it is presented in a compelling way, fail to have the impact necessary to convince funding bodies and management teams to support the LLS sufficiently, potentially resulting in the loss of specialised staff, downsizing, and in some cases even closure of the physical space.

A good evaluation must take into account the needs of different stakeholders, and generate data which can inform further decision-making to enhance the effectiveness and efficiency of the services offered through the LLS. While there is still a need for more creative tools which facilitate the evaluation process and provide truly useful insights into the workings of language learning spaces, the papers in this final column instalment put forward some innovative evaluation ideas.

Evaluation in Three Contexts

Daya Datwani-Choy from the University of Hong Kong (HKU) describes the findings from a very detailed case study which aimed to evaluate the self-access centre at HKU using Morrison’s (2003) SAC Mapping and Evaluation Framework, the most comprehensive model for SAC evaluation yet produced. In this paper, based on her doctoral research, Datwani-Choy identifies the major findings of the case study and the changes that have since been implemented, especially in terms of staffing and training, to improve the effectiveness and cost-effectiveness of the human and non-human support services. Her research has also led to the development of an adapted and simplified version of the SAC Mapping for HKU.

While Datwani-Choy’s paper describes a very comprehensive SAC evaluation project, the second case study in this instalment is on a much smaller scale. In my own contribution from Otemon Gakuin University, Japan, my colleague Nao Noguchi and I describe how we developed an enhanced version of the head count, a common technique for building a picture of LLS usage. The enhanced tool provides useful data about how students are using the space, while also supplying stakeholders in the university administration with the information on user numbers that they have requested. We explain how we have combined the data from this tool with the results of a qualitative survey administered to users to make informed decisions about the services offered in our LLS.

The final paper in this instalment, indeed in this collection, comes from Jo Mynard at the Self-Access Learning Centre (SALC) at Kanda University of International Studies (KUIS), one of the larger LLSs in Japan, offering advising and various other services to its student body. Mynard distinguishes between retrospective approaches to evaluation, reflecting on the procedures currently in place, and more future-looking, predictive approaches, which she suggests could facilitate further innovation in the field. She describes an approach grounded in a detailed ten-year strategic plan which sets out the proposed future direction of the SALC. In order to make the evaluation of multiple aspects of the SALC as efficient as possible, Mynard suggests a carefully scheduled timeline of ongoing research projects, designed to investigate different services at regular intervals over the ten years of the strategic plan. Finally, Mynard suggests that future evaluations could be predictive as well as retrospective, taking advantage of the possibilities presented by big data and learning analytics, for example by building a detailed profile of the student body which could be used to make more informed decisions.

Reflecting on the Language Learning Spaces Column

When I was first planning this column, evaluation seemed a fitting way to finish the series, as it is a process typically conducted after an initiative has been implemented. While I have always known that this was too simplistic a characterisation of good evaluation practice, reading, editing and indeed writing about this topic has made it ever clearer to me that evaluation should be not a summative end point, but a necessary step in the facilitation of further growth and innovation.

I hope that this collection as a whole has served to highlight the many innovative practices being implemented across the world of self-access language learning, and has provided readers with new perspectives on their own practices. I would like to thank all the authors for their contributions, especially their detailed and honest reflections on successes and failures, which can inform the decision-making of others and save us from repeating one another’s mistakes. I would also like to show my appreciation to the many reviewers who contributed precious time to offer insightful and constructive advice to the authors on their manuscripts, and who made my job as editor so much easier. Finally, I would like to thank the SiSAL Editor, Jo Mynard, for her support of this project at all stages, right up to her own contribution in this issue. I am extremely grateful for the forum that SiSAL Journal provides for us to share our practices in such a supportive environment.

Column Reviewers

Thank you to everyone who gave precious time to review the manuscripts for this column:

Marina del Carmen Chávez Sánchez, Universidad Nacional Autónoma de México
Phil Cozens, (formerly) University of Macau
Kerstin Dofs, Christchurch Polytechnic Institute of Technology, New Zealand
Carol J. Everhard, (formerly) Aristotle University of Thessaloniki, Greece
Chris Fitzgerald, University of Limerick, Ireland
Caleb Foale, IES Abroad, Japan
David Gardner, University of Hong Kong
Moira Hobbs, Unitec Institute of Technology, New Zealand
Jane Elisabeth Holmes, Universidad del Caribe, Mexico
Shu Hua (Vivian) Kao, Chihlee University of Technology, Taiwan
Diane Malcolm, Canada
Ashley R. Moore, Osaka Institute of Technology, Japan
Nick Moore, Languages International Ltd, New Zealand
Garold Murray, Okayama University, Japan
Jo Mynard, Kanda University of International Studies, Japan
Satomi Shibata, Shizuoka University, Japan
Joe Sykes, University of Sheffield / Akita International University, Japan
Maria Giovanna Tassinari, Freie Universität Berlin, Germany

References

Gardner, D., & Miller, L. (2015). Managing self-access language learning. Hong Kong: City University of Hong Kong Press.

Morrison, B. (2003). The development of a framework for the evaluation of a self-access language learning centre (Doctoral dissertation, The Hong Kong Polytechnic University).

Morrison, B. (2005). Evaluating learning gain in a self-access language learning centre. Language Teaching Research, 9(3), 267-293. https://doi.org/10.1191/1362168805lr167oa

Reinders, H., & Lazaro, N. (2008). The assessment of self-access language learning: Practical challenges. Language Learning Journal, 36(1), 55-64. https://doi.org/10.1080/09571730801988439

Riley, P. (1996). The blind man and the bubble: Researching self-access. In R. Pemberton, E. S. L. Li, W. W. F. Or, & H. D. Pierson (Eds.), Taking control: Autonomy in language learning (pp. 251-264). Hong Kong: Hong Kong University Press.