A Framework for the Evaluation of a Self-Access Language Learning Centre

Bruce Morrison, Hong Kong Polytechnic University, Hong Kong

*Morrison, B. (2011). A framework for the evaluation of a self-access language learning centre. Studies in Self-Access Learning Journal, 2(4), 241-256.


Abstract

If self-access language learning centres (SACs) are to be accepted as efficient and effective alternatives or complements to more established modes of language learning and teaching, the absence of a research-based framework specifically developed for their evaluation is a serious concern. In this paper, a framework for the evaluation of a SAC is proposed which aims to recognize both the elements common to most SACs and the diversity inherent in a centre that is predicated upon learner individuality. The study upon which this paper reports employed a grounded theory methodology, examining data collected from participants representing various SAC stakeholder roles and subsequently proposing an evaluation framework consisting of four elements: context, key questions, decisions and actions. These elements, although presented individually, are interdependent and give the framework a multidimensional aspect that allows the evaluation team to plan the various aspects of the evaluation carefully while at the same time ensuring that these are brought together coherently in terms of aims, methodology and reporting. Central to the framework is the recognition that the evaluation should be theory based and thus that it should incorporate a means of clearly representing the particular SAC theory (or mapping) upon which the evaluation is to be focused. The paper concludes with a brief discussion of the potential theoretical and practical implications of the use of such a framework.

Note from the author 

I cannot believe it is now some six years since I wrote this article. When Jo Mynard asked me if I would consider its inclusion in this edition of SiSAL Journal, my first thought was “why?” – I couldn’t see how it could still have relevance after six years in a field which has moved on apace. However, I was flattered and so went back, re-read the article and began to consider its relevance today.

Sure, much has changed in the world of self-access language learning, not least the technological context which has impacted on so many aspects of pedagogy and learning. Whether the SAC today is the same place as it was in 2005 is a question for another day, but it is clear that the fields of independent learning and learner autonomy are more fully developed and more extensively researched. In these fields, new understandings have been forged in many areas including supporting our students’ development as autonomous learners through encouraging critical engagement with both human and technological interfaces, assessing the degree of autonomy our learners achieve, developing and implementing eportfolios as a way of allowing our learners to track and take greater control of their learning, and how learner autonomy affects and is affected by teacher autonomy. 

What I feel might be of interest to readers of this issue of SiSAL Journal is not so much the SAC evaluation model (the product of my research), but rather the underlying thinking and process by which I arrived at my conceptualisation of what a SAC is and how it might be evaluated – all of which were very much shaped by what were then very much newly-developing fields.

Bruce Morrison

Evaluation has played a central role in education for more than 100 years (Madaus, Stufflebeam, & Scriven, 1983) and its importance in English language learning and teaching has been widely recognized (Strevens, 1977; Stern, 1983; Beretta, 1992; Alderson & Beretta, 1992; Lynch, 1996, 2003). In contrast, however, little attention has been paid to the evaluation of self-access language learning (SALL) and self-access language learning centres (SACs). SAC evaluations have tended to be undertaken as administrative necessities, summative in nature, narrowly focused on accountability, and often mainly expressed in terms of statistics of usage and cost-effectiveness.

This is slowly changing. As the area of SALL displays greater educational maturity and there is increased recognition of the role SACs can play in the English language learning and teaching processes, both administrators and educators are demanding evidence of the efficacy of a self-access approach. This is particularly true in Hong Kong where a considerable amount of public funding has been provided for the establishment and running of SACs in order to enhance language learning and teaching provision.

An End-of-Year SAC Vignette

It is April and the end of the second semester of the university year is approaching. The head of the English Language Centre (ELC) drops into the office of Joan, the SAC coordinator, to remind her that the annual report on the work of the ELC required by the funding body will shortly be due. This means that Joan will need to prepare her part of the report focusing on the work of the SAC. Last year the funders had stressed the need for quantified, objective evidence of how effective the SAC and its various initiatives were in enhancing students’ English proficiency. This year, they have requested a particular focus on qualitative data such as learners’ perceptions of efficiency.

Joan needs to start the annual process of going back to various data sources, including usage figures, learner questionnaire results and diary comments, to mine them for data that are particularly relevant to this year’s evaluation request. She then needs to collate the data and ensure that they are presented in such a way that they meet the expectations of the funding stakeholders. She will also use the data to provide herself and the SAC staff with indications of aspects of the centre’s operation that warrant development or change.

After the ELC head has left, Joan looks around the SAC and reflects on both the very different uses the learners make of the centre and the SAC’s various stakeholders’ very different perceptions of what the centre actually is. She then considers how she might more truly evaluate a centre with such diversity in such a way that the expectations and needs of all the various stakeholders might be met.

In her reflections, Joan considers what she sees as the role of a SAC evaluation. She decides it should encompass both developmental and judgmental functions. The developmental aspects derive particularly from the motivation of herself, the SAC teachers and the SAC learners to improve the operation of the SAC so that it better provides an effective and efficient service to its users. The judgmental aspects are, however, more likely to derive from the demands of stakeholders such as the SAC funders, who perceive such judgments as necessary to inform decisions concerning future funding. Joan also understands, however, that such a binary conception of an evaluation’s functions is simplistic and that the two functions are neither entirely discrete nor entirely independent of each other. Her mind then drifts back to the annual report.

From examination of the annual reports on language enhancement initiatives written by each of the Hong Kong universities, it is clear that evaluation of Hong Kong tertiary SACs has been primarily, and fairly narrowly, focused on the summative function of justifying continued funding: There are few references to developmental issues. It is with such thoughts in mind, and my own experience of the practicalities of the Hong Kong SAC context, that the study I introduce briefly below began.

The Need for an Evaluation Framework

There has been little evaluation of SACs, and even less of it has been conducted in a systematic manner; indeed, “evaluation has taken a backseat in the development of SALL” (Gardner, 2002, p. 48). There are two main reasons why an informed framework for the evaluation of SACs is needed. Firstly, evaluation as a central developmental element in language learning programs is recognized by researchers (e.g. Lynch, 1996), and its role is implicit in much of the self-access literature which describes SACs and then proposes future developments based on elements of that description (Ma, 1994; Lee, 1996; Gardner & Miller, 1997). Secondly, from a more summative viewpoint, funding bodies commonly demand demonstrable accountability from funded initiatives in terms of something more than simple cost-effective, budget-focused accounting.

If it is to be argued that SACs provide an effective and efficient alternative or complement to the more traditionally accepted modes of language learning and teaching, it is of serious concern that there is no research-based framework specifically developed for their evaluation. SACs differ in many respects, both pedagogically and administratively, from other educational entities such as schools or language programs and, therefore, frameworks developed for such entities are not necessarily appropriate for the evaluation of SACs.

The Study

In the study (Morrison, 2003), I argue the need for a theory-based evaluation framework based upon a coherent SAC theory which identifies SAC resources and activities, and indicates the causal links between such resources, activities and outcomes (Chen, 2005; Wholey, 1987).

Study aims and objectives

 The study has as its two main objectives the development of a SAC mapping and a SAC evaluation framework. The first is a theory based upon those defining elements identified as constituting a SAC and is described below. The second is an evaluation framework that aims to handle the diversity inherent in a SAC context.

It is clear from my experience that each SAC is unique in terms both of its constituent elements and in the way these elements interact with the individual, independent learner and with each other. An evaluation framework, however, needs to recognize not only learner individuality but also the systemic commonality of SACs in terms of how they operate to support the study of such learners collectively.

Study methodology

Since one of the primary objectives of the study was to develop a theory of SAC operation, I did not initiate the study with any pre-determined hypothesis or theory to prove or disprove. Rather, I collected and analyzed data related to the topic, and from this derived a theory that can explain the data. Grounded theory was the underpinning methodology of the study. Grounded theory employs an inductive approach, “an initial, systematic discovery of the theory from the data” (Glaser & Strauss, 1967, p. 3) which “emerges from the bottom up…from many disparate pieces of collected evidence that are inter-connected” (Bogdan & Biklen, 1992, p. 3).

Data collection

I selected 16 study participants on the basis of their fulfilling the roles of various types of Hong Kong SAC stakeholders. These roles included that of SAC learner, teacher, co-ordinator and support staff, as well as those of researchers in the areas of SALL and SACs.

Data were collected through semi-structured interviews and post-interview e-mail questionnaires. Interview protocols identified various topics of discussion relating to SALL, SACs and language learning more generally. The course of each individual interview, which I wished to be an exploratory conversation between two professionals, was, however, finally determined collaboratively by interviewer and interviewee. As interviewer, I strove to be open to new ideas and interpretations, and to display a “deliberate naivete” (Kvale, 1996, p. 31) through the use of open-ended questions that were intended to be appropriate to the role the participant played within a SAC. These open-ended questions were followed by probing questions to clarify meaning where necessary for further interpretation. After the interviews, I sent follow-up e-mails of varying lengths to a number of the interview participants to request elaboration or clarification upon comments made in the interview.

After the initial stages of data analysis, an email questionnaire was sent to all interview participants to clarify, confirm and enrich the interview data. Unlike the interviews, the questionnaire gave participants an opportunity to consider the issues carefully before answering in their own time.

Data analysis

I analyzed and interpreted the interview data using an iterative process of annotation, coding and checking of the data in a series of increasingly-focused steps (Miles & Huberman, 1994; Flick, 1998; Richards, 2003).

After transcribing the interviews, I revisited the data many times and recorded the coding processes in the form of: memos; a paper-based mind map where data were represented in a non-hierarchical manner to avoid premature grouping of data that might have biased further analysis; structured entries using an electronic coding tool that enabled easy storage and sorting of data; summaries of both the mind map and electronic coding; and a final consolidated summary that compared and synthesized the paper-based and electronic records of analysis.
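By way of illustration only, the sketch below shows one way such structured coding entries might be stored and sorted electronically. It is not the tool used in the study; the record fields (source, extract, codes, memo) are my own assumptions about what an entry in an electronic coding tool might hold.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedSegment:
    """One coded extract from an interview or questionnaire transcript.

    Illustrative only: these fields are assumptions, not the coding scheme
    or the electronic tool actually used in the study.
    """
    source: str                                      # e.g. "interview_07"
    extract: str                                     # verbatim data segment
    codes: List[str] = field(default_factory=list)   # codes attached so far
    memo: str = ""                                   # analytic memo

def segments_with_code(segments: List[CodedSegment], code: str) -> List[CodedSegment]:
    """Sorting/retrieval step: pull out every segment carrying a given code."""
    return [s for s in segments if code in s.codes]

# The iterative nature of the analysis: a segment is revisited and re-coded.
seg = CodedSegment(source="interview_07",
                   extract="I mostly come here to practise listening on my own.")
seg.codes.append("learner_purpose")            # first, open coding pass
seg.codes.append("independent_practice")       # later, more focused pass
seg.memo = "Links the learner's purpose to a view of the SAC as a practice space."
```

Repeated passes over records of this kind correspond to the increasingly focused steps described above.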

How the SAC theory and evaluation framework emerged from the process of analysis is characterized in Figure 1.

Figure 1. Data analysis stages of the study

An Evaluation Framework

In this section, I describe the evaluation framework that derived from the data analysis process outlined above. I firstly present and briefly introduce the complete framework, before continuing to describe the four main elements separately. I finally discuss the way the elements interact as constituent parts of the evaluation as a whole.

The framework

The framework (see Figure 2) comprises four elements. At the top is the context element, which directly affects the key questions element and indirectly affects the other two elements. From left to right, the decisions element relates to decisions that have to be made at various stages of the evaluation process. The four actions boxes in the middle of the figure represent the actual operation of the evaluation. The key questions, which are discussed in order to reach the decisions on the left of the figure, are directly linked to the context in that the three focal questions (why, what and how) are entirely context dependent.

Figure 2. A framework for the evaluation of a SAC

Different types of arrow are used to denote different types of relationship between the elements; these relationships are restated in outline form in the sketch that follows the list:

1. The heavy black arrows signify the very significant influence that the context has upon the three key questions, that the two decision elements have upon the mapping (see The Actions Element section below) and the evaluation plan, and that the first two decision elements have upon the second and third respectively;

2. The two lightly shaded arrows at the top indicate that the context has an indirect effect on the decisions and actions elements, in addition to its direct effect on the key questions;

3. The curved, double-headed arrows indicate the repeated interaction between the key questions that takes place when these are discussed;

4. The simple black vertical arrows between the actions indicate the generally chronological progression from the first to the fourth; and, finally,

5. The call-out lines from the decisions boxes indicate that the decisions result from discussion of the key questions and that the three stages of the ‘Evaluation process’ are component parts of the ‘Evaluation’ action element.
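For readers who prefer an outline to a diagram, the sketch below simply restates the elements of Figure 2 and the relationships listed above as plain data. It adds nothing to the framework itself; the labels are mine and are only a paraphrase of the figure.

```python
# A minimal, illustrative restatement of Figure 2; labels paraphrase the figure.
framework = {
    "context": "everything in the setting that may bear on the evaluation",
    "key questions": ["why", "what", "how"],
    "decisions": ["audience and function",            # from discussing 'why'
                  "foci, boundaries, type of data",   # from discussing 'what'
                  "approach, methodology, team"],     # from discussing 'how'
    "actions": ["SAC mapping", "evaluation plan",
                "evaluation process", "meta-evaluation"],
}

# The arrows, expressed as (from, to, kind of relationship) triples.
relationships = [
    ("context", "key questions", "direct, strong influence"),
    ("context", "decisions", "indirect influence"),
    ("context", "actions", "indirect influence"),
    ("key questions", "key questions", "repeated interaction during discussion"),
    ("key questions", "decisions", "decisions result from the discussion"),
    ("decisions", "SAC mapping / evaluation plan", "strong influence"),
    ("actions", "actions", "generally chronological progression"),
    ("meta-evaluation", "SAC mapping", "may inform a future evaluation cycle"),
]
```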

The key questions and decisions elements

The key questions, why, what and how, are the three questions that I had identified in the program evaluation literature as central both to existing evaluation constructs and to the focusing and planning of an evaluation. As noted in The Framework section above, and as illustrated in Figure 3, the evaluation decisions result directly from discussion of the key questions.

Figure 3. Framework elements 1 – Decisions and key questions

Study participants identified the key question why as relating primarily to the evaluation audience, which would include direct stakeholders such as the learners and indirect stakeholders such as the centre funders, as well as to the need to identify the evaluation functions in terms of whether they are primarily formative or summative. The question what was perceived as relating to the need for specific evaluation foci, which might include, for example, the efficiency of procedures, staffing and systems, or effectiveness in terms of learning gain; for a clear, shared understanding of key terms (or boundaries); and for agreement on what type of data (or truth) the evaluation is seeking. The question how was seen to relate to three aspects. The first is the evaluation approach in terms, for example, of its formality or whether it is to be primarily qualitative or quantitative in nature. This clearly leads to the question of methodology, which in turn generates operational questions concerning, for example, sampling and the choice of data collection tools. Finally, and crucially, the composition of the evaluation team and team members’ roles was felt to influence, and be influenced by, decisions concerning the previous two aspects.

As reflected in the framework, the three key questions are not discrete and isolated from each other. Instead, they are interdependent, with decisions taken as a result of the discussion of one question impacting on, or being impacted upon by, another. Discussion and the resulting decisions will clearly be greatly influenced by the context (see The Context Element section). Furthermore, since the three key questions are not discrete, neither are the resulting decisions made in isolation from each other. Clearly, for example, a decision regarding the foci of an evaluation will affect its methodology.
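To make this interdependence concrete, the hypothetical example below sketches one possible path from the three key questions to a set of decisions for a funder-driven evaluation, and then shows how changing one answer forces others to change. Every value is invented for illustration and is not drawn from the study.

```python
# Hypothetical worked example only; all values are invented.
why = {
    "audience": ["funding body", "SAC coordinator", "SAC teachers", "learners"],
    "function": "primarily summative, with a secondary developmental aim",
}
what = {
    "foci": ["learning gain", "efficiency of procedures and staffing"],
    "boundaries": "SAC use over one academic year; classroom learning excluded",
    "type of data": "mainly quantitative",
}
how = {
    "approach": "formal, largely quantitative",
    "methodology": ["usage statistics", "pre/post proficiency measures",
                    "learner questionnaire"],
    "team": ["SAC coordinator", "two SAC teachers", "an external adviser"],
}

# Interdependence: a change under 'what' ripples into 'how'.
what["type of data"] = "mainly qualitative (learner perceptions)"
how["approach"] = "interpretive, largely qualitative"
how["methodology"] = ["learner interviews", "learning diaries",
                      "records of advising sessions"]
```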

The actions element

The actions element is, as presented in isolation in Figure 4, linear in presentation. It would, generally, also tend to be chronological in operation, but not necessarily strictly so.

Figure 4. Framework elements 2 – Actions

Within the element, I have identified four discrete actions that I believe are crucial to the effectiveness of any SAC evaluation: the development of the SAC mapping; the evaluation plan, which outlines the stages and timing of the evaluation process; the evaluation itself; and the meta-evaluation.
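The generally chronological, but repeatable, character of these four actions can be sketched as below. The step names restate Figure 4; the functions are empty stand-ins, and the idea that a previous meta-evaluation feeds a revised mapping in a later cycle reflects the dotted line described in the Meta-evaluation section below.

```python
from typing import Optional

# Illustrative stand-ins only: each function represents one action in Figure 4.

def develop_mapping(context: dict, prior_meta: Optional[dict]) -> dict:
    """Action 1: build (or revise) the SAC mapping for this evaluation."""
    return {"context": context, "informed_by": prior_meta}

def draw_up_plan(mapping: dict) -> dict:
    """Action 2: turn the mapping and the decisions into a staged plan."""
    return {"mapping": mapping, "stages": ["collect data", "analyse", "report"]}

def conduct_evaluation(plan: dict) -> dict:
    """Action 3: carry out the stages of the evaluation process."""
    return {"plan": plan, "findings": "..."}

def meta_evaluate(plan: dict, findings: dict) -> dict:
    """Action 4: evaluate the evaluation itself."""
    return {"lessons": "...", "suggested_mapping_revisions": "..."}

def evaluation_cycle(context: dict, prior_meta: Optional[dict] = None) -> dict:
    """One pass through the four actions; the result may inform the next cycle."""
    mapping = develop_mapping(context, prior_meta)
    plan = draw_up_plan(mapping)
    findings = conduct_evaluation(plan)
    return meta_evaluate(plan, findings)
```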

Mapping

The first action, which aims to address the key question of what is to be evaluated, is that of developing a SAC mapping (Morrison, 2002, 2003). I use the term mapping to refer to both the process of examining data concerning the SAC and the subsequent product that aims to present the interpreted data relating to the relationships between the learner and the various elements that comprise the SAC. This presentation needs to be in a form that is easily accessible for the audience, similar to the way that a topographical map defines geographical features from the point of view of the map reader. With reference to the mapping process, the term is used to allude to a subjective and interpretive act that is, at the same time, embedded within a framework of analytical research. It also aims to reflect the highly iterative process that is needed in order to produce a clear representation of an individual learner’s interaction with the various component parts of a SAC: “Mapping…uncovering realities previously unseen or unimagined, even across seemingly exhausted ground …. it remakes territory over and over again, each time with new and diverse consequences” (Corner, 1999, p. 213).

While each mapping of an individual SAC will differ in many ways, my research and experience suggest a common centrality of interaction between the various SAC components and the learner’s progress: from initial assumptions regarding, for example, language level and needs, through the potential effects the SAC might have upon the learner, to the realization, to varying degrees, of the learner’s objectives.

Evaluation plan

Although few of the evaluation constructs in my review (e.g., Worthen & Sanders, 1987; Pollard, 1989; Chen, 2005) explicitly feature an evaluation plan as an element, in all of them the development of a plan is clearly implied, as it is in the data derived from my study. Such a plan is clearly necessary in order for all the evaluation stakeholders to share the decisions made in relation to the key question (how) regarding the evaluation methodology.

Evaluation

The ‘Evaluation process’ box refers to the actual processes involved in the implementation of the evaluation itself. It serves to represent the stages of an evaluation that result from the evaluation plan. The stages indicated in Figure 4 are indicative only since, clearly, in any specific evaluation, they would be affected by decisions made earlier that, in turn, had been informed by the SAC mapping and had subsequently informed the evaluation plan.

Meta-evaluation

The meta-evaluation action refers to the process of evaluating the evaluation (Stufflebeam, 1978; Nevo, 1986; Straw & Cook, 1990). The dotted line linking it to the SAC mapping box indicates its potential for informing a future evaluation process, including any revision to the SAC mapping.

The context element

I use a large box and arrows to represent the context to emphasize its huge importance within the evaluation process. Lynch (1996, 2003), in his context-adaptive evaluation model, suggests a “checklist, or inventory” which can be used to define the program context. He characterizes this context inventory as a ‘step’ in the model which enables the evaluator to “develop a preliminary sense of the important themes and issues” in order to “determine what is being evaluated” (Lynch, 1996, p. 170). While I agree that these are important tasks, I do not perceive the concept of context in exactly the same way. Rather, reflecting more closely his (2003) view in relation to context and themes, I see context impacting more widely and iteratively upon various stages of the evaluation and not just in terms of ‘setting the scene’ and determining foci.

I see the context as comprising everything that might impact upon the key questions and consequently upon the decisions element and, through it, upon the actions. Furthermore, the context will impact on all questions, decisions and actions involved in the evaluation process. In an evaluation, each question will need to be addressed; each decision will need to be made; and each action will need to be taken with specific reference to the evaluation context. This can be seen, for example, in the huge effect that the contextual element of funding can have on the focus, scope and nature of the evaluation.

Bringing together the framework elements

Taken separately, the four individual elements comprising the evaluation framework might be said to simply reflect different aspects of existing, recognized evaluation practice. I believe it is the bringing together of these in an attempt to reflect more meaningfully the SAC realities, while at the same time presenting this in as simple and accessible a manner as possible, that provides the descriptive power of the framework.

In highlighting the interaction not only between the elements but also within them, I believe the SAC evaluation framework represents integration of the elements in a way that tries to address concerns regarding the need to consider the evaluation as a whole and not just a series of discrete elements: “When you are in a three-dimensional game, you will lose if you focus only on the top board and fail to notice the other boards and the vertical connections between them” (Nye, 2002, p. 24).

Conclusion: Implications and Applications

Joan’s annual reporting task is perhaps less an evaluative exercise than a bureaucratic accounting one. The framework would hopefully provide her with the starting point for a meaningful evaluation of her SAC, guiding her in ensuring that all relevant dimensions of evaluation planning are properly considered.

Reflecting the necessity for careful thought and planning prior to the conducting of the evaluation itself and for the identification of the complex links between the SAC’s component parts and learner achievement, the primary theoretical strength of the framework lies, I believe, in its multi-dimensional nature. Unlike some theoretical evaluation constructs, it does not present a view of evaluation as a rather simple linear process but rather recognizes its complexity, presented in terms of the interaction between the key questions, decisions, actions and context elements.

It is, however, a framework and not a model and does not, in itself, dictate a particular approach or methodology. Its prescription is only in terms of: the questions to be asked (not the answers); the decisions to be made (and not the decisions themselves); and a simple recognition of the actions to be taken in, I would argue, any evaluation. A major implication of its adoption lies in the recognition of the context as the major determinant in addressing how questions are to be answered and subsequent decisions made.

The primary practical implications of the study lie in the potential application of the framework in conjunction with the mapping of an individual SAC. Incorporating the need to develop a SAC mapping, it is to be hoped that the evaluation framework will provide the basis for the planning of a SAC evaluation. It is designed as a framework for the evaluation initiators, supporting them in drawing up the evaluation plan that will guide the evaluation in a step-by-step manner. If used effectively, I believe it has the potential to ensure that all dimensions of the evaluation planning are considered in terms of the questions asked, the resulting decisions made and the subsequent actions taken. It is intended that the centrality of the notion of context within the framework might help to ensure that the type of inappropriate evaluation approach sometimes employed by outside experts with little or no understanding of local conditions (Alderson & Scott, 1992; Lynch, 1996) can be avoided.

While the framework was designed for use in the evaluation of a Hong Kong SAC, I now feel that it can have application in the evaluation of SACs more widely and of other educational entities. With its emphasis on flexibility and the critical influence of context, it may be particularly applicable to those of an ‘alternative’ nature, which are more context dependent and perhaps less predictable in make-up than, for example, others that are longer established and operate within a well-defined and more commonly understood educational context.

Notes on the contributor

Bruce Morrison is Director of the English Language Centre at the Hong Kong Polytechnic University. His research interests are primarily in the areas of self-access language learning and program evaluation, focusing in particular on the development, administration and evaluation of self-access language learning centres; evaluating learning gain in a self-access centre; the roles of the self-access centre learner and teacher; and non-native speaker learner experiences of English-medium education.

References

Alderson, J. C., & Beretta, A. (Eds.). (1992). Evaluating second language education. Cambridge, England: Cambridge University Press.

Alderson, J. C., & Scott, M. (1992). Insiders, outsiders and participatory evaluation. In J. C. Alderson & A. Beretta (Eds.), Evaluating second language education (pp. 25-57). Cambridge, England: Cambridge University Press.

Beretta, A. (1992). Evaluation of language education: An overview. In J. C. Alderson & A. Beretta (Eds.), Evaluating second language education (pp. 5-24). Cambridge, England: Cambridge University Press.

Bogdan, R. C., & Biklen, S. K. (1992). Qualitative research for education. MA: Allyn & Bacon.

Chen, H. T. (2005). Practical program evaluation. London: Sage Publications.

Corner, J. (1999). The agency of mapping: Speculation, critique and invention. In D. Cosgrove (Ed.), Mappings (pp. 213-252). London: Reaktion Books.

Flick, U. (1998). An introduction to qualitative research. London: Sage Publications.

Gardner, D. (2002). Evaluating self-access language learning. In P. Benson & S. Toogood (Eds.), Learner autonomy 7: Challenges to research and practice (pp. 55-64). Dublin, Ireland: Authentik.

Gardner, D., & Miller, L. (1997). A study of tertiary level self-access facilities in Hong Kong. Hong Kong, China: City University of Hong Kong.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine Publishing.

Kvale, S. (1996). InterViews. London: Sage Publications.

Lee, W. (1996). The role of materials in the development of autonomous learning. In R. Pemberton, E. S. L. Li, W. F. Or, & H. D. Pierson (Eds.), Taking control: Autonomy in language learning (pp. 167-184). Hong Kong, China: Hong Kong University Press.

Lynch, B. (1996). Language program evaluation. Cambridge, England: Cambridge University Press.

Lynch, B. (2003). Language assessment and programme evaluation. Edinburgh, Scotland: Edinburgh University Press.

Ma, B. (1994). A study of independent learning: The learner training programme at the Chinese University of Hong Kong. In E. Esch (Ed.), Self-access & the adult language learner (pp. 140-145). London: Centre for Information on Language Teaching & Research.

Madaus, G. F., Stufflebeam, D. L., & Scriven, M. S. (1983). Program evaluation: A historical overview. In G. F. Madaus, D. L. Stufflebeam, & M. S. Scriven (Eds.), Evaluation models: Viewpoints on educational & human services evaluation. Boston: Kluwer-Nijhoff.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis. London: Sage Publications.

Miller, L., & Gardner, D. (1994). Directions in self-access language learning. Hong Kong, China: Hong Kong University Press.

Morrison, B. J. (2002). The troubling process of mapping and evaluating a self-access language learning centre. In P. Benson & S. Toogood (Eds.), Learner autonomy 7: Challenges to research and practice (pp. 55-64). Dublin, Ireland: Authentik.

Morrison, B. J. (2003). The development of a framework for the evaluation of a self-access language learning centre. Unpublished doctoral dissertation, The Hong Kong Polytechnic University, Hong Kong, China.

Nevo, D. (1986). The conceptualization of educational evaluation: An analytical review of the literature. In E. R. House (Ed.), New directions in educational evaluation. London: Falmer Press.

Nye, J. (2002, March 23). The new Rome meets the new barbarians. The Economist, 23-25.

Pollard, R. J. (1989). Essentials of program evaluation: A workbook for service providers. Unpublished manuscript.

Richards, K. (2003). Qualitative inquiry in TESOL. New York: Palgrave Macmillan.

Sheerin, S. (1991). State of the art: Self-access. Language Teaching, 24(3), 153-157.

Stern, H. H. (1983). Fundamental concepts of language teaching. Oxford: Oxford University Press.

Straw, R. B., & Cook, T. D. (1990). Meta-evaluation. In H. J. Walberg & G. D. Haertel (Eds.), Encyclopedia of educational research (pp. 58-60). Oxford: Pergamon Press.

Strevens, P. (1977). New orientations in the teaching of English. Oxford: Oxford University Press.

Stufflebeam, D. L. (1978). Meta-evaluation: An overview. Evaluation & the Health Professions, 1(1), 17-43.

Wholey, J. S. (1987). Evaluability assessment: Developing program theory. In L. Bickman (Ed.), Using program theory in evaluation (pp. 77-92). San Francisco: Jossey-Bass.

Worthen, B. R., & Sanders, J. R. (1987). Educational evaluation: Alternative approaches & practical guidelines. New York: Longman.

*Originally published as

Morrison, B. (2005). A Framework for the evaluation of a self-access language learning centre. Supporting Independent English Language Learning in the 21st Century: Proceedings of the Independent Learning Association Conference Inaugural – 2005

(Reprinted with permission)