by Svitlana Frunchak
At the recent NAFSA annual convention (a major conference for international education professionals), I managed to carve out some time to attend several professional development sessions focused on the assessment of learning outcomes of study abroad. I would like to share several points I found interesting and thought-provoking. It seems to me that some of them might engage not only colleagues working in study abroad and student exchange, but potentially others dealing with short-term student programs.
Most of us tend to agree that going on exchange is very beneficial to students. In the exchange office, we see students transformed and inspired by their international experiences on a daily basis. However, it turns out to be darn hard to formulate common benefits, expectations, and desired results, as they tend to differ widely across groups of students. And yet, not only do we want to measure our students’ successes and transformations for our own program development purposes, but we are also under pressure to report clear, usable, and preferably quantitative data to a variety of audiences. Here is a list of common challenges of measuring (and achieving, for that matter) the desired outcomes of study abroad programming:
- The complexity of articulating learning outcomes
- The difficulty of aligning activities to outcomes
- The diversity of student backgrounds
- The effort of establishing that outcomes have been achieved
Currently, due to the abovementioned challenges, most assessment of study abroad programs tends to happen in the form of student satisfaction surveys. As was pointed out in several earlier posts (and as we all know), the major disadvantage of this method is that we collect students’ thoughts and feelings about their learning and change, but do not ask them to directly demonstrate new knowledge and skills. But how do we ask students of various academic and cultural backgrounds, who study in different countries and institutions, to demonstrate their learning directly in a standard survey?
One idea I found captivating (if not new) was to focus on students’ personal development and monitor changes in their personality traits resulting from participation in mobility programs. In particular, European experts developed a tool called MEMO (Monitoring Exchange Mobility Outcomes), which uses data based on students’ assessments of their own behaviour both before and after the mobility period. The key features of this approach are:
- Pre-departure and post-return surveys
- To provide additional motivation, the MEMO tool offers each student who fully completes the survey personal feedback about changes in their personality traits.
- Comparing pre-departure and post-return values makes it possible to show students’ actual development.
- Students are asked three groups of questions:
  - Who students are (facts): socioeconomic background
  - What students think (perceptions): perceptions of personal, academic, and non-academic aspects of their study/training programmes at home and at their host institutions
  - How they behave (personality traits): psychometric items reflecting the behaviour of students
Definitely interesting and potentially promising, but… To begin with, using the MEMO tool itself is very costly. And even if we wanted to use this approach in our own assessment tools and methods, we would run the risk of facing several problems. The two major ones I thought of include:
- Lack of resources. Even replacing a standard end-of-program survey with a pre- and post- series requires a substantial increase in time. And to make use of such surveys, it is inevitable to get involved in more substantial qualitative analysis (= more work!)
- A good chance of getting lost in the controversial woods of psychometric testing.
And yet, all things considered, I am still interested in exploring ways to incorporate parts of the MEMO approach into the toolkit of student exchange assessment. Things I would probably want to try:
- Introduce a pre- and post- assessment (at least as a pilot project with a focus group or a sub-group of exchange students)
- Use a few open-ended direct questions asking students to demonstrate their skills, goals, and worldviews, in combination with traditional Likert-scale indirect questions aimed at producing more quantifiable results. Examples of question types: Describe your … goals… in 3 sentences. What are the 3 things you would like to experience… What are your most important … experiences to date…
- Compare answers to similar direct questions from the pre- and post- surveys, using simple quantitative analysis to measure the number of differences, along with qualitative methods to summarize the impact students demonstrate.
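For the last step, the simple quantitative comparison could be sketched roughly like this. This is only an illustration, not the MEMO tool itself: the item names, the 1-5 Likert scale, and the data layout are all my own assumptions.

```python
# A minimal sketch of comparing matched pre-departure and post-return
# Likert-scale responses. For each item we count how many students shifted
# and compute the mean shift. All names and values here are hypothetical.

from statistics import mean

# Hypothetical responses: student -> {item: score on a 1-5 Likert scale}
pre = {
    "student_a": {"intercultural_confidence": 2, "language_comfort": 3},
    "student_b": {"intercultural_confidence": 3, "language_comfort": 2},
}
post = {
    "student_a": {"intercultural_confidence": 4, "language_comfort": 4},
    "student_b": {"intercultural_confidence": 4, "language_comfort": 2},
}

def item_changes(pre, post):
    """Collect per-item (post - pre) differences across matched students."""
    changes = {}
    for student, before in pre.items():
        after = post.get(student)
        if after is None:  # student skipped the post-return survey
            continue
        for item, score in before.items():
            changes.setdefault(item, []).append(after[item] - score)
    return changes

def summarize(changes):
    """For each item: how many students shifted, and the mean shift."""
    return {
        item: {
            "n_changed": sum(1 for d in diffs if d != 0),
            "mean_change": mean(diffs),
        }
        for item, diffs in changes.items()
    }

summary = summarize(item_changes(pre, post))
```

Counts like `n_changed` give the quantifiable numbers mentioned above, while the raw differences could then feed into a more qualitative reading of individual students' trajectories.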
It would be great to hear what other people think!
A somewhat useful resource on pre- and post- assessment: http://www.adesinamedia.com/webinars/designing/creating-pre-assessment-activities/
Svitlana Frunchak works as Exchange Officer, Partnerships, Assessment, and Special Initiatives at the Centre for International Experience. Svitlana is also a member of the Learning Outcomes and Assessment Committee.