Reflection 21: Change the Course

At RK&A, we think a lot about intentional practice, and we encourage our clients to do the same. In planning meetings and reflection workshops, we ask clients to think about which elements of their work align with their institutional mission and vision (check out Randi’s blog post for more about the challenges of alignment). We push them to consider who might be the right audience for their program or exhibition, and we ask them to talk about the intended outcomes for their projects. Posing these kinds of questions is much easier for an “outsider” because we don’t carry institutional baggage or a personal connection to a problem project. As consultants, we aren’t beholden to the way things have always been done. I get it – it can be hard to let go. But seeing clients seek information to make informed decisions is a powerful, exciting process. These clients want more information. They are willing to try new things and to change old (and sometimes new) programs to see if they can improve the results. These are museum professionals who want the very best experiences for their visitors.

We recently completed a project with a history museum, and the results were, well, not as rosy as one might hope. After explaining the challenges of changing students’ perspectives in a short, one-time museum visit, we started talking about what could be done to increase the effectiveness of the program. One of our suggestions was to increase the time allotted for the program and, rather than spending that extra time in the exhibition, use it to facilitate a discussion in which students could process and reflect on what they had seen. Changing a program’s format and duration is a difficult task for the museum to undertake – it may require extra staff and certainly a different schedule – but it could make a difference. A few days later, our client asked us whether there were any studies showing that longer programs are more effective. After we failed to come up with any examples (if you know of any such studies, please leave a comment), the client asked us to conduct another study to see whether a longer program leads to a different outcome.

As an evaluator, I want to support museums as they change the way they do their work. Evaluation can provide the necessary information to see if new ideas work. It can give clients the data-based push they need to let go of the way things have always been done and to try something new. If nothing else, the evaluation process can be a forum to remind people that even when you are changing course, there is a place for you on the Cycle of Intentional Practice: Plan, Align, Evaluate, Reflect.

Reflection 1: Using Rubrics to Assess Complex Outcomes

Over the years there have been pivotal moments in which we at RK&A tried something out-of-the-ordinary to meet the needs of a particular project that then became a staple in how we do things.  It wasn’t always clear at the time that these were “pivotal moments,” but in retrospect I can see that these were times of concentrated learning and change.  For me, one of these pivotal moments was the first time we used rubrics as an assessment tool for a museum-based project.  I had been introduced to rubrics in my previous position, where I conducted educational research in the public school system, which sometimes included student assessment.  Rubrics are common practice among classroom teachers and educators who are required to assess individual student performance.

Rubrics had immediate appeal to me because they use qualitative research methods (like in-depth interviews, written essays, or naturalistic observations) to assess outcomes in a way that remains authentic to complicated, nuanced learning experiences, while at the same time being rigorous enough to meet the need to measure and quantify outcomes – an increasing demand from funders.  They are also appealing because they respect the complexity of learning: we know from research and evaluation that the impact of a learning experience may vary considerably from person to person, and these often very subtle differences in impact can be difficult to detect and measure.

To illustrate what a rubric is, I have an example below from the Museum of the City of New York, where we evaluated the effect of one of its field trip programs on fourth grade students (read report HERE).  As shown here, a rubric is a set of indicators linked to one outcome.  It is used to assess a performance of knowledge, skills, attitudes, or behaviors – in this example we were assessing “historical thinking,” more specifically students’ ability to recognize and not judge cultural differences.  As you can see, rubrics include a continuum of understandings (or skills, attitudes, or behaviors) on a scale from 1 to 4, with 1 being “below beginning understanding” and 4 being “accomplished understanding.”  The continuum captures the gradual, nuanced differences one might expect to see.
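To make the idea concrete, here is a minimal sketch, in Python, of how a four-point rubric and a set of researcher-assigned scores might be represented and summarized. The middle two level labels and all of the scores are hypothetical; only the level-1 and level-4 labels come from the example above.

```python
# A four-point rubric as a simple mapping from score to level label.
# Levels 1 and 4 follow the example above; levels 2 and 3 are hypothetical.
RUBRIC = {
    1: "Below beginning understanding",
    2: "Beginning understanding",   # hypothetical label
    3: "Developing understanding",  # hypothetical label
    4: "Accomplished understanding",
}

def summarize(scores):
    """Summarize researcher-assigned rubric scores: the mean level and
    the number of respondents scored at each level."""
    counts = {level: scores.count(level) for level in RUBRIC}
    mean = sum(scores) / len(scores)
    return mean, counts

# Hypothetical scores for eight student interviews.
mean, counts = summarize([2, 3, 3, 4, 1, 3, 2, 4])
print(mean)    # 2.75
print(counts)  # {1: 1, 2: 2, 3: 3, 4: 2}
```

In practice, the level is assigned by a trained researcher reading an interview or essay against the rubric’s indicators; the code only shows how those assignments become quantifiable data.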

[Image: Museum of the City of New York rubric]

The first time we used rubrics was about 10 years ago, when we worked with the Solomon R. Guggenheim Museum, which had just been awarded a large research grant from the U.S. Department of Education to study the effects of its long-standing Learning Through Art program on third grade students’ literacy skills.  This was a high-stakes project, and we needed to provide measurable, reliable findings to demonstrate complex outcomes, like “hypothesizing,” “evidential reasoning,” and “schema building.”  I immediately thought of using rubrics, especially since my past experience had been with elementary school students.  Working with an advisory team, we developed rubrics for a number of literacy-based skills, as shown in the example below (note the three-point scale in this example as opposed to the four-point scale above – as our use of rubrics evolved, we realized that a four-point scale allows us to be more exact in our measurement).  To detect these skills we conducted one-on-one standardized, but open-ended, interviews with over 400 students, transcribed the interviews, and scored them using the rubrics.  We were then able to quantify the qualitative data and run statistics.  Because rubrics are precise, specific, and standardized, they allowed us to detect differences between treatment and control groups – differences that might otherwise have gone undetected – and to feel confident about the results.  You can find the full report HERE.
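As a rough illustration of that last step – quantifying the qualitative data and running statistics – here is a sketch that compares hypothetical treatment and control rubric scores using Welch’s t statistic, implemented with only the Python standard library. The scores and group sizes are invented; the actual study involved over 400 students and its own analysis plan.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples of rubric scores
    (does not assume equal variances between the groups)."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / standard_error

# Hypothetical rubric scores (1-4 scale) for two small groups.
treatment = [3, 4, 3, 2, 4, 3, 4, 3]
control = [2, 3, 2, 2, 3, 2, 3, 2]

t = welch_t(treatment, control)
print(round(t, 2))  # 2.82 -- higher mean score in the treatment group
```

The t statistic alone is only part of the story – in a real evaluation you would also report degrees of freedom, a p-value, and effect sizes – but it shows how standardized rubric scores make group comparisons possible.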

[Image: Solomon R. Guggenheim Museum rubric]

Fast forward ten years to today, and we use rubrics regularly for summative evaluation, especially when measuring the achievement of complicated and complex learning outcomes.  So far, the two examples I’ve mentioned involved students participating in a facilitated program, but we also use rubrics, when appropriate, with regular walk-in visitors to exhibitions.  For instance, we used rubrics for two NSF-funded exhibitions, one about the social construction of race (read report HERE) and another about conservation efforts to protect the animals of Madagascar (read report HERE).  Rubrics were warranted in both cases – both required a rigorous summative evaluation, and both intended for visitors to learn complicated and emotionally charged concepts and ideas.

While rubrics were not new to me 10 years ago (and certainly not new in the world of assessment and evaluation), they were new for us at RK&A.  What started out as a necessity for the Guggenheim project has become common practice for us.  Our use of rubrics has informed the way we approach and think about evaluation and furthered our understanding of the way people learn in museums.  This is just one example of the way we continually learn at RK&A.