Reflection 1: Using Rubrics to Assess Complex Outcomes

Over the years there have been pivotal moments in which we at RK&A tried something out-of-the-ordinary to meet the needs of a particular project that then became a staple in how we do things.  It wasn’t always clear at the time that these were “pivotal moments,” but in retrospect I can see that these were times of concentrated learning and change.  For me, one of these pivotal moments was the first time we used rubrics as an assessment tool for a museum-based project.  I had been introduced to rubrics in my previous position, where I conducted educational research in the public school system, which sometimes included student assessment.  Rubrics are common practice among classroom teachers and educators who are required to assess individual student performance.

Rubrics had immediate appeal to me because they use qualitative research methods (like in-depth interviews, written essays, or naturalistic observations) to assess outcomes in a way that remains authentic to complicated, nuanced learning experiences while also being rigorous, responding to the increasing demand from funders to measure and quantify outcomes.  They are also appealing because they respect the complexity of learning—we know from research and evaluation that the impact of a learning experience may vary considerably from person to person, and these often very subtle differences in impact can be difficult to detect and measure.

To illustrate what a rubric is, I have included an example below from the Museum of the City of New York, where we evaluated the effect of one of its field trip programs on fourth-grade students (read report HERE).  As shown here, a rubric is a set of indicators linked to one outcome.  It is used to assess a performance of knowledge, skills, attitudes, or behaviors—in this example we were assessing “historical thinking,” more specifically students’ ability to recognize and not judge cultural differences.  As you can see, rubrics include a continuum of understandings (or skills, attitudes, or behaviors) ranging from 1 (“below beginning understanding”) to 4 (“accomplished understanding”).  The continuum captures the gradual, nuanced differences one might expect to see.

Museum of the City of New York Rubric

The first time we used rubrics was about 10 years ago, when we worked with the Solomon R. Guggenheim Museum, which had just been awarded a large research grant from the U.S. Department of Education to study the effects of its long-standing Learning Through Art program on third-grade students’ literacy skills.  This was a high-stakes project, and we needed to provide measurable, reliable findings to demonstrate complex outcomes, like “hypothesizing,” “evidential reasoning,” and “schema building.”  I immediately thought of using rubrics, especially since my past experience had been with elementary school students.  Working with an advisory team, we developed rubrics for a number of literacy-based skills, as shown in the example below (note the three-point scale in this example as opposed to the four-point scale above—the evolution in our use of rubrics included the realization that a four-point scale allows us to be more exact in our measurement).  To detect these skills, we conducted standardized but open-ended one-on-one interviews with over 400 students, transcribed the interviews, and scored them using the rubrics.  We were then able to quantify the qualitative data and run statistics.  Because rubrics are precise, specific, and standardized, they allowed us to detect differences between treatment and control groups—differences that may have gone undetected otherwise—and to feel confident about the results.  You can find the full report HERE.

Solomon R. Guggenheim Museum Rubric

Fast forward ten years to today, and we use rubrics regularly for summative evaluation, especially when measuring the achievement of complicated and complex learning outcomes.  So far, the two examples I’ve mentioned involved students participating in a facilitated program, but we also use rubrics, when appropriate, for regular walk-in visitors to exhibitions.  For instance, we used rubrics for two NSF-funded exhibitions, one about the social construction of race (read report HERE) and another about conservation efforts to protect the animals of Madagascar (read report HERE).  Rubrics were warranted in both cases—both required a rigorous summative evaluation, and both intended for visitors to learn complicated and emotionally charged concepts and ideas.
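For readers who like to see the mechanics, below is a minimal sketch in Python of how rubric-scored interview data can be quantified and compared across groups.  Everything in it (the rubric wording, the scores, and the choice of an independent-samples t-test) is a hypothetical illustration under my own assumptions, not the actual instruments or analyses from the studies linked above; those are documented in the full reports.

```python
# Minimal sketch (hypothetical rubric wording and data, not RK&A's instruments):
# a four-point rubric for one outcome, plus a simple comparison of scored
# interview data from treatment and control groups.
from statistics import mean
from scipy import stats  # one possible choice for an independent-samples t-test

# A rubric links one outcome to a continuum of indicators, here on a 1-4 scale.
rubric = {
    "outcome": "historical thinking",
    "levels": {
        1: "Below beginning understanding: judges cultural differences",
        2: "Beginning understanding: notices differences but still judges them",
        3: "Developing understanding: describes differences without judgment",
        4: "Accomplished understanding: recognizes and contextualizes differences",
    },
}

# Hypothetical scores that trained raters might assign to transcribed interviews.
treatment_scores = [4, 3, 3, 4, 2, 3, 4, 3]
control_scores = [2, 3, 2, 1, 2, 3, 2, 2]

print(f"Outcome: {rubric['outcome']}")
print(f"Treatment mean: {mean(treatment_scores):.2f}")
print(f"Control mean:   {mean(control_scores):.2f}")

# Once qualitative responses are scored, standard statistics can test whether
# the groups differ more than chance alone would suggest.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice, of course, most of the work lies in writing indicators that raters can apply consistently and in checking agreement among raters before any statistics are run.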

While rubrics were not new to me 10 years ago (and certainly not new in the world of assessment and evaluation), they were new for us at RK&A.  What started out as a necessity for the Guggenheim project has become common practice for us.  Our use of rubrics has informed the way we approach and think about evaluation and furthered our understanding of the way people learn in museums.  This is just one example of the way we continually learn at RK&A.

Museums and Public Value

This week we welcome our guest blogger Carol Ann Scott, editor of Museums and Public Value: Creating Sustainable Futures!

Randi Korn & Associates invited me to guest blog on a subject that has important links to intentionality. My passion is the value of museums: how we articulate that value, measure it, and create it. So today, I am blogging about the third aspect: the value we create. With that in mind, I want to look at what Mark Moore’s theory of Public Value has to offer museums when we purposefully set out to create value.

Moore’s Creating Public Value: Strategic Management in Government (1995) may be familiar to many of you. In Moore’s view, publicly funded organisations are charged with directing their assets toward creating value, with a strong focus on social change and improvement. This type of value is about more than visitor satisfaction. It is directed towards adding benefit to the public sphere and producing outcomes that are in the general public interest.

Public Value does not occur of its own accord. It is purposeful, intentional, and requires planning to achieve a particular end result.  It confirms a museum’s ‘agency’: its capacity to direct its resources to make a positive difference. When Public Value is embedded in the organisation as a whole, museums move from creating one-off projects to investing in longer-term impact.

I am fascinated that there is a strong relationship between the essence of Public Value and a persistent idea in museums—co-production.  Public Value is fundamentally about co-production. If we are planning to make a difference that will affect the lives of others, then the ‘others’ need to be involved. Moore recognises the public as co-producers in the value that, together, we create.

So, what do Public Value programs in museums look like?

Here are some examples:

  1. An exhibition co-curated by a museum and members of the local Afro-Caribbean community to explore the history of the transatlantic slave trade and interrogate its living legacy in modern Britain: http://www.museumoflondon.org.uk/Docklands/Whats-on/Galleries/LSS/
  2. A youth program aimed at ensuring that a new generation takes forward the lessons of moral responsibility learned from the Holocaust and adopts a commitment to pursue democratic civic engagement: http://www.ushmm.org/education/cpsite/bringlessonshome/index.php?theme=students
  3. A museum education program that is playing its part in Hawaiian language revival: http://www.imiloahawaii.org/82/hawaii-s-language-today

Why should museums adopt a Public Value approach in their planning and programs? Well, perhaps most importantly, we are accountable to the public, policy makers, and funders. All of these groups invest in museums, whether through their taxes, their time, or funding provision. An investment seeks a return, and in the not-for-profit sector that return is the value we create. Increasingly, a museum’s value is measured by the contributions we make to benefit the public sphere—a major criterion for measuring museums’ worth as a sector.

The late Stephen Weil challenged museums to look searchingly at their ‘claims to worthiness’. Embedding Public Value into our thinking and planning can result in enhancing the lives of citizens, and that is worthy.

Carol Scott is the editor of Museums and Public Value: Creating Sustainable Futures, published by Ashgate in May 2013. More on the three examples cited in this blog can be found in these chapters of the book:

  1. The Public as Co-producers: making the London, Sugar and Slavery gallery, Museum of London Docklands (David Spence, Tom Wareham, Caroline Bressey, June Bam-Hutchison, Annette Day)
  2. Evaluating Public Value: Strategy and Practice (Mary Ellen Munley)
  3. Creating Public Value through Museum Education (Ben Garcia)

Funders’ Learning from Evaluation

Andy Warhol, Dollar Sign (1982).

It’s that time of year—federal grant deadlines for NSF, IMLS, NOAA, NEH, and NEA are looming large, and many informal learning organizations are eyeing those federal dollars.  While government agencies (and private foundations) often require evaluation, we relish working on projects with team members who integrate evaluation into their work process because they are interested in professional learning—rather than just fulfilling a grant requirement.  Sadly, evaluation is often thought of and used as a judgment tool—which is why funders require it for accountability purposes.  That said, I am not anti-accountability.  In fact, I am pro-accountability; and I am also pro-learning.

This pretty simple idea—learning from evaluation—is actually quite idealistic because I sense an uncomfortable tension between program failure AND professional learning.  Not all of the projects we evaluate achieve their programmatic intentions, in part because the projects are often complicated endeavors involving many partners who strive to communicate complex and difficult-to-understand ideas to the public.  Challenging projects, though, often achieve another kind of outcome—professional and organizational learning—especially in situations where audience outcomes are ambitious.  When projects fail to achieve their audience outcomes, what happens between the grantee and funder?  If the evaluation requirement is focused on reporting results from an accountability perspective, the organization might send the funder a happy report without any mention of the project’s inherent challenges, outcomes that fell short of expectations, or the evaluator’s analysis that identified realities that might benefit from focused attention next time. The grantee acknowledges its willingness to take on a complicated and ambitious project and notes a modest increase in staff knowledge (because any further admission might suggest that the staff wasn’t as accomplished as the submitted proposal stated).  The dance is delicate because some grantees believe that any admission of not-so-rosy results is reprehensible and punishable by never receiving funding again!

Instead, what if the evaluation requirement were to encompass both audience outcomes and professional and/or organizational learning?  The report to the funder might note that staff members were disappointed that their project did not achieve its intended audience outcomes, but that they found the evaluation results insightful and took time to process them.  They would explain how what they learned, which is now part of their and their organization’s knowledge bank, will help them reframe and improve the next iteration of the project, and that they look forward to continuing to hone their skills and improve their work.  The report might also include a link to the full evaluation report.  I have observed that funders are very interested in organizations that reflect on their work and take evaluation results to heart.  I have also noticed that funders are open to thinking and learning about alternative approaches to evaluation, outcomes, measurement, and knowledge generation.

Most of the practitioners I know want opportunities to advance their professional knowledge, yet some feel embarrassed when asked to talk about a project that may not have fared well, even when their professional learning soared.  When funders speak, most practitioners pay close attention.  How might our collective knowledge grow if funders invited their grantees to reflect on their professional learning?  What might happen if funders explicitly requested that organizations acknowledge their learning and write a paper summarizing how they will approach their work differently next time?  If a project doesn’t achieve its audience outcomes and no one learns from the experience—that would be reprehensible.

Isn’t it time that funding agencies and organizations embrace evaluation for its enormous learning potential and respect it as a professional learning tool?  Isn’t it time to place professional and organizational learning at funders’ evaluation tables alongside accountability?  In my opinion, it is.