Posts Tagged ‘school’

In Reflection #3, Emily Skidmore talked about how you can’t rush measuring outcomes and advocated for slowing down and conducting front-end and formative evaluation to improve exhibitions, programs, and experiences prior to jumping into measuring outcomes.  I’d like to piggy-back on the slow movement and talk about Institutional Review Board (IRB) and school district review, which is Slow with a capital ‘S’—for better or worse.

An IRB is a formally designated board that reviews social science, biomedical, and behavioral research to determine whether the benefits of the research outweigh the risks for the participants in the study.  To be blunt, IRB can be a real thorn in our side.  It requires extensive, tedious paperwork for something we may consider innocuous (e.g., interviewing teachers about their program experience).  Given the many forms and thorough explanations of research procedures required, we spend a lot of time preparing for IRB, and then there is the fee the external IRB charges to review the paperwork and methodology.  Beyond the budgetary implications for our clients, IRB procedures can also significantly delay the research well past when the museum expected it to take place.  Not all of our work requires IRB review, but most research projects where we measure outcomes do.

When our work includes collecting data from students and teachers, we sometimes have to submit our protocols to school districts for formal review too.  School district review is separate from IRB review, although a school district’s criteria for reviewing research protocols are normally akin to IRB criteria.  Nevertheless, it is yet another required process that can really put the brakes on a project.  For instance, one school district took five months to review our project—much to the chagrin of our client and its funder (understandably so).

At times, IRB and school district reviews can feel like ridiculous hoops that we have to jump through, or bureaucracy at its worst.  As Don Mayne’s cartoon portrays, sometimes the IRB feels like a bunch of nitpicky people who exist solely to make our lives more difficult when we and our museum clients simply want to improve experiences for museum visitors.  So as I justify our sampling procedures for the fifteenth time in the required paperwork, I may shake my head and curse under my breath, but I truly do appreciate the work that IRB and school districts do (I swear there aren’t IRB reviewers holding my feet to the fire as I type!).

When I take a moment and step back, I realize that the process of submitting to IRB forces me to think through all the nitty-gritty details of the research process, which ultimately improves the research and protects museum visitors as research participants.  As for the extreme assumptions any given IRB makes about our research (no, I will not be injecting anyone with an unknown substance), I try not to take them personally and simply respond as clearly and concisely as possible.  And I have gotten pretty good at navigating the system at this point.  Then I hold our client’s hand, try to shield them from as much of the paperwork and tedium as I can, and tell them, ever so gently, that their research may be delayed.

Read Full Post »

Over the years there have been pivotal moments in which we at RK&A tried something out-of-the-ordinary to meet the needs of a particular project that then became a staple in how we do things.  It wasn’t always clear at the time that these were “pivotal moments,” but in retrospect I can see that these were times of concentrated learning and change.  For me, one of these pivotal moments was the first time we used rubrics as an assessment tool for a museum-based project.  I had been introduced to rubrics in my previous position, where I conducted educational research in the public school system, which sometimes included student assessment.  Rubrics are common practice among classroom teachers and educators who are required to assess individual student performance.

Rubrics had immediate appeal to me because they use qualitative research methods (like in-depth interviews, written essays, or naturalistic observations) to assess outcomes in a way that remains authentic to complicated, nuanced learning experiences, while at the same time they are rigorous and respond to the need to measure and quantify outcomes, an increasing demand from funders.  They are also appealing because they respect the complexity of learning—we know from research and evaluation that the impact of a learning experience may vary considerably from person to person.  These often very subtle differences in impact can be difficult to detect and measure.

To illustrate what a rubric is, I have an example below from the Museum of the City of New York, where we evaluated the effect of one of its field trip programs on fourth-grade students (read report HERE).  As shown here, a rubric is a set of indicators linked to one outcome.  It is used to assess a performance of knowledge, skills, attitudes, or behaviors—in this example, we were assessing “historical thinking,” more specifically students’ ability to recognize and not judge cultural differences.  As you can see, rubrics include a continuum of understandings (or skills, attitudes, or behaviors) on a scale from 1 to 4, with 1 being “below beginning understanding” and 4 being “accomplished understanding.”  The continuum captures the gradual, nuanced differences one might expect to see.

Museum of the City of New York Rubric
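To make the structure concrete, here is a minimal sketch in Python of how a four-point rubric can be represented and how rater-assigned scores become quantifiable data.  The outcome name, indicator wording, and scores are hypothetical stand-ins for illustration, not the actual Museum of the City of New York rubric:

```python
from statistics import mean

# A rubric links one outcome to a continuum of indicators on a 1-4 scale
# (the wording below is invented for illustration).
rubric = {
    "outcome": "Historical thinking: recognizing cultural differences without judging them",
    "levels": {
        1: "Below beginning understanding: does not notice cultural differences",
        2: "Beginning understanding: notices differences but judges them",
        3: "Developing understanding: describes differences with little judgment",
        4: "Accomplished understanding: recognizes and explains differences without judgment",
    },
}

# A trained rater reads each student's interview transcript and assigns the
# level whose indicator best matches the evidence; these scores are made up.
scores = [3, 2, 4, 3, 1, 3, 2, 4]

print(f"Mean rubric score: {mean(scores):.2f} on a 1-{max(rubric['levels'])} scale")
```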

The first time we used rubrics was about 10 years ago, when we worked with the Solomon R. Guggenheim Museum, which had just been awarded a large research grant from the U.S. Department of Education to study the effects of its long-standing Learning Through Art program on third-grade students’ literacy skills.  This was a high-stakes project, and we needed to provide measurable, reliable findings to demonstrate complex outcomes, like “hypothesizing,” “evidential reasoning,” and “schema building.”  I immediately thought of using rubrics, especially since my past experience had been with elementary school students.  Working with an advisory team, we developed rubrics for a number of literacy-based skills, as shown in the example below (note the three-point scale in this example as opposed to the four-point scale above—the evolution in our use of rubrics included the realization that a four-point scale allows us to be more exact in our measurement).  To detect these skills, we conducted one-on-one standardized, but open-ended, interviews with over 400 students, transcribed the interviews, and scored them using the rubrics.  We were then able to quantify the qualitative data and run statistics.  Because rubrics are precise, specific, and standardized, they allowed us to detect differences between treatment and control groups—differences that may have gone undetected otherwise—and to feel confident about the results.  You can find the full report HERE.

Solomon R. Guggenheim Museum Rubric
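As a hedged sketch of that quantify-then-test step: once transcripts are scored with a rubric, treatment and control groups can be compared statistically.  Every score below is fabricated, and the Mann-Whitney U test is simply one reasonable choice for ordinal rubric data; the post does not say which test we actually ran, so consult the report for the real analysis:

```python
from scipy.stats import mannwhitneyu

# Hypothetical rubric scores on a 1-3 scale for one literacy skill;
# every value here is made up for illustration.
treatment = [3, 3, 2, 3, 2, 3, 3, 2, 3, 3]  # program participants
control = [2, 2, 3, 1, 2, 2, 1, 2, 2, 1]    # comparison-group students

# A nonparametric test suits ordinal rubric scores.
stat, p = mannwhitneyu(treatment, control, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")
```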

Fast forward ten years to today, and we use rubrics regularly for summative evaluation, especially when measuring the achievement of complicated and complex learning outcomes.  So far, the two examples I’ve mentioned involved students participating in a facilitated program, but we also use rubrics, when appropriate, for regular walk-in visitors to exhibitions.  For instance, we used rubrics for two NSF-funded exhibitions, one about the social construction of race (read report HERE) and another about conservation efforts for protecting the animals of Madagascar (read report HERE).  Rubrics were warranted in both cases—both required a rigorous summative evaluation, and both intended for visitors to learn complicated and emotionally charged concepts and ideas.

While rubrics were not new to me 10 years ago (and certainly not new in the world of assessment and evaluation), they were new for us at RK&A.  What started out as a necessity for the Guggenheim project has become common practice for us.  Our use of rubrics has informed the way we approach and think about evaluation and furthered our understanding of the way people learn in museums.  This is just one example of the way we continually learn at RK&A.

Read Full Post »

This week, I’d like to begin to home in on the idea of measuring impact that Randi raised in our first blog post.  We define impact as the difference museums can make in the quality of people’s lives, and measuring it can be both exciting and intimidating.  Exciting because just about every museum professional I’ve ever met believes museums have the potential to affect people in deeply powerful ways.  Stories abound from people who have distinct and palpable memories of museum visits from childhood—memories that became etched in their being and identity (for many, it is the giant heart at the Franklin Institute; for others, it may be a beautiful Monet water lily; and for the nine-year-old me, it was a historic house, My Old Kentucky Home).  It’s these kinds of experiences that draw museum professionals to their field.  On the other hand, the idea of measuring impact can be intimidating because some think it is impossible to evaluate, measure, or assess something as intangible as a personal connection, engagement, identity growth, a lasting memory, an aesthetic experience, or an “ah-ha” moment.  When these fears emerge, we do our best to allay them and move people toward an important first step in measuring impact—describing what impact looks like or sounds like.  Evaluators are accustomed to figuring out how to measure something—once the impact is described.

The Giant Heart at The Franklin Institute

As a researcher and evaluator, I become excited at the thought of being tasked with measuring impacts such as “engagement” or “creativity.”  I relish the idea of studying something so I can explain the unexplainable, of drawing meaning from and describing unique human experiences.  For as long as I can remember, I have been interested in the complexities of the human experience, especially how it plays out in specific contexts, within the social realm, and in relationship to material culture, such as art, artifacts, and natural history specimens.  These interests led me to the field of anthropology and to work in social science research and museum evaluation, where I have the pleasure of spending my days exploring the ways people make meaning in museums and similar institutions.

Sometimes when museums cite their impact, they fall back on the common practice of reporting visitation numbers.  While not unimportant, numbers indicate only that people came—they do not indicate the quality of visitors’ experiences.  Imagine hearing that a museum attracted a million visitors and then hearing about the qualitative difference a museum has made in people’s lives—wouldn’t the latter sound more meaningful?  This brings me to an often overlooked methodology in museum evaluation—case study research.  Its low rate of use is interesting in light of museums’ desire for evidence of impact, as a case study can provide rich details of a person’s or entity’s (e.g., a school’s) experience.  Case study research is “an in-depth description and analysis of a bounded system”—and that “system” could be any number of things: an individual museum visitor, a school partner, or a community.  It provides a focused, in-depth study of one particular person or entity.  Practically speaking, what we do is follow several participants (or “cases”) over time (during and after a program, for example) by interviewing them repeatedly, observing them in the program, and interviewing others within their sphere of influence (such as a parent, spouse, museum professional, student, or community member) who can comment on their experience.  The outcome is a concrete, contextualized, nuanced understanding of a particular phenomenon (for example, a person’s growth over time, a relationship between a museum and a school, or a museum’s effect on a community) that can explain not only what happened as a result of the program but how it happened.  Knowing the “what” and the “how” is invaluable to museums; the “what” offers indicators of impact, and the “how” tells you what the museum might have been doing to create the “case” experience.

An example of case study research comes from an evaluation we did for an art museum that was launching a new multi-visit program in middle schools.  We began our work by helping the museum define its intended impact, articulated as: “Students are empowered to think and act creatively in their lives, their learning, and their community.”  We then worked with staff to operationalize the impact statement by developing a series of concrete, measurable outcome statements.  We identified our “cases” as three middle schools.  Each case study included a series of interviews with students, classroom teachers, a few parents, and program staff, as well as observations of program activities in the school and in the museum—all over the course of several months.  The data were rich, specific descriptions of what happened to participants and how the program functioned in each school—all in relationship to the impact statement.  Not surprisingly, each school had a slightly different experience, with one school more closely meeting the impact statement than the others.  The case study approach unveiled the complex interplay of variables at each school, helping explain why one school was more successful than the others.  Both the successes and the challenges provided great insight as the museum considered its second year of program implementation.

I know what you are thinking.  How can we measure impact by focusing on a few individuals or one or two schools?  What about generalizability?  While these questions are reasonable, they miss the point of case studies, which I believe strongly aligns with what we know about museum experiences—case studies account for differences in people’s unique dispositions, life experiences, and knowledge; they value distinctiveness; and they recognize the complexities of life and situations and do not try to simplify them.  If a museum really has trouble accepting the essential value of what a case study can afford, plenty of museum programs are small enough to warrant conducting a case study without worrying about generalizability.  The example above is a relatively small program serving six schools, three of which we examined through our study.  In another example, we used case study research to assess the impact of a museum-based summer camp serving 20 or so teens.  Conducting case studies to demonstrate the impact of museums’ small programs might be just the perfect baby step towards museums’ beginning to measure impact.

I sense urgency in museums’ need for evidence of their value in the American landscape.  I think it is time we stopped worrying about what might be immeasurable and began describing what success looks like.

Read Full Post »