Posts Tagged ‘research’

Working in research and evaluation, you become very skeptical of words like “data-driven” and “research-based.” To evaluators, it is quite flattering that these words are so buzzworthy—yes, we want our research and evaluation work to be important, used, and even desired! However, even though these buzzwords grab attention, they can be misleading. For instance, when we talk about data and research at RK&A, we mean original, first-hand data and research, such as interviews, questionnaires, and surveys with museum visitors.

This was on my mind as I recently had the opportunity to help my CARE (Committee on Audience Research and Evaluation) colleague Liz Kunz Kollmann review session proposals for the 2015 AAM Annual Meeting. Liz, as CARE’s representative for the National Program Committee, was charged with reviewing sessions in the Education, Curation, & Evaluation track (all 141 sessions!) along with fellow National Program Committee members in education and curation. Given that audience research and evaluation can be part of many AAM tracks (marketing, development, exhibit design, etc.), Liz recruited some CARE members to help her review sessions in other tracks to see if there were any sessions outside of our designated track that CARE should advocate for.

I volunteered to review sessions in the Development & Membership and Finance & Administration tracks. I had expected to encounter a lot of buzzwords, since AAM session proposals include a description that must be appropriate for display on the AAM website, mobile app, and other published meeting materials. So I wasn’t surprised, but I was struck by the heavy use of terms like “data-driven” and “research-based” (e.g., data-driven strategies for membership recruitment and research-based programming), and I was stymied in trying to determine whether these sessions were relevant to CARE—what data is driving the decisions, and is it really of interest to CARE members?

Certainly I am not dismissive of research or data that isn’t “original.” There are many definitions of research and data that are applicable to certain scenarios and within certain fields. For instance, arts-based research is a completely valid field of research within art education when conducted well. However, I am biased toward collecting original data from visitors first-hand, which is why terminology like “data-driven” and “research-based” makes my ears perk up—these words prompt many questions for me about the type of data and research and its appropriateness to inform said decisions and practices. Through our work at RK&A, we truly want practitioners to make decisions that are data-driven; that is the greatest outcome of our work! However, we also want our clients to be such skilled users and consumers of data and evaluation that their ears perk up at the very mention of “data”—for hopefully they, too, have become savvy digesters of both the language and the meaning behind the data when talking about research and evaluation.

Check out our Buzzword Bingo below inspired by Dilbert:  Warning: this Bingo is informed by RK&A’s professional experience and is not based on original data.  Maybe with the help of our museum colleagues, we can make it “research-based.”  Please share your buzzwords!


Read Full Post »

In Reflection #3, Emily Skidmore talked about how you can’t rush measuring outcomes and advocated for slowing down and conducting front-end and formative evaluation to improve exhibitions, programs, and experiences prior to jumping into measuring outcomes.  I’d like to piggy-back on the slow movement and talk about Institutional Review Board (IRB) and school district review, which is Slow with a capital ‘S’—for better or worse.

IRB is a formally designated board that reviews social science, biomedical, and behavioral research to determine whether the benefits of the research outweigh the risks for the participants in the study.  To be blunt, IRB can be a real thorn in our side.  It requires extensive, tedious paperwork for something we may consider innocuous (e.g., interviewing teachers about their program experience).  Given the many forms and thorough explanations of research procedures required, we spend a lot of time preparing for IRB, and then there is the fee to the external IRB to review the paperwork and methodology.  In addition to the budgetary implications IRB has for our clients, IRB procedures also can significantly delay the research well past when the museum may have expected its research to take place.  Not all of our work requires IRB review, but generally, most research projects where we measure outcomes do.

When our work includes collecting data from students and teachers, we sometimes have to submit our protocols to school districts for formal review too.  School district review is separate from IRB review, although a school district’s criteria for reviewing research protocols are normally akin to IRB criteria.  Nevertheless, it is yet another required process that can really put the brakes on a project.  For instance, one school district took five months to review our project—much to the chagrin of our client and its funder (understandably so).

At times, IRB and school district reviews can feel like ridiculous hoops that we have to jump through, or bureaucracy at its worst.  As Don Mayne’s cartoon portrays, sometimes the IRB feels like a bunch of nitpicky people who exist solely to make our lives more difficult when we and our museum clients simply want to improve experiences for museum visitors.  So as I justify our sampling procedures for the fifteenth time in the required paperwork, I may shake my head and curse under my breath, but I truly do appreciate the work that IRB and school districts do (I swear there aren’t IRB reviewers holding my feet to the fire as I type!).

When I take a moment and step back, I realize that the process of submitting to IRB forces me to think through all the nitty-gritty details of the research process, which ultimately improves the research and protects museum visitors as research participants.  When any given IRB makes extreme assumptions about our research (no, I will not be injecting anyone with an unknown substance), I try not to take them personally and simply respond as clearly and concisely as possible.  And I have gotten pretty good at navigating the system at this point.  Then, I hold our client’s hand, try to protect them from as much of the paperwork and tedium as I can, and tell them, ever so gently, that their research may be delayed.

Read Full Post »

Over the years there have been pivotal moments in which we at RK&A tried something out-of-the-ordinary to meet the needs of a particular project that then became a staple in how we do things.  It wasn’t always clear at the time that these were “pivotal moments,” but in retrospect I can see that these were times of concentrated learning and change.  For me, one of these pivotal moments was the first time we used rubrics as an assessment tool for a museum-based project.  I had been introduced to rubrics in my previous position where I conducted educational research in the public school system, which sometimes included student assessment.  Rubrics are common practice among classroom teachers and educators who are required to assess individual student performance.

Rubrics had immediate appeal to me because they utilize qualitative research methods (like in-depth interviews, written essays, or naturalistic observations) to assess outcomes in a way that remains authentic to complicated, nuanced learning experiences, while at the same time being rigorous and responding to the need to measure and quantify outcomes—an increasing demand from funders.  They are also appealing because they respect the complexity of learning—we know from research and evaluation that the impact of a learning experience may vary considerably from person to person.  These often very subtle differences in impact can be difficult to detect and measure.

To illustrate what a rubric is, I have an example below from the Museum of the City of New York, where we evaluated the effect of one of its field trip programs on fourth grade students (read report HERE).  As shown here, a rubric is a set of indicators linked to one outcome.  It is used to assess a performance of knowledge, skills, attitudes, or behaviors—in this example we were assessing “historical thinking,” more specifically students’ ability to recognize and not judge cultural differences.  As you can see, rubrics include a continuum of understandings (or skills, attitudes, or behaviors) on a scale from 1 (“below beginning understanding”) to 4 (“accomplished understanding”).  The continuum captures the gradual, nuanced differences one might expect to see.
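For readers who think in code, the structure described above can be sketched as a simple lookup: one outcome, a four-point continuum, and an indicator describing what a response at each level looks like.  This is a toy illustration only—the level labels follow the continuum described above, but the indicator text is invented for this sketch and is not the actual rubric language.

```python
# A hypothetical rubric for one outcome ("historical thinking"), sketched as a
# mapping from score level to (label, indicator). Indicator text is invented
# for illustration; only the 1-4 labels come from the post above.
HISTORICAL_THINKING_RUBRIC = {
    1: ("below beginning understanding",
        "judges cultural differences negatively or does not notice them"),
    2: ("beginning understanding",
        "notices cultural differences but frames them as strange or wrong"),
    3: ("developing understanding",
        "describes cultural differences neutrally, without judgment"),
    4: ("accomplished understanding",
        "recognizes cultural differences and explains them in context"),
}

def describe_level(score: int) -> str:
    """Return a readable label and indicator for a rubric score."""
    label, indicator = HISTORICAL_THINKING_RUBRIC[score]
    return f"{score} ({label}): {indicator}"
```

Scorers then read each interview transcript against the indicators and assign the single level that best matches the response.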

Museum of the City of New York Rubric

The first time we used rubrics was about 10 years ago, when we worked with the Solomon R. Guggenheim Museum, which had just been awarded a large research grant from the U.S. Department of Education to study the effects of its long-standing Learning Through Art program on third grade students’ literacy skills.  This was a high-stakes project, and we needed to provide measurable, reliable findings to demonstrate complex outcomes, like “hypothesizing,” “evidential reasoning,” and “schema building.”  I immediately thought of using rubrics, especially since my past experience had been with elementary school students.  Working with an advisory team, we developed the rubrics for a number of literacy-based skills, as shown in the example below (and note the three-point scale in this example as opposed to the four-point scale above—the evolution in our use of rubrics included the realization that a four-point scale allows us to be more exact in our measurement).  To detect these skills we conducted one-on-one standardized, but open-ended, interviews with over 400 students, transcribed the interviews, and scored them using the rubrics.  We were then able to quantify the qualitative data and run statistics.  Because rubrics are precise, specific, and standardized, they allowed us to detect differences between treatment and control groups—differences that may have gone undetected otherwise—and to feel confident about the results.  You can find the full report HERE.
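To make the “quantify the qualitative data and run statistics” step concrete, here is a minimal sketch of how scored rubric data might be compared between treatment and control groups.  The scores are made up (this is not the Guggenheim data), and the test shown—Welch’s two-sample t statistic—is one common choice for comparing group means, not necessarily the exact analysis used in the study.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(treatment, control):
    """Welch's two-sample t statistic comparing mean rubric scores.

    A positive value means the treatment group scored higher on average.
    """
    m1, m2 = mean(treatment), mean(control)
    v1, v2 = variance(treatment), variance(control)  # sample variances
    n1, n2 = len(treatment), len(control)
    return (m1 - m2) / sqrt(v1 / n1 + v2 / n2)

# Illustrative (made-up) rubric scores on a 1-4 scale, one per student:
treatment_scores = [3, 4, 3, 2, 4, 3, 4, 3]
control_scores = [2, 3, 2, 2, 3, 2, 3, 2]
t = welch_t(treatment_scores, control_scores)
```

In practice the t statistic would be compared against a t distribution (with Welch-adjusted degrees of freedom) to get a p-value; the point here is simply that once qualitative responses are scored with a rubric, standard statistical machinery applies.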

Solomon R. Guggenheim Museum Rubric

Fast forward ten years to today, and we use rubrics regularly for summative evaluation, especially when measuring the achievement of complicated and complex learning outcomes.  So far, the two examples I’ve mentioned involved students participating in a facilitated program, but we also use rubrics, when appropriate, for regular walk-in visitors to exhibitions.  For instance, we used rubrics for two NSF-funded exhibitions, one about the social construction of race (read report HERE) and another about conservation efforts for protecting animals of Madagascar (read report HERE).  Rubrics were warranted in both cases—both required a rigorous summative evaluation, and both intended for visitors to learn complicated and emotionally-charged concepts and ideas.

While rubrics were not new to me 10 years ago (and certainly not new in the world of assessment and evaluation), they were new for us at RK&A.  What started out as a necessity for the Guggenheim project has become common practice for us.  Our use of rubrics has informed the way we approach and think about evaluation and furthered our understanding of the way people learn in museums.  This is just one example of the way we continually learn at RK&A.

Read Full Post »

The quarter-century mark feels like the right time to take stock of where RK&A is and, at the very least, think about what we have learned along the way.  Reflecting on the past is a task that feels comfortable; we know where we have been, and we are familiar with the present.  Making sense of the past and forging new ideas from it is far more difficult—yet, as a staff, that is what we have decided to do for our 25th anniversary celebration, which will take the entire year!  As some of you know, we have been blogging for exactly a year.  What better platform is there to share what we have learned over the last quarter century?

Evaluation, visitor studies, audience research—that is the work I set out to do and it remains our traditional work.  RK&A has carefully grown to seven people, including a small satellite office in NY.  Along the way all of us have learned so much—about visitors, about cultural organizations, and about the relationship between the two.  As a staff that has always strived for excellence, we try hard to apply new knowledge to our practice.

About 10 years ago I gravitated towards the notion of intentionality as a concept I wanted to explore.  Conducting evaluations had shown me that it would be worthwhile to figure out a way to help cultural organizations focus their passions, skills, and resources towards their vision of impact.  Helping organizations determine what impact they wanted to achieve seemed like the first step, and so our intentional practice workshops were born (after about two years of R&D).  Achieving impact with audiences is harder than one thinks; to achieve it, cultural organizations need to be intentional in how they carry out their work.

The demand for our intentional practice workshops continues to grow, and because our intentionality work emerged from our evaluation work, it was only a matter of time until we would begin to weave what we have learned back into our evaluation practice.  We now offer intentionality-like workshops as part of our evaluation services to help staff understand and apply evaluation data to their planning.  Rarely do staff from across the organization get together to discuss and debate their visitors and work, but when they do, the results are inspiring.

However, inspiration doesn’t always lead to action.  While most people and organizations want to change, saying so is easier than doing so.  I have learned the virtue of taking baby steps towards change, and sometimes baby steps give people time to learn and internalize a new way of thinking and working.  It’s that way at RK&A. Sometimes our intentionality work feels organic and sometimes we need to be more deliberate and forthright in our decision making in order to sustain a change in our practice.  It is human nature to gravitate to old ways of doing things until new ways become comfortable; it takes conviction and focus to continue to move forward.  To sustain our learning and help RK&A maintain its momentum with our new intentionality work and traditional evaluation practice, we will share 25 years of learning with you over the next 25 blog posts.  We hope our 25th year celebration of RK&A’s learning inspires you to learn along with us.

Read Full Post »

Two weeks ago, I attended the Learning Value of Children’s Museums Research Agenda Symposium.  When I received an email from the project evaluator asking me what most resonated from the first day of the symposium, I found that my most immediate thoughts had nothing to do with children’s museums, despite the many interesting conversations in which I participated throughout the day.  Rather, what most resonated was spurred by a comment from Kevin Crowley, Professor at the University of Pittsburgh’s School of Education, as he gave advice on the process of brainstorming around the research agenda.  He recited a quote from Woody Allen: “A relationship, I think, is like a shark, you know?  It has to constantly move forward or it dies.”  He noted that researchers, in particular, can get caught doing mental gymnastics around ideas ad nauseam, but that we needed to push forward and put pen to paper to avoid having a dead shark of a research agenda.

Damien Hirst, The Physical Impossibility of Death in the Mind of Someone Living, 1991

I have absolutely experienced what Kevin described.  As someone who likes data, I find it very easy to get caught up in interview transcripts, wondering, “What did they mean by that?”  I also could spend gobs of time running statistical analyses on every variable available, looking for any kind of relationship.  Heck, it took me two years to finish writing my master’s thesis because there was always one more book, article, or podcast to review that might be relevant.  However, the reality of my work as a consultant evaluator is that I have been contracted by a museum client to collect data to inform exhibition, programming, or marketing decisions—all of which have deadlines.  When I first started working with RK&A, “deadline” was a scary word.  I worried about missing something in the data or not wording something exactly right in the report.  Now, I see deadlines as a necessary (and often welcome) parameter within which to work.  They help me hunker down and focus on key trends and salient information.  In that regard, deadlines have become a symbol for progress.  When I finish my work, the museum can move forward with its work—thereby saving the shark.

If you’d like to see how ACM is moving forward on its research agenda, check out its Web site.  The agenda is open to comments through October 20th if you would like to provide your two cents.

Read Full Post »

In July, we moved our office.  Even though we were just moving across the street, moving is moving.  Books, articles, reports, journals, files (I’m not paperless, as much as I try), drawer contents—everything needed to be boxed and labeled.  Over the years, I had intentionally accumulated articles that filled two large file drawers—all the ones I collected in graduate school for my thesis, and the hundreds of others that incited my interest.  It was relatively easy to give many of the older books to my staff and other young evaluators who live in D.C., as well as eager students, thanks to my colleagues at The George Washington University.  I was glad that much of this knowledge would be passed on.  Sadly, there was little interest in many of the articles that lived in the two large file drawers.  I suspect that when people looked at them they wondered, “Who is this person?” or astutely noted, “If this is any good, I’ll find it online.”

My interests have evolved, and I believe less is more, so I couldn’t rationalize keeping articles that felt out of sync with my current pursuits, like articles on collecting practices; recycling the paper those articles were on was a fairly easy decision.  And the very dated articles about computers—easy decision.  Then there were the classics—that was a fairly easy decision, too; they were keepers.  “Exhibits: Art Form or Educational Medium?” by Harris Shettel appeared in Museum News in 1973.  My copy has highlights, underlines, and notes in the margins.  It was then, and remains now, a great piece, and my copy feels like a personalized artifact.  Also keepers: all of Chandler Screven’s articles, such as his 1969 piece, also from Museum News, titled “The Museum as a Responsive Learning Environment” (yes, it’s about participatory experiences); and Molly Hood’s 1983 Museum News piece, “Staying Away: Why People Choose Not to Visit Museums” (yes, those reasons haven’t changed over the years).  I no longer refer to or reference them, but recycling the paper would be like throwing myself into the recycle bin, as I have learned so much from their work.

Decision making became a challenge when I was looking at the piles of folders with articles that I still found interesting even though their contents rarely entered into my current thinking.  They were neither the classics nor the outliers—they were somewhere in the middle—fascinating content and solid pieces but dated and no longer relevant to my work.  And this dilemma caused me to wonder, if I recycle the paper they are on, where will all that knowledge go?  Will the contents of these pieces just dissipate into thin air?  I really struggled with these neither-here-nor-there pieces.  I usually don’t fret about getting rid of stuff, but clearly this stuff had a different hold on me.

Then, while our move preparations were swimming right along, someone thought the piles of folders were forgotten—not keepers, but piles that never made their way to the recycle bin—and off they went.  I was away on travel so I couldn’t protest, and I can see the thinking—they weren’t in a box, and they were no longer in their original home.  I had shared my dilemma with only one other person in the office, so no one else knew of my struggle until after the fact.  They were just there, piles of manila folders (all neatly labeled with the authors’ names), and if they were neither here nor there, then they were nowhere.  Thus, my dilemma about what to do with the pile of articles that were no longer central to my work was resolved by someone else, for whom the folders had no meaning whatsoever.  I felt a weight lifted from me, but I wasn’t entirely sure what else I should feel.  I wasn’t mad that they were discarded, but I felt that there was now a black hole; I still haven’t figured out where the knowledge might go if I recycle the paper.  I suppose I have plenty of time to think about that, and if I realize that I need to guard the knowledge in those pieces, I suspect that, if the ideas were good enough, it will re-emerge and live a more contemporary existence in someone else’s article.

Read Full Post »

Insider Out

I’m not a museum evaluator, but I play one on television; at least that’s what I tell people who ask me what I do for a living.  As the Business Manager at Randi Korn & Associates, I don’t have the educational or employment background of my colleagues, but I do have a long-standing love of museums – I had to name my favorite museum (and why) when I interviewed here.

So, to do my job effectively, I had to learn a few things about how museums operate and what museum evaluation is.  Just as Steinberg’s New Yorker cartoon cleverly shows how New Yorkers view the world, those in the museum field may also have a skewed perception of their domain from the inside; this poster hangs on our office wall to remind us to “think outside the museum.”  Here are my top 5 “surprises” from the outside.

Surprise #1: Who knew that museums had goals and objectives when they put up all that cool stuff? Laugh if you will, but before I came here, I didn’t realize there is more to an exhibition than putting like things together. Now that I know, it’s given me a whole new dimension to explore when I visit a museum.

Surprise #2: There’s a whole world of people conducting research on/in museums.
As a process person and a big believer in constructive criticism, I find it good to know that not only did the museum have a purpose when it put this stuff together, but someone is actually gathering information to make it better!

Surprise #3: Wow, I work with really smart people!
Not really surprising, because the museum field is loaded with lifelong learners and people who are naturally curious.  And evaluators are really curious; otherwise they would not have the desire to probe and probe further, and to question how and why.

Surprise #4: I was evaluating museums, I just didn’t know it.
Invariably, I had a lot to say after visiting an exhibition: short little me (4’ 10”) couldn’t see through the crowd; I didn’t realize I was going in the wrong direction until I found the orientation panel at the end; this interactive was really cool; etc., etc.  How refreshing to learn someone might actually be interested in my opinion!

Surprise #5: Gee, people have some odd opinions to share!
I keep a file of amusing interview and survey responses.  The winner so far: when asked if they had anything to add about their experience in the museum that day, one interviewee said, “There were no beets in the beet soup!”  You can have the best exhibition ever, but visitors will remind you that they value every aspect of their museum encounter.

Oh, and in case you’re wondering, my favorite museum is the Cluny in Paris. Because it has a narwhal horn on display. Go figure.

Read Full Post »
