Posts Tagged ‘qualitative’

Interviews are a commonly used data collection method in qualitative studies, where the goal is to understand or explore a phenomenon.  They’re an extremely effective way to gather rich, descriptive data about people’s experiences in a program or exhibition, which is one reason we use them often in our work at RK&A.  Figuring out sample size for interviews can sometimes feel trickier than for quantitative methods, like questionnaires, because there aren’t tools like sample size calculators to use.  However, there are several important questions to consider that can help guide your decision-making (and while you do so, remember that a cornerstone of qualitative research is that it requires a high tolerance for ambiguity and a reliance on instinct!):

  1. How much does your population vary? The more homogeneous the population, the smaller the sample size. For example, is your population all teachers? Do they all teach the same grade level?  If so, you can use a smaller sample size, since the population and related phenomenon are narrow.  Generally speaking, if a population is very homogeneous and the phenomenon narrow, aim for a sample size of around 10.  If the population is varied or the phenomenon is complex, aim for around 40 to 50.  And if you want to compare populations, aim for 25 to 30 per segment.  In any case, a sample of more than 100 is generally excessive.  (These rules of thumb are sketched in code after this list.)
  2. What is the scope of the question or phenomenon you are exploring? The narrower the question being explored or the phenomenon being studied, the smaller your sample size can be. Are you looking at one program, or just one aspect of a program? Or are you comparing programs or looking at many different aspects of a program?
  3. At what point will you reach redundancy? This is key for determining sample size for any qualitative data collection method.  You want to sample only to the point of saturation—that is, stop sampling when no new information emerges.  Another way to think about this is that you stop collecting data when you keep hearing the same things again and again.  To be clear, I’m talking about big trends here—while each interview will have its own nuance and the small details might vary from interview to interview, you can stop when the larger trends start to repeat themselves and no new trends arise.
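To make these rules of thumb concrete, here is a minimal sketch in Python of how you might encode them, together with a simple saturation check in the spirit of question 3.  Everything here is illustrative: the function names, thresholds, and the idea of tracking themes as sets are my own shorthand, not an RK&A tool.

```python
# Illustrative only: rough rules of thumb for qualitative interview
# sample sizes, encoding the guidance from the list above.

def suggest_sample_size(homogeneous: bool, narrow_scope: bool,
                        num_segments: int = 1) -> range:
    """Return a rough target range of interviews, per the heuristics above."""
    if num_segments > 1:
        # Comparing populations: roughly 25 to 30 interviews per segment.
        return range(25 * num_segments, 30 * num_segments + 1)
    if homogeneous and narrow_scope:
        # Homogeneous population, narrow phenomenon: around 10.
        return range(8, 13)
    # Varied population or complex phenomenon: around 40 to 50.
    return range(40, 51)

def reached_saturation(themes_per_interview: list[set[str]],
                       lookback: int = 3) -> bool:
    """True if the most recent interviews surfaced no new big-picture themes."""
    if len(themes_per_interview) <= lookback:
        return False
    seen: set[str] = set()
    for themes in themes_per_interview[:-lookback]:
        seen |= themes
    # Saturation: every theme in the last `lookback` interviews was already seen.
    return all(themes <= seen for themes in themes_per_interview[-lookback:])
```

In practice, saturation is a judgment call made while reviewing the data, not a mechanical test, but framing it this way clarifies what “stop when no new information emerges” means.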

The question of “how many” for qualitative studies might always feel a bit frustrating, since (as illustrated by the questions above) the answer will always be “it depends.”  But remember, as the word “qualitative” suggests, it’s less about exact numbers and more about understanding the quality of responses, including their breadth, depth, and range.  Each study will vary, but as long as you consider the questions above the next time you are deciding on sample size for qualitative methods, you can be confident you’re approaching the study in a systematic and rigorous way.


Last week I attended the MCN (Museum Computer Network) conference in Minneapolis.  It was an awesome experience—one that on the whole didn’t feel nearly as focused on technology as you might expect for a conference hosted by an organization with the word “computer” in its name.  Rather, the complexities of our relationships—with objects, spaces, and other people—seemed to be on everyone’s mind.  My head is still swimming with post-conference ideas along these lines, so I thought I’d share a few things I’ve been mulling over since returning to DC.

In her riveting keynote speech, Liz Ogbu, founder and principal of Studio O (a multidisciplinary design and innovation firm), challenged us to remember our own humanity when working to create change in our communities.  One thing she said particularly struck me in thinking about my work as an evaluator.  Liz spent three weeks living with and talking to women in Tanzania for a project intended to encourage more people to use cookstoves.  In describing this work, she talked about rethinking the data collection process so it is less of a “transaction” and more of a relationship-building process, with evaluator and participants on equal ground.  We must not simply come in and “extract” data from participants, Liz argued.  Rather, we need to make them feel like we are “in it” with them.  That difference, she argued, helps build trust and makes people more willing to share the deep, detailed information she needs to build the best solution.  For Liz, that meant conducting multiple in-depth interviews with women about how they use (or don’t use) cookstoves.  But it also meant getting down on her hands and knees and cooking with Tanzanian women to discover things they may not have thought to tell her but that are nonetheless important for understanding how they decide to feed their families.  In my work in museums, I doubt I’ll ever have the chance to immerse myself so fully in visitors’ “home” environments.  But Liz’s speech did make me wonder how I might incorporate more of my own humanity into data collection and establish a deeper sense of trust with the visitors I observe or interview.  Of course, there are still many advantages to taking a more hands-off approach—“staying neutral,” if you will.  But I want to challenge myself in the future to reconsider this as a default.  I think there might be times when it would do me and our museum clients good to approach the data collection process in a way that focuses first and foremost on developing a sense of trust and understanding between evaluator and visitor, so we can ultimately better understand the complexities of the issues at hand.

I wondered too about the implications of collecting data in museum spaces—namely, whether our own comfort in these spaces means we sometimes forget that these are not necessarily places where visitors feel equally comfortable, and how this might affect data collection.  A lack of time and resources certainly makes it difficult to do interviews with and observations of visitors/users in their “home” environments, but I can imagine times when it might be really advantageous to do so.  Take, for example, a museum that wants to learn how teachers use its online resources/collections.  I’m willing to bet the data would be a million times richer if we could go out and conduct interviews with teachers in their own classrooms and see first-hand how they use those resources, rather than if we tried to learn about their experiences by conducting a phone interview (where we can’t see how what they describe aligns with their practice, and we have to rely entirely on what they tell us when there may also be important factors they don’t think to share).  Sometimes we are lucky enough to be able to do this, as in our ongoing evaluation of citizen science programs for the Conservation Trust of Puerto Rico.  Unfortunately, a lack of resources or a desire for large sample sizes often makes this approach challenging.

As I chatted about these ideas with others throughout the conference, I became even more convinced of the immense value of doing rigorous, thorough qualitative research.  In my conference presentation on this topic, I shared a few “key competencies” for doing good qualitative research that I hope anyone seeking to understand visitors’ experiences will keep in mind:

Key competencies for doing qualitative research

Overall, my biggest takeaway this year is that designing and understanding experiences is never about the technology—it’s about the people.  Having a “digital” mindset towards museum work really just means embracing the many ways that technology allows us to find and tell stories, build and enhance relationships, and discover connections we never knew existed.  Human problems and relationships are at the heart of the “digital transformation” that MCN hopes to advance in the cultural sector.

I look forward to exploring this line of thinking more next year at MCN2016 in New Orleans!


Over the years there have been pivotal moments in which we at RK&A tried something out-of-the-ordinary to meet the needs of a particular project, and that approach then became a staple in how we do things.  It wasn’t always clear at the time that these were “pivotal moments,” but in retrospect I can see that these were times of concentrated learning and change.  For me, one of these pivotal moments was the first time we used rubrics as an assessment tool for a museum-based project.  I had been introduced to rubrics in my previous position, where I conducted educational research in the public school system, which sometimes included student assessment.  Rubrics are common practice among classroom teachers and educators who are required to assess individual student performance.

Rubrics had immediate appeal to me because they use qualitative research methods (like in-depth interviews, written essays, or naturalistic observations) to assess outcomes in a way that remains authentic to complicated, nuanced learning experiences, while at the same time remaining rigorous and responsive to the growing demand from funders to measure and quantify outcomes.  They are also appealing because they respect the complexity of learning—we know from research and evaluation that the impact of a learning experience may vary considerably from person to person.  These often very subtle differences in impact can be difficult to detect and measure.

To illustrate what a rubric is, I have an example below from the Museum of the City of New York, where we evaluated the effect of one of its field trip programs on fourth-grade students (read report HERE).  As shown here, a rubric is a set of indicators linked to one outcome.  It is used to assess a performance of knowledge, skills, attitudes, or behaviors—in this example, we were assessing “historical thinking,” more specifically students’ ability to recognize, without judging, cultural differences.  As you can see, rubrics include a continuum of understandings (or skills, attitudes, or behaviors) on a scale from 1 to 4, from “below beginning understanding” at 1 to “accomplished understanding” at 4.  The continuum captures the gradual, nuanced differences one might expect to see.
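For readers who like to see the structure spelled out, here is a minimal sketch of how a rubric like this might be represented in code.  Only the outcome and the labels for levels 1 and 4 come from the example above; the indicator text and the labels for levels 2 and 3 are placeholders I invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rubric:
    outcome: str            # the single outcome this rubric assesses
    indicators: list[str]   # observable evidence linked to that outcome
    levels: dict[int, str]  # score -> description along the continuum

# Hypothetical stand-in for the Museum of the City of New York rubric.
historical_thinking = Rubric(
    outcome="Recognizes, without judging, cultural differences",
    indicators=[
        "Names a specific cultural difference",        # placeholder
        "Describes the difference in neutral terms",   # placeholder
    ],
    levels={
        1: "Below beginning understanding",
        2: "Beginning understanding",    # placeholder label
        3: "Developing understanding",   # placeholder label
        4: "Accomplished understanding",
    },
)
```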

Museum of the City of New York Rubric

The first time we used rubrics was about 10 years ago, when we worked with the Solomon R. Guggenheim Museum, which had just been awarded a large research grant from the U.S. Department of Education to study the effects of its long-standing Learning Through Art program on third-grade students’ literacy skills.  This was a high-stakes project, and we needed to provide measurable, reliable findings to demonstrate complex outcomes, like “hypothesizing,” “evidential reasoning,” and “schema building.”  I immediately thought of using rubrics, especially since my past experience had been with elementary school students.  Working with an advisory team, we developed rubrics for a number of literacy-based skills, as shown in the example below (note the three-point scale in this example as opposed to the four-point scale above—the evolution in our use of rubrics included the realization that a four-point scale allows us to be more exact in our measurement).  To detect these skills, we conducted one-on-one standardized but open-ended interviews with over 400 students, transcribed the interviews, and scored them using the rubrics.  We were then able to quantify the qualitative data and run statistics.  Because rubrics are precise, specific, and standardized, they allowed us to detect differences between treatment and control groups—differences that may have gone undetected otherwise—and to feel confident about the results.  You can find the full report HERE.
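To sketch that quantify-then-test workflow (with invented scores, and not necessarily the statistical test used in the actual study), scored transcripts become ordinal data that standard statistics can handle:

```python
from scipy.stats import mannwhitneyu

# Hypothetical rubric scores on the study's three-point scale, assigned
# to transcribed interviews; these numbers are invented for illustration.
treatment_scores = [3, 2, 3, 3, 2, 3, 1, 2, 3, 3]
control_scores = [2, 1, 2, 2, 1, 3, 1, 2, 2, 1]

# Rubric scores are ordinal, so a rank-based test such as Mann-Whitney U
# is one reasonable way to compare treatment and control groups.
stat, p_value = mannwhitneyu(treatment_scores, control_scores,
                             alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```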

Solomon R. Guggenheim Museum Rubric

Fast forward ten years to today, and we use rubrics regularly for summative evaluation, especially when measuring the achievement of complicated, nuanced learning outcomes.  So far, the two examples I’ve mentioned involved students participating in a facilitated program, but we also use rubrics, when appropriate, with regular walk-in visitors to exhibitions.  For instance, we used rubrics for two NSF-funded exhibitions, one about the social construction of race (read report HERE) and another about conservation efforts to protect the animals of Madagascar (read report HERE).  Rubrics were warranted in both cases—both required a rigorous summative evaluation, and both were intended to help visitors learn complicated and emotionally charged concepts and ideas.

While rubrics were not new to me 10 years ago (and certainly not new in the world of assessment and evaluation), they were new for us at RK&A.  What started out as a necessity for the Guggenheim project has become common practice for us.  Our use of rubrics has informed the way we approach and think about evaluation and furthered our understanding of the way people learn in museums.  This is just one example of the way we continually learn at RK&A.


I love a good story.  Who doesn’t?  It’s how we humans make meaning—we construct narratives to explain and interpret events both to ourselves and for others.  Think about the number of stories you tell or hear in a day, even the mundane ones.  It’s a way to form and sustain connections with others and to understand ourselves.  So I was intrigued to see that this year’s AAM theme was “The Power of Story.”  I remembered that the 2012 AAM keynote address included a couple of storytellers from The Moth (its tagline is “True Stories Told Live,” and it features everyday people telling very personal stories on stage, broadcast on National Public Radio).  The Moth was one of the highlights of the 2012 AAM conference for me, so I was especially disappointed that I was unable to attend AAM this year.  But I talked to several people who did attend and read some blogs, and, not really surprisingly, it sounds like panelists wove the theme into their presentations in interesting and appropriate ways (which certainly isn’t always the case with conference themes).

It got me thinking.  Storytelling isn’t something I consider on a daily basis in my work, at least not in a literal, explicit way.  But the more I think about it, the more I realize storytelling permeates my work in nearly every way and has even become a tool for helping museums think about and define their impact.

To begin with, I am a qualitative researcher.  I was drawn to the field as a way to understand the world, in particular, people and groups of people—how they live, experience life, make meaning, and why and how they do what they do.  Of course one can study all this through quantitative research as well, but I am interested in the messiness and ambiguities inherent in qualitative research.  Qualitative data is narrative, and more specifically, I’ve noticed the best data often results when, for example, an interviewee or focus group participant tells a story to illustrate an idea.  And in fact, a strand of qualitative research called narrative research explicitly uses storytelling as a methodology.  Stories as data are powerful because they resonate and illuminate truths about the human experience.

Bed Curtain: England (1690-1710), artist unknown, V&A Museum.

Secondly, I was drawn to work in museums because of my love of objects—whether art, natural history specimens, or historical artifacts.  To me, objects embody stories.  Objects are the physical evidence demonstrating that something was here; something happened here!  Objects stir the imagination and stimulate storytelling, whether fantastical stories (just listen to a child explain a work of art) or stories based on interpretation and deductive reasoning.  And, based on all my years of conducting research and evaluation in museums, I can tell you visitors feel the same way I do about objects.  Authentic objects evoke stories for visitors, and as I mentioned earlier, stories are how we construct meaning and connect with others; objects help us bridge the gap between ourselves and another (whether an artist, a dinosaur, or the mysterious person who used the 17th-century bed curtain shown to the right).

The final, perhaps more subtle way that stories are important to my work is when we help our museum clients clarify and define impact (which Stephen Weil defined as “making a difference in the quality of people’s lives”).  Randi has written a lot about impact on our blog, so I won’t say too much about it here.  But I will say that one of the best ways for museums to begin thinking about impact is by telling stories about their work and why they love it.  I’ve never explicitly sat down with a client and said, “Okay, tell me a story about your visitors’ experiences.”  But invariably, that is what happens when we ask questions to help museums articulate their impact—they start telling stories (and at least twice, I’ve watched the telling of those stories lead to tears).  These stories are a starting place for museums to think authentically about the effect they have on their audiences.  As I discussed in my previous blog post, Explaining the Unexplainable, it is daunting to sit down and try to articulate impact and outcomes (particularly if you worry about measuring them), but starting with stories grounds you in what’s real and meaningful and can lay the foundation for articulating a distinct and authentic impact statement.
