
This spring, RK&A undertook an ambitious project with the Smithsonian National Museum of Natural History (NMNH): a meta-analysis of the evaluation reports completed for the museum over the last 10 years.  In this context, “meta-analysis” essentially means reanalyzing past analyses with the goal of identifying larger trends or gaps in research.  This project was both challenging and rewarding, so I wanted to share our experience on the blog.

The specific goals for this project were to:

  • Understand consistencies or inconsistencies in findings across reports;
  • Identify areas of interest for further study;
  • Help the museum build on its existing knowledge base; and
  • Create a more standardized framework for future evaluations that would help the museum continue building its knowledge base by connecting past studies to present and future evaluations.

The first step of the meta-analysis process was to perform an initial review of the reports and determine criteria for inclusion in the analysis.  One of the underlying goals for the project was to demonstrate to the institution at large (not just the Exhibits and Education departments) that evaluation is a useful, scientific, and rigorous tool that can inform future work at the museum.  Therefore, we wanted to make sure that the evaluation reports included in the study adhered to these high standards.

For this reason, we omitted a few reports that we considered casual “explorations” of an exhibition or component type rather than systematic studies using accepted evaluation and research protocols.  For example, an “exploration” might consist of a brief, informal observation of an exhibition and casual conversations with a handful of visitors about their experiences.  While these types of studies can be useful and informative on small-scale projects, they were not rigorous enough to support the larger goals of this project.

We also omitted reports in which the sampling and data collection methods were not clearly stated, because this left us unsure of exactly who was recruited, how they were recruited, and how the data were collected (e.g., Were the observations cued or uncued?  What instrument was used?).  Although these studies may have been rigorous, there is no way for us to know without a clear statement of the methodology in the report.
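For teams attempting a similar meta-analysis, it can help to record these screening decisions explicitly so they are auditable later.  Below is a minimal Python sketch of what such a screen might look like; the Report fields (systematic_design, sampling_described, collection_described) are hypothetical stand-ins for what were, in our case, qualitative judgments, not an actual RK&A instrument.

```python
# Hypothetical screening sketch: encode the inclusion criteria explicitly.
# Field names are illustrative; the actual review was a qualitative judgment.

from dataclasses import dataclass

@dataclass
class Report:
    title: str
    systematic_design: bool      # used accepted evaluation/research protocols
    sampling_described: bool     # who was recruited and how
    collection_described: bool   # cued vs. uncued, instrument used, etc.

def include(report: Report) -> bool:
    """A report enters the meta-analysis only if it is systematic
    and its methodology is clearly documented."""
    return (report.systematic_design
            and report.sampling_described
            and report.collection_described)

reports = [
    Report("Exhibition A summative evaluation", True, True, True),
    Report("Casual exploration of Component B", False, True, False),
]
included = [r.title for r in reports if include(r)]
print(included)  # ['Exhibition A summative evaluation']
```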

Next, we needed to develop a framework to use for analyzing and comparing evaluations.  Over the course of several meetings with NMNH, we discussed and clarified the ideas and outcomes that were most important to the museum.  Based on these discussions and a review of NMNH’s existing evaluation framework for public education offerings and the institution’s core messages, we developed a new evaluation framework that would serve as our analytic lens.  The new framework centered on four main categories, with the most emphasis placed on the Influence category.

Within the Influence category, we looked at a number of specific outcomes that were important to NMNH, such as whether visitors are “awe-inspired” by what they encounter in the museum or whether visitors report becoming more involved in “preserving and sustaining” the natural world.  To show some of the challenges we faced in making comparisons across reports, I’ll highlight an example from one outcome—“Visitors are curious about the information and ideas presented in the exhibition.”

Understanding whether visitors are “curious” about the information and ideas presented in an exhibition was difficult because many evaluations did not explore visitors’ curiosity directly.  Instead, we had to think about what types of questions, visitor responses, and visitor behaviors might serve as proxy indicators that visitors were curious about what they had seen or experienced.  For example, audience research studies conducted between 2010 and 2014 at NMNH asked entering visitors, “Which of these experiences are you especially looking forward to in the National Museum of Natural History today?” and exiting visitors, “Which of these experiences were especially satisfying to you in the National Museum of Natural History today?”  We decided that visitors who indicated they were especially looking forward to (entering) or satisfied by (exiting) “enriching my understanding” could be considered “curious” to learn more about the content and ideas presented by the museum.  For other evaluations that did not explore elements associated with curiosity, we looked for behavioral indicators, such as visitors asking questions or seeking clarification from staff and volunteers about something they had seen in an exhibition.
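Keeping these judgment calls consistent across dozens of reports is easier if the mapping from survey items and observed behaviors to framework outcomes is written down as an explicit codebook.  The sketch below shows one way such a mapping might be structured in Python; the item labels are paraphrases of the studies described above, and the direct/proxy classifications are illustrative assumptions, not NMNH’s actual codebook.

```python
# Illustrative codebook: map evaluation evidence to the "curiosity" outcome.
# Item labels are paraphrased and classifications are assumptions, purely to
# show the structure of the approach.

CURIOSITY_INDICATORS = {
    "entrance: looking forward to 'enriching my understanding'": "proxy",
    "exit: especially satisfied by 'enriching my understanding'": "proxy",
    "observed: asked staff/volunteers questions about the exhibition": "proxy",
    "survey: reported having 'curiosity sparked' by the exhibition": "direct",
}

def evidence_strength(item: str) -> str:
    """Return whether an item measures curiosity directly or by proxy."""
    return CURIOSITY_INDICATORS.get(item, "not an indicator")

print(evidence_strength(
    "survey: reported having 'curiosity sparked' by the exhibition"))  # direct
```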

However, we also acknowledge that visitors’ desire to “enrich” their “understanding” or “gain more information” about a topic does not always directly relate to curiosity.  For example, one evaluation that asked about both “curiosity” and “gaining information” found that an exhibition exceeded visitors’ expectations about having their “curiosity sparked” but fell short in “enriching understanding” or “gaining information.”  We learned from this that if curiosity is an important measure of NMNH’s influence on visitors, future evaluations should be clear in how they explore curiosity in their instruments and how they discuss it in their findings.

In light of the results of the meta-analysis, we are excited to see how NMNH uses the reporting tool we created from this work.  The tool standardizes the categories that evaluators and museum staff use to collect information and measure impact so the museum can build on its knowledge of the visitor experience and apply it to future exhibition and education practices.

Interviews are a commonly used data collection method in qualitative studies, where the goal is to understand or explore a phenomenon.  They’re an extremely effective way to gather rich, descriptive data about people’s experiences in a program or exhibition, which is one reason we use them often in our work at RK&A.  Figuring out sample size for interviews can feel trickier than for quantitative methods, like questionnaires, because there aren’t tools like sample size calculators to lean on.  However, there are several important questions to consider that can help guide your decision-making (and while you do so, remember that a cornerstone of qualitative research is a high tolerance for ambiguity and a willingness to trust your instincts!):

  1. How much does your population vary? The more homogeneous the population, the smaller the sample size. For example, is your population all teachers? Do they all teach the same grade level?  If so, you can use a smaller sample size, since the population and related phenomenon are narrow.  Generally speaking, if a population is very homogeneous and the phenomenon narrow, aim for a sample size of around 10.  If the population is varied or the phenomenon is complex, aim for around 40 to 50.  And if you want to compare populations, aim for 25 to 30 per segment.  In any case, a sample of more than 100 is generally excessive.  (These rules of thumb are encoded in the sketch after this list.)
  2. What is the scope of the question or phenomenon you are exploring? The narrower the question or phenomenon being studied, the smaller your sample size can be. Are you looking at one program, or just one aspect of a program? Or are you comparing programs or looking at many different aspects of a program?
  3. At what point will you reach redundancy? This is key for determining sample size for any qualitative data collection method.  You want to sample only to the point of saturation—that is, stop sampling when no new information emerges.  Another way to think about this is that you stop collecting data when you keep hearing the same things again and again.  To be clear, I’m talking about big trends here—while each interview will have its own nuance and the small details might vary from interview to interview, you can stop when the larger trends start to repeat themselves and no new trends arise.
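To make question 1 concrete, here is a rough Python encoding of those rules of thumb.  The exact cutoffs (e.g., treating “around 10” as 8 to 12) are my own illustrative assumptions, and in practice saturation (question 3) always overrides the arithmetic.

```python
# Rough encoding of the rules of thumb above; a starting point, not a formula.
# Cutoffs are illustrative assumptions, and saturation ultimately decides.

def suggested_sample_size(homogeneous: bool, narrow_scope: bool,
                          n_segments: int = 1) -> range:
    """Suggest a range of interview counts.

    homogeneous:  the population is fairly uniform (e.g., all 4th-grade teachers)
    narrow_scope: the question or phenomenon is narrowly defined
    n_segments:   number of population segments being compared
    """
    if n_segments > 1:
        low, high = 25 * n_segments, 30 * n_segments   # 25-30 per segment
    elif homogeneous and narrow_scope:
        low, high = 8, 12                              # "around 10"
    else:
        low, high = 40, 50                             # varied population/complex phenomenon
    return range(low, min(high, 100) + 1)              # >100 is generally excessive

print(suggested_sample_size(homogeneous=True, narrow_scope=True))  # range(8, 13)
print(suggested_sample_size(False, False, n_segments=2))           # range(50, 61)
```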

The question of “how many” for qualitative studies might always feel a bit frustrating, since (as illustrated by the questions above) the answer will always be “it depends.”  But remember, as the word “qualitative” suggests, it’s less about exact numbers and more about understanding the quality of responses, including the breadth, depth, and range of responses.  Each study will vary, but as long as you consider the questions above the next time you are deciding on sample size for qualitative methods, you can be confident you’re approaching the study in a systematic and rigorous way.

Position Opening: Research Associate

Randi Korn & Associates, Inc., is seeking a Research Associate in its Alexandria, VA office.

Primary Responsibilities:

The Research Associate will be responsible for implementing diverse evaluation projects and services, coordinating contractor data collection teams, collecting and analyzing qualitative and quantitative data, and preparing reports and presentations. Some travel is required.

Requirements:

The ideal candidate will have 3-5 years of experience conducting evaluation and/or research in informal learning settings and a desire to work in a client-centered environment. A master’s degree in social sciences, education, museum studies, or a related field is required. Qualitative data analysis experience is required, and quantitative data analysis experience is a plus. The qualified candidate must have excellent writing skills, be able to juggle multiple projects, and work both independently and as part of a team. A passion for museums or other kinds of informal learning environments is preferred.

Application Instructions:

Interested applicants should forward a resume with cover letter, salary requirements, and two independently written and edited writing samples to: jobs@randikorn.com. Your cover letter should be the body of the email and your PDF resume included as an attachment. Please include your last name in the file name of all the documents you send. Closing date for applications is July 15.

RK&A offers a competitive compensation and benefits package.


Last October, I joined RK&A as their newest Research Associate.  I was quickly whisked up into the world of evaluation, meeting with clients, collecting and analyzing data, and preparing reports and presentations.  As a busy period of late spring work transitions into another busy period of early summer work, I’ve had the opportunity to reflect on my first eight months here.

Anyone who has spent much time with me knows that I am a quiet-natured person, contented to be the proverbial “fly on the wall,” but also intensely interested in observing and absorbing my surroundings.  My interest in listening to and observing people and trying to understand their thoughts and actions took me down several different paths before I came to RK&A and entered the world of evaluation.

For a self-proclaimed people watcher like me, anthropology seemed like a natural course of study in undergraduate and graduate school.  While pursuing my master’s degree, I worked for several years as a research assistant in the Bureau of Applied Research in Anthropology (BARA) at the University of Arizona, a unit of the Department of Anthropology that focuses on research, teaching, and outreach with contemporary communities.  During my time at BARA, I participated in several research projects in which I conducted ethnographic interviews with Native American communities in Montana to document traditional cultural places.

At first, the prospect of interviewing others seemed very intimidating.  My introverted nature and my feelings as a cultural “outsider” made those first interviews nerve-racking.  However, working closely with my advisor, Dr. Nieves Zedeño, I learned many valuable lessons about interviewing, including the importance of making your interviewee comfortable and the power of patience, such as allowing for long pauses in conversation during an interview.  Moreover, I could see the immense value of these conversations and how qualitative and quantitative data can work together to make a strong case: for example, weaving archaeological data with contemporary interviews to establish long-term Native ties to a traditional cultural property for a nomination to the National Register of Historic Places.  I carried these lessons with me after graduation when I moved to Virginia and began working in market research for the higher education sector.  Interviewing became a larger part of my daily job, although I was now having conversations with subject matter experts, administrators, and stakeholders at colleges and universities to understand their challenges and successes rather than interviewing Native elders and tribal consultants.

Then, last year I joined RK&A as a newcomer to the museum evaluation field.  Since then, I’ve worked on many projects that allowed me to flex new intellectual muscles and develop new skills, including becoming a stronger, more confident interviewer.  In the process, I’ve become more aware of how to wield my introversion as an interviewing tool.  After all, there is great value in knowing when to talk and when to listen (really actively listen), when to allow for that long pause before moving to a new question, and how to create a safe space where others feel comfortable sharing their honest thoughts and opinions.  Understanding the virtues of these skills has helped me grow as an interviewer and an evaluator.

I’ve also enjoyed learning and reflecting on how to harness interview data to help museums understand their audience, meet visitors “where they are” in terms of the knowledge and experiences they bring to every museum visit, and clarify their messages so that visitors leave thinking a little differently than when they arrived (even if that change is small or focuses on just one new idea).  Interview data differ from other types of data we collect, such as timing and tracking observations or survey responses, because they provide an essential window into what visitors are actually thinking.  Interviews allow visitors to tell us, in their own words, what they find interesting or confusing or surprising, and let them explore personal connections with a topic or idea that the interviewer may never have considered.  It is rewarding to hear the excitement from our museum partners when they learn that a key message from an exhibit was well communicated or realize that visitors are coming away with ideas that were completely unexpected.  I look forward to continuing to learn and grow as an interviewer and evaluator at RK&A!


Update: we are no longer accepting applications for this position. Thank you!


Position Opening: Research Assistant


Randi Korn & Associates, Inc., a consulting firm specializing in evaluating museum programs and exhibitions, is seeking a Research Assistant in its Alexandria, VA office.


Primary responsibilities:

  • Data Collection. The Research Assistant will assist with data collection.  This will likely include observations and interviews as part of formative evaluation (e.g., testing prototypes), focused observations of programs or exhibition components, and telephone interviews.  It will also include collecting interviews, surveys, and timing and tracking observations; it will be important for the Research Assistant to have a deep understanding of these methods in order to train and manage data collection teams.
  • Hiring and managing data collection teams. We work with a national client base and regularly hire data collectors local to each client’s institution to conduct interviews, observations, and surveys.  Management may include onsite training of data collectors, scheduling data collectors, monitoring quotas, monitoring the quality of the data, and troubleshooting data collection challenges that arise.
  • Processing data. We work with a variety of data, including questionnaires, timing and tracking observations, and interviews.  The Research Assistant will be the primary point person for processing the data, which may include uploading audio files to transcriptionists, preparing files for quantitative data entry (programming an online survey or creating an Excel or SPSS file), entering quantitative data or contracting a data entry assistant, and organizing raw data in preparation for analysis.
  • Finalizing reports and other deliverables. RK&A produces numerous reports, proposals, and presentations each month.  The Research Assistant will help prepare these reports for distribution by proofreading, formatting, and assisting with designing data visualizations.
  • Updating the RK&A website. The Research Assistant will help maintain the website by posting news to the site’s homepage and updating project lists, case studies, and our archives, among other things.
  • Light analysis. Most analysis and reporting will be done by Associates, but the Assistant may be asked to conduct some analysis, such as coding survey responses, rubric scoring, or analyzing small samples of qualitative data.  The Assistant may take on more analysis over time, depending on the Assistant’s capabilities and project needs.

The ideal candidate will be a recent graduate from a master’s program in social sciences, education, museum studies, or a related field. The Assistant will be working with multiple RK&A Associates so must be able to communicate effectively within the team about their workload and prioritize based on needs. The qualified candidate must be detail-oriented and able to juggle multiple projects. A passion for museums or other kinds of informal learning environments is preferred.


Application Instructions:

Please submit your application via email to jobs@randikorn.com. Your cover letter should be the body of the email and your PDF resume included as an attachment.


RK&A offers a competitive compensation and benefits package. For information on RK&A, please visit randikorn.com.

Last week the interdisciplinary journal Museum & Society (M&S) released their latest issue, entitled “Sociology and Museums,” to which I’m proud to have contributed. As a doctoral student in sociology, my research looks within the organizational field of museums – comparing art museums and botanical gardens – to explore what sociologists gain by investigating the “guts” of museum practice. This is because – in agreement with the M&S editors – I believe that through sociological research on museums we might “expect to see sociology adding something new not only to our knowledge of museums, but also more ambitiously, to our understanding of human society as a whole.”

To date, I’ve explored how museums mediate people’s sensory experience. Museums are an apposite case for exploring sensory experiences because they are organized principally around objects, and people perceive objects through their senses: our experiences of them are not reducible to text. The article I published with M&S, for example, shows that if you compare art museums and botanical gardens, you can see differences in what I call “sensory conventions”: the rules that shape how we come to use our senses – and which senses we use – in particular settings. One familiar example of sensory conventions concerns how we act in a coffee shop versus a library. In either place, you can work on your laptop, but you know you can’t get away with yammering on your phone in the library. The convention is to be quiet. When it comes to museums, the sensory conventions are similarly well-defined. We look, but we don’t touch.

Or at least, that’s the case with art museums. In botanical gardens, as I show, things are a little more complicated. Plants invite certain kinds of sensory interactions (they smell; they rustle in the breeze) that artworks typically don’t. Further, we value art and natural objects differently, and that affects whether or not we are permitted to touch them. Compared to more traditional (art, history, natural history) museums, the sensory conventions in botanical gardens are not as clear. Visitor confusion persists even as garden staff try to promote a primarily “hands-off” experience, most often by distinguishing botanical gardens from parks.

Plants for the Senses

My article focuses not only on how sensory conventions differ by degree (for example, we can touch more in the gardens, compared to the galleries) but also on how they differ by type. Specifically, in botanical gardens, I find people describe aesthetic experience as being organized around how “things” look: the pleasing, unmediated beauty of natural objects and environments. In art museums, in contrast, people are more likely to say aesthetic experience is about how “to” look. It’s about interpretive observation that can further a person’s appreciation or understanding of an artwork. “Aesthetics” thus means different things across museums but this is not simply because the objects are different, it is also because museum staff choose to organize and interpret objects in particular ways to structure perception. I find these differences in aesthetic understandings extend to the multi-sensory museum experiences staff innovate for visitors with disabilities. While museum staff in the gardens facilitate programs that include plants with interesting textures and pleasing scents, those in the galleries tend to emphasize the senses’ ability to further interpretation. For instance, opportunities to touch provide information on an artwork’s weight and temperature: information that is not necessarily visually discernible.

How does all of this inform our understanding of “human society as a whole”? For one, as museums innovate their practices to be more engaging and accessible to diverse audiences, studying sensory conventions can tell us something about how organizational change happens. While external conditions no doubt shape what museums do, the local meanings and material cultures of museums also influence how these institutions differently evolve the “look, don’t touch” rule into more hands-on experiences. Further, looking at the museum-going experiences of visitors with disabilities reveals the assumptions embedded in sensory conventions. Such conventions shape the kinds of perceptual experiences that are possible in a given space – including in museums – and examining them shows how such opportunities vary across the forms of bodily difference we call disability.

C. Wright Mills famously described the sociological imagination as “the vivid awareness of the relationship between personal experience and the wider society.” Sociologists contend such awareness can promote more informed choices and deepen understanding of their effects. Accordingly, the 13 articles in M&S’s “Sociology and Museums” issue aim to foster readers’ sociological imagination of museums while also encouraging a more intentional approach to museum practice. I hope you check it out!

Sampling is a very important consideration for all types of data collection.  For audience research and summative evaluations in particular, it is important that the sample from which data are collected represents the actual population.  That is, the visitors who participate in a questionnaire or interview should match the entire population of visitors.  For instance, if the population of program visitors is 75% female, the sample should include approximately the same percentage of females.  When the study sample and the museum’s visiting population match in this way, the sample has external validity.  And when there is external validity, we can draw conclusions from a study’s results and generalize them to the entire population.


There are several protocols RK&A follows to work towards external validity.  First, to select study participants, we use a random sampling method, and most often, a continuous random selection method.  To follow the method, we instruct data collectors to position themselves in a designated recruitment location (e.g., museum or exhibition exit) and ask them to visualize an imaginary line on the floor.  Once they are in place, we instruct data collectors to select the first person who crosses the line.  If two people cross the line at the same time, we ask data collectors to select the person closest to them.  After the data collector finishes surveying or interviewing the selected person, the data collector returns to their recruitment location and selects the very next person to cross the line.  It is important for data collectors to follow this protocol every time so as not to introduce bias into the sample.  For instance, data collectors should not move the imaginary line or decide to delay recruiting because the person crossing the line looks unfriendly.
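To see why the “select the very next person to cross the line” rule approximates a random sample, consider a small simulation.  The sketch below is illustrative only: the arrival stream and the 10-minute interview length are made-up parameters, and the point is simply that selection depends on crossing times alone, never on what a visitor looks like.

```python
# Simulation sketch of continuous random selection at a recruitment line.
# Arrival rate and interview length are made-up parameters for illustration.

import random

random.seed(42)

def simulate(n_visitors: int = 1000, interview_minutes: float = 10.0,
             mean_gap_minutes: float = 0.5) -> list:
    """Return indices of visitors selected under the protocol: once the
    data collector is free, take the very next visitor to cross the line."""
    t = 0.0        # simulated clock (minutes)
    free_at = 0.0  # when the data collector is next available
    selected = []
    for i in range(n_visitors):
        t += random.expovariate(1.0 / mean_gap_minutes)  # next crossing time
        if t >= free_at:           # collector is free: select this visitor
            selected.append(i)
            free_at = t + interview_minutes
    return selected

sample = simulate()
# Selection is driven only by arrival timing, not by visitor characteristics.
print(len(sample), sample[:5])
```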


Second, we record observable demographics (e.g., approximate age) and visit characteristics (e.g., presence of children in the group) of any visitor who is invited to participate in the study but declines.  We also record the reason these recruited visitors give for declining (e.g., the parking meter is about to run out).  These data points are important for confirming or rejecting the external validity of the sample because we compare the demographic and visit characteristics of those who participated in the study to those of visitors who declined participation.  While the data points for comparison are limited, they are still informative.  For instance, a trend we have observed is that visitors aged 35 – 54 years are the most likely to decline participation, so their voices are often underrepresented.  The same goes for visitors with children, who may overlap considerably with the 35 – 54 age group; they are often underrepresented in visitor studies.  Knowing where your sample may be lacking is important context when interpreting the results.
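A simple way to operationalize this check is to compare the distribution of an observable characteristic between participants and decliners.  The sketch below uses invented counts purely for illustration (chosen to echo the 35 – 54 trend mentioned above) along with a chi-square test from scipy; how large a difference counts as “meaningful” is a judgment call, not a rule from our protocol.

```python
# Illustrative nonresponse check: do decliners differ from participants?
# Counts are invented for the example; they echo the 35-54 trend above.

from scipy.stats import chi2_contingency

age_bands = ["18-34", "35-54", "55+"]
participants = [60, 25, 35]  # observed counts among those who took part
decliners = [20, 40, 15]     # observed counts among those who refused

chi2, p, dof, expected = chi2_contingency([participants, decliners])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p suggests the groups differ

# Per-band refusal rates show where the sample may be thin:
for band, took_part, declined in zip(age_bands, participants, decliners):
    print(band, f"refusal rate = {declined / (took_part + declined):.0%}")
```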


For these reasons, we aim to systematically recruit visitors for audience research and evaluation studies.  Even for studies that use standardized questionnaires, we hire data collectors who use a random selection protocol to recruit participants and track information about those who decline.  As such, we do not recommend using survey kiosks to collect data, since visitors self-select to complete the survey and cannot be compared to those who decided not to complete it (and if you think kiosks may be preferable because you could boost the number of surveys collected, see my earlier post on sample sizes).  Again, there are always some exceptions to the general rules described above.  Yet our goal is always to use protocols that promote external validity as well as document threats to it…because what you don’t know can hurt you.

