Interviews are a commonly used data collection method in qualitative studies, where the goal is to understand or explore a phenomenon.  They’re an extremely effective way to gather rich, descriptive data about people’s experiences in a program or exhibition, which is one reason we use them often in our work at RK&A.  Figuring out sample size for interviews can feel trickier than for quantitative methods like questionnaires, because there are no sample size calculators to lean on.  However, there are several important questions that can help guide your decision-making (and while you consider them, remember that a cornerstone of qualitative research is a high tolerance for ambiguity and trust in your instincts!):

  1. How much does your population vary? The more homogeneous the population, the smaller the sample size can be.  For example, is your population all teachers? Do they all teach the same grade level?  If so, you can use a smaller sample, since the population and the related phenomenon are narrow.  Generally speaking, if a population is very homogeneous and the phenomenon narrow, aim for a sample size of around 10.  If the population is varied or the phenomenon complex, aim for around 40 to 50.  And if you want to compare populations, aim for 25 to 30 per segment.  In any case, a sample of more than 100 is generally excessive.  (For a rough translation of these heuristics into code, see the sketch after this list.)
  2. What is the scope of the question or phenomenon you are exploring? The narrower the question or phenomenon being studied, the smaller your sample size can be. Are you looking at one program, or just one aspect of a program? Or are you comparing programs or looking at many different aspects of a program?
  3. At what point will you reach redundancy? This is key for determining sample size for any qualitative data collection method.  You want to sample only to the point of saturation—that is, stop sampling when no new information emerges.  Another way to think about this: stop collecting data when you keep hearing the same things again and again.  To be clear, I’m talking about big trends here—while each interview will have its own nuance and the small details might vary from interview to interview, you can stop when the larger trends start to repeat themselves and no new trends arise.
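To make these rules of thumb concrete, here is a minimal Python sketch.  The thresholds mirror the heuristics above, but the function names, the three-interview window, and the example trends are all illustrative assumptions, not an RK&A tool.

```python
# Rough heuristics for qualitative interview sample sizes, plus a simple
# saturation check. Illustrative only; real saturation judgments are made
# by a human reading transcripts, not by code.

def suggested_interview_count(homogeneous: bool, narrow_scope: bool,
                              comparison_segments: int = 0) -> range:
    """Return a rough target range of interviews, per the heuristics above."""
    if comparison_segments > 1:
        return range(25 * comparison_segments, 30 * comparison_segments + 1)
    if homogeneous and narrow_scope:
        return range(8, 13)   # "around 10"
    return range(40, 51)      # varied population or complex phenomenon

def reached_saturation(trends_per_interview: list[set[str]], window: int = 3) -> bool:
    """True when the last `window` interviews surfaced no new big trends."""
    seen: set[str] = set()
    new_in_window = 0
    for i, trends in enumerate(trends_per_interview):
        fresh = trends - seen          # trends not heard in any earlier interview
        seen |= trends
        if i >= len(trends_per_interview) - window:
            new_in_window += len(fresh)
    return len(trends_per_interview) >= window and new_in_window == 0

# Example: the last three interviews only repeated trends heard earlier.
interviews = [{"wayfinding", "label text"}, {"label text", "hands-on"},
              {"wayfinding"}, {"hands-on"}, {"label text"}]
print(reached_saturation(interviews))  # True
```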

The question of “how many” for qualitative studies might always feel a bit frustrating, since (as illustrated by the questions above) the answer will always be “it depends.”  But remember, as the word “qualitative” suggests, it’s less about exact numbers and more about the quality of responses: their breadth, depth, and range.  Each study will vary, but as long as you consider the questions above the next time you are deciding on a sample size for qualitative methods, you can be confident you’re approaching the study in a systematic and rigorous way.

Position Opening: Research Associate

Randi Korn & Associates, Inc., is seeking a Research Associate in its Alexandria, VA office.

Primary Responsibilities:

The Research Associate will be responsible for implementing diverse evaluation projects and services, coordinating contractor data collection teams, collecting and analyzing qualitative and quantitative data, and preparing reports and presentations. Some travel is required.

Requirements:

The ideal candidate will have 3-5 years of experience conducting evaluation and/or research in informal learning settings and a desire to work in a client-centered environment. A master’s degree in social sciences, education, museum studies, or a related field is required. Qualitative data analysis experience is required, and quantitative data analysis experience is a plus. The qualified candidate must have excellent writing skills and be able to juggle multiple projects and work both independently and as part of a team. A passion for museums or other kinds of informal learning environments is preferred.

Application Instructions:

Interested applicants should forward a resume with cover letter, salary requirements, and two independently written and edited writing samples to: jobs@randikorn.com. Your cover letter should be the body of the email and your PDF resume included as an attachment. Please include your last name in the file name of all the documents you send. Closing date for applications is July 15.

RK&A offers a competitive compensation and benefits package.


Last October, I joined RK&A as their newest Research Associate.  I was quickly whisked up into the world of evaluation, meeting with clients, collecting and analyzing data, and preparing reports and presentations.  As a busy period of late spring work transitions into another busy period of early summer work, I’ve had the opportunity to reflect on my first eight months here.

Anyone who has spent much time with me knows that I am a quiet-natured person, contented to be the proverbial “fly on the wall,” but also intensely interested in observing and absorbing my surroundings.  My interest in listening to and observing people and trying to understand their thoughts and actions took me down several different paths before I came to RK&A and entered the world of evaluation.

As a self-proclaimed people-watcher, I found anthropology a natural course of study in undergraduate and graduate school.  While pursuing my master’s degree, I worked for several years as a research assistant in the Bureau of Applied Research in Anthropology (BARA) at the University of Arizona, a unit of the Department of Anthropology that focuses on research, teaching, and outreach with contemporary communities.  During my time at BARA, I participated in several research projects that required me to conduct ethnographic interviews with Native American communities in Montana to document traditional cultural places.

At first, the prospect of interviewing others seemed very intimidating.  My introverted nature and my feelings as a cultural “outsider” made these first interviews nerve-racking.  However, working closely with my advisor, Dr. Nieves Zedeño, I learned many valuable things about interviewing, including the importance of making your interviewee comfortable and the power of patience and allowing for long pauses in conversation.  Moreover, I could see the immense value of these conversations and how qualitative and quantitative data can work together to make a strong case—for example, weaving archaeological data with contemporary interviews to establish long-term Native ties to a traditional cultural property for a nomination to the National Register of Historic Places.  I carried these lessons with me after graduation when I moved to Virginia and began working in market research for the higher education sector.  Interviewing became a larger part of my daily job, although I was now having conversations with subject matter experts, administrators, and stakeholders at colleges and universities to understand their challenges and successes rather than interviewing Native elders and tribal consultants.

Then, last year I joined RK&A as a newcomer to the museum evaluation field.  Since then, I’ve worked on many projects that allowed me to flex new intellectual muscles and develop new skills, including becoming a stronger, more confident interviewer.  In the process, I’ve become more aware of how to wield my introversion as an interviewing tool.  After all, there is great value in knowing when to talk and when to listen (really actively listen), when to allow for that long pause before moving to a new question, and how to create a safe space where others feel comfortable sharing their honest thoughts and opinions.  Understanding the virtues of these skills has helped me grow as an interviewer and an evaluator.

I’ve also enjoyed learning and reflecting on how to harness interview data to help museums understand their audience, meet visitors “where they are” in terms of the knowledge and experiences they bring to every museum visit, and push to clarify their messages so that visitors leave thinking a little differently than when they arrived (even if that change is small or focuses on just one new idea).  Interview data differs from other types of data we collect, such as timing and tracking observations or survey responses, because it provides that essential window into what visitors are actually thinking.  Interviews allow visitors to tell us, in their own words, what they find interesting or confusing or surprising, and let them explore personal connections with a topic or idea that the interviewer may never have considered.  It is rewarding to hear the excitement from our museum partners when they learn that a key message from an exhibit was well communicated or realize that visitors are coming away with ideas that were completely unexpected.  I look forward to continuing to learn and grow as an interviewer and evaluator at RK&A!


Update: we are no longer accepting applications for this position. Thank you!

Position Opening: Research Assistant

Randi Korn & Associates, Inc., a consulting firm specializing in evaluating museum programs and exhibitions, is seeking a Research Assistant in its Alexandria, VA office.

Primary responsibilities:

  • Data Collection. The Research Assistant will assist with data collection.  This will likely include observations and interviews as part of formative evaluation (e.g., testing prototypes), focused observations of programs or exhibition components, and telephone interviews.  It will also include collecting interviews, surveys, and timing and tracking observations—it will be important for the Research Assistant to have a deep understanding of these methods in order to train and manage data collection teams.
  • Hiring and managing data collection teams. We work with a national client base and regularly hire data collectors local to our clients’ institutions to conduct interviews, observations, and surveys.  Management may include onsite training of data collectors, scheduling data collectors, monitoring quotas, monitoring the quality of the data, and solving data collection challenges as they arise.
  • Processing data. We work with a variety of data, including questionnaires, timing and tracking observations, and interviews.  The Research Assistant will be the primary point person for processing the data, which may include uploading audio files to transcriptionists, preparing files for quantitative data entry (programming an online survey or creating an Excel or SPSS file), entering quantitative data or contracting a data entry assistant, and organizing raw data in preparation for analysis.
  • Finalizing reports and other deliverables. RK&A produces numerous reports, proposals, and presentations each month.  The Research Assistant will help prepare these reports for distribution by proofreading, formatting, and assisting with designing data visualizations.
  • Updating the RK&A website. The Research Assistant will help maintain the website by posting news to the site’s homepage and updating project lists and case studies, among other things.  This includes helping to update our archives.
  • Light analysis. Most analysis and reporting will be done by Associates, but the Assistant may be asked to conduct some analysis, such as coding survey responses, rubric scoring, or analyzing small samples of qualitative data.  The Assistant may take on more analysis over time, depending on the Assistant’s capabilities and project needs.

The ideal candidate will be a recent graduate of a master’s program in social sciences, education, museum studies, or a related field. The Assistant will work with multiple RK&A Associates and must be able to communicate effectively within the team about their workload and prioritize based on needs. The qualified candidate must be detail-oriented and able to juggle multiple projects. A passion for museums or other kinds of informal learning environments is preferred.

Application Instructions:

Please submit your application via email to jobs@randikorn.com. Your cover letter should be the body of the email and your PDF resume included as an attachment.

RK&A offers a competitive compensation and benefits package. For information on RK&A, please visit randikorn.com.

Last week the interdisciplinary journal Museum & Society (M&S) released their latest issue, entitled “Sociology and Museums,” to which I’m proud to have contributed. As a doctoral student in sociology, my research looks within the organizational field of museums – comparing art museums and botanical gardens – to explore what sociologists gain by investigating the “guts” of museum practice. This is because – in agreement with the M&S editors – I believe that through sociological research on museums we might “expect to see sociology adding something new not only to our knowledge of museums, but also more ambitiously, to our understanding of human society as a whole.”

To date, I’ve explored how museums mediate people’s sensory experience. Museums are an apposite case for exploring sensory experiences because they are organized principally around objects, and people perceive objects through their senses: our experiences of them are not reducible to text. The article I published with M&S, for example, shows that if you compare art museums and botanical gardens, you can see differences in what I call “sensory conventions”: the rules that shape how we come to use our senses – and which senses we use – in particular settings. One familiar example of sensory conventions concerns how we act in a coffee shop versus a library. In either place, you can work on your laptop, but you know you can’t get away with yammering on your phone in the library. The convention is to be quiet. When it comes to museums, the sensory conventions are similarly well-defined. We look, but we don’t touch.

Or at least, that’s the case with art museums. In botanical gardens, as I show, things are a little more complicated. Plants invite certain kinds of sensory interactions (they smell; they rustle in the breeze) that artworks typically don’t. Further, we value art and natural objects differently, and that impacts whether or not we are permitted to touch them. Compared to more traditional (art, history, natural history) museums, the sensory conventions in botanical gardens are not as clear. Visitor confusion persists even as garden staff try to promote a primarily “hands-off” experience, most often by distinguishing botanical gardens from parks.

Plants for the Senses

My article focuses not only on how sensory conventions differ by degree (for example, we can touch more in the gardens, compared to the galleries) but also on how they differ by type. Specifically, in botanical gardens, I find people describe aesthetic experience as being organized around how “things” look: the pleasing, unmediated beauty of natural objects and environments. In art museums, in contrast, people are more likely to say aesthetic experience is about how “to” look. It’s about interpretive observation that can further a person’s appreciation or understanding of an artwork. “Aesthetics” thus means different things across museums, but this is not simply because the objects are different; it is also because museum staff choose to organize and interpret objects in particular ways to structure perception. I find these differences in aesthetic understandings extend to the multi-sensory museum experiences staff innovate for visitors with disabilities. While museum staff in the gardens facilitate programs that include plants with interesting textures and pleasing scents, those in the galleries tend to emphasize the senses’ ability to further interpretation. For instance, opportunities to touch provide information on an artwork’s weight and temperature: information that is not necessarily visually discernible.

How does all of this inform our understanding of “human society as a whole”? For one, as museums innovate their practices to be more engaging and accessible to diverse audiences, studying sensory conventions can tell us something about how organizational change happens. While external conditions no doubt shape what museums do, the local meanings and material cultures of museums also influence how these institutions differently evolve the “look, don’t touch” rule into more hands-on experiences. Further, looking at the museum-going experiences of visitors with disabilities reveals the assumptions embedded in sensory conventions. Such conventions shape the kinds of perceptual experiences that are possible in a given space – including in museums – and show how such opportunities vary across the forms of bodily difference we call disability.

C. Wright Mills famously described the sociological imagination as “the vivid awareness of the relationship between personal experience and the wider society.” Sociologists contend such awareness can promote more informed choices and deepen understanding of their effects. Accordingly, the 13 articles in M&S’s “Sociology and Museums” issue aim to foster readers’ sociological imagination of museums while also encouraging a more intentional approach to museum practice. I hope you check it out!

Sampling is a very important consideration for all types of data collection.  For audience research and summative evaluations in particular, it is important that the sample from which data is collected represents the actual population.  That is, the visitors who participate in a questionnaire or interview should match the entire population of visitors.  For instance, if the population of program visitors is 75% female, the sample should include approximately the same percentage of females.  When the study sample matches the museum’s visiting population, the sample has external validity.  And when there is external validity, we can draw conclusions from a study’s results and generalize them to the entire population.

There are several protocols RK&A follows to work towards external validity.  First, to select study participants, we use a random sampling method, and most often, a continuous random selection method.  To follow the method, we instruct data collectors to position themselves in a designated recruitment location (e.g., museum or exhibition exit) and ask them to visualize an imaginary line on the floor.  Once they are in place, we instruct data collectors to select the first person who crosses the line.  If two people cross the line at the same time, we ask data collectors to select the person closest to them.  After the data collector finishes surveying or interviewing the selected person, the data collector returns to their recruitment location and selects the very next person to cross the line.  It is important for data collectors to follow this protocol every time so as not to introduce bias into the sample.  For instance, data collectors should not move the imaginary line or decide to delay recruiting because the person crossing the line looks unfriendly.
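To see why the “very next person” rule matters, here is a small, purely illustrative Python simulation; the visitor attributes, rates, and function names are my assumptions, not part of RK&A’s protocol.  It compares following the protocol with a collector who sometimes delays recruiting when a visitor looks unfriendly.

```python
# Illustrative simulation: how quietly skipping "unfriendly-looking" visitors
# skews a sample away from the population that actually crossed the line.
import random

random.seed(1)
# Hypothetical exiting visitors: 30% would be (rightly or wrongly) judged unfriendly.
visitors = [{"looks_unfriendly": random.random() < 0.3} for _ in range(10_000)]

def protocol_selection(stream):
    """Approach the very next person to cross the line, every time."""
    return list(stream)

def biased_selection(stream, skip_rate=0.5):
    """A collector who delays recruiting half of the 'unfriendly-looking' visitors."""
    return [v for v in stream
            if not (v["looks_unfriendly"] and random.random() < skip_rate)]

for name, sample in (("protocol", protocol_selection(visitors)),
                     ("biased", biased_selection(visitors))):
    share = sum(v["looks_unfriendly"] for v in sample) / len(sample)
    print(f"{name}: {share:.0%} of the sample looks unfriendly")
# protocol: ~30% (mirrors everyone who crossed the line)
# biased: ~18% -- the sample no longer represents the visiting population
```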

Second, we record observable demographics (e.g., approximate age) and visit characteristics (e.g., presence of children in the group) for any visitor who is invited to participate in the study but declines.  We also record the reason these recruited visitors give for declining (e.g., a parking meter is about to run out).  These data points are important for confirming or rejecting the external validity of the sample, because we compare the demographic and visit characteristics of those who participated in the study to those of visitors who declined.  While the data points available for comparison are limited, they are still informative.  For instance, a trend we have observed is that visitors 35 – 54 years old are most likely to decline participation, so their voices are often underrepresented.  The same goes for visitors with children, who may be a subset of the 35 – 54 age group; they are often underrepresented in visitor studies.  Knowing where your sample may be lacking is important context when interpreting the results.
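As a sketch of how that comparison might look in code, assuming hypothetical tallies and a standard chi-square test of independence (this is an illustration, not RK&A’s actual analysis):

```python
# Do decliners differ systematically from participants on an observable
# characteristic? Counts below are hypothetical.
from scipy.stats import chi2_contingency

#               18-34  35-54  55+
participants = [120,    95,   105]
decliners    = [ 25,    60,    20]

chi2, p, dof, expected = chi2_contingency([participants, decliners])
print(f"chi-square = {chi2:.1f}, p = {p:.4f}")
# A small p-value flags a systematic difference (here, the 35-54 group
# declines far more often), a threat to external validity worth noting
# when interpreting the results.
```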

For these reasons, we aim to recruit visitors systematically for audience research and evaluation studies.  Even for studies that use standardized questionnaires, we hire data collectors who use a random selection protocol to recruit participants and track information about those who decline.  As such, we do not recommend using survey kiosks to collect data, since visitors self-select to complete the survey and cannot be compared to those who decided not to (and if you think kiosks may be preferable because you could boost the number of surveys collected, see my earlier post on sample sizes).  Again, there are always some exceptions to the general rules described above.  Yet our goal is always to use protocols that promote external validity as well as document threats to it…because what you don’t know can hurt you.


Sample size is a standard question we are asked, particularly for questionnaires, since we will be using statistical analyses.  For most audience research projects, we recommend collecting 400 questionnaires.  We are not alone in this general rule of thumb—400 is considered by some researchers (and market researchers in particular) to be the “magic number” in the world of sample sizes.  What makes 400 magical is that it is the most economical number of questionnaires to collect (from most populations) while keeping the margin of error at ± 5% (and the confidence level at 95%).  A sample size of 400 questionnaires keeps the cost of the research down while still allowing us to have high confidence in the results.

To dive deeper into this issue, let’s talk about the three primary factors to consider when deciding on a sample size: (1) population; (2) confidence level; and (3) margin of error.  Population is the number of people in the group from which you are sampling.  For instance, your population may be the number of annual visitors to your museum, the number of members, or the number of visitors to a specific exhibition or program.  A fact that is often enlightening and counter-intuitive is that population does not have a proportional relationship to sample size.  To see this, follow my calculations using one of the many sample size calculators available on the web.  Let’s start by determining a sample size for surveying the National Gallery of Art, which reported nearly 4 million visits in 2014 (3,892,459 to be exact).  Using a margin of error of ± 5% and a 95% confidence level, the suggested sample size is 385.  By comparison, the suggested sample size for The Phillips Collection, which welcomed 106,154 exhibition visitors in 2014, is 383.  Despite vastly different visiting populations, the recommended sample sizes differ by just two!  This example demonstrates that sample size is not proportional to population; it also shows that an estimate of your population is often sufficient to determine a sample size (unless you are determining a sample size for a program with small attendance or another small population).
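If you would rather compute the figure than use a web calculator, here is a minimal Python sketch of the standard sample-size formula for a proportion, with a finite population correction; the function name is mine, but it reproduces the two numbers above.

```python
# Sample size for a proportion at a given margin of error and confidence
# level (z = 1.96 for 95%), with a finite population correction.
import math

def sample_size(population: int, moe: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    n0 = (z ** 2) * p * (1 - p) / moe ** 2    # infinite-population size (~384)
    n = n0 / (1 + (n0 - 1) / population)      # finite population correction
    return math.ceil(n)

print(sample_size(3_892_459))  # National Gallery of Art: 385
print(sample_size(106_154))    # The Phillips Collection: 383
```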

Confidence level and margin of error (or confidence interval), as you might expect, indicate how “sure” you can be about the results of the questionnaires.  Here, the researcher has to choose an appropriate confidence level and margin of error based on how the data will be used.  At RK&A, we generally plan for a margin of error of ± 5% and a confidence level of either 90 or 95%, because that provides enough confidence in the data given how our museum clients use it to make institutional decisions.  If we were working with a medical professional making life-or-death decisions, we would want to be more confident in the results (thus, a lower margin of error and a higher confidence level).  So why not plan to be as confident in the results as possible, regardless of how they are used?  Money.  Confidence comes at a cost because, like the relationship between population and sample size, the relationship between sample size and margin of error is not proportional.  For instance, see the graph below, based on the population reported above for the National Gallery of Art.  Notice that the slope of the line is steepest on the left side of the graph and more gradual on the right side.  This is the law of diminishing returns at play.  There are great benefits when moving from a sample of 200 to 400 (the margin of error shrinks by about 2 percentage points), but the benefits are not nearly as great when moving from a sample of 400 to 600 (the margin of error shrinks by less than 1 percentage point).  Thus, returning to our initial point, collecting more than 400 questionnaires is rarely prudent, since the cost of data collection goes up disproportionately to the reduction in margin of error.  For our museum clients, we do not think that increase in confidence justifies the extra cost.

[Graph: margin of error versus sample size, based on the National Gallery of Art’s 2014 visiting population]
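A minimal sketch of the numbers behind that graph, using the same formula rearranged to give the margin of error for a fixed sample size (again with a finite population correction):

```python
# Margin of error for a proportion as sample size grows, illustrating the
# diminishing returns described above.
import math

def margin_of_error(n: int, population: int,
                    z: float = 1.96, p: float = 0.5) -> float:
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

N = 3_892_459  # National Gallery of Art, 2014
for n in (200, 400, 600):
    print(f"n = {n}: +/- {margin_of_error(n, N):.1%}")
# n = 200: +/- 6.9%; n = 400: +/- 4.9%; n = 600: +/- 4.0%
# Doubling from 200 to 400 cuts the margin by ~2 points; the next 200
# questionnaires save less than 1 point.
```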

I would be remiss to end this post without a footnote.  While 400 is our rule of thumb for audience research data collected through a standardized questionnaire, there are certainly many considerations and reasons why 400 might not be the magic number in every case.  We joke that the response to any methodological question is the often frustrating retort: “It depends.”  Sample size is no different—it depends.
