In keeping with this year’s blog series about how my Intentional Practice has evolved over the last 10 years, I will be using the next seven months to present the seven principles of Intentional Practice.  The emergence of these principles was organic; I did not set out to identify these principles prior to embarking on this work—the list just came to me one day last summer.  In fact, I had forgotten that I had even written the list until I was cleaning up my Intentional Practice folder on my computer last week.  To my surprise and delight, there it was!  Suffice it to say, over the next seven months I will mull over the principles, which may shift or change as I clarify my thinking.  For that reason, I will share one per month.

#1: The organization wants to achieve something greater than itself (e.g., impact) among the audiences it serves.

 

The first principle is a prerequisite for Intentional Planning: a museum cannot move forward in Intentional Practice if it isn’t interested in working for the common good.  Clarifying intended impact isn’t about the museum benefiting; it is about the public—the recipient of the museum’s work—benefiting.  Even the statement, “People become life-long museum visitors” doesn’t place the benefit solely on the museum visitor, as repeated visitation is a means to a greater end—for the visitor.  Achieving impact is about making a difference in people’s lives, which requires the full force of the museum behind it.  A museum that is insular, self-serving, or arrogant may not be able to pursue Intentional Planning.  Likewise, a museum with a relentless focus on the bottom line may thwart Intentional Practice work, not because it wants to, but because persistent attention to the bottom line has a funny way of interfering with integrity and ingenuity.  People may inadvertently revert to traditional ways, which for some museums may mean looking inward rather than outward.  Fear might overtake confidence, risk-taking might disappear, and working on behalf of the bottom line might seem like the only survival strategy on the horizon.  While organizations can balance bottom-line concerns with achieving something greater than themselves, more often than not they create an either/or situation rather than an “and” situation.


The New Museum by John Cotton Dana

The idea of a museum thinking outside of itself for the common good is an age-old idea that holds value and importance today.  A century ago, John Cotton Dana said, “A museum is good only insofar as it is of use”—a statement that is often quoted today by museum staff who want their museums to be viewed as convening places where people can gather to have important conversations about contemporary issues.  Dana’s many important writings are compiled in a book called The New Museum (1999), published by the Newark Museum, and they are worth reading.

Stephen Weil

And, in Making Museums Matter (2002), noted scholar and museum director Stephen Weil writes in the chapter “Can and do they make a difference?”: “If our museums are not being operated with the ultimate goal of improving people’s lives, on what alternative basis might we possibly ask for public support?”  In this piece and several others, Weil makes a case for museums to do their work to “make a positive difference in the quality of people’s lives,” which is how all of us at RK&A define impact.

In 1996, Harold Skramstad, former director of the Henry Ford Museum and Greenfield Village in Dearborn, MI, noted in a presentation during the Smithsonian’s 150th anniversary celebration that mission statements, which museums like to use to demonstrate their purpose, do not answer the “so what?” question.  Museums spend a lot of time agonizing over their mission and vision statements (both of which are about the museum), when it might make more sense to use some of that time thinking about the impact they want to achieve among their audiences.

The “so what?” question is a running theme, at least implicitly, in Emlyn Koster’s writings; Emlyn, Executive Director of the North Carolina Museum of Natural Sciences, writes about “relevance” as the necessary element that museums in today’s world must boldly embrace.  For me, relevance is connected to the concept of achieving impact, as audiences will benefit from a museum that is relevant to their lives.  I suggest reading these two pieces by Koster, neither of which is available digitally for free: “In Search of Relevance: Science Centers as Innovators in the Evolution of Museums” in Daedalus, 1999; and “The Relevant Museum: A Reflection on Sustainability” in Museum News, 2006.  Both make a case for relevance as a necessary requirement for today’s museums.  Emlyn also makes the point that the sustainability of our planet is the relevant topic for science museums.  I believe he is right.

Relevance is also a viable approach to organizational sustainability for any museum: maintaining the relevance of what your organization does for its audiences will keep your museum fresh, contemporary, and, most important, purposeful and meaningful to your audiences.

Ten years have passed since “The Case for Holistic Intentionality” appeared in Curator.  On the one hand, 10 years isn’t that long ago, but on the other hand, a lot has changed in how I think about intentionality.  The article (actually written 12 years ago) presents a concept about the characteristics of an intentional museum and makes a case for such organizations.  What the article had not benefited from—since it was only a concept rather than proof of concept—was my experience helping museums move towards intentional practice.

My colleague, Stephanie Downey, suggested I write 12 blog posts this year—one each month—to share the Intentional Practice strategies we developed and continue to hone and implement with museums.  She thought that this year of reflection and sharing would support the work I already will be doing as I spend this next year writing a book on Intentional Practice.  This undertaking has been in my mind for a while, and I’m excited that I have finally committed myself to this task.

Honestly, what is difficult about applying words to ideas is that the very nature of Intentional Practice presumes nothing is stationary.  Ideas are fluid, strategies are ever changing, the external environment is in constant flux, and learning is continuous.  Much like the law of physics that says everything is in constant motion, my ideas about intentionality and Intentional Practice are forever changing—not in big discernible ways (I might be the only one who notices), but in little ways.  My thinking changes almost daily, which isn’t a bad thing, except if I want to write about it!  At some point I will have to say the acronym ELMO, “Enough, Let’s Move On”—something I learned from a museum professional who was in one of the first Intentional Practice workshops.  ELMO comes in very handy, as you can imagine!

The Cycle of Intentional Practice, presented in a blog posted on January 2, 2013, has changed considerably, at least to me.  Ten years ago, Curator didn’t want to include the graphic in the article, and it is only now that I am grateful.  This is what it looked like in 2013:

Cycle of Intentional Practice

And this is what it looks like today:

 

The most significant shift (aside from its cleaner look; thank you, Amanda Krantz and Cathy Sigmond) is that there are now quadrants.  I always described the cycle as having quadrants, but only recently has the cycle actually had them.  The order of the quadrants is also different: align now follows reflect instead of plan.  How odd it seems to me now that this wasn’t the original order.  Alignment makes sense after reflection: after reflecting on evaluation results, you ask how you can align your actions to achieve impact.  Three concepts are unchanged: impact remains the centerpiece of the cycle; one can start anywhere on the cycle; and none of the quadrants are mutually exclusive, as one can reflect while planning, evaluating, and aligning.

While those ideas seem stationary today, what will I think tomorrow?  ELMO!

I look forward to the coming months when I will be sharing my thoughts about how my intentional practice work has evolved.

While waiting to get my hands on Nina Simon’s newest book, The Art of Relevance, I enjoyed working my way through her blog, Museum 2.0.  I was especially touched by a pair of posts I’d read just before Labor Day weekend: (1) a “sneak peek” of The Art of Relevance; and (2) an honest, reflective post about her 2008 vacation, in which she examines the field-wide conflict between elitism and inclusivity in the context of her experience at Yellowstone National Park.  I thought about these posts while biking on the Mount Vernon Trail that weekend, because the trail’s ultra-accessibility (parking lots, paved walkways, picnic tables, etc.) makes for a crowded ride.  I almost wished that everyone else would just go away so I could enjoy zipping along the trail’s curves at top speed.  When I felt annoyed after navigating around joggers, walkers, other cyclists, and picnickers of all ages sprawled along the trail, I found myself reexamining my mindset in the context of Simon’s reflection:

“Yellowstone…was an access dream—and my nightmare. You could drive right up to the geysers…I hated it…On this trip, for the first time, I truly understood the position of people who disagree with me, those who feel that eating and boisterous talking in museums is not only undesirable but violating and painful…I get it now. I felt it at Yellowstone.”

So how can I, as an advocate for accessibility and relevance in museums and parks, reconcile my advocacy with my attitude?  For parks and museums, there is value in hosting a range of people who fall at different points on a spectrum of museum literacy.  In some of our studies, RK&A helps museums identify different “clusters” of visitors, understood by their ranges of prior knowledge, conceptions, and attitudes.  Audience segmentation allows the museum to meet visitors “where they are,” welcoming people with all levels of museum experience; one segment is not necessarily more ideal than another.  According to theorists in educational psychology, in classrooms and beyond, peers with different levels of mastery of the same subject can help each other learn.  For example, Lev Vygotsky’s concept of the Zone of Proximal Development (ZPD) refers to the difference between what someone can do on their own and what they can do with help.  So those more comfortable using the museum or park can help less experienced visitors who want to engage with museums learn more than they could on their own.  (Not to mention, teaching someone something new leads the more experienced person to understand the subject more deeply, too.)  While museums and parks are considering how to best serve different audiences, what opportunities can they create for visitors to “scaffold” for each other, stretching the value of visitors’ museum experiences beyond themselves?

Just as museums can facilitate opportunities for visitors to grow and learn together, evaluators can scaffold museums’ development in understanding visitors, too.  Research shows that experiential learning, much like the learning opportunities offered by many parks and museums, is a powerful tool for enhancing our empathetic abilities.  After experiencing a “Yellowstone” moment, I have a better appreciation for the impulse to preserve the authenticity of a place or experience I hold dear or sacred, and for why we might feel reluctant to welcome people who not only have varying amounts of experience visiting an art museum, but also have different ways of using a museum or park than you or I do.  Gretchen Jennings reminds us to build our capacity for empathy by remembering “when we have felt like part of an ‘out-group,’ to savor those experiences…that show that our institutions can empathize with the concerns of their audiences.”  Only two years ago, I was brand new to navigating multi-use trails; I remember what it’s like to feel like I didn’t belong on a trail that I now know well and use with ease.

Evaluators move fluidly between empathizing with multiple audiences, sharing the visitor experience with the museum in ways their staff find understandable, meaningful, and useful—stretching them just beyond the realm of what they already know about their visitors.  As a new team member at RK&A, I’ve been observing my colleagues as conduits who transmit information about a museum’s audience to the museum staff who are responsible for enhancing visitors’ experiences.  Evaluators fall in between the visitor and the museum, facilitating a relationship between two entities that seek to better understand the other party.  That understanding can come from walking in either the museum’s or the visitor’s shoes—or, in the case of the evaluator, by wearing both.


A calmer moment on the Mount Vernon Trail.


This spring, RK&A undertook an ambitious project with the Smithsonian National Museum of Natural History (NMNH) to conduct a meta-analysis of reports from the last 10 years of evaluation completed for the museum.  In this context, “meta-analysis” essentially means reanalyzing past analyses with the goal of identifying larger trends or gaps in research.  This project was both challenging and rewarding, and so I wanted to share our experience on the blog.

The specific goals for this project were to:

  • Understand consistencies or inconsistencies in findings across reports;
  • Identify areas of interest for further study;
  • Help the museum build on its existing knowledge base; and
  • Create a more standardized framework for future evaluations that would help the museum continue building its knowledge base by connecting past studies to present and future evaluations.

The first step of the meta-analysis process was to perform an initial review of the reports and determine criteria for inclusion in the analysis.  One of the underlying goals for the project was to demonstrate to the institution at large (not just the Exhibits and Education departments) that evaluation is a useful, scientific, and rigorous tool that can inform future work at the museum.  Therefore, we wanted to make sure that the evaluation reports included in the study adhered to these high standards.

For this reason, we omitted a few reports that we considered casual “explorations” of an exhibition or component type, rather than systematic studies using accepted evaluation and research protocols.  For example, an “exploration” might consist of a brief, informal observation of an exhibition and casual conversations with a small number of visitors about their experiences.  While these types of studies can be useful and informative on small-scale projects, they were not rigorous enough to support the larger goals of this project.

We also omitted reports in which the sampling and data collection methods were not clearly stated, because this left us unsure of exactly who was recruited, how they were recruited, and how the data were collected (e.g., Were the observations cued or uncued?  What instrument was used?).  Although these studies may have been rigorous, there is no way for us to know without a clear statement of the methodology in the report.
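
In practice, this screening step can be as simple as checking each report against a short list of yes/no criteria.  Below is a minimal sketch, in Python, of what that kind of screen might look like; the Report fields and example titles are hypothetical illustrations, not RK&A’s or NMNH’s actual records or tooling.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """Hypothetical metadata recorded for each evaluation report reviewed."""
    title: str
    year: int
    systematic_design: bool    # used accepted evaluation/research protocols
    sampling_described: bool   # states who was recruited and how
    methods_described: bool    # states how data were collected (e.g., cued vs. uncued)

def meets_inclusion_criteria(report: Report) -> bool:
    # Retain a report only if it is systematic AND its methodology is documented.
    return (report.systematic_design
            and report.sampling_described
            and report.methods_described)

reports = [
    Report("Summative evaluation of Exhibition A", 2012, True, True, True),
    Report("Informal exploration of Component B", 2014, False, False, True),
]

included = [r for r in reports if meets_inclusion_criteria(r)]
excluded = [r for r in reports if not meets_inclusion_criteria(r)]
print(f"{len(included)} included, {len(excluded)} excluded")
```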

Next, we needed to develop a framework to use for analyzing and comparing evaluations.  Over the course of several meetings with NMNH, we discussed and clarified the ideas and outcomes that were most important to the museum.  Based on these discussions and a review of NMNH’s existing evaluation framework for public education offerings and the institution’s core messages, we developed a new evaluation framework that would serve as our analytic lens.  The new framework centered on four main categories, with the most emphasis placed on the Influence category.

Within the Influence category, we looked at a number of specific outcomes that were important to NMNH, such as whether visitors are “awe-inspired” by what they encounter in the museum or whether visitors report becoming more involved in “preserving and sustaining” the natural world.  To show some of the challenges we faced in making comparisons across reports, I’ll highlight an example from one outcome—“Visitors are curious about the information and ideas presented in the exhibition.”

Understanding whether visitors are “curious” about the information and ideas presented in an exhibition was difficult because many evaluations did not explore visitors’ curiosity directly.  Instead, we had to think about what types of questions, visitor responses, and visitor behaviors might serve as proxy indicators that visitors were curious about what they had seen or experienced.  For example, audience research studies conducted between 2010 and 2014 at NMNH asked entering visitors, “Which of these experiences are you especially looking forward to in the National Museum of Natural History today?” and exiting visitors, “Which of these experiences were especially satisfying to you in the National Museum of Natural History today?”  We decided that visitors who indicated they were especially looking forward to (entering) or satisfied by (exiting) “enriching my understanding” may be considered “curious” to learn more about the content and ideas presented by the museum.  For other evaluations that didn’t explicitly explore elements associated with curiosity, we looked for behavioral indicators, such as visitors asking questions of staff and volunteers or seeking clarification about something they had seen in an exhibition.

However, we also acknowledge that visitors’ desire to “enrich” their “understanding” or “gain more information” about a topic does not always directly relate to curiosity.  For example, one evaluation that asked about both “curiosity” and “gaining information” found that an exhibition exceeded visitors’ expectations about having their “curiosity sparked” but fell short in “enriching understanding” or “gaining information.”  We learned from this that if curiosity is an important measure of NMNH’s influence on visitors, future evaluations should be clear in how they explore curiosity in their instruments and how they discuss it in their findings.
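
To apply those judgments consistently across reports, we essentially coded each study according to the curiosity indicators it contained.  The sketch below illustrates one hypothetical way to express that coding logic in Python; the indicator labels and study names are invented for illustration and are not NMNH’s actual data or categories.

```python
# Proxy indicators that might be treated as evidence of curiosity; as noted above,
# "enriching my understanding" is an imperfect stand-in for curiosity.
CURIOSITY_PROXIES = {
    "looking forward to / satisfied by enriching my understanding",
    "asked staff or volunteers a question about the exhibition",
}

def code_curiosity(indicators: set[str]) -> str:
    """Classify how a study speaks to the 'visitors are curious' outcome."""
    if "curiosity sparked" in indicators:
        return "measured directly"
    if indicators & CURIOSITY_PROXIES:
        return "measured via proxy"
    return "not measured"

# Hypothetical studies and the indicators each one reported on
studies = {
    "2012 entrance/exit survey": {"looking forward to / satisfied by enriching my understanding"},
    "2015 gallery observation": {"asked staff or volunteers a question about the exhibition"},
    "2016 exhibition evaluation": {"curiosity sparked"},
}

for study, indicators in studies.items():
    print(f"{study}: {code_curiosity(indicators)}")
```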

In light of the results of the meta-analysis, we are excited to see how NMNH uses the reporting tool we created from this work.  The tool standardizes the categories that evaluators and museum staff use to collect information and measure impact so the museum can build on its knowledge of the visitor experience and apply it to future exhibition and education practices.

Interviews are a commonly used data collection method in qualitative studies, where the goal is to understand or explore a phenomenon.  They’re an extremely effective way to gather rich, descriptive data about people’s experiences in a program or exhibition, which is one reason we use them often in our work at RK&A.  Figuring out sample size for interviews can feel trickier than for quantitative methods like questionnaires, because there aren’t tools like sample size calculators to lean on.  However, there are several important questions to consider that can help guide your decision-making (and while you do so, remember that a cornerstone of qualitative research is a high tolerance for ambiguity and a willingness to trust your instincts!):

  1. How much does your population vary? The more homogeneous the population, the smaller the sample size. For example, is your population all teachers? Do they all teach the same grade level?  If so, you can use a smaller sample size, since the population and related phenomenon are narrow.  Generally speaking, if a population is very homogeneous and the phenomenon narrow, aim for a sample size of around 10.  If the population is varied or the phenomenon is complex, aim for around 40 to 50.  And if you want to compare populations, aim for 25 to 30 per segment.  In any case, a sample of more than 100 is generally excessive.  (These rules of thumb are turned into a small code sketch after this list.)
  2. What is the scope of the question or phenomenon you are exploring? The more narrow the question being explored or phenomena being studied, the smaller your sample size can be. Are you looking at one program, or just one aspect of a program? Or, are you comparing programs or looking at many different aspects of a program?
  3. At what point will you reach redundancy? This is key for determining sample size for any qualitative data collection method.  You want to sample only to the point of saturation—that is, stop sampling when no new information emerges.  Another way to think about this is that you stop collecting data when you keep hearing the same things again and again.  To be clear, I’m talking about big trends here—while each interview will have its own nuance and the small details might vary from interview to interview, you can stop when the larger trends start to repeat themselves and no new trends arise.
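
Here is the code sketch referenced in question 1: a small, purely illustrative Python function that turns those rules of thumb into a starting-point estimate.  The function name and thresholds are my own shorthand rather than an RK&A tool, and saturation (question 3) remains the real stopping rule.

```python
def suggested_interview_count(homogeneous: bool, narrow_scope: bool, segments: int = 1) -> int:
    """Heuristic starting point for a qualitative interview sample size."""
    if segments > 1:
        # Comparing populations: roughly 25-30 interviews per segment,
        # capped because more than about 100 interviews is generally excessive.
        return min(30 * segments, 100)
    if homogeneous and narrow_scope:
        return 10   # narrow population and narrow phenomenon
    return 45       # varied population or complex phenomenon: roughly 40-50

print(suggested_interview_count(homogeneous=True, narrow_scope=True))               # 10
print(suggested_interview_count(homogeneous=False, narrow_scope=False))             # 45
print(suggested_interview_count(homogeneous=False, narrow_scope=True, segments=3))  # 90
```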

The question of “how many” for qualitative studies might always feel a bit frustrating, since (as illustrated by the questions above) the answer will always be “it depends.”  But remember, as the word “qualitative” suggests, it’s less about exact numbers and more about understanding the quality of responses, including the breadth, depth, and range of responses.  Each study will vary, but as long as you consider the questions above the next time you are deciding on sample size for qualitative methods, you can be confident you’re approaching the study in a systematic and rigorous way.

Position Opening: Research Associate

Randi Korn & Associates, Inc., is seeking a Research Associate in its Alexandria, VA office.

Primary Responsibilities:

The Research Associate will be responsible for implementing diverse evaluation projects and services, coordinating contractor data collection teams, collecting and analyzing qualitative and quantitative data, and preparing reports and presentations. Some travel is required.

Requirements:

The ideal candidate will have 3-5 years of experience conducting evaluation and/or research in informal learning settings and a desire to work in a client-centered environment. A master’s degree in social sciences, education, museum studies, or a related field is required. Qualitative data analysis experience is required, and quantitative data analysis experience is a plus. The qualified candidate must have excellent writing skills, be able to juggle multiple projects, and be able to work both independently and as part of a team. A passion for museums or other kinds of informal learning environments is preferred.

Application Instructions:

Interested applicants should forward a resume with cover letter, salary requirements, and two independently written and edited writing samples to: jobs@randikorn.com. Your cover letter should be the body of the email and your PDF resume included as an attachment. Please include your last name in the file name of all the documents you send. Closing date for applications is July 15.

RK&A offers a competitive compensation and benefits package.


Last October, I joined RK&A as their newest Research Associate.  I was quickly whisked up into the world of evaluation, meeting with clients, collecting and analyzing data, and preparing reports and presentations.  As a busy period of late spring work transitions into another busy period of early summer work, I’ve had the opportunity to reflect on my first eight months here.

Anyone who has spent much time with me knows that I am a quiet-natured person, contented to be the proverbial “fly on the wall,” but also intensely interested in observing and absorbing my surroundings.  My interest in listening to and observing people and trying to understand their thoughts and actions took me down several different paths before I came to RK&A and entered the world of evaluation.

As a self-proclaimed people watcher, I found anthropology a natural course of study in undergraduate and graduate school.  While pursuing my master’s degree, I worked for several years as a research assistant in the Bureau of Applied Research in Anthropology (BARA) at the University of Arizona, a unit of the Department of Anthropology that focuses on research, teaching, and outreach with contemporary communities.  During my time at BARA, I participated in several research projects that required me to conduct ethnographic interviews with Native American communities in Montana to document traditional cultural places.

At first, the prospect of interviewing others seemed very intimidating.  My introverted nature and my feelings as a cultural “outsider” made these first interviews nerve-racking.  However, working closely with my advisor, Dr. Nieves Zedeño, I learned many valuable lessons about interviewing, including the importance of making your interviewee comfortable and the power of patience and allowing for long pauses in conversation.  Moreover, I could see the immense value of these conversations and how qualitative and quantitative data can work together to make a strong case—for example, weaving archaeological data with contemporary interviews to establish long-term Native ties to a traditional cultural property for a nomination to the National Register of Historic Places.  I carried these lessons with me after graduation when I moved to Virginia and began working in market research for the higher education sector.  Interviewing became a larger part of my daily job, although I was now having conversations with subject matter experts, administrators, and stakeholders at colleges and universities to understand their challenges and successes rather than interviewing Native elders and tribal consultants.

Then, last year I joined RK&A as a newcomer to the museum evaluation field.  Since then, I’ve worked on many projects that allowed me to flex new intellectual muscles and develop new skills, including becoming a stronger, more confident interviewer.  In the process, I’ve become more aware of how to wield my introversion as an interviewing tool.  After all, there is great value in knowing when to talk and when to listen (really actively listen), when to allow for that long pause before moving to a new question, and how to create a safe space where others feel comfortable sharing their honest thoughts and opinions.  Understanding the virtues of these skills has helped me grow as an interviewer and an evaluator.

I’ve also enjoyed learning and reflecting on how to harness interview data to help museums understand their audiences, meet visitors “where they are” in terms of the knowledge and experiences they bring to every museum visit, and push to clarify their messages so that visitors leave thinking a little differently than when they arrived (even if that change is small or focuses on just one new idea).  Interview data are distinct from other types of data we collect, such as timing and tracking observations or survey responses, because they provide that essential window into what visitors are actually thinking.  Interviews allow visitors to tell us, in their own words, what they find interesting or confusing or surprising, and let them explore personal connections with a topic or idea that the interviewer may never have considered.  It is rewarding to hear the excitement from our museum partners when they learn that a key message from an exhibit was well communicated or realize that visitors are coming away with ideas that were completely unexpected.  I look forward to continuing to learn and grow as an interviewer and evaluator at RK&A!

Icon made by Freepik from www.flaticon.com