Posts Tagged ‘evaluation’

While waiting to get my hands on Nina Simon’s newest book, The Art of Relevance, I enjoyed working my way through her blog, Museums 2.0.  I was especially touched by a pair of posts I’d read just before Labor Day weekend: (1) a “sneak peek” of The Art of Relevance; and (2) an honest, reflective post about her vacation from 2008, in which she examines the field-wide conflict between elitism and inclusivity in the context of her experience at Yellowstone National Park.  I thought about these posts while biking on the Mt. Vernon Trail that weekend because the trail’s ultra-accessibility (parking lots, paved walkways, picnic tables, etc.) makes for a crowded ride. I almost wished that everyone else would just go away so I could enjoy zipping along the trail’s curves at top speed.  When I felt annoyed after navigating around joggers, walkers, other cyclists, and picnickers of all ages sprawled along the trail, I found myself reexamining my mindset in the context of Simon’s reflection:

“Yellowstone…was an access dream—and my nightmare. You could drive right up to the geysers…I hated it…On this trip, for the first time, I truly understood the position of people who disagree with me, those who feel that eating and boisterous talking in museums is not only undesirable but violating and painful…I get it now. I felt it at Yellowstone.”

So how can I, as an advocate for accessibility and relevance in museums and parks, reconcile my advocacy with my attitude?  For parks and museums, there is value in hosting a range of people who fall at different points on a spectrum of museum literacy.  In some of our studies, RK&A helps museums identify different “clusters” of visitors, understood by their ranges of prior knowledge, conceptions, and attitudes.  Audience segmentation allows the museum to meet visitors “where they are,” welcoming people with all levels of museum experience; no one segment is necessarily more ideal than another.  According to theorists in educational psychology, in classrooms and beyond, peers with different levels of mastery of the same subject can help each other learn.  For example, Lev Vygotsky’s concept of the Zone of Proximal Development (ZPD) refers to the difference between what someone can do on their own and what they can do with help.  So, visitors who are more comfortable using the museum or park can help less experienced visitors learn more than they could on their own.  (Not to mention that teaching someone something new deepens the more experienced person’s own understanding, too.)  As museums and parks consider how best to serve different audiences, what opportunities can they create for visitors to “scaffold” for each other, stretching the value of visitors’ museum experiences beyond themselves?
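For the curious, the “clusters” idea can be made concrete with a toy example. The sketch below is purely illustrative — the visitors, the two features (a prior-knowledge rating and visits per year), and the two-segment split are all invented for the example and are not RK&A’s actual data or method — but it shows the basic mechanics of grouping visitors with a simple k-means pass:

```python
from statistics import mean

# Invented example data: each visitor is (prior-knowledge rating 0-10, visits per year).
visitors = [(1, 0), (2, 1), (1, 1), (4, 2), (5, 3), (7, 6), (8, 8), (6, 5)]

def kmeans(points, centroids, iters=10):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster emptied out.
        centroids = [tuple(mean(dim) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters, centroids

# Two starting centroids: one "newcomer-like", one "frequent-visitor-like".
clusters, centroids = kmeans(visitors, [(1, 1), (7, 7)])
for label, group in zip(["less experienced", "more experienced"], clusters):
    print(label, group)
```

In practice, segmentation studies draw on much richer interview and survey data, but the underlying move is the same: let patterns in visitors’ prior knowledge and behavior define the groups, rather than assuming one “ideal” visitor.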

Just as museums can facilitate opportunities for visitors to grow and learn together, evaluators can scaffold museums’ development in understanding visitors, too.  Research shows that experiential learning, much like the learning opportunities offered by many parks and museums, is a powerful tool for enhancing our empathetic abilities.  After experiencing a “Yellowstone” moment, I have a better appreciation for the impulse to preserve the authenticity of a place or experience we hold dear or sacred, and for why we might feel reluctant to welcome people who not only have varying amounts of experience visiting an art museum, but also use a museum or park differently than you or I do.  Gretchen Jennings reminds us to build our capacity for empathy by remembering “when we have felt like part of an ‘out-group,’ to savor those experiences…that show that our institutions can empathize with the concerns of their audiences.”  Only two years ago, I was brand new at navigating multi-use trails; I remember what it felt like not to belong on a trail that I now know well and use with ease.

Evaluators move fluidly between empathizing with multiple audiences, sharing the visitor experience with museum staff in ways they find understandable, meaningful, and useful—stretching them just beyond the realm of what they already know about their visitors.  As a new team member at RK&A, I’ve been observing my colleagues act as conduits who transmit information about a museum’s audience to the staff responsible for enhancing visitors’ experiences.  Evaluators sit between the visitor and the museum, facilitating a relationship between two parties that each seek to better understand the other.  That understanding can come from walking in either the museum’s or the visitor’s shoes—or, in the case of the evaluator, from wearing both.


A calmer moment on the Mount Vernon Trail.


What’s My Job?

Ever wonder what we do at RK&A or what it’s like being an evaluator? Cathy recently tackled those questions in a column for the Tufts Museum Studies alumni newsletter called “What’s My Job?,” where Museum Studies alums shed light on what it’s like to work in a wide range of careers in the cultural sector. Check out her response below for a snapshot of what it’s like to work at RK&A!



Hello!  My name is Cathy Sigmond and I work as a Research Associate at Randi Korn & Associates, Inc. (RK&A). RK&A is a planning, evaluation, and research firm based in Alexandria, VA, that works to support museums and other informal-learning organizations as they pursue impact.

What exactly does that mean?  Basically, it boils down to a few key things.  First, we help museum staff think strategically about and clarify what they hope to achieve (be it on the institutional level or for a specific exhibition or program).  Then, we also help them measure the extent to which they are achieving these goals (or how they might achieve them) through evaluation.  And finally, we help them make sense of the data (evaluation results) so they can move forward with their work more informed.

It means that in my job, I ask why? and what does that mean? a lot.  And as someone who loves to dig deep and find out what makes people tick, I love every second of it.

The day-to-day aspects of working as an evaluator at a small company (seven people across three cities!) aren’t always the most glamorous, but to me they are constantly exciting.  One day I might travel to a natural history museum to kick off a front-end evaluation of a new fossil exhibition.  Another day I might spend my time managing data collectors I hired to conduct interviews at a botanic garden.  Oftentimes I’m out in the field observing and talking with visitors myself (like when I got to observe citizen science programs in Puerto Rico!).  And some days, while I might be physically in our office in Virginia, I’m mentally holed up in the data, trying to make sense of visitors’ actions and opinions.

To be an evaluator means having a questioning stance about everything.  It requires meticulous attention to detail but also the ability to step back and understand the data holistically, in its broad context.

Perhaps most importantly, however, it requires having a passion for helping people do meaningful work.  And that’s why my favorite aspect of my job is translating the data for our museum clients (either in person or through a written report) and working with them to reflect on what evaluation results might mean for them moving forward.  I love knowing that my work helps others learn and grow into more informed and intentional museum professionals.  And I feel like I’m constantly learning and growing alongside our clients, and that’s a pretty awesome job perk.

If you’d like to learn more about all things evaluation/visitor studies/user experience (or just want to say hi!), please feel free to email Cathy at or RK&A at


Last week, Stephanie wrote a thoughtful piece about the recent upswing in museum professionals who are conducting evaluation and the importance of thinking critically about evaluation. Among other things, she asked “how can the field be sure the results produced are reliable and useful?” You can read the full post here. In an effort to open up discussion and conversation on thinking critically about evaluation, we’re excited to announce RK&A’s first Twitter chat. We hope you will join us!

Join the Conversation

From 2-3pm EDT on Tuesday, June 9th, RK&A’s Stephanie Downey, Amanda Krantz, and Cathy Sigmond will host RK&A’s first Twitter chat on thinking critically about evaluation, using the hashtag #RKAchat.  To join the conversation, make sure your tweets include this hashtag.

Why this topic?

Many museum professionals are enthusiastic advocates for evaluation and view it as essential to their work.  As evaluators, we’re ecstatic about this! But for an evaluation to be truly useful, museum professionals need to think evaluatively about evaluation.  In other words, museum professionals must think critically about how evaluations are planned and conducted to fully make sense of evaluation results within the context of the reliability and validity of the study.

To that end, we’re hosting a discussion on thinking critically about evaluation.  We’ll pose some questions so we can hear your thoughts and experiences, and we will share our thoughts on how museum professionals can position themselves to be critical consumers of evaluation.

Twitter Chat Questions

During the Twitter chat, @IntentionalMuse will tweet numbered questions (for example, “Q1: What do you evaluate, and how? #RKAchat”).  Your response tweet should reference the question (for example, “A1: We talk to visitors about what they took away from an exhibition to evaluate the exhibition’s learning goals #RKAchat”).

Q1: What do you evaluate, and how?

Q2: What do you think characterizes a “good” quality evaluation?

Q3: What characterizes a “bad” quality evaluation?

Q4: What are the challenges in conducting high quality evaluation that will provide meaningful results?

Q5: With quality in mind, what is one way you might think about or approach evaluation differently in the future?

How to Participate

If you do not already have one, create a Twitter account.  On Tuesday, June 9th from 2-3pm EDT, tweet using the hashtag #RKAchat.  You can monitor the tweets related to the chat by searching for #RKAchat on Twitter.


As a staff, we have noticed the slow but steady upswing in the number of museums doing and requesting evaluation over the years.  While evaluation was uncommon in the museum world 15 or 20 years ago, today many, many museum professionals are enthusiastic advocates for evaluation and view it as essential to their work.  Ultimately, we are thrilled about this trend because we truly believe that evaluation can be used as a learning tool to reflect on and improve practice.  This has to be good for the museum world, right?  But there is a part of me that worries about this trend.  As someone who values quality, how can the field be sure the results produced are reliable and useful?  Just because someone says they are doing evaluation, should we take at face value that the evaluation they are doing is “good evaluation”?  No.  Like most things in the world, there is a continuum of quality when it comes to evaluation: there are ways of doing evaluation that will lead to results you can feel confident about and make decisions from, and there are ways of doing evaluation that lack purpose and will result in piles of data that are meaningless and therefore never acted upon.

All of this hit home for me recently when I worked with a museum’s education department to build staff capacity for evaluation.  The education department in this museum had been doing evaluation on its own for years, and while much of it had been useful, the staff felt they were sometimes collecting data for the sake of collecting data and were not quite able to decide what to evaluate and what not to evaluate.  None of them are trained in evaluation, but they all have great respect for it and wanted to learn how to do it better.  Thus, I stepped in.  Gulp.  I was honored that they wanted me to teach them about evaluation.  As a trained evaluator with many, many years of experience, it should be easy, right?  I quickly realized that teaching others how to do what you do is anything but easy.  In the process of preparing my teaching materials, I did something I hadn’t done in a while: I looked very critically at what I do as my chosen profession and asked myself how I do it—I broke down what I do every day into pieces that I could explain and teach.  And in the process I came to a new appreciation for how hard it is to do evaluation well without real training and experience.  I have to admit, I feel really good about what I have been able to teach this museum’s education department about evaluation, but it hasn’t been easy by any means.

All this is to say that we would like to start a conversation about how to conduct high-quality evaluation so that evaluation efforts will result in findings you can feel confident about and use.  High quality isn’t about size: good evaluation can be a very small study (with small sample sizes) done in-house or a large study with mixed methods and complex analysis strategies.  Quality hinges mostly on having a purpose when planning a study and staying true to that purpose when implementing it.  As a result of all this thinking, we are planning to host our first Twitter chat, where we will invite museum professionals to think critically about evaluation through a series of questions we will pose. Stay tuned for more details!



For the last few months, I’ve been on maternity leave with my second child. My first little girl is three years old and I had forgotten many of the nuances of caring for an infant. Through the process of getting to know my new little one, I began to reflect on a drawing that we often show to our clients in our planning or reflection workshops.


The drawing consists of three concentric circles. In the center circle, we write “comfort zone.” This, we tell practitioners, is where most of us operate on a daily basis during our work day.  We feel safe in this zone because it consists of routines and interactions with colleagues we know well. In the middle circle, we write “learning zone.” This is the zone where we have “ah-ha” moments and take in new ideas from our own work or interactions with colleagues. Perhaps we go to a meeting where different points of view are shared, and we have a breakthrough moment about a project we are working on or see something familiar in a new light. We tell clients that this is, ideally, where we want them to be during our workshops—open to sharing and receiving new ideas. Then, of course, there is the outside circle, which we label the “panic zone.” This is the zone where we shut down because we are too uncomfortable to take in new learning or ideas. When this happens, we long for the “comfort zone” and, until we find our way back there, we are unlikely to be receptive to our colleagues’ ideas. Instead, we often put up walls or use defense mechanisms to deflect what we find uncomfortable.

I’ve operated in all of these zones the past few months. Before going on maternity leave, I was in my “comfort zone” with my first daughter. Even though she changes every day, we have a daily routine that works pretty well. With her, I happily and regularly enter the “learning zone” as well. She is constantly learning new things and, now that she is in school, she is learning at a rapid rate. As I’m sure many of you who are parents know, this is challenging and surprising in a pleasant sort of way. Then, my second daughter was born, and I entered the “panic zone.” She is amazing but, although I remembered the newborn phase in the abstract, I’d forgotten all the little challenges of caring for a very small baby. Once you think you’ve mastered one thing, it changes, and you have to adapt all over again within a relatively short period of time. As someone who loves organization, schedules, and routines, I find this an uncomfortable state of being. She is slowly shifting into more predictable patterns as she grows but I’ve decided that the newborn phase is my “panic zone.” I’m much more comfortable with older babies and toddlers.

When we facilitate meetings or workshops, we encourage our clients to invite a diverse group of relevant stakeholders to sit around the table. Many times, those who attend have interacted with one another only on a limited basis. We present the learning zone graphic because we know from experience that facilitating a conversation among colleagues who rarely come together to discuss and reflect on ideas can create a challenging environment for some. And, as we tell everyone, we all have different thresholds for these three zones. What is comfortable for some might cause others to panic. For example, I am more likely to hit the “panic zone” when dealing with a newborn, while another mother might hit this zone more readily with a toddler. Since we want everyone to operate in the “learning zone,” we remind practitioners to pay attention to how they and their colleagues are receiving the ideas being discussed so no one enters the “panic zone,” where learning ceases to happen. So, at a time when we are all reflecting on the past year and forming New Year’s resolutions, I find myself thinking how important it is for us all to be honest with ourselves and one another about our thresholds for these different zones so we can spend more time learning and less time panicking.


At RK&A, we think a lot about intentional practice and we encourage our clients to do the same. In planning meetings and reflection workshops, we ask clients to think about which elements of their work align with their institutional mission and vision (check out Randi’s blog post for more about the challenges of alignment). We push them to consider who might be the right audience for their program or exhibition, and we ask them to talk about the intended outcomes for their projects. Posing these kinds of questions is much easier for an “outsider” to do because we don’t have institutional baggage or a personal connection to a problem project. As consultants, we aren’t beholden to the way things have always been done. I get it – it can be hard to let go – but seeing clients seek information to make informed decisions is a powerful, exciting process. These clients want more information. They are willing to try new things, to change old (and sometimes new) programs to see if they can improve upon the results. These are museum professionals who want the very best experiences for their visitors.

We recently completed a project with a history museum, and the results were, well, not as rosy as one might hope. After explaining the challenges of changing students’ perspectives in a short, one-time museum visit, we started talking about what could be done to increase the effectiveness of the program. One of our suggestions was to increase the time allotted for the program and, rather than spending that extra time in the exhibition, use it to facilitate a discussion in which students can process and reflect on what they had seen. Changing a program’s format and duration is a difficult task for the museum to undertake – it may require extra staff and certainly a different schedule – but it could make a difference. A few days later, our client asked us if there are any studies showing that longer programs are more effective. When we failed to come up with any examples (if you know of any such studies, please leave a comment), the client asked for another study to see whether a longer program leads to a different outcome.

As an evaluator, I want to support museums as they change the way they do their work. Evaluation can provide the necessary information to see if new ideas work. It can give clients the data-based push they need to let go of the way things have always been done and to try something new. If nothing else, the evaluation process can be a forum to remind people that even when you are changing course, there is a place for you on the Cycle of Intentional Practice: Plan, Align, Evaluate, Reflect.


The case study below is from a summative evaluation RK&A did with the Wildlife Conservation Society.  The Madagascar! exhibit at the Bronx Zoo is an indoor exhibit that allows visitors to come face-to-face with wildlife from this island habitat.  The exhibit also features a film, Small Wonders, Big Threats, that addresses environmental challenges the island is facing.

Madagascar! [2009]

A summative evaluation with a zoo

The Wildlife Conservation Society (WCS) contracted Randi Korn & Associates, Inc. (RK&A) to evaluate its new exhibition, Madagascar!, located at the Bronx Zoo. Madagascar! showcases the wildlife and landscapes of the world’s fourth largest island. Built in the historic Lion House, the exhibition transformed the interior, while preserving the historic building’s Beaux-Arts beauty. The exhibition offers opportunities to see the island through the eyes of a conservationist at various interactive stations.

How did we approach this study?

RK&A worked with WCS to clarify its goals and objectives for Madagascar! and to identify criteria to measure visitor outcomes. We conducted a summative evaluation that employed a rigorous, modified pre-test/post-test design to measure visitor learning and attitudinal changes. Through in-depth open-ended interviews, we explored visitors’ attitudes toward and understandings of threats to Madagascar and its animals as well as knowledge of WCS’s conservation efforts on the island. We then scored the interview data using rubrics and compared the achievement of eight objectives by visitors who had not seen the exhibition to visitors who had seen the exhibition.
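To make the comparison logic concrete: rubric scoring turns open-ended interview responses into ordinal scores, which can then be compared between the two independent samples (visitors who had not yet seen the exhibition versus those who had). The sketch below is purely illustrative — the scores, sample sizes, and choice of a permutation test are invented for the example and are not WCS’s data or RK&A’s actual analysis:

```python
import random
from statistics import mean

# Invented rubric scores (1-4) for one objective, from two independent samples:
# visitors interviewed before entering ("pre") and after exiting ("post").
pre_scores = [1, 2, 2, 1, 3, 2, 1, 2, 2, 1]
post_scores = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]

def permutation_test(post, pre, n_iter=5000, seed=42):
    """One-sided permutation test: how often does randomly relabeling the
    pooled scores produce a post-minus-pre mean difference at least as
    large as the one observed?"""
    rng = random.Random(seed)  # fixed seed for reproducibility
    observed = mean(post) - mean(pre)
    pooled = list(post) + list(pre)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if mean(pooled[:len(post)]) - mean(pooled[len(post):]) >= observed:
            extreme += 1
    return (extreme + 1) / (n_iter + 1)  # add-one smoothing avoids p = 0

p = permutation_test(post_scores, pre_scores)
print(f"observed difference: {mean(post_scores) - mean(pre_scores):.2f}, p ≈ {p:.4f}")
```

With scores this far apart, almost no random relabeling matches the observed gap, so the difference registers as statistically significant. Real studies would also weigh effect size, rubric reliability, and sampling design, not the p-value alone.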

What did we learn?

Findings demonstrate that the exhibition was extremely successful at achieving its goals. Statistically significant differences showed that visitors who experienced the exhibition gained new knowledge, ideas, and beliefs, including: 1) enhanced interest in the animals of Madagascar based on knowledge of their habits, environment, and endangered status (versus interest based solely on novelty); 2) knowledge that Madagascar’s environment and animals are threatened, especially by the loss of trees; and 3) an understanding of why conservation scientists (including those from WCS) are in Madagascar: to study the animals and environment so that they can implement appropriate conservation strategies for its protection.

What are the implications of the findings?

Even though recent public discourse on global warming has grown substantially, the general public’s familiarity with environmental issues still tends to be vague or even ill-conceived. Yet, findings demonstrate that Madagascar! shifted visitors’ knowledge of conservation science toward a more accurate, specific, and concrete understanding. These positive findings are remarkable when one considers how difficult it is to change people’s knowledge and attitudes, particularly in one relatively short visit to a single exhibition. Through experiences in exhibitions like Madagascar!, visitors assimilate new ideas and perceptions with their pre-existing ideas and perceptions and create new meaning. The exhibition effectively utilized simple low-tech interactive exhibits, large-scale video walls, live interpretation, and intimate, close-up looks at animals to connect visitors to the environments and wildlife of Madagascar. Evaluation results have shown that zoos can be appropriate environments for moving visitors beyond the novelty of seeing wild animals to developing an understanding of where the animals come from, why they are important, and how conservation efforts can protect them.

