Posts Tagged ‘audience’

The most challenging evaluation report I’ve written consisted of 17 PowerPoint slides. The slides didn’t pull the most salient findings from a larger report; the slides were the report! I remember how difficult it was to start with the idea of writing less from qualitative data. While I had to present major trends, I feared the format might rob the data of its nuance (17 PowerPoint slides obviously require brevity). The process was challenging and at times frustrating, but in the end I felt extremely gratified. Not only was the report thorough, it was exactly what the client wanted, and it was usable.

As evaluators, we straddle the line between social science research, application, and usability. As researchers, we must analyze and present the data as they appear. Sometimes, especially in the case of qualitative reports, this can lead to an overwhelming amount of dense narrative. This reporting style is accepted in evaluation practice and has become our default. Given the number of reports we write each year, having guidelines is efficient and freeing. We can focus on the analysis, giving us plenty of time to get to know and understand the data and to tease out the wonderful complexity that comes from open-ended interviews. As researchers, we let presentation take a backseat to analysis and digging into the data.

However, most of the time we are not writing a report that will be shared with other researchers; we are writing a document that will be read by museum staff who may share the findings with other staff or the board. Overwhelming amounts of dense narrative may not be useful; not because our audience can’t understand it, but because the meaning is often densely packed and needs to be untangled. I would guess that what clients want and need is something they can refer to repeatedly, something they can look at to remind themselves, “Visitors aren’t interested in reading long labels,” or “Visitors enjoy interactive exhibits.” As researchers, presentation may be secondary, but as evaluators, presentation must be a primary consideration.

As my experience with the PowerPoint report (and many other reports since then) taught me, it can be tough to stray from a well-intentioned template. A shorter report or a more visual report doesn’t take less time to analyze or less time to write. In fact, writing a short report takes more time because I have to cut away the dense narrative and find its essence, something a long report doesn’t demand. I also have to change the way I think about presentation. I have to think about presentation!

At RK&A, we like to look at our report template to see what we can do to improve it – new ways to highlight key findings or call out visitor quotations. Not all of our ideas work out in the long run, but it is good to think about different ways to present information. At the end of the day, though, what our report looks like for any given project comes from a client’s needs—and not from professional standards. And I learned that when I wrote those 17 PowerPoint slides!


Welcome to our new Throwback Thursday series, where we take a moment to look back at projects from our archives.  Today we’ll be sharing a case study about our planning and evaluation work with the Science Museum of Virginia and their Sphere Corps Program.  You might recall this particular Science On a Sphere program from one of our prior posts, Learning to Embrace Failure, and today we’ll share a bit more about how we approached the study, what we learned, and the implications of those findings.

Sphere Corps Program [2012]

For this planning and evaluation project with The Science Museum of Virginia (SMV), RK&A evaluated Sphere Corps, a Science on a Sphere program about climate change developed by SMV with funding from the National Oceanic and Atmospheric Administration (NOAA).    

How did we approach this study?  

The study was designed around RK&A’s belief that organizations must be intentional in their practice by continually clarifying purpose, aligning practices and resources to achieve purpose, measuring outcomes, and learning from practice to strengthen ongoing planning and actions.  To this end, the Sphere Corps project included five phases of work—a literature review, a workshop to define intended program outcomes, two rounds of formative evaluation, and two reflection workshops.  Formative evaluation data were collected using naturalistic observations and in-depth interviews.  Each phase of work allowed staff to explore their vision for the Sphere Corps program and how it changed over time as they learned from and reflected on evaluation findings.

What did we learn?

SMV staff’s goal was to create a facilitated, inquiry-based Science on a Sphere program about climate change.  RK&A first completed a literature review that revealed a facilitated Sphere experience was in keeping with best practices and that using inquiry methods in a 20-minute program would be challenging but worth exploring further.  Staff then brainstormed and honed the outcomes they hoped to achieve in Sphere Corps, which guided planning and script development.  The first round of formative evaluation identified implementation barriers and an overabundance of iClicker questions, all of which created a challenging environment for educators to effectively use inquiry.  Upon reflection, staff reduced the number of iClicker questions and added visualizations and questions that required close observation of the Sphere.  Following a second round of formative evaluation, staff made additional changes to the program script and began to reflect on the reality of using inquiry in a single 20-minute program.  Since the script covered a range of topics related to climate change, staff wondered if they should instead go deeper with one topic while encouraging more visitor observation and interpretation of Sphere data.  Out of this discussion arose the idea of “mini-programs”—a series of programs that would focus on communicating one key idea about climate change, such as helping people understand the difference between weather and climate.

What are the implications of the findings?

Central to the idea of the “mini-program” is the idea of doing less to achieve more.  Impact and outcomes are incredibly difficult to achieve and trying to achieve too much often results in accomplishing very little.  Through a reflection workshop and staff discussion, the SMV team was able to prioritize and streamline the outcomes and indicators originally written for the Sphere Corps program.  Staff also recognized that their primary goal with the Sphere Corps program is to encourage visitors to think more critically about the science behind climate change.  By scaling down the number of topics covered in the presentation, each program could intentionally focus on: (1) one key idea or question related to climate change; (2) achievement of only a few intended outcomes; and (3) implementation of specific facilitation strategies to achieve those outcomes.  Intentionally covering less content also opens up opportunities to more effectively use inquiry methods.



So often we evaluators are asked to measure outcomes or results, which of course align with our expectations.  When we conduct an evaluation and the results are positive, an organization can wave its flag; and ideally the whole museum field benefits from learning why a particular exhibition or program is so successful at achieving its outcomes.  During my time as an evaluator, I have learned that there is enormous value in walking before running.  Because measuring results sounds compelling to museums and their funders, museums often jump over important evaluation processes and rush into measuring results.  Accordingly, staff, in a moment of passion, forgo front-end and formative evaluation—those early stages of concept testing, prototyping, and piloting a program—that help staff understand the gaps between the intended outcomes for their audience and the successes and challenges of implementing a new project. 

So, when we are asked to measure results, we always ask the client if the project has ever been evaluated.  Even then, we may pull back on the reins to slow our clients down enough to consider the benefits of first understanding what is and is not working about a particular program or exhibition.  More often than not, slowing down and using front-end and formative evaluation to improve the visitor experience increases the likelihood that staff will be rewarded with positive results when they measure outcomes later.  In fact, when an organization’s evaluation resources are limited, we often advocate for conducting a front-end and/or formative evaluation because we believe that is where all of us will learn the most.  It is human nature to want to jump right into the good stuff and eat our dessert first.  We, too, get excited by our clients’ passion and have to remind ourselves of the value of taking baby steps.  So, one of the many lessons I’ve learned (and am still learning) is that when it comes to evaluation, encouraging practitioners to walk before they run (or test before they measure) is key to a successful project and their own personal learning.


The quarter-century mark feels like the right time to take stock of where RK&A is and, at the very least, think about what we have learned along the way.  Reflecting on the past is a task that feels comfortable; we know where we have been, and we are familiar with the present.  Making sense of the past and forging new ideas from it is far more difficult—yet, as a staff, that is what we have decided to do for our 25th anniversary celebration—which will take the entire year!  As some of you know, we have been blogging for exactly a year.  What better platform is there to share what we have learned over the last quarter century?

Evaluation, visitor studies, audience research—that is the work I set out to do and it remains our traditional work.  RK&A has carefully grown to seven people, including a small satellite office in NY.  Along the way all of us have learned so much—about visitors, about cultural organizations, and about the relationship between the two.  As a staff that has always strived for excellence, we try hard to apply new knowledge to our practice.

About 10 years ago I gravitated towards the notion of intentionality as a concept I wanted to explore.  Conducting evaluations had shown me that it would be worthwhile to figure out a way to help cultural organizations focus their passions, skills, and resources towards their vision of impact.  Helping organizations determine what impact they wanted to achieve seemed like the first step, and so was born our intentional practice workshops (which took about two years of R&D).  Achieving impact with audiences is harder than one thinks, so in order for cultural organizations to achieve impact, they need to be intentional in how they carry out their work.

The demand for our intentional practice workshops continues to grow, and because our intentionality work emerged from our evaluation work, it was only a matter of time until we would begin to weave what we have learned back into our evaluation practice.  We now offer intentionality-like workshops as part of our evaluation services to help staff understand and apply evaluation data to their planning.  Rarely do staff from across the organization get together to discuss and debate their visitors and work, but when they do, the results are inspiring.

However, inspiration doesn’t always lead to action.  While most people and organizations want to change, saying so is easier than doing so.  I have learned the virtue of taking baby steps towards change, and sometimes baby steps give people time to learn and internalize a new way of thinking and working.  It’s that way at RK&A. Sometimes our intentionality work feels organic and sometimes we need to be more deliberate and forthright in our decision making in order to sustain a change in our practice.  It is human nature to gravitate to old ways of doing things until new ways become comfortable; it takes conviction and focus to continue to move forward.  To sustain our learning and help RK&A maintain its momentum with our new intentionality work and traditional evaluation practice, we will share 25 years of learning with you over the next 25 blog posts.  We hope our 25th year celebration of RK&A’s learning inspires you to learn along with us.


Recently, Christine Castle asked readers of her Museum Education Monitor for their “words to live by”—pithy phrases and bon mots that help [them] make it through the museum education day.  This got me thinking about the words I live by as a museum evaluator.  Three little words easily popped into my mind—less is more.  These words epitomize themselves: beautiful in their simplicity, yet they embody our whole philosophy as an evaluation firm and my own personal approach to evaluation.  What is so interesting to me about the concept of “less is more” is how incredibly hard it is to achieve.  Doing less seems so simple, but to truly live by those words is extraordinarily difficult.

Let me give an example.  We often facilitate planning workshops for our clients.  As we have probably said in many a blog post, planning and evaluation are inextricably linked.  Evaluators are true believers in planning with the end in mind.  Otherwise, how are we going to know that our clients have achieved the effect they desire on the audiences they serve?  The ultimate goal of these planning workshops is to help our clients articulate their desired public impact.  They can use the end result—an Impact Planning Framework—to guide their decision making, and we can use it to guide audience research and evaluation.  In these workshops, we facilitate exercises for museum staff, and one of the exercises asks staff to select a finite number of audiences for which they will envision impact.  You may not be surprised, but often a key sticking point for museum staff is the very notion of limiting the number of target audiences.  At times, it almost feels like we have asked them to remove an appendage; the resistance can be palpable.

It’s touching on many levels that it is so difficult for museum staff to prioritize their audiences.  It speaks volumes about the passion they have for the public dimension of the work they do.  However, and this is a big however, museums cannot be everything to everyone.  It’s just not possible no matter how hard museums try.  I do not say this to sound negative or glass half empty.  I want museums to succeed in achieving their desired impact.  But, here’s the thing.  Impact is really hard to achieve (we know this from countless evaluations).  The rationale for prioritizing audiences is to help the museum focus resources and actions towards achieving results on those audiences.  Trying to be everything to everyone may result in the opposite of what a museum is striving for—nothing meaningful for anyone.  And, while difficult to believe, focusing one’s efforts and resources on a few doesn’t usually lead to others feeling excluded.  So often, what a museum might do for a few will have meaning for so many more.

The beauty of “less is more” is that if you try, you will feel liberated.  Focusing one’s efforts to achieve impact on three or four audiences (instead of “everyone”) is scary, but once a museum bites the bullet, staff may feel like they just received a “get-out-of-jail-free” card.  Finally, staff will have an excuse to focus their efforts on those few audiences where they feel they can make a difference.  Pursuing “less is more” is an ongoing process, which means that it takes a while to embrace it, and, once you do, you have to continue to work at living by those words because everything around us screams “more.”  It’s not easy, but worthwhile pursuits never are.  For me, knowing that doing less will actually help our clients achieve more is worth it in the end.  So, that’s why “less is more” are my words to live by.


In June, The Association of Science and Technology Centers (ASTC) invited professionals to respond to these questions for an upcoming issue of Dimensions magazine: When are evaluation and other visitor feedback strategies the most useful for helping advance a science center’s mission?  When are such strategies less successful?  We pondered this at a staff meeting and decided that a small but important tweak may be needed to begin addressing the questions.  First, let’s clarify that mission describes what a museum does and impact describes the result of what a museum does—on the audiences it serves.  We believe that anything a museum does—collect, exhibit, educate—is meaningless unless it is done in the pursuit of impact.  So, when is evaluation most useful for advancing a science center’s mission?  When it is done to advance impact, not mission.  It’s a little like that old adage: If a tree falls in the forest and no one is around to hear it, does it make a sound?  With regard to mission and impact, we take a slightly different angle—if a museum does work or evaluation that does not lead to impact, are they really doing the work?

Evaluators are in the same boat as some museum practitioners.  Evaluation is a means to an end, just as a museum’s collections are a means to an end.  Unless evaluation is placed in a meaningful context, such as helping a museum pursue impact, evaluation doesn’t serve a purpose.  As an evaluator, I suppose I should say evaluation is always valuable.  But, that’s just not true.  I’m a self-proclaimed data nerd.  I love the minutiae of evaluation—poring over pages and pages of interview transcripts and pulling out those five key visitor trends.  I can get lost in data for days and find myself pulled in many seemingly fruitful directions.  “Oh, how interesting!” I will say to no one in particular.  I often find myself lost in the visitors’ world, chuckling to myself about a quirky response to an exhibit or wondering who someone is and why he or she responded to a museum experience in a particular way.  Getting lost in your work can be fun and, lucky me, happens to those of us who are passionate about what we do.  So, while pursuing tangents in evaluation data is fun for me, there is a flip side to this coin—a lack of focus that can be detrimental to the pursuit of a larger goal.  This is why we, as evaluators, push our clients to articulate what it is they want to achieve to keep us (and them) on track.

We consistently find museum practitioners to be among those most passionate about their work.  Thus, these moments of losing oneself in one’s work, whether researching or examining an object, designing an exhibition, or creating a program, are frequent occurrences.  When it comes to pursuing impact, this passion is both a joy and a burden.  It is a joy because most practitioners can easily articulate what they do for their audiences.  But, they often get lost in what they do and may not think about why they do what they do.  A practitioner articulating the “why” is similar to the entire museum articulating its intended impact.  Articulating impact provides a laser focus for all the work that museum practitioners do and helps keep them on track toward pursuing that larger goal.  So, our response to ASTC’s second question, When are evaluation strategies less successful in helping advance a science center’s mission?  When a science center and its collective staff have yet to articulate the impact they hope to achieve on the audiences they serve.  Otherwise, we can all do evaluation until we are blue in the face but those reports will continue to collect dust on hundreds of science centers’ shelves.  Of this I am certain—just like death and taxes.


That’s right, you “app-ed” for it, a post about the intentional use of apps and mobile technology in museums.  I admit that I come to this post with a rather skewed perspective, for when my phone vibrates, it’s not a shiny wide-screened smart phone that I pick up, but my good ole’ 2008 vintage RAZR.  And though personally I’m a little terrified of the potential for a smart phone to replace my library-book reading time on the Metro and become the center of my world, professionally I’ve come to see how intentionally implemented mobile technology can be a “smart” option for enhancing the museum experience.

So you might ask, what’s going on with apps and museums?  Let’s start with a perspective most recently championed in Matthew Petrie’s controversial piece (see Nancy Proctor’s comments) in The Guardian.  He argues that there is a boom of museum visitors using smart phones, so why don’t museums do more to reach out to this built-in audience?  AAM’s Mobile in Museums Study makes a similar point, alluding to the many smart phone users visiting museums, who on average sport 29 apps on their phone!  In fact, AAM makes the case that apps are one of the top-three fastest-growing mobile interpretation methods in the museum world.  With stats like that, and the exciting selection of museum apps available, it’s hard not to get swept up in museum-app madness!


But, if apps are the wave of the future, how can we—museum practitioners and evaluators—think more intentionally about how apps might support visitors’ museum experiences?  While it may sound like common sense, museums may need to ensure visitors are aware of their apps and mobile-support offerings.  In a recent RK&A study of app use in an art museum, about one-half of interviewees who were not using the app said they chose not to because they wanted a self-guided experience.  However, one-third of these interviewees said they were unaware that the option to use an app existed.  Interviewees also were unaware of or had misconceptions about the availability of free wifi in the museum, their ability to borrow mobile devices, or that they could download the app for free.  A recent V&A study on visitors’ app use shows similar findings; most of their visitors were unaware of the museum’s free wifi connection.  So, in order for visitors to have meaningful app experiences, museums need to communicate better and make app awareness a priority.

We may also want to think further about how apps are developed, particularly in terms of content and use.  To justify using apps as a new interpretive tool, museums can take advantage of the qualities that distinguish apps from other interpretive options (the ability for visitors to take pictures, share content, participate in a mobile community, etc.).  We also may need to consider whether an app is the best vehicle for the content a museum might want to feature.  These questions came up in the RK&A study above, where we found that one-third of interviewees using the app used it only as an audio guide, ignoring or unaware of its other distinct features.  This finding may have a familiar ring; as Koven Smith, a former professor of mine, notes in his article “The Future of Mobile Interpretation,” a change in museum mentality may need to take place in order to elevate apps from glorified audio guides to interpretive tools that fully utilize their unique features.

The app currently under development for The British Postal Museum and Archive is really exciting in terms of how it utilizes app technology and fits with the mission of the museum.  A team of undergraduates from the Worcester Polytechnic Institute did the research on how to design the app, and their paper is phenomenal.  Their app would introduce visitors to the museum’s collection through a stamp-collecting activity.  Visitors can walk through the galleries snapping pictures of stamps and, thanks to the powers of image recognition, pull up more information about them.  The app design makes sense to me because it is activity-based (taking a picture), focused on the museum’s collection, and related to stamp-collecting behavioral skills like observing details and curating a collection.  Another great aspect of the app is that, theoretically, it works outside the museum, too.  You can snap pictures of the stamps on your own mail, pull up information about them, and add them to your growing collection.
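For the technically curious, the snap-recognize-collect loop described above can be sketched in a few lines of Python.  This is purely hypothetical: the `StampCollection` class, the catalog, and the `recognize` stub are my own illustration, and a real app would call an image-recognition service rather than look a photo up in a dictionary.

```python
# Hypothetical sketch of the stamp-collecting loop: snap -> recognize -> collect.
# The catalog stands in for the image-recognition backend.

STAMP_CATALOG = {
    "penny_black_1840": {"name": "Penny Black", "year": 1840},
    "penny_red_1841": {"name": "Penny Red", "year": 1841},
}


def recognize(photo_id):
    """Stand-in for image recognition: map a photo to stamp metadata, or None."""
    return STAMP_CATALOG.get(photo_id)


class StampCollection:
    def __init__(self):
        self.stamps = []

    def snap(self, photo_id):
        """'Snap a picture' of a stamp; if it is recognized and new, collect it."""
        info = recognize(photo_id)
        if info is not None and info not in self.stamps:
            self.stamps.append(info)
        return info


collection = StampCollection()
collection.snap("penny_black_1840")   # recognized in a gallery and collected
collection.snap("penny_black_1840")   # duplicates are ignored
collection.snap("smudged_photo")      # unrecognized photos return None
```

Because the recognizer is just a function, the same collection logic works in the galleries or on the mail at home, which mirrors the inside/outside-the-museum point above.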

So, where do we, as museum practitioners and evaluators, go from here?  There is clearly great potential for what apps can do to enhance a museum experience, especially as museums lay the logistical groundwork to support visitors’ app use.  And though app designs still tend toward the didactic audio guide, museums are starting to experiment with new ways of thinking about and presenting their content as well as alternative ways visitors can interact with it.  As I think more about the evolution of museum apps, I return to the cycle of learning and the value that researching, reflecting, planning, and aligning has already had on museum-based apps and will continue to have on determining best practices.

