Posts Tagged ‘planning’

“We may not have it all together, but together we have it all”

Author unknown

The Cycle of Intentional Practice is proving to be a very useful framework for planning (see “Cycle of Intentional Practice” for more information).  We have applied the Cycle to many different projects—from planning global initiatives, to developing action plans for individual museum departments, to planning a museum’s future, to planning exhibitions.  While all of these projects are completely different, common to them is the museums’ intention for their work to make a difference in people’s lives, which is how we define “impact.”


The Cycle of Intentional Practice

When I reflect on our intentional planning work to identify the attributes that have made our approach successful, I land in a pretty simple place, which I have started to share during workshops: “I don’t need to be here for you to do this kind of deep thinking,” I tell participants.  But I also realize that the one thing that makes intentional planning an invigorating and useful process is the one thing that is hard for organizations to do—convene to talk about the work of the museum.  Our intentional planning process uses a workshop format because we believe that when staff work collaboratively to develop a common focus—a requirement for intentional thinking—the conversations, products, plans, and enthusiasm for their museum’s work are richer.

Another related necessity is that we ask representatives of all departments to participate in the workshops; while sometimes there is pushback (due to the unspoken hierarchy that may exist within an institution), we hold our ground because collaboration is a primary tenet of intentionality, and deep facilitated discussions are the only way people from different departments can find their common pursuit.  In nearly all of our intentional planning work, staff recognize the depth that emerges from hearing everyone’s perspective and from everyone working together towards a common end.  Clarifying language often becomes part of the conversation.  For example, we are working on an international initiative for a large art museum, and everyone was talking about wanting visitors to experience “cross-cultural connections.”  One brave staff member eventually asked what everyone meant by that phrase—a great question that took participants a while to ponder and, judging from the rich conversation that ensued, an exceedingly simple yet crucial one to pose.  We are all guilty of using words and phrases without ever clarifying what they mean (my personal favorite, overused and now somewhat meaningless word is “engagement”).  When clarifying a museum’s intended impact, part of the conversation should include what people mean by the words they use to represent the results of their museum’s work.

Another primary tenet of intentional planning, in some ways as illustrated above, is inquiry.  For inquiry to work, though, people need to listen to understand (rather than to respond reactively).  Certainly, facilitating inclusive workshops and using inquiry are not new; many organizations use them at different times to do their work.  We think they are successful with our intentionality work because we are using these practices collectively within the context of the Cycle of Intentional Practice (see the diagram).  When used all together, they provide a massive dose of intentional thinking about the topic at hand—whether a strategic plan, a departmental plan, or a plan for an international initiative.  We have observed that bringing staff together for several hours creates an amazing feeling among those who gather—likely because it is a rare occurrence for people to take a moment to breathe and think about the interesting and thought-provoking questions we and others are asking. They are delighted to have a chance to reflect on their individual work and how it supports the collective work of their colleagues, and sometimes there is a Kumbaya moment where everyone feels like they are on the same wonderfully beautiful page.

Read Full Post »

On Wednesday, October 7th from 2-3pm EDT, RK&A’s Randi Korn, Amanda Krantz, and Cathy Sigmond will host RK&A’s second Twitter chat on thinking critically about outcomes, using the hashtag #RKAchat.  To join the conversation make sure your tweets include this hashtag. 

Why this topic?

Last week, Randi wrote a blog post about the usefulness of outcomes in museum practitioners’ planning—highlighting that outcomes aren’t just for evaluation, as some may assume. In the post, she posits why practitioners may shy away from using outcomes more frequently in their work even though identifying and clarifying them are critical to achieving success (for exhibitions, programs, and otherwise). We would like to hear your take on outcomes, including what value you see in them and the challenges you face in using them in your work.

Twitter Chat Questions

During the Twitter chat, @IntentionalMuse will tweet numbered questions (for example, “Q1: How do you (or your museum) use outcomes in your work? #RKAchat”).  Your response tweet should reference the question (for example, “A1: We write learning objectives for school programs but not family or public programs. #RKAchat”).

Q1: How do you (or your museum) use outcomes in your work?

Q2: What do you find most useful about outcomes for your work?

Q3: What are the barriers to using outcomes in your work?

Q4: How might you use outcomes for planning your work?

How to Participate

If you do not already have one, create a Twitter account.  On Wednesday, October 7th from 2-3pm EDT, tweet using the hashtag #RKAchat.  You can monitor the tweets related to the chat by searching for #RKAchat on Twitter.

Read Full Post »

Roadmap to Success

When evaluators are called in to evaluate a program, exhibition, or museum, the first question they ask is, “Who is your primary audience?”  After fully addressing the “who” question, the next question is usually, “What are you hoping to achieve among [insert primary audience]?”  This question is code for “What are your intended outcomes?”  While most people associate outcomes with the evaluation process, what many don’t realize is that outcomes are even more vital to the planning process!  As such, it is disconcerting to witness museum practitioners avoiding clarifying outcomes—and seeming to fear doing so.  I have a few ideas as to why.  First, it is very scary for a museum to put itself out there and boldly say, “This is what our museum wants to achieve.”  Fear of failure quickly emerges and squelches the possibility of articulating intentions with any kind of specificity.

Second, some practitioners believe that evaluators should just leave people alone; let them experience whatever, because museums can’t control people anyway.  True, museums can’t control people’s experiences, but they can provide opportunities to affect people’s experiences.  Museum experiences are two-way streets and both parties (the museum and the visitor) play a role and take liberties. Why wouldn’t a museum want to clarify what it wants people to experience and create an environment that purposefully aligns with those intentions?

Third, clarifying outcomes is a difficult process that takes considerable time, thought, deliberation, and prioritization.  If museums want to make a difference in the quality of people’s lives, then all staff will need to decide together on the work of the museum and align their actions with intended outcomes.  The first step to planning is articulating results in the form of clear, measurable outcomes.  Articulating results is not an exercise in futility, as those results can be used to plan subsequent work!

I appreciate outcomes for two reasons—the first reason is a prerequisite for the second:  First and foremost, outcomes provide an excellent roadmap for planning—whether for a program, an exhibition, or an entire museum.  Yes, outcomes are a planning tool!  I know it sounds odd because evaluators champion them, but they are most useful for planning.  They clarify what practitioners want to achieve, after which they might ask, “Okay, this is where we want to go, so what do we need to do to get there?”  Outcomes can be used to make decisions about what one needs to do, what one can stop doing, and what one might need to change moving forward.   Oh, and the second reason: outcomes provide evaluators with a gauge for examining evaluation data; without outcomes, evaluation is moot.  However, if I were asked which is more important, using outcomes for planning or using them as a gauge for evaluation, I would say using them for planning!  When well-articulated outcomes are used regularly to guide a museum’s work, the result is generally a “successful” exhibition, program, or museum experience.  By contrast, when outcomes are only used for evaluation, the evaluation often indicates that outcomes should have been considered in planning and throughout development to achieve a successful exhibition, program, or museum experience.  It sounds like a catch-22, and that is because it is.

If you’re interested in talking more about outcomes, be sure to participate in our next #RKAchat, “Thinking Critically About Outcomes,” on Wednesday, October 7th from 2-3pm EDT.  We look forward to chatting with you!


Read Full Post »

As a staff, we have noticed the slow but steady upswing in the number of museums doing and requesting evaluation over the years.  While evaluation was uncommon in the museum world 15 or 20 years ago, today many, many museum professionals are enthusiastic advocates for evaluation and view it as essential to their work.  Ultimately, we are thrilled about this trend because we truly believe that evaluation can be used as a learning tool to reflect on and improve practice.  This has to be good for the museum world, right?  But there is a part of me that worries about this trend.   As someone who values quality, how can the field be sure the results produced are reliable and useful?  Just because someone says they are doing evaluation, should we take at face value that the evaluation they are doing is “good evaluation”?   No; like most things in the world, there is a continuum of quality when it comes to evaluation.  There are ways of doing evaluation that lead to results you can feel confident about and make decisions from, and there are ways of doing evaluation that lack purpose and result in piles of data that are meaningless and therefore never acted upon.

All of this hit home for me recently when I worked with a museum’s education department to build staff capacity for evaluation.  The education department in this museum had been doing evaluation on their own for years, and while much of it had been useful, they felt they were sometimes collecting data for the sake of collecting data and were not quite able to decide what to evaluate and what not to evaluate.  None of them are trained in evaluation, but they all have a great respect for it and wanted to learn how to do it better.  Thus, I stepped in.  Gulp.  I was honored that they wanted me to teach them about evaluation.  As a trained evaluator with many, many years of experience, it should be easy, right?  I quickly realized that teaching others how to do what you do is anything but easy.  And in the process of preparing my teaching materials, I did something I hadn’t done in a while.  I looked very critically at what I do as my chosen profession and asked myself how I do it—I broke down what I do every day into pieces that I could explain and teach.  And in the process I came to a new appreciation for how hard it is to do evaluation well unless you truly have the training and the experience.  I have to admit, I feel really good about what I have been able to teach this museum’s education department about evaluation, but it hasn’t been easy by any means.

All this is to say that we would like to start a conversation about how to conduct high-quality evaluation so that evaluation efforts will result in findings you can feel confident about and use.  High quality isn’t about size—good evaluation can be a very small study (with small sample sizes) done in-house or a large study with mixed methods and complex analysis strategies.  Quality evaluation hinges mostly on having a purpose when planning and staying true to that purpose when implementing a study.  As a result of all this thinking, we are planning to host our first Twitter chat, where we will invite museum professionals to think critically about evaluation through a series of questions we will pose. Stay tuned for more details!

Read Full Post »

Whole Garden and West Gallery exhibition [2013]

(Read the full report)

The United States Botanic Garden (USBG) contracted RK&A to study visitors’ experiences in the current West Gallery exhibition. However, after an initial meeting, USBG recognized that any changes to the West Gallery should be intentional and done in the context of staff’s aspirations for the whole Garden experience; thus, the study evolved into a more holistic endeavor with two main goals: (1) collect data about visitors’ experiences in the West Gallery exhibition to inform redesign of the Gallery; and (2) study visitors’ experiences in the whole Garden in the context of the newly-articulated visitor impact statement: Inspired by the welcoming, sensory, and restorative experience, visitors appreciate the diversity of plants, value the essential connection between plants and people, and embrace plant stewardship.

How did we approach this study?

RK&A facilitated a series of planning workshops with USBG staff to help them articulate the impact they aspire to achieve with their audiences. An Impact Framework resulted from these workshops. The Framework articulates the impact statement above, as well as audience outcomes and indicators which make the impact statement concrete and measurable. Guided by the Impact Framework, RK&A conducted an audience research study, employing four methods to explore West Gallery experiences and the Garden’s intended impact: (1) a standardized questionnaire; (2) in-depth interviews; (3) focused observations and interviews in the West Gallery; and (4) focus groups with teachers. Following the audience research study, RK&A facilitated two Using Evaluation Results workshops to help staff reflect on findings and develop action steps moving forward.

What did we learn?

The audience research study revealed many rich findings related to the Whole Garden, its audiences, and the West Gallery exhibition specifically, including visitor types that the Garden can use to inform their decision making (see full report for details). Study findings revealed that visitors’ experiences are, in some ways, well aligned with the Garden’s desired impact and, in other ways, not as well aligned. Specifically, staff used study findings to brainstorm more cohesive interpretive themes for the Whole Garden and West Gallery exhibition. Looking forward, USBG staff has two great opportunities to leverage these themes for an upcoming Garden-wide interpretive planning project and Conservatory Room evaluation.

What are the implications of the findings?

This project highlights the all-important link between planning and evaluation. Too often, evaluation is conducted in a vacuum (one program or exhibition at a time) as opposed to considering the organization’s aspirations for impacting the visitor. USBG staff recognized the need to consider changes to the West Gallery exhibition in the context of their intentions for the Whole Garden experience. In doing so, they now have baseline information about their audiences in the context of the impact they hope to achieve. This information helps USBG staff understand the alignment between their aspirations and visitors’ experiences and how they might need to change their practice to achieve greater impact.

Read Full Post »

Emily’s last blog post (read it here) talked about when evaluation capacity building is the right choice.  When we think about building capacity for evaluation, we think about intentional practice.  This does not necessarily involve teaching people to conduct evaluation themselves, but rather helping people ask the right questions and talk with the right people as they approach their work.  RK&A has found this to be particularly important in the planning phases of projects.

The case study below is from a project RK&A did with the Museum of Nature and Science in Dallas, TX (now the Perot Museum of Nature and Science) and involved an interdisciplinary group of museum staff thinking intentionally about the impact the Museum hoped to have on the community.  With a new building scheduled to open a year after this project took place, it was a wonderful time to think intentionally about the Museum’s impact.

Building Capacity to Evaluate [2012]

An evaluation planning project with a nature and science museum

The Museum of Nature and Science (MNS) hired RK&A to develop an evaluation plan and build capacity to conduct evaluation in anticipation of the Museum’s new building scheduled to open in 2013.

How did we approach the project?

The evaluation planning project comprised a series of sequential steps, from strategic to tactical, working with an interdisciplinary group of staff across the Museum. The process began by clarifying the Museum’s intended impact—the statement that articulates the intended result of the Museum’s work and provides a guidepost for MNS’s evaluation: Our community will personally connect science to their daily lives. Focusing on the Museum’s four primary audiences—adults, families, students, and educators—staff developed intended outcomes that serve as building blocks to impact and gauges for measurement. Next, RK&A worked with staff to develop an evaluation plan that identifies the Museum’s evaluation priorities over the next four years, supporting the purpose of evaluation at MNS: to measure impact, understand audiences’ needs, gauge progress on the strategic plan, and inform decision making.

The final project step focused on building capacity among staff to conduct evaluation. Based on in-depth discussions with staff, RK&A developed three data collection instruments, including an adult program questionnaire, family observation guide, and family short-answer interview guide, to empower staff to begin evaluating the Museum’s programs. Then, several staff members were trained to systematically collect data using the customized evaluation tools.

What did we learn?

The process of building a museum’s capacity to conduct evaluation highlights an important consideration. Evaluating the museum’s work has become more important given accountability demands in the external environment. Stakeholders increasingly ask, How is the museum’s work affecting its audiences? What difference is the museum making in the quality of people’s lives?

Conducting systematic evaluation and implementing a learning approach to evaluation, however, require additional staff time, which is a challenge for most museums. MNS staff recognized the need to create a realistic evaluation plan given competing demands on staff’s time. For example, the evaluation plan balances conducting evaluation internally, partnering with other organizations, and outsourcing to other service providers. Also, the plan incrementally implements the Museum’s evaluation initiatives over time. The Museum will begin with small steps in its efforts to effect great change.

Read Full Post »

For me, intentionality, a concept I view as essential to museum planning, emerged from two core experiences: results from hundreds of exhibition and program evaluations; and observing museum staff wanting to put too many concepts into an exhibition. Intuitively I knew there was a connection between exhibitions that didn’t fare too well (at least according to the evaluations) and staff not letting go of ideas that are near and dear to their hearts—regardless of whether those ideas supported the thesis of the exhibition.

When I have the good fortune to attend planning meetings, I always find myself thinking critically about what should be included in the exhibition under discussion and what could be saved for another time. My consideration always includes the big idea of the exhibition, what the museum would like to achieve with the exhibition vis-à-vis the public, humans’ capacity to process new ideas when in unfamiliar environments (like that of an exhibition hall), evaluation results from other projects that show what leads to quality visitor experiences and what might move visitors away from having quality experiences, and my utmost respect for scholars’ knowledge and passions. While passionate individuals love their subject matter (and really, I love their subject matter, too), one’s willingness to recognize that not all good ideas (or even great ones) belong in an exhibition and then exercising follow through are traits of intentional practice.

Embedded in intentional practice is the concept of alignment—ensuring that project concepts, components, and elements are present because they support the impact the team wants to achieve. If there are concepts, components, or elements that do not contribute to the core idea of the exhibition and its potential impact on audiences, they need to be omitted. I certainly don’t mean to sound ruthless, but I am acutely aware of how easy it is to keep putting more and more into an exhibition plan and how painfully difficult it is to take anything away. I am also aware of how challenging it is to stay focused on the exhibition’s big idea and have the discipline to say no to ideas because they do not support the intended impact of the exhibition. Learning to say “no” is a necessary survival skill, and saying “no” is deeply connected to intentional practice. When practitioners are intentional, they are focused on the impact they want to achieve; they exercise discipline and restraint when determining how best to move forward; and their decision making is egoless and for the sake of achieving the results the team envisions.

Intentional practice represents the culmination of my experiences to date, and my passion for it is directly tied to my evaluation experience. Over the years I started to realize that when exhibitions tried to do too much, visitors’ experiences didn’t amount to much—from their perspective; their heads were full, but descriptions of their experiences were nebulous. Sense-making seemed futile. Nudging me was my memory of exhibition development discussions and tensions about what to put in (everything) and what to take out (nothing). Clarifying the intent of the exhibition and then staying focused on that intent is hard work—not likely to end soon, which is okay. Seeking clarity—whether in thought or action—is a never-ending pursuit.

Read Full Post »

Older Posts »