Posts Tagged ‘impact’

At the start of this year, I started writing about the principles of intentional practice, and to date, I have shared three principles (#1, #2, and #3).  In this post, I feature the next two principles of intentional practice.  I present them together because both are critically important for achieving the museum’s intended impact, and yet they are very different in character.

#4. Staff know the impact the museum hopes to achieve on audiences served

Principle #4, “Staff know the impact the museum hopes to achieve on audiences served,” may seem like an unnecessary principle to state; after all, staff participated in crafting the impact statement, and certainly they know the museum’s heartfelt intentions.  But stating the obvious reinforces the important role staff play in the museum’s intentional practice; omitting it as a principle would be a serious oversight.  To “know” is not taken lightly among museum professionals.  In the context of intentional practice, to “know” leads staff to internalize the impact the museum hopes to achieve, and such knowing enables staff to carry out their work.  Oddly, the principle also feels static, which is the antithesis of how work tends to happen in museums—where there is always an abundance of activity.  However static the statement feels, impact statements are never still, and neither is staff’s knowledge.

#5. Staff align their work to achieve the museum’s intended impact

Knowing the intended impact of the museum on its audiences should affect and determine the work that staff do; however, realizing what one could do and carrying out those actions are two very different things.  Innumerable tensions unfold as museums pursue the fifth principle: “Staff align their work to achieve the museum’s intended impact.”  Alignment is about exploring whether a museum’s processes and products can deliver the museum’s intended impact within the resources the museum has to expend (staff and dollars); within that, alignment can also be about course-correcting work to strengthen the fit between a program and the museum’s intended impact.  Taken a step further, alignment can become part of a strategy for reducing a museum’s workload if the museum is doing too much, as is often the case.

How can a museum use alignment to reduce its workload?  One approach might be to analyze each program from two perspectives: 1) the program’s ability to achieve the museum’s intended impact; and 2) the amount of resources required to implement the program—in terms of staff time and dollars.

[Figure: a two-by-two grid for plotting programs, with intended impact on one axis (low to high) and resources required on the other (low to high)]

If, through discussion, staff ascertain that a program has relatively low impact (compared to other programs) and requires considerable resources, does it make sense to continue the program?  There are two options: a) the museum can change the program to strengthen alignment between the program and the museum’s intended impact while reducing the program’s cost to the museum; or b) the museum can stop the program altogether, freeing up resources it could put to better use.  However, some museum programs are sacred cows, such as a holiday program or other long-running public programs.  Programs that have become tradition are often the ones most threatened by an alignment analysis, in part because they were created before the museum began pursuing impact-driven planning.  Some programs continue year after year simply because the museum has always done them—and sometimes for no other reason.
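
For readers who like to tinker, here is a minimal sketch of the grid logic in Python; the program names, the 0–10 ratings, and the quadrant labels are hypothetical illustrations of the exercise, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    impact: float     # staff's 0-10 rating of the program's ability to achieve the intended impact
    resources: float  # staff's 0-10 rating of the resources the program consumes (staff time and dollars)

def quadrant(program: Program, midpoint: float = 5.0) -> str:
    """Return which quadrant of the impact/resources grid a program falls into."""
    high_impact = program.impact >= midpoint
    high_cost = program.resources >= midpoint
    if high_impact and not high_cost:
        return "high impact / low cost: sustain as-is"
    if high_impact and high_cost:
        return "high impact / high cost: is the investment justified?"
    if not high_impact and not high_cost:
        return "low impact / low cost: candidate for improvement"
    return "low impact / high cost: change or stop"

# Hypothetical ratings that a staff discussion might produce.
programs = [
    Program("Holiday festival", impact=3, resources=8),
    Program("School outreach", impact=8, resources=6),
    Program("Gallery talks", impact=7, resources=2),
]
for p in programs:
    print(f"{p.name}: {quadrant(p)}")
```

The numbers matter far less than the conversation they provoke; a program that lands in the low-impact, high-cost quadrant is a prompt for honest discussion, not an automatic verdict.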

When a museum chooses to engage in impact-driven planning, logic suggests that the museum wants to change in some way.  If nothing changes during or after the planning process, something is amiss.  While the thought of changing is inspiring, change is extremely difficult to actualize.  For example, things could remain the same if someone takes offense when their program’s effectiveness is questioned, or if those sacred cows are left intact.  Alignment analyses are intended to be honest reflections on whether a program achieves the museum’s intended impact and uses resources responsibly.  Without honesty, change is elusive and alignment is futile.  Knowing that honest analysis and discussion are vital to alignment, convene with your colleagues to discuss your museum’s programs and plot each one on the above graph to help you determine what your museum can improve or stop doing.  Regardless of where each program is placed on the grid, the objective is to have the conversation—which is the beginning of the alignment process.

Read Full Post »

Working in research and evaluation, you become very skeptical of words like “data-driven” and “research-based.” To evaluators, it is quite flattering that these words are so buzzworthy—yes, we want our research and evaluation work to be important, used, and even desired! However, even though these buzzwords grab attention, they can be misleading. For instance, when we talk about data and research at RK&A, we mean original, first-hand data and research, such as interviews, questionnaires, and surveys with museum visitors.

This was on my mind as I recently had the opportunity to help my CARE (Committee on Audience Research and Evaluation) colleague Liz Kunz Kollmann review session proposals for the 2015 AAM Annual Meeting. Liz, as CARE’s representative for the National Program Committee, was charged with reviewing sessions in the Education, Curation, & Evaluation track (all 141 sessions!) along with fellow National Program Committee members in education and curation. Given that audience research and evaluation can be part of many AAM tracks (marketing, development, exhibit design, etc.), Liz recruited some CARE members to help her review sessions in other tracks to see if there were any sessions outside of our designated track that CARE should advocate for.

I volunteered to review sessions in the Development & Membership and Finance & Administration tracks. I had expected to encounter a lot of buzzwords, since AAM session proposals include a description that must be appropriate for display on the AAM website, mobile app, and other published meeting materials. So while I wasn’t surprised, I was struck by the heavy use of terms like “data-driven” and “research-based” (e.g., data-driven strategies for membership recruitment and research-based programming), and I was stymied trying to determine whether these sessions were relevant to CARE—what data were driving the decisions, and would they really be of interest to CARE members?

Certainly I am not dismissive of research or data that isn’t “original.” There are many definitions of research and data that are applicable to certain scenarios and within certain fields. For instance, arts-based research, when conducted well, is a completely valid field of research within art education. However, I am biased toward collecting original data from visitors first-hand, which is why terminology like “data-driven” and “research-based” makes my ears perk up—these words prompt many questions for me about the type of data and research and their appropriateness to inform said decisions and practices. Through our work at RK&A, we truly want practitioners to make decisions that are data-driven; that is the greatest outcome of our work! But we also want our clients to be such skilled users and consumers of data and evaluation that their ears perk up at the very mention of “data”—for hopefully they, too, have become savvy digesters of both the language and the meaning behind the data when talking about research and evaluation.

Check out our Buzzword Bingo below, inspired by Dilbert: http://dilbert.com/strips/comic/2010-10-25/  Warning: this Bingo is informed by RK&A’s professional experience and is not based on original data.  Maybe with the help of our museum colleagues, we can make it “research-based.”  Please share your buzzwords!

[Image: RK&A’s Buzzword Bingo card]

Read Full Post »

Emily’s last blog post (read it here) talked about when evaluation capacity building is the right choice.  When we think about building capacity for evaluation, we think about intentional practice.  This does not necessarily mean teaching people to conduct evaluation themselves; rather, it means helping people ask the right questions and talk with the right people as they approach their work.  RK&A has found this to be particularly important in the planning phases of projects.

The case study below is from a project RK&A did with the Museum of Nature and Science in Dallas, TX (now the Perot Museum of Nature and Science), which involved an interdisciplinary group of museum staff thinking intentionally about the impact the Museum hoped to have on its community.  With a new building scheduled to open a year after this project took place, it was an opportune time for such thinking.

Building Capacity to Evaluate [2012]

An evaluation planning project with a nature and science museum

The Museum of Nature and Science (MNS) hired RK&A to develop an evaluation plan and build capacity to conduct evaluation in anticipation of the Museum’s new building scheduled to open in 2013.

How did we approach the project?

The evaluation planning project comprised a series of sequential steps, from strategic to tactical, working with an interdisciplinary group of staff from across the Museum. The process began by clarifying the Museum’s intended impact, which articulates the intended result of the Museum’s work and provides a guidepost for MNS’s evaluation: Our community will personally connect science to their daily lives. Focusing on the Museum’s four primary audiences (adults, families, students, and educators), staff developed intended outcomes that serve as building blocks to impact and as gauges for measurement. Next, RK&A worked with staff to develop an evaluation plan that identifies the Museum’s evaluation priorities over the next four years and supports the purpose of evaluation at MNS: to measure impact, understand audiences’ needs, gauge progress on the strategic plan, and inform decision making.

The final project step focused on building staff capacity to conduct evaluation. Based on in-depth discussions with staff, RK&A developed three data collection instruments (an adult program questionnaire, a family observation guide, and a family short-answer interview guide) to empower staff to begin evaluating the Museum’s programs. Several staff members were then trained to systematically collect data using the customized evaluation tools.

What did we learn?

The process of building a museum’s capacity to conduct evaluation highlights an important consideration. Evaluating the museum’s work has become more important given accountability demands in the external environment. Stakeholders increasingly ask, How is the museum’s work affecting its audiences? What difference is the museum making in the quality of people’s lives?

Conducting systematic evaluation and implementing a learning approach to evaluation, however, require additional staff time, which is a challenge for most museums. MNS staff recognized the need to create a realistic evaluation plan given competing demands on staff time. For example, the evaluation plan balances conducting evaluation internally, partnering with other organizations, and outsourcing to other service providers. The plan also phases in the Museum’s evaluation initiatives incrementally over time; the Museum will begin with small steps in its efforts to effect great change.

Read Full Post »

Lately I have been thinking about how intentional practice seeped into my consciousness. “Seeped” feels like the right verb for a concept that is still evolving and taking shape, admittedly at a slow but steady pace, gently nudging me along. I believe that almost all ideas are influenced by others’ ideas. At the time I was coming upon intentional practice, I had been conducting evaluations for many years, reading Stephen Weil’s and others’ articles and books, witnessing changes in how museums were behaving in response to outside pressures, and wondering why evaluation seemed to have so little effect on museum practice. In this case, when I say “museum practice,” I mean the whole museum rather than an individual museum program or exhibition. The glass wasn’t completely half empty, but I was bothered by a few practices I was witnessing.

About 15 years ago I started to feel disturbed by the dangerous game that some museums were playing—ones that were so focused on bolstering attendance that they were hosting exhibitions just to bring in high volumes of visitors, regardless of whether the exhibitions reflected their core mission or purpose. For example, why would a history museum host Body Worlds other than to enjoy an uptick in visitor numbers? Or, why would an art museum host exhibitions featuring impressionism year after year? Perhaps the local community demanded that their museums host these exhibitions, but it is more likely that the museums were thinking about numbers—as in visitors and in dollars and cents. (Apparently this kind of thing is still present, as indicated by this week’s New York Times article about MoMA and its director, in which the reporter notes that “. . . there have been complaints from veteran patrons that the museum has grown too fast and lost much of its soul in courting the crowd” (http://www.nytimes.com/2014/04/21/arts/momas-expansion-and-director-draw-critics.html?src=me&_r=0). Loss of soul well describes what I was witnessing and thinking a decade and a half ago.) Ideas about intentional practice were emerging (although I didn’t know it at the time), and I eventually wrote an article titled “Self Portrait: Know Thy Self then Serve your Public,” which Museum News published. In it I make the point that museums need to know and articulate their core values, assets (intellectual and otherwise), and passions so they can exude them continuously if visitors are to have personally meaningful experiences.

Know Thyself, from the Temple of Apollo

Around the same time I was starting to realize that the evaluation field was focused almost entirely on studying individual projects (exhibitions and programs) and had not explored the effect of the whole museum experience. I observed that evaluation was conceived of and conducted in much the same way museums were managed—each department did its own thing, and sometimes individuals did their own thing, without considering other parts of the museum or other colleagues. I recognized that evaluation, as a practice, was benefiting particular programs and exhibitions and even individuals, but I wondered if evaluation could be a more holistic endeavor organizationally, so it could benefit the whole museum. I thought about what might be missing from the practice of evaluation and from the ways museums were doing their work, and I quietly started to think about developing evaluative strategies that could more adequately serve the whole museum. I also wanted museums to regain focus on their soul and core purpose, and I wanted to be able to study the difference museums were making in people’s lives. However, I learned through my evaluation practice that without a statement of intent, I really couldn’t study anything at all. I believe that museums must state their intentions—not just so evaluators can determine whether they have achieved them, but because articulating intentions is an excellent planning strategy for museum practitioners; it keeps them focused on their desired end result, which helps them make decisions accordingly. After all this thinking I felt I had arrived at a new place and passion: I wanted to develop strategies to help evaluators and museums approach their work more collaboratively, holistically, and intentionally.

My belief in the value of intentionality was steadfast, resulting from conducting museum evaluations—program by program, exhibition by exhibition—for over twenty years. I was transferring what I had learned from exhibition and program evaluations: successful programs and exhibitions emerge from work that is focused on a core idea, deliberate in exemplifying that core idea, articulate in describing it, and careful to design components that support it. I also learned that if a museum does not have passion for the core idea, its work will be substandard, and visitors will know the difference. I believed in what I had learned—enough so that I wanted to apply the ideas to a larger entity, the whole museum—and thus was born our intentionality work with museums.

Read Full Post »

Welcome to our new Throwback Thursday series, where we take a moment to look back at projects from our archives.  Today we’ll be sharing a case study about our planning and evaluation work with the Science Museum of Virginia and their Sphere Corps Program.  You might recall this particular Science On a Sphere program from one of our prior posts, Learning to Embrace Failure, and today we’ll share a bit more about how we approached the study, what we learned, and the implications of those findings.

Sphere Corps Program [2012]

For this planning and evaluation project with the Science Museum of Virginia (SMV), RK&A evaluated Sphere Corps, a Science On a Sphere program about climate change developed by SMV with funding from the National Oceanic and Atmospheric Administration (NOAA).

How did we approach this study?  

The study was designed around RK&A’s belief that organizations must be intentional in their practice by continually clarifying purpose, aligning practices and resources to achieve purpose, measuring outcomes, and learning from practice to strengthen ongoing planning and actions.  To this end, the Sphere Corps project included six phases of work—a literature review, a workshop to define intended program outcomes, two rounds of formative evaluation, and two reflection workshops.  Formative evaluation data were collected using naturalistic observations and in-depth interviews.  Each phase of work allowed staff to explore their vision for the Sphere Corps program and how it changed over time as they learned from and reflected on evaluation findings.

What did we learn?

SMV staff’s goal was to create a facilitated, inquiry-based Science On a Sphere program about climate change.  RK&A first completed a literature review, which revealed that a facilitated Sphere experience was in keeping with best practices and that using inquiry methods in a 20-minute program would be challenging but worth exploring further.  Staff then brainstormed and honed the outcomes they hoped to achieve in Sphere Corps, which guided planning and script development.  The first round of formative evaluation identified implementation barriers and an overabundance of iClicker questions, all of which created a challenging environment for educators to use inquiry effectively.  Upon reflection, staff reduced the number of iClicker questions and added visualizations and questions that required close observation of the Sphere.  Following a second round of formative evaluation, staff made additional changes to the program script and began to reflect on the reality of using inquiry in a single 20-minute program.  Since the script covered a range of topics related to climate change, staff wondered if they should instead go deeper with one topic while encouraging more visitor observation and interpretation of Sphere data.  Out of this discussion arose the idea of “mini-programs”—a series of programs, each focused on communicating one key idea about climate change, such as helping people understand the difference between weather and climate.

What are the implications of the findings?

Central to the “mini-program” is the idea of doing less to achieve more.  Impact and outcomes are incredibly difficult to achieve, and trying to achieve too much often results in accomplishing very little.  Through a reflection workshop and staff discussion, the SMV team was able to prioritize and streamline the outcomes and indicators originally written for the Sphere Corps program.  Staff also recognized that their primary goal with the Sphere Corps program is to encourage visitors to think more critically about the science behind climate change.  By scaling down the number of topics covered in the presentation, each program could intentionally focus on: (1) one key idea or question related to climate change; (2) achieving only a few intended outcomes; and (3) implementing specific facilitation strategies to achieve those outcomes.  Intentionally covering less content also opens up opportunities to use inquiry methods more effectively.
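
To make the shape of such a “mini-program” plan concrete, here is a minimal sketch, in Python, of the three-part focus as a simple data structure; the key idea, outcomes, and facilitation strategies below are hypothetical illustrations, not SMV’s actual program content.

```python
from dataclasses import dataclass, field

@dataclass
class MiniProgramPlan:
    key_idea: str                                  # the single climate-change idea the program communicates
    outcomes: list[str] = field(default_factory=list)                 # only a few intended outcomes
    facilitation_strategies: list[str] = field(default_factory=list)  # strategies chosen to achieve them

# Hypothetical example: a mini-program on weather vs. climate.
weather_vs_climate = MiniProgramPlan(
    key_idea="Weather is what happens day to day; climate is the long-term pattern.",
    outcomes=[
        "Visitors can describe the difference between weather and climate.",
        "Visitors interpret a Sphere visualization of long-term temperature data.",
    ],
    facilitation_strategies=[
        "Ask visitors to observe the Sphere closely before offering explanations.",
        "Use only clicker questions that require close observation of the Sphere.",
    ],
)
```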

Read Full Post »


So often we evaluators are asked to measure outcomes or results, which of course align with our expectations.  When we conduct an evaluation and the results are positive, an organization can wave its flag; and ideally the whole museum field benefits from learning why a particular exhibition or program is so successful at achieving its outcomes.  During my time as an evaluator, I have learned that there is enormous value in walking before running.  Because measuring results sounds compelling to museums and their funders, museums often jump over important evaluation processes and rush into measuring results.  As a result, staff, in a moment of passion, forgo front-end and formative evaluation—those early stages of concept testing, prototyping, and piloting a program—that help staff understand the gaps between the intended outcomes for their audience and the successes and challenges of implementing a new project.

So, when we are asked to measure results, we always ask the client whether the project has ever been evaluated.  Even then, we may pull in the reins to slow our clients down enough to consider the benefits of first understanding what is and is not working about a particular program or exhibition.  More often than not, slowing down and using front-end and formative evaluation to improve the visitor experience increases the likelihood that staff will be rewarded with positive results when they measure outcomes later.  In fact, when an organization’s evaluation resources are limited, we often advocate for conducting a front-end and/or formative evaluation because we believe that is where all of us will learn the most.  It is human nature to want to jump right into the good stuff and eat our dessert first.  We, too, get excited by our clients’ passion and have to remind ourselves of the value of taking baby steps.  So, one of the many lessons I’ve learned (and am still learning) is that when it comes to evaluation, encouraging practitioners to walk before they run (or test before they measure) is key to a successful project and to their own learning.

Read Full Post »

Sometimes when learning surfaces slowly, it is barely visible, until one day the world looks different.  Responding to that difference is the first layer of that complex process often labeled learning.  The Cycle of Intentional Practice was a long time coming—emerging from many years of conducting evaluations, where I worked closely with museum staff and leadership as well as with visitors.  It is an illustration of an ideal work cycle that started to form when I was writing “A Case for Holistic Intentionality.”  I am visually oriented and often have to draw my ideas before I write about them; in this case, I was writing about my ideas and then felt the need to create a visualization to depict what I was thinking—in part to help me understand my own thinking, but also to help others.  I included the first iteration of the cycle in the manuscript I submitted to Curator, but the editor said the journal does not usually publish that kind of illustration, so I put it aside.

That original cycle differs from the one I use today—it was simpler (it included “Plan,” “Act,” and “Evaluate”), and while I didn’t know it at the time, it was a draft.  There have been several more iterations over time (one was “Plan,” “Act,” and “Evaluate & Reflect,” for example); as I continue to learn and improve my practice, I change the cycle accordingly.  Most stunning to me was that the first draft of the cycle showed nothing in the center—nothing!  I feel a little embarrassed by my omission and am not entirely sure what I was thinking at the time, but I hope my oversight was short-lived.  At some point I placed the word “Intentions” in the center, and as I clarified my ideas, with the hope of applying the cycle to our evaluation and planning work, I eventually replaced “intentions” with “impact.”  I recall how difficult it was to explain the concept of “intentions,” so I eventually needed to remove the word from the center (as much as I loved having it there).  If my goal was to have museums apply the cycle to their daily and strategic work, the cycle needed to represent an idea people found comfortable and doable.  Soon I realized that intentionality was the larger concept of the cycle, and what needed to be placed in the center was the result of a museum’s work on its publics: impact.  So was born our intentionality work with museums.  Then I realized the true power of intentionality—mission could go in the center, as well as outcomes, or anything for that matter.  The artist’s rendition below demonstrates the versatility of intentionality as a concept.

An artistic rendering of the Cycle of Intentional Practice by artist Andrea Herrick

What I find most amazing is that two crucial ideas—reflection and impact—were not present in the first iterations of the cycle, although they were discussed when I talked about intentionality.  Our intentional planning work (which we refer to as impact planning) would be rudderless without impact, and our ability to learn from our work would be weakened without reflection.  And that brings me to another realization, of which I am reminded daily—the never-ending pursuit of clarity of thought, followed by a clear written expression of that thought.

Today I talk about the Cycle of Intentional Practice as a draft—it will always be on the verge of becoming.  These days I am more comfortable with the idea of the Cycle being a draft—an idea in process—than I was a decade ago; in fact, I have come to realize that all work is a draft.  If one is serious about learning and applying new ideas to work and life, then all ideas, all products, all knowledge are mere drafts, because learning is continuous, right?

Humbling?  Yes indeed.

Read Full Post »

Older Posts »