Posts Tagged ‘professional learning’

Emily’s last blog post (read it here) talked about when evaluation capacity building is the right choice.  When we think about building capacity for evaluation, we think about intentional practice.  This does not necessarily mean teaching people to conduct evaluation themselves; rather, it means helping people ask the right questions and talk with the right people as they approach their work.  RK&A has found this to be particularly important in the planning phases of projects.

The case study below is from a project RK&A did with the Museum of Nature and Science in Dallas, TX (now the Perot Museum of Nature and Science), which involved an interdisciplinary group of museum staff thinking intentionally about the impact the Museum hoped to have on the community.  With a new building scheduled to open a year after this project took place, it was an opportune moment for that kind of reflection.

Building Capacity to Evaluate [2012]

An evaluation planning project with a nature and science museum

The Museum of Nature and Science (MNS) hired RK&A to develop an evaluation plan and build capacity to conduct evaluation in anticipation of the Museum’s new building scheduled to open in 2013.

How did we approach the project?

The evaluation planning project comprised a series of sequential steps, from strategic to tactical, working with an interdisciplinary group of staff from across the Museum. The process began by clarifying the Museum’s intended impact, a statement that articulates the intended result of the Museum’s work and provides a guidepost for MNS’s evaluation: Our community will personally connect science to their daily lives. Focusing on the Museum’s four primary audiences (adults, families, students, and educators), staff developed intended outcomes that serve as building blocks toward impact and as gauges for measurement. Next, RK&A worked with staff to develop an evaluation plan that identifies the Museum’s evaluation priorities over the next four years and supports the purpose of evaluation at MNS: to measure impact, understand audiences’ needs, gauge progress on the strategic plan, and inform decision making.

The final project step focused on building capacity among staff to conduct evaluation. Based on in-depth discussions with staff, RK&A developed three data collection instruments (an adult program questionnaire, a family observation guide, and a family short-answer interview guide) to empower staff to begin evaluating the Museum’s programs. Several staff members were then trained to systematically collect data using these customized evaluation tools.

What did we learn?

The process of building a museum’s capacity to conduct evaluation highlights an important consideration: evaluating a museum’s work has become more important given accountability demands in the external environment. Stakeholders increasingly ask, How is the museum’s work affecting its audiences? What difference is the museum making in the quality of people’s lives?

Conducting systematic evaluation and implementing a learning approach to evaluation, however, require additional staff time, which is a challenge for most museums. MNS staff recognized the need to create a realistic evaluation plan given competing demands on staff time. For example, the evaluation plan balances conducting evaluation internally, partnering with other organizations, and outsourcing to other service providers. It also implements the Museum’s evaluation initiatives incrementally over time. The Museum will begin with small steps in its effort to effect great change.

Read Full Post »

As evaluators, we are often asked to help our clients build evaluation capacity among staff in their organization. The motivation for these requests varies. Sometimes the primary motivator is professional development; other times it is perceived cost savings (since conducting professional evaluations can require resources that not all organizations have at their disposal). We welcome it when an organization values evaluation enough to inquire about how to integrate it more fully into its staff’s daily work. If an organization has a true interest in using evaluation as a tool to learn about the relationship between its work and the public, building its evaluation capacity may be quite successful. On the other hand, if the primary motivator is to save costs associated with evaluation, the outcome is often much less successful, mostly because evaluation takes considerable time and invariably there is a trade-off: while the evaluation is being done, something else is being ignored.

Evaluation capacity building can take a variety of forms. It can range from building staff’s capacity to think like an evaluator, perhaps by helping staff learn how to articulate a project’s desired outcomes (I think this is the most valuable evaluation planning skill one can learn), to training staff to conduct an evaluation from beginning to end (identifying outcomes, creating an evaluation plan, designing instruments, collecting data, conducting analyses, and reporting findings). Even among the most interested parties, it is rare to find museum practitioners who are genuinely interested in all aspects of evaluation. As an evaluator, even I find certain aspects of evaluation more enjoyable than others. I’ve noticed that while practitioners may initially be intimidated by the data collection process, they often find talking with visitors rewarding and informative. On the other hand, they have much less enthusiasm for data analysis and reporting; I’ve only encountered a handful of museum practitioners who enjoy poring over pages and pages of interview transcripts. We lovingly refer to these individuals as “data nerds” and proudly count ourselves among them.

There is yet another challenge, and it has to do with the fact that most museum practitioners are required to wear many hats. Conducting evaluations is my one and only job; it is what I am trained to do and have intentionally chosen as my vocation. While a practitioner may be intrigued by what evaluation can offer, it is often not the job he or she was trained or hired to do, which means that evaluation can become a burden—just one more hat for a practitioner to wear. Some organizations have addressed their evaluation needs by creating a position for an in-house evaluator, usually filled by someone who is schooled in evaluation and research methodologies, much like all of us here at RK&A. I would caution organizations to be very realistic when considering building their evaluation capacity. Does your staff have the time and skills to conduct a thoughtful study and follow through with analysis and reporting? What responsibilities is your organization willing to put aside to make time for the evaluation? And do you want your staff to think like evaluators or become evaluators?—an important distinction, indeed. Otherwise, even those with the best of intentions may find themselves buried in mountains of data. Worse yet, what was once an exciting proposition may come to be perceived as an annoyance.

Read Full Post »

As evaluators, we work with museum professionals to collect data around problems they are facing, and not so surprisingly, museums often face similar problems.  In my six years with RK&A, I have definitely seen trends, and certainly in RK&A’s 25 years, the company has as well.  For this reason, I sometimes find myself wondering whether collecting more data around an issue is worthwhile.  As someone who considers herself a life-long learner, my instinct is to say, “No, we don’t know enough; there is always more to learn.”  But then I consider that, if enough existing and reliable information is already out there, our clients can save time and money and still make informed decisions.  This gives me pause, since my intention is for our work to help museums do their work better.

I was recently feeling this way while conducting focus groups with teachers about barriers to field trips and their needs for teaching resources. We have worked on many evaluations of museum-school programs lately in which we collected data from teachers about museum programs and professional development, including for the Kentucky Historical Society, the National Air and Space Museum, and the Corcoran Gallery of Art. Indeed, during the recent teacher focus groups, I heard a lot of familiar trends—cost of field trips, curriculum links, lack of time due to testing. But as I listened to these teachers, I gained a new appreciation for the phrase “the devil is in the details.” While some of the barriers were the ones I was expecting, there were nuances and specifics unique to the context of the Museum and its community that made a familiar issue particularly challenging—which I have found to be true with every evaluation.

Pieter Bruegel the Elder, The Seven Deadly Sins or the Seven Vices – Pride, 1558

So to the question, have we heard it all before when it comes to barriers to field trip experiences? No. While there are certainly cases when existing research in the field can sufficiently answer a museum’s questions, more often than not there are situational nuances unique to a museum and its community that are crucial to helping that museum address its challenges. Sometimes our work is about helping museums see the forest for the trees—identifying the big trends. But in the case of identifying barriers to field trip experiences, I needed to unpack every detail to help the Museum truly understand the barriers and identify recommendations. Like this Bruegel painting, it can appear messy and confusing, but inspecting each detail is necessary for making meaning.

Read Full Post »

A few weeks ago, Randi blogged about the lack of emphasis grantors place on professional learning as a valuable outcome of projects they have funded.  The fear of failure I sense from practitioners when planning an evaluation is often palpable; many think of evaluation as a judgment tool and dread failing, especially in the eyes of the funder.  The innovation-obsessed culture of the non-profit sector exacerbates the situation: be the best; make a discernible difference in people’s lives; be innovative; don’t make a mistake; and if you do err, certainly don’t tell anyone about it.  Understandably, the possibility of failure creates a stress level that can override people’s professional sense of what is really important.  Yet, I personally feel refreshed when I hear museum practitioners reflect on their failures during a conference presentation; not because I want to see people fail but because mistakes often lead to learning.  And, as an evaluator, it is my job to help museum practitioners wade through evaluation results and reflect on what did not work and why, in the spirit of learning.  My job is to help people value and use evaluation as a learning tool.

I recently had the pleasure of working on a project with the Science Museum of Virginia (SMV) in Richmond.  The Museum, like many others, received funding from the National Oceanic and Atmospheric Administration (NOAA) to develop programming for Science on a Sphere® (SoS).  And, the Museum, like many others, had high hopes of creating a compelling program—one that uses inquiry to engage visitors in the science behind the timely issue of climate change.  Inquiry can be elegant in its simplicity but it is also incredibly difficult to master under even the best of circumstances.  Staff quickly realized that creating and implementing such a program was a challenging endeavor for a whole host of reasons—some of which were unique to the Museum’s particular installation of SoS.  The challenges staff faced are well documented in the evaluation reports they have shared on NOAA’s web site (http://www.oesd.noaa.gov/network/sos_evals.html) as well as informalscience.org (http://informalscience.org/evaluation/show/654).  Yet, the specific challenges are not important; what is important is that they reflected on and grappled with their challenges throughout the project in the spirit of furthering everyone’s professional learning.  They discussed what worked well and addressed elements that did not work as well.  They invited colleagues from a partner institution to reflect on their struggles with them—something we all might find a bit scary and uncomfortable but, for them, proved invaluable.  In the end, they emerged from the process with a clearer idea of what to do next, and they realized how far they had come.

SMV staff recognized that their program may not be unique and that other museums may have done or may be doing something similar.  But each and every time staff members (from any museum) reflect on the lessons learned from a project, their experience is unique because learning always emerges, even if it is subtle and nuanced.  The notion that every museum program has to be innovative, groundbreaking, or unique is an inappropriate standard, and, frankly, unrealistic.  In fact, when museums embrace innovation as a goal, they, too, must embrace and feel comfortable with the idea of failure, especially if they want to affect the audiences they serve.  Grantmakers for Effective Organizations shares this sentiment (http://www.geofunders.org/geo-priorities) when defining practices that support non-profit success.  The organization states that “[embracing] failure” is one way we will know that “grantmakers have embraced evaluation as a learning and improvement mechanism.”  An ideal first step would be for all of us—institutions, evaluators, and funders—to proudly share our failures and lessons learned with others.

Read Full Post »

Andy Warhol, Dollar Sign (1982).

It’s that time of year—federal grant deadlines for NSF, IMLS, NOAA, NEH, and NEA are looming large, and many informal learning organizations are eyeing those federal dollars.  While government agencies (and private foundations) often require evaluation, we relish working on projects with team members who integrate evaluation into their work process because they are interested in professional learning—rather than just fulfilling a grant requirement.  Sadly, evaluation is often thought of and used as a judgment tool—which is why funders require it for accountability purposes.  That said, I am not anti-accountability.  In fact, I am pro-accountability, and I am also pro-learning.

This pretty simple idea—learning from evaluation—is actually quite idealistic because I sense an uncomfortable tension between program failure AND professional learning.  Not all of the projects we evaluate achieve their programmatic intentions, in part because the projects are often complicated endeavors involving many partners who strive to communicate complex and difficult-to-understand ideas to the public.  Challenging projects, though, often achieve another kind of outcome—professional and organizational learning—especially in situations where audience outcomes are ambitious.  When projects fail to achieve their audience outcomes, what happens between the grantee and funder?  If the evaluation requirement is focused on reporting results from an accountability perspective, the organization might send the funder a happy report without any mention of the project’s inherent challenges, outcomes that fell short of expectations, or the evaluator’s analysis that identified realities that might benefit from focused attention next time. The grantee acknowledges its willingness to take on a complicated and ambitious project and notes a modest increase in staff knowledge (because any further admission might suggest that the staff wasn’t as accomplished as the submitted proposal stated).  The dance is delicate because some grantees believe that any admission of not-so-rosy results is reprehensible and punishable by never receiving funding again!

Instead, what if the evaluation requirement were to embrace both audience outcomes and professional and/or organizational learning?  The report to the funder might note that staff members were disappointed that their project did not achieve its intended audience outcomes, but that they found the evaluation results insightful and took time to process them.  They would explain how what they learned, which is now part of their and their organization’s knowledge bank, will help them reframe and improve the next iteration of the project, and how they look forward to continuing to hone their skills and improve their work.  The report might also include a link to the full evaluation report.  I have observed that funders are very interested in organizations that reflect on their work and take evaluation results to heart.  I have also noticed that funders are open to thinking and learning about alternative approaches to evaluation, outcomes, measurement, and knowledge generation.

Most of the practitioners I know want opportunities to advance their professional knowledge; yet some feel embarrassed when asked to talk about a project that may not have fared well, even when their professional learning soared.  When funders speak, most practitioners pay close attention.  How might our collective knowledge grow if funders invited their grantees to reflect on their professional learning?  What might happen if funders explicitly requested that organizations acknowledge their learning and write a paper summarizing how they will approach their work differently next time?  If a project doesn’t achieve its audience outcomes and no one learns from the experience—that would be reprehensible.

Isn’t it time that funding agencies and organizations embrace evaluation for its enormous learning potential and respect it as a professional learning tool?  Isn’t it time to place professional and organizational learning at funders’ evaluation tables alongside accountability?  In my opinion, it is.

Read Full Post »