Posted in RK&A 25, tagged 25 years, audience, evaluation, museum, organizational learning, PowerPoint, presentation, report, researchers, template on September 18, 2014|
The most challenging evaluation report I’ve written consisted of 17 PowerPoint slides. The slides didn’t pull the most salient findings from a larger report; the slides were the report! I remember how difficult it was to approach qualitative data with the goal of writing less. While I had to present major trends, I feared the format might rob the data of its nuance (17 PowerPoint slides obviously require brevity). The process was challenging and at times frustrating, but in the end I felt extremely gratified. Not only was the report thorough, but it was exactly what the client wanted, and it was usable.
As evaluators, we walk a line between social science research, application, and usability. As researchers, we must analyze and present the data as they appear. Sometimes, especially in the case of qualitative reports, this can lead to an overwhelming amount of dense narrative. This reporting style, accepted in evaluation practice, is our default. Given the number of reports we write each year, having guidelines is efficient and freeing. We can focus on the analysis, giving us plenty of time to get to know and understand the data and to tease out the wonderful complexity that comes from open-ended interviews. For researchers, presentation takes a backseat to analysis and digging into the data.
However, most of the time we are not writing a report that will be shared with other researchers; it is a document that will be read by users—museum staff who may share the findings with other staff or the board. Overwhelming amounts of dense narrative may not be useful; not because our audience can’t understand it, but because often the meaning is densely packed and needs to be untangled. I would guess that what clients want and need is something they can refer to repeatedly, something they can look at to remind themselves, “Visitors aren’t interested in reading long labels,” or “Visitors enjoy interactive exhibits.” For researchers, presentation may be secondary, but for evaluators, presentation must be a primary consideration.
As my experience with the PowerPoint report (and many other reports since then) taught me, it can be tough to stray from a well-intentioned template. A shorter or more visual report doesn’t take less time to analyze or less time to write. In fact, writing a short report takes more time because I have to eliminate the dense narrative and find the essence, rather than present everything as I might in a long report. I also have to change the way I think about presentation. I have to think about presentation!
At RK&A, we like to look at our report template to see what we can do to improve it: new ways to highlight key findings or call out visitor quotations. Not all of our ideas work out in the long run, but it is good to think about different ways to present information. At the end of the day, though, what our report looks like for any given project comes from a client’s needs—and not from professional standards. And I learned that when I wrote those 17 PowerPoint slides!
Posted in RK&A 25, tagged 25 years, audience, evaluation, formative, front-end, impact, learning, organizational learning, outcome, passion, results, value on February 14, 2014|
So often we evaluators are asked to measure outcomes or results, which of course align with our expectations. When we conduct an evaluation and the results are positive, an organization can wave its flag, and ideally the whole museum field benefits from learning why a particular exhibition or program is so successful at achieving its outcomes. During my time as an evaluator, I have learned that there is enormous value in walking before running. Because measuring results sounds compelling to museums and their funders, museums often skip over important evaluation processes and rush into measuring results. In a moment of passion, staff forgo front-end and formative evaluation—those early stages of concept testing, prototyping, and piloting a program—that help them understand the gaps between the intended outcomes for their audience and the successes and challenges of implementing a new project.
So, when we are asked to measure results, we always ask the client whether the project has ever been evaluated. Even then, we may pull back on the reins to slow our clients down enough to consider the benefits of first understanding what is and is not working about a particular program or exhibition. More often than not, slowing down and using front-end and formative evaluation to improve the visitor experience increases the likelihood that staff will be rewarded with positive results when they measure outcomes later. In fact, when an organization’s evaluation resources are limited, we often advocate for conducting a front-end and/or formative evaluation because we believe that is where all of us will learn the most. It is human nature to want to jump right into the good stuff and eat our dessert first. We, too, get excited by our clients’ passion and have to remind ourselves of the value of taking baby steps. So, one of the many lessons I’ve learned (and am still learning) is that when it comes to evaluation, encouraging practitioners to walk before they run (or test before they measure) is key to a successful project and their own personal learning.
This week I’d like to share thoughts about evaluative thinking, in part because two weeks ago I was part of a session at the American Alliance of Museums (AAM) annual conference in Baltimore titled “Evaluation as Learning” (titled as such because learning is the ultimate result of evaluative thinking). We took a risk: I set the stage by presenting the Cycle of Intentional Practice (see our first blog post) with a distinct focus on the reflection quadrant, and the three panelists were each allotted five minutes to present a “story”; we used the remaining time to ask the audience questions (rather than having them ask us questions). Over the years, AAM has inadvertently trained session attendees to expect 60 minutes of panelists’ presentations (and sometimes more) followed by 5 or 10 minutes of Q & A in which the audience poses questions to panelists. Rarely are sessions intentionally flipped so that the bulk of a session’s time (50 of 75 minutes) is used to ask attendees questions. We all wondered if we should ask our friends to attend the session so our queries wouldn’t be met with silence.
We didn’t surprise the audience with this strategy; we were transparent and gave them a heads-up by saying: “Our intention today is to share brief stories about how we have used evaluation as a learning tool (rather than a judgment tool). Along the way we will be highlighting and clarifying evaluative thinking, and each presenter will spend 5 minutes doing this. Our intention is also this: we will spend the remaining time asking you questions, in a sense, to model the kind of inquiry that organizations can use to engage in evaluative thinking. We want to hear your thoughts and reflections, and we welcome you to challenge our thoughts and push us beyond where we are; then all of us will be using inquiry and reflection to pursue learning—the ultimate goal of evaluative thinking.”
Evaluative thinking (ET) is an intentional, enduring process of questioning, reflecting, thinking critically, learning, and adapting. While learning is the essence of ET, adapting (one’s thinking or behaviors) is the challenge. An underlying thread in our presentation was this: evaluative thinking is effective and meaningful only when it is ingrained in an organization’s culture and is the responsibility of everyone—leadership and staff alike.
Evaluative thinking is embedded in intentional practice, and the reflection quadrant is essential, as learning is not likely to happen without people taking the time to ask tough questions and reflect on reality (e.g., evidence of performance) and practice. When evaluation is pursued as learning, it can be a catalyst for personal learning, interpersonal learning, project learning, organizational learning, and field-wide learning.
For more on evaluative thinking, check out:
Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
Posted in Uncategorized, tagged accountability, audience, evaluation, funder, grant, learning, organizational learning, outcome, professional learning, reflect, results on January 9, 2013|
Andy Warhol, Dollar Sign (1982).
It’s that time of year—federal grant deadlines for NSF, IMLS, NOAA, NEH, and NEA are looming large, and many informal learning organizations are eyeing those federal dollars. While government agencies (and private foundations) often require evaluation, we relish working on projects with team members who integrate evaluation into their work process because they are interested in professional learning, rather than just fulfilling a grant requirement. Sadly, evaluation is often thought of and used as a judgment tool—which is why funders require it for accountability purposes. That said, I am not anti-accountability. In fact, I am pro-accountability; and I am also pro-learning.
This pretty simple idea—learning from evaluation—is actually quite idealistic, because I sense an uncomfortable tension between program failure and professional learning. Not all of the projects we evaluate achieve their programmatic intentions, in part because the projects are often complicated endeavors involving many partners who strive to communicate complex and difficult-to-understand ideas to the public. Challenging projects, though, often achieve another kind of outcome—professional and organizational learning—especially in situations where audience outcomes are ambitious. When projects fail to achieve their audience outcomes, what happens between the grantee and the funder? If the evaluation requirement is focused on reporting results from an accountability perspective, the organization might send the funder a happy report without any mention of the project’s inherent challenges, outcomes that fell short of expectations, or the evaluator’s analysis identifying realities that might benefit from focused attention next time. The grantee acknowledges its willingness to take on a complicated and ambitious project and notes a modest increase in staff knowledge (because any further admission might suggest that the staff wasn’t as accomplished as the submitted proposal claimed). The dance is delicate because some grantees believe that any admission of not-so-rosy results is reprehensible and punishable by never receiving funding again!
Instead, what if the evaluation requirement were to encompass both audience outcomes and professional and/or organizational learning? The report to the funder might note that staff members were disappointed that their project did not achieve its intended audience outcomes, but that they found the evaluation results insightful and took time to process them. They might explain how what they learned, which is now part of their own and their organization’s knowledge bank, will help them reframe and improve the next iteration of the project, and that they look forward to continuing to hone their skills and improve their work. The report might also include a link to the full evaluation report. I have observed that funders are very interested in organizations that reflect on their work and take evaluation results to heart. I have also noticed that funders are open to thinking and learning about alternative approaches to evaluation, outcomes, measurement, and knowledge generation.
Most of the practitioners I know want opportunities to advance their professional knowledge; yet some feel embarrassed when asked to talk about a project that may not have fared well, even when their professional learning soared. When funders speak, most practitioners pay close attention. How might our collective knowledge grow if funders invited their grantees to reflect on their professional learning? What might happen if funders explicitly requested that organizations acknowledge their learning and write a paper summarizing how they will approach their work differently next time? If a project doesn’t achieve its audience outcomes and no one learns from the experience—that would be reprehensible.
Isn’t it time that funding agencies and organizations embrace evaluation for its enormous learning potential and respect it as a professional learning tool? Isn’t it time to place professional and organizational learning at funders’ evaluation tables alongside accountability? In my opinion, it is.