Archive for January, 2013

A few weeks ago, Randi blogged about the lack of emphasis grantors place on professional learning as a valuable outcome of the projects they fund.  The fear of failure I sense from practitioners when planning an evaluation is often palpable; many think of evaluation as a judgment tool and dread the possibility of failing, especially in the eyes of the funder.  The innovation-obsessed culture of the non-profit sector exacerbates the situation: be the best; make a discernible difference in people’s lives; be innovative; don’t make a mistake; and if you do err, certainly don’t tell anyone about it.  Understandably, the possibility of failure creates a level of stress that can override people’s professional sense of what is really important.  Yet I feel refreshed when I hear museum practitioners reflect on their failures during a conference presentation, not because I want to see people fail but because mistakes often lead to learning.  And, as an evaluator, it is my job to help museum practitioners wade through evaluation results and reflect, in the spirit of learning, on what did not work and why.  My job is to help people value and use evaluation as a learning tool.

Failure Cartoon

I recently had the pleasure of working on a project with the Science Museum of Virginia (SMV) in Richmond.  The Museum, like many others, received funding from the National Oceanic and Atmospheric Administration (NOAA) to develop programming for Science on a Sphere® (SoS).  And the Museum, like many others, had high hopes of creating a compelling program—one that uses inquiry to engage visitors in the science behind the timely issue of climate change.  Inquiry can be elegant in its simplicity, but it is incredibly difficult to master even under the best of circumstances.  Staff quickly realized that creating and implementing such a program was a challenging endeavor for a whole host of reasons—some of which were unique to the Museum’s particular installation of SoS.  The challenges staff faced are well documented in the evaluation reports they have shared on NOAA’s web site (http://www.oesd.noaa.gov/network/sos_evals.html) as well as informalscience.org (http://informalscience.org/evaluation/show/654).  Yet the specific challenges themselves are not what matters; what matters is that staff reflected on and grappled with those challenges throughout the project in the spirit of furthering everyone’s professional learning.  They discussed what worked well and addressed elements that did not.  They invited colleagues from a partner institution to reflect on their struggles with them—something we all might find a bit scary and uncomfortable but that, for them, proved invaluable.  In the end, they emerged from the process with a clearer idea of what to do next, and they realized how far they had come.

SMV staff recognized that their program may not be unique and that other museums may have done or may be doing something similar.  But each and every time staff members (from any museum) reflect on the lessons learned from a project, their experience is unique, because learning always emerges, even if it is subtle and nuanced.  The notion that every museum program has to be innovative, groundbreaking, or unique is an inappropriate standard and, frankly, an unrealistic one.  In fact, when museums embrace innovation as a goal, they, too, must embrace and feel comfortable with the idea of failure, especially if they want to affect the audiences they serve.  Grantmakers for Effective Organizations shares this sentiment (http://www.geofunders.org/geo-priorities) when defining practices that support non-profit success.  The organization states that “[embracing] failure” is one way we will know that “grantmakers have embraced evaluation as a learning and improvement mechanism.”  An ideal first step would be for all of us—institutions, evaluators, and funders—to proudly share our failures and lessons learned with others.


Recently, my RK&A colleagues and I finished a project for the Indianapolis Museum of Art looking at the effects of Mary Miss’ public art installation FLOW: Can You See the River?  FLOW was conceptualized around the idea that the White River is underappreciated and even ignored by Indianapolis residents, who do not fully understand its importance to the city (see http://flowcanyouseetheriver.org/ for more information on the project).  I was thrilled to work on such an interesting project, and the evaluation results were mostly positive (see http://informalscience.org/reports/0000/0688/2012_RKA_IMA_FLOW_Summ_dist.pdf for evaluation results).  It is rare for me to have internal struggles with the projects we work on, so when I felt a tension with this one, I thought it worthy of exploration.

As part of our work, we examined people’s engagement with the installation.  But if I were to honestly answer the question, “Would I engage with the FLOW installation?” (imagining that I lived in the Indianapolis area), my answer would be no.  In fact, when I consider the amount of public art I have engaged with in the last year, it is fairly limited, and not for lack of exposure.  This realization is confounding because I am what some might consider the expected audience for public art installations.  I have degrees in art history and art education, and I am absolutely an “art person.”  I love seeing museum exhibitions and collections of late 19th- and early 20th-century painting, and I enjoy attending biennials and triennials to keep up with what is happening in the art world.  So why don’t I engage with more public art?  In reflecting on that question, I have tried to think about the few public art pieces that have given me pause in the last year and the circumstances surrounding them.  Here are the experiences that stand out to me, ordered from least to most profound:

Minneapolis Sculpture Garden.  I stopped here this past spring on an excursion to the Walker Art Center during the AAM conference.  As such, it was a planned trip in a time slot I had already allocated for art viewing.  I was also on a pilgrimage to see Claes Oldenburg & Coosje van Bruggen’s Spoonbridge and Cherry—an iconic image of public art.  The James Turrell piece I happened upon was a bonus (thank you, sculpture garden brochure, for pointing it out!).  While technically this was a public art experience, it was more aligned with a traditional museum experience: I was on a scheduled trip to a museum in a city I was visiting, one that allowed me to see several artworks in a single location, including one on my must-see list.

Kay Healy’s Coming Home.  My interest in public art in airports is hit or miss, but this work at the Philly airport caught my eye one day, and I have stopped at it multiple times since (of course, never when I am on my way home).  I wouldn’t consider it a piece that would naturally capture my attention, but the title, Coming Home, did.  Something about the poignant contrast of those words when I was getting ready to leave home for a trip struck me enough to slow me down.  I can’t remember whether I read the identification label first or took in the whole work, which is fairly large and detailed, but either way, the image of the work has stayed with me.  Who knows whether I would have paid it any attention had the artwork been titled otherwise (see http://www.phl.org/arts/current/Pages/KayHealy.aspx for more on the installation).

Charles Ray, Boy with a Frog

Charles Ray’s Boy with a Frog.  I was immediately enamored with this piece when I came across it in Venice.  I had spent three months in Venice almost five years ago and loved coming to the Punta della Dogana for a beautiful view of the city.  I was surprised to find this new statue in a familiar place and struck by how its newness and bright whiteness stood in stark contrast with the city, which is known for its history, lack of change, and beautiful decay.  Unfortunately, I couldn’t find any information on the piece nearby and had to wait until I got back to my hotel to Google it.  While I wasn’t particularly taken by the explanation of the work and don’t fully understand its intentions, I still find the piece extremely striking and perfectly located (see http://www.nytimes.com/2009/06/05/arts/design/05voge.html?_r=0 for more on the sculpture).

So how do these experiences explain my gut instinct that I probably wouldn’t engage with FLOW?  First, I can’t see myself scheduling a visit to FLOW as I did for the Minneapolis Sculpture Garden, since Spoonbridge and Cherry has an iconic gravitas that FLOW does not.  Second, in my experiences with Coming Home and Boy with a Frog, there was some sort of “contrast” that hooked me—in one case a contrast with my feelings, in the other the aesthetic contrast between the work and its setting.  “Contrast,” however, is not a word I would use to describe a FLOW experience (revealing, informative, about a relevant problem, and subtly surprising are more apt descriptions).  Yet two of the three experiences recounted above (2½ if you count my unexpected James Turrell encounter at the Minneapolis Sculpture Garden) were highly serendipitous, and maybe, had I happened upon FLOW, I would have responded differently.  Once I looked at FLOW through a rationalized, evaluative lens, though, I couldn’t turn back.  If reflecting on these experiences has shown me anything, it is that you never know what type of public art may catch your eye and offer a bit of unexpected meaningfulness—I suppose that’s the true value and beauty of public art.


Andy Warhol, Dollar Sign (1982).

It’s that time of year—federal grant deadlines for NSF, IMLS, NOAA, NEH, and NEA are looming large, and many informal learning organizations are eyeing those federal dollars.  While government agencies (and private foundations) often require evaluation, we relish working on projects with team members who integrate evaluation into their work process because they are interested in professional learning—rather than just fulfilling a grant requirement.  Sadly, evaluation is often thought of and used as a judgment tool—which is why funders require it for accountability purposes.  That said, I am not anti-accountability.  In fact, I am pro-accountability; and I am also pro-learning.

This pretty simple idea—learning from evaluation—is actually quite idealistic, because I sense an uncomfortable tension between program failure and professional learning.  Not all of the projects we evaluate achieve their programmatic intentions, in part because they are often complicated endeavors involving many partners who strive to communicate complex and difficult-to-understand ideas to the public.  Challenging projects, though, often achieve another kind of outcome—professional and organizational learning—especially when audience outcomes are ambitious.  When projects fail to achieve their audience outcomes, what happens between the grantee and the funder?  If the evaluation requirement is focused on reporting results from an accountability perspective, the organization might send the funder a happy report without any mention of the project’s inherent challenges, outcomes that fell short of expectations, or the evaluator’s analysis of realities that might benefit from focused attention next time.  The grantee acknowledges its willingness to take on a complicated and ambitious project and notes a modest increase in staff knowledge (because any further admission might suggest that the staff wasn’t as accomplished as the submitted proposal claimed).  The dance is delicate because some grantees believe that any admission of not-so-rosy results is reprehensible and punishable by never receiving funding again!

Instead, what if the evaluation requirement were to embrace both audience outcomes and professional and/or organizational learning?  The report to the funder might note that staff members were disappointed that the project did not achieve its intended audience outcomes, but that they found the evaluation results insightful and took time to process them.  They would explain how what they learned, which is now part of their and their organization’s knowledge bank, will help them reframe and improve the next iteration of the project, and that they look forward to continuing to hone their skills and improve their work.  The report might also include a link to the full evaluation report.  I have observed that funders are very interested in organizations that reflect on their work and take evaluation results to heart.  I have also noticed that funders are open to thinking and learning about alternative approaches to evaluation, outcomes, measurement, and knowledge generation.

Most of the practitioners I know want opportunities to advance their professional knowledge; yet some feel embarrassed when asked to talk about a project that may not have fared well, even when their professional learning soared.  When funders speak, most grantees pay close attention.  How might our collective knowledge grow if funders invited their grantees to reflect on their professional learning?  What might happen if funders explicitly asked organizations to acknowledge their learning and write a paper summarizing how they will approach their work differently next time?  If a project doesn’t achieve its audience outcomes and no one learns from the experience—that would be reprehensible.

Isn’t it time that funding agencies and organizations embrace evaluation for its enormous learning potential and respect it as a professional learning tool?  Isn’t it time to place professional and organizational learning at funders’ evaluation tables alongside accountability?  In my opinion, it is.


Welcome to the Intentional Museum Blog, an all-staff endeavor of Randi Korn & Associates.  Twice monthly, we intend to share our thoughts and questions about compelling ideas we come across in our work and readings.  The platform for our practice and the inspiration for our postings is the Cycle of Intentional Practice.

Cycle of Intentional Practice

This cycle has emerged slowly, evolving over the last 10 to 15 years as I started to think about all I had learned as an evaluator and researcher while pursuing my passion for studying people’s experiences in informal learning environments.  In our work as planners and evaluators in museums, we have witnessed how difficult it can be to achieve results that reflect a museum’s original intentions.  We have also witnessed that results surface when organizations focus their energy, decisions, actions, and dollars toward well-articulated ends.  Thus, intentionality is increasingly becoming a driving force in our work with clients, as we recognize how vital it is as an idea, action, theory, and practice.

Intentionality is not a new concept, as several well-known authors have written about it.  For example, business author Jim Collins notes in Good to Great that the work of a great organization must “attract and channel resources directed solely . . .” to their intentions and “reject resources that drive them away from” their passion and unique value.  Collins’ concept applies to museums, too, as described by museum scholar Stephen Weil in Making Museums Matter.  He wrote, “The only activities in which the museum can legitimately engage are those intended to further its institutional purpose.”  Collins’ and Weil’s concepts embody the essence of intentionality, where all actions are purposeful, deliberate, and focused on achieving the intended impact or results.  With funders requiring evidence that their dollars are being used in the way they intended, intentionality rings of relevance.  Likewise, with fewer dollars available to museums, how museums use those dollars is increasingly important.

The Cycle of Intentional Practice places “impact” at the center of the cycle and assumes museums want to make a positive difference in people’s lives, which is how I define “impact.”  Intentional practice requires that a museum articulate the impact it would like to achieve (by writing an impact statement), align its practices and resources to support that impact, measure the ways in which it is achieving impact, and reflect on the results in order to learn from them and improve.  Intentional practice may seem insular, but it acknowledges a museum’s responsibility to its external community through evaluation and other efforts.  The tension between a museum and its public is real, and the museum may need to work hard to balance its internal aspirations and resources with its community’s needs.  Balancing potentially conflicting ideals, though challenging, demonstrates that the organization is striving to be true to itself and to its audience and community.  I have learned in my 30 or so years of experience that when a museum applies laser focus to the center of the cycle, it creates an opportunity to achieve meaningful, measurable results that an evaluator can detect.

We invite you to share your thoughts—agreements or disagreements—in the spirit of our collective learning, as learning has always been the motivational force behind our work.
