Throwback Thursday: Science On a Sphere

Welcome to our new Throwback Thursday series, where we take a moment to look back at projects from our archives.  Today we’ll be sharing a case study about our planning and evaluation work with the Science Museum of Virginia and their Sphere Corps Program.  You might recall this particular Science On a Sphere program from one of our prior posts, Learning to Embrace Failure, and today we’ll share a bit more about how we approached the study, what we learned, and the implications of those findings.

Sphere Corps Program [2012]

For this planning and evaluation project with the Science Museum of Virginia (SMV), RK&A evaluated Sphere Corps, a Science on a Sphere program about climate change developed by SMV with funding from the National Oceanic and Atmospheric Administration (NOAA).

How did we approach this study?  

The study was designed around RK&A’s belief that organizations must be intentional in their practice by continually clarifying purpose, aligning practices and resources to achieve that purpose, measuring outcomes, and learning from practice to strengthen ongoing planning and actions.  To this end, the Sphere Corps project included six phases of work: a literature review, a workshop to define intended program outcomes, two rounds of formative evaluation, and two reflection workshops.  Formative evaluation data were collected through naturalistic observations and in-depth interviews.  Each phase of work allowed staff to explore their vision for the Sphere Corps program and to see how that vision changed over time as they learned from and reflected on evaluation findings.

What did we learn?

SMV staff’s goal was to create a facilitated, inquiry-based Science on a Sphere program about climate change.  RK&A first completed a literature review, which revealed that a facilitated Sphere experience was in keeping with best practices and that using inquiry methods in a 20-minute program would be challenging but worth exploring further.  Staff then brainstormed and honed the outcomes they hoped to achieve in Sphere Corps, which guided planning and script development.  The first round of formative evaluation identified implementation barriers and an overabundance of iClicker questions, both of which created a challenging environment for educators to use inquiry effectively.  Upon reflection, staff reduced the number of iClicker questions and added visualizations and questions that required close observation of the Sphere.  Following a second round of formative evaluation, staff made additional changes to the program script and began to reflect on the reality of using inquiry in a single 20-minute program.  Since the script covered a range of topics related to climate change, staff wondered whether they should instead go deeper with one topic while encouraging more visitor observation and interpretation of Sphere data.  Out of this discussion arose the idea of “mini-programs”: a series of programs, each focused on communicating one key idea about climate change, such as helping people understand the difference between weather and climate.

What are the implications of the findings?

Central to the “mini-program” concept is the idea of doing less to achieve more.  Impact and outcomes are incredibly difficult to achieve, and trying to achieve too much often results in accomplishing very little.  Through a reflection workshop and staff discussion, the SMV team was able to prioritize and streamline the outcomes and indicators originally written for the Sphere Corps program.  Staff also recognized that their primary goal for the Sphere Corps program is to encourage visitors to think more critically about the science behind climate change.  By scaling down the number of topics covered in the presentation, each program could intentionally focus on: (1) one key idea or question related to climate change; (2) achievement of only a few intended outcomes; and (3) implementation of specific facilitation strategies to achieve those outcomes.  Intentionally covering less content also opens up opportunities to use inquiry methods more effectively.

Learning to Embrace Failure

A few weeks ago, Randi blogged about the lack of emphasis grantors place on professional learning as a valuable outcome of the projects they fund.  The fear of failure I sense from practitioners when planning an evaluation is often palpable; many think of evaluation as a judgment tool and dread falling short, especially in the eyes of the funder.  The innovation-obsessed culture of the non-profit sector exacerbates the situation: be the best; make a discernible difference in people’s lives; be innovative; don’t make a mistake; and if you do err, certainly don’t tell anyone about it.  Understandably, the possibility of failure creates a level of stress that can override people’s professional sense of what is really important.  Yet I feel refreshed when I hear museum practitioners reflect on their failures during a conference presentation, not because I want to see people fail but because mistakes often lead to learning.  And, as an evaluator, it is my job to help museum practitioners wade through evaluation results and reflect on what did not work and why, in the spirit of learning.  My job is to help people value and use evaluation as a learning tool.

I recently had the pleasure of working on a project with the Science Museum of Virginia (SMV) in Richmond.  The Museum, like many others, received funding from the National Oceanic and Atmospheric Administration (NOAA) to develop programming for Science on a Sphere® (SoS).  And the Museum, like many others, had high hopes of creating a compelling program, one that uses inquiry to engage visitors in the science behind the timely issue of climate change.  Inquiry can be elegant in its simplicity, but it is also incredibly difficult to master under even the best of circumstances.  Staff quickly realized that creating and implementing such a program was a challenging endeavor for a whole host of reasons, some of which were unique to the Museum’s particular installation of SoS.  The challenges staff faced are well documented in the evaluation reports they have shared on NOAA’s web site (http://www.oesd.noaa.gov/network/sos_evals.html) as well as informalscience.org (http://informalscience.org/evaluation/show/654).  Yet the specific challenges are not what matters; what matters is that staff reflected on and grappled with those challenges throughout the project in the spirit of furthering everyone’s professional learning.  They discussed what worked well and addressed elements that did not.  They invited colleagues from a partner institution to reflect on their struggles with them, something we all might find a bit scary and uncomfortable but that, for them, proved invaluable.  In the end, they emerged from the process with a clearer idea of what to do next, and they realized how far they had come.

SMV staff recognized that their program may not be unique and that other museums may have done or may be doing something similar.  But each and every time staff members (from any museum) reflect on the lessons learned from a project, their experience is unique because learning always emerges, even if it is subtle and nuanced.  The notion that every museum program has to be innovative, groundbreaking, or unique is an inappropriate standard and, frankly, an unrealistic one.  In fact, when museums embrace innovation as a goal, they, too, must embrace and feel comfortable with the idea of failure, especially if they want to affect the audiences they serve.  Grantmakers for Effective Organizations shares this sentiment (http://www.geofunders.org/geo-priorities) when defining practices that support non-profit success.  The organization states that “[embracing] failure” is one way we will know that “grantmakers have embraced evaluation as a learning and improvement mechanism.”  An ideal first step would be for all of us (institutions, evaluators, and funders) to proudly share our failures and lessons learned with others.