Working in research and evaluation, you become very skeptical of words like “data-driven” and “research-based.” To evaluators, it is quite flattering that these words are so buzzworthy—yes, we want our research and evaluation work to be important, used, and even desired! However, even though these buzzwords grab attention, they can be misleading. For instance, when we talk about data and research at RK&A, we mean original, first-hand data and research, such as interviews, questionnaires, and surveys with museum visitors.

This was on my mind as I recently had the opportunity to help my CARE (Committee on Audience Research and Evaluation) colleague Liz Kunz Kollmann review session proposals for the 2015 AAM Annual Meeting. Liz, as CARE’s representative for the National Program Committee, was charged with reviewing sessions in the Education, Curation, & Evaluation track (all 141 sessions!) along with fellow National Program Committee members in education and curation. Given that audience research and evaluation can be part of many AAM tracks (marketing, development, exhibit design, etc.), Liz recruited some CARE members to help her review sessions in other tracks to see if there were any sessions outside of our designated track that CARE should advocate for.

I volunteered to review sessions in the tracks Development & Membership and Finance & Administration. I had expected to encounter a lot of buzzwords since the AAM session proposals include a description that must be appropriate for display on the AAM website, mobile app, and other published meeting materials. So, I wasn’t surprised, but I was struck by the heavy use of terms like “data-driven” and “research-based” (e.g., data-driven strategies for membership recruitment and research-based programming), and I was stymied in trying to determine whether these sessions were relevant to CARE—what data are driving the decisions, and are they really of interest to CARE members?

Certainly I am not dismissive of research or data that isn’t “original.” There are many definitions of research and data that are applicable to certain scenarios and within certain fields. For instance, arts-based research is a completely valid field of research within art education when conducted well. However, I am biased toward collecting original data from visitors first-hand, which is why terminology like “data-driven” and “research-based” makes my ears perk up—these words prompt many questions for me about the type of data and research and its appropriateness to inform said decisions and practices. Through our work at RK&A, we truly want practitioners to make decisions that are data-driven; that is the greatest outcome of our work! However, we also want our clients to be such skilled users and consumers of data and evaluation that their ears perk up at the very mention of “data”—because hopefully they, too, have become savvy digesters of the language, as well as the meaning behind the data, when talking about research and evaluation.

Check out our Buzzword Bingo below inspired by Dilbert: http://dilbert.com/strips/comic/2010-10-25/  Warning: this Bingo is informed by RK&A’s professional experience and is not based on original data.  Maybe with the help of our museum colleagues, we can make it “research-based.”  Please share your buzzwords!


The case study below is from a summative evaluation RK&A completed for the Conservation Trust of Puerto Rico.  Based in Manati, Puerto Rico, the Conservation Trust runs a Citizen Science program for local residents.

Citizen Science [2010]

A summative program evaluation with a nature conservancy

The Conservation Trust of Puerto Rico collaborated with RK&A to study the impact of its Citizen Science program, an NSF-funded project designed to involve local Spanish-speaking citizens in scientific research that contributes to growing knowledge about the Trust’s biodiversity and land management efforts. The Citizen Science program underwent formative evaluation in 2009 and summative evaluation in 2010. Summative evaluation is discussed here.

How did we approach this study?

The summative evaluation was guided by four impacts developed using NSF’s Framework for Evaluating Impacts of Informal Science Education Projects. These included that participants will: use and understand the scientific method; experience and understand the purpose of scientific rigor; develop a sense of ownership for the Reserve; and realize that the research in which they participate has wide application to decisions made about conserving the Reserve’s flora and fauna. To explore these impacts, RK&A collected 343 standardized questionnaires, conducted 39 in-depth interviews, and conducted three case studies with participants who had a high level of program involvement.

What did we learn?

In all areas where the Trust hoped to achieve impact with participants, gains were made. Findings show that participants self-reported moderate gains in their knowledge and awareness of flora and fauna and scientific processes; interestingly, those who participated in programs with live animal interaction self-reported greater gains. Some also acknowledged attitude and behavior changes as a result of program participation. Findings further demonstrate that a majority of participants felt the Reserve is relevant and valuable to them and Puerto Rico, honing and developing their sense of pride and ownership. Finally, some participants also recognized the application and value of the research in which they participated. Findings also raised some potential barriers to achieving impact, such as the average participant’s brief, often isolated exposure to a specific research project, as well as the fact that many participants entered the program with prior knowledge and interests that might limit the program’s potential to facilitate significant learning gains.

What are the implications of the findings?

A review of Citizen Science projects found that very few have formally assessed the impact of participants’ experiences.[1] This study sought to contribute to knowledge in this area by exploring participants’ experiences through the lens of the four program impacts mentioned above. Some findings are consistent with those of other Citizen Science studies, such as the fact that participants exhibited more gains in content knowledge than in process skills, and that many participants entered with prior interest in and knowledge of science and conservation. Other findings suggest that animal interaction and small group size positively influenced participants’ experiences and perceptions of learning. Collectively, findings suggest implications for program design, including the importance of bridging participants’ experiences so they envision their contribution as part of a greater goal.

[1] Bonney, R., Ballard, H., Jordan, R., McCallie, E., Phillips, T., Shirk, J., & Wilderman, C. C. (2009a). Public participation in scientific research: Defining the field and assessing its potential for informal science education. A CAISE inquiry group report. Washington, D.C.: Center for Advancement of Informal Science Education (CAISE).

A recent Telegraph article announced that the chairman of Arts Council England thinks there should be a one-hour photo ban (on selfies in particular) in art galleries. My first reaction was: “This is an interesting and absolutely horrible idea.” I see how a photo ban could be conceived as a strategy to enhance the visitor experience—I have certainly muttered to myself in annoyance when there are so many people taking photos of an artwork that I can’t get close enough to see it (or if I feel brazen enough to make my way to the front so I can see it, I feel bad for ruining everyone’s photo). However, if this one-hour photo ban were to go through, I see it creating a lot more negative visitor experiences than positive ones when you put yourself in the shoes of the security guard—the person who has to enforce the rule.

Let me step back a moment and say that I owe my current professional career to my work as a security guard. In addition to many other roles as an intern at the Peggy Guggenheim Collection, I guarded galleries and certainly learned a lot about visitor experiences. As someone who wanted to work in a museum, I found that I, as a security guard, had the power to either make or break the quality of a visitor’s experience. Tell visitors about Peggy’s many artist lovers while standing in her bedroom—make their visit and even their day. Ask a visitor to leave her bag in a locker or coatroom—incite anger to the point of someone asking for a refund and never setting foot in the museum. It was a humbling experience to say the least and completely transformed my thinking about the work I wanted to do for museums.

Now jumping back to the policy at hand…when reading the article, I first imagined how this would work on the ground. I immediately empathized with the poor security guards who would have to enforce this policy (as did a Hyperallergic author who commented on this policy). Yes, perhaps signage would alert visitors to the ban, but from evaluation we know that it would go largely unnoticed. Therefore, my prediction is that the first awareness a visitor would have of the policy is when a security guard tells him or her not to take a photo. No matter how friendly a security guard may be, being told not to do something can create an embarrassing situation. How the visitor then reacts to feeling embarrassed is another story. Does he argue with the guard? Just continue to take pictures anyway? Does he internalize it and feel awful for the rest of the day? Any way it plays out will generally result in a negative experience for the visitor as well as those around him.

The chairman’s comparison of this no-photo policy to the “quiet car” is a perfect analogy in my opinion. As a frequent train rider, I love the concept of the quiet car, and I choose to sit in it more often than not. It works well when everyone knows they are in the quiet car. The trouble is, typically, there is one person who doesn’t know, which casts a negative pall on everyone else’s experience in the quiet car. Most quiet cars have a sign, sometimes the lights are dimmer than in other cars, and sometimes the conductor will announce which car is the quiet car. Perfect—except non-regular riders do not notice the signs or subtle lighting cues, or are not aware of what car they are in (am I in the first car?). Therefore, when the unknowing person is confronted by a fellow quiet-car rider or conductor about a rule-breaking cell phone conversation, the ensuing interaction often doesn’t go well. I have seen and heard about everything—from a New Jersey Transit rider being escorted off the train by police after starting a fight with a confronting rider, to an Amtrak rider construing the conductor’s request as him telling her she “has a big mouth.” For these reasons, I find myself avoiding the quiet car lately because I end up being more frustrated than relieved and feeling more negative than positive. From what I have seen as a security guard, evaluator, and expert train rider, more negative than positive visitor experiences might result from this potential photo-ban policy.

Photo from the Fortune article, “The Cult of the Amtrak Quiet Car,” an interesting read for both quiet car devotees and those completely unaware: http://fortune.com/2014/09/17/amtrak-quiet-car/

 

The case study below is from a project RK&A did with the Museum of Science and Industry in Chicago, IL, and highlights the importance of iterative testing.

Future Energy Chicago Simulation [2013]

An evaluation of a multimedia simulation for a science museum

Between 2012 and 2013, RK&A conducted four rounds of formative evaluation of the Future Energy Chicago simulation for the Museum of Science and Industry in collaboration with the design firm Potion. In the simulation, up to five teams compete against each other in five games: Future House, Future Car, Future Neighborhood, Future Power, and Future Transportation. In the games, players have to make decisions that challenge them to think about energy production and usage, and they receive badges as rewards for selecting energy-efficient choices.

How did we approach this study?

RK&A included primarily middle school youth (home school groups, etc.) in testing, as they are the target audience for Future Energy Chicago. Each round of evaluation explored unique issues relevant to a particular design phase. In the first round of evaluation, RK&A tested three-dimensional paper prototypes of each game to explore middle school youth’s understanding of the concepts presented. In the next two rounds (alpha and alpha prime), RK&A tested the games on touch-screen monitors to explore each game’s functionality as well as youth’s motivations and learning, including a badge system aimed at rewarding youth’s energy-efficient choices. In the last round of evaluation, RK&A tested the games using a combination of multi-touch and projection technology that closely mirrored the final simulation environment. For each round of evaluation, RK&A staff conducted observations and interviews with middle school youth who played the games.

What did we learn?

Each round of evaluation revealed successes and challenges of the Future Energy Chicago games that MSI staff and Potion designers used to improve the games’ functionality and messaging. Throughout testing, findings revealed three key characteristics of the game that were compelling to middle school youth—variety of energy choices, opportunities to design aspects of their energy environment, and challenging energy problems to solve. Findings also revealed that youth’s prior knowledge and experiences with energy choices highly influenced the choices they made and the messages they took away from each game. A consistent challenge throughout testing was helping youth understand the idea of trade-offs in energy choices (comfort or cost versus saving energy). A badge system was implemented to address this issue, as well as to incentivize youth to select energy-efficient choices.

What are the implications?

This study underscores the importance of iterative testing when evaluating a complex digital learning environment. Not only did MSI staff and Potion designers need to understand barriers to effectively using the games, including the intuitiveness of the technology, but the Museum also needed to understand what about the simulation motivated youth’s game play and effectively empowered them to make smart energy choices as the future residents of Chicago. Further, RK&A facilitated reflective discussions between rounds of testing that enabled MSI staff and designers to apply the study findings and recommendations to the next round of testing, ultimately improving the overall functionality and effectiveness of Future Energy Chicago.

The most challenging evaluation report I’ve written consisted of 17 PowerPoint slides. The slides didn’t pull the most salient findings from a larger report; the slides were the report! I remember how difficult it was to start out with the idea of writing less from qualitative data. While I had to present major trends, I feared the format might rob the data of its nuance (17 PowerPoint slides obviously require brevity). The process was challenging and at times frustrating, but in the end I felt extremely gratified. Not only was the report thorough, it was exactly what the client wanted, and it was usable.

As evaluators, we walk the line between social science research, application, and usability. As researchers, we must analyze and present the data as they appear. Sometimes, especially in the case of qualitative reports, this can lead to an overwhelming amount of dense narrative. This reporting style is accepted in evaluation practice and is our default. Given the number of reports we write each year, having guidelines is efficient and freeing. We can focus on the analysis, giving us plenty of time to get to know and understand the data, to tease out the wonderful complexity that comes from open-ended interviews. As researchers, the presentation takes a backseat to analysis and digging into data.

However, most of the time we are not writing a report that will be shared with other researchers; it is a document that will be read by users, namely museum staff who may share the findings with other staff or the board. Overwhelming amounts of dense narrative may not be useful; not because our audience can’t understand it, but because often the meaning is packed and needs to be untangled. I would guess what clients want and need is something they can refer to repeatedly, something they can look at to remind themselves, “Visitors aren’t interested in reading long labels,” or “Visitors enjoy interactive exhibits.” As researchers, presentation may be secondary, but as evaluators, presentation must be a primary consideration.

As my experience with the PowerPoint report (and many other reports since then) taught me, it can be tough to stray from a well-intentioned template. A shorter report or a more visual report doesn’t take less time to analyze or less time to write. In fact, writing a short report takes more time because I have to eliminate the dense narrative and find the essence, as I might with a long report. I also have to change the way I think about presentation. I have to think about presentation!

At RK&A, we like to look at our report template to see what we can do to improve it – new ways to highlight key findings or call out visitor quotations. Not all of our ideas work out in the long run, but it is good to think about different ways to present information. At the end of the day, though, what our report looks like for any given project comes from a client’s needs—and not from professional standards. And I learned that when I wrote those 17 PowerPoint slides!

Emily’s last blog post (read it here) talked about when evaluation capacity building is the right choice.  When we think about building capacity for evaluation, we think about intentional practice.  This does not necessarily involve teaching people to conduct evaluation themselves, but helping people to ask the right questions and talk with the right people as they approach their work.  RK&A has found this to be particularly important in the planning phases of projects.

The case study below is from a project RK&A did with the Museum of Nature and Science in Dallas, TX (now the Perot Museum of Nature and Science) and involved an interdisciplinary group of museum staff thinking intentionally about the impact the Museum hoped to have on the community.  With a new building scheduled to open a year after this project took place, it was a wonderful time to think intentionally about the Museum’s impact.

Building Capacity to Evaluate [2012]

An evaluation planning project with a nature and science museum

The Museum of Nature and Science (MNS) hired RK&A to develop an evaluation plan and build capacity to conduct evaluation in anticipation of the Museum’s new building scheduled to open in 2013.

How did we approach the project?

The evaluation planning project comprised a series of sequential steps, from strategic to tactical, working with an interdisciplinary group of staff from across the Museum. The process began by clarifying the Museum’s intended impact, which articulates the intended result of the Museum’s work and provides a guidepost for MNS’s evaluation: Our community will personally connect science to their daily lives. Focusing on the Museum’s four primary audiences (adults, families, students, and educators), staff developed intended outcomes that serve as building blocks to impact and as gauges for measurement. Next, RK&A worked with staff to develop an evaluation plan that identifies the Museum’s evaluation priorities over the next four years, supporting the purpose of evaluation at MNS: to measure impact, understand audiences’ needs, gauge progress on the strategic plan, and inform decision making.

The final project step focused on building capacity among staff to conduct evaluation. Based on in-depth discussions with staff, RK&A developed three data collection instruments (an adult program questionnaire, a family observation guide, and a family short-answer interview guide) to empower staff to begin evaluating the Museum’s programs. Then, several staff members were trained to systematically collect data using the customized evaluation tools.

What did we learn?

The process of building a museum’s capacity to conduct evaluation highlights an important consideration: evaluating the museum’s work has become more pressing given accountability demands in the external environment. Stakeholders increasingly ask, How is the museum’s work affecting its audiences? What difference is the museum making in the quality of people’s lives?

Conducting systematic evaluation and implementing a learning approach to evaluation, however, require additional staff time, which is a challenge for most museums. MNS staff recognized the need to create a realistic evaluation plan given competing demands on staff’s time. For example, the evaluation plan balances conducting evaluation internally, partnering with other organizations, and outsourcing to other service providers. Also, the plan implements the Museum’s evaluation initiatives incrementally over time. The Museum will begin with small steps in its efforts to effect great change.

As evaluators, we are often asked to help our clients build evaluation capacity among staff in their organization. The motivation for these requests varies. Sometimes the primary motivator is professional development; other times it is perceived cost savings (since conducting professional evaluations can require resources that not all organizations have at their disposal). We welcome it when an organization values evaluation enough to inquire about how to integrate it more fully into its staff’s daily work. If an organization has a true interest in using evaluation as a tool to learn about the relationship between its work and the public, building the organization’s evaluation capacity may be quite successful. On the other hand, if the primary motivator is to save costs associated with evaluation, the outcome is often much less successful, mostly because evaluation takes considerable time and invariably there is a trade-off: when the evaluation is being done, something else is being ignored.

Evaluation capacity building can take a variety of forms. It can range from building staff’s capacity to think like an evaluator, perhaps by helping staff learn how to articulate a project’s desired outcomes (I think this is the most valuable evaluation planning skill one can learn), to training staff to conduct an evaluation from beginning to end (identifying outcomes, creating an evaluation plan, designing instruments, collecting data, conducting analyses, and reporting findings). Even among the most interested parties, it is rare to find museum practitioners who are genuinely interested in all aspects of evaluation. As an evaluator, even I find certain aspects of evaluation more enjoyable than others. I’ve noticed that while practitioners may be initially intimidated by the data collection process, they often find talking with visitors rewarding and informative. On the other hand, they have much less enthusiasm for data analysis and reporting; I’ve only encountered a handful of museum practitioners who enjoy poring over pages and pages of interview transcripts. We lovingly refer to these individuals as “data nerds” and proudly count ourselves among them.

There is yet another challenge, and it has to do with the fact that most museum practitioners are required to wear many hats. Conducting evaluations is my one and only job; it is what I am trained to do and have intentionally chosen for my vocation. While a practitioner may be intrigued by what evaluation can offer, often it is not the job he or she was trained or hired to do, which means that evaluation can become a burden—just one more hat for a practitioner to wear. Some organizations have addressed their evaluation needs by creating a position for an in-house evaluator, and the individual who fills that position is usually someone who is schooled in evaluation and research methodologies, much like all of us here at RK&A. I would caution organizations to be very realistic when considering building their organization’s evaluation capacity. Does your staff have the time and skills to conduct a thoughtful study and follow through with analysis and reporting? What responsibilities is your organization willing to put aside to make time for the evaluation? And, do you want your staff to think like evaluators or become evaluators?—an important distinction, indeed. Otherwise, even those with the best of intentions may find themselves buried in mountains of data. Worse yet is that what was once an exciting proposition may be perceived as an annoyance in the end.
