In Reflection #3, Emily Skidmore talked about how you can’t rush measuring outcomes and advocated for slowing down and conducting front-end and formative evaluation to improve exhibitions, programs, and experiences prior to jumping into measuring outcomes. I’d like to piggy-back on the slow movement and talk about Institutional Review Board (IRB) and school district review, which is Slow with a capital ‘S’—for better or worse.
IRB is a formally designated board that reviews social science, biomedical, and behavioral research to determine whether the benefits of the research outweigh the risks for the participants in the study. To be blunt, IRB can be a real thorn in our side. It requires extensive, tedious paperwork for something we may consider innocuous (e.g., interviewing teachers about their program experience). Given the many forms and thorough explanations of research procedures required, we spend a lot of time preparing for IRB, and then there is the fee to the external IRB to review the paperwork and methodology. In addition to the budgetary implications IRB has for our clients, IRB procedures also can significantly delay the research well past when the museum may have expected its research to take place. Not all of our work requires IRB review, but generally, most research projects where we measure outcomes do.
When our work includes collecting data from students and teachers, we sometimes have to submit our protocols to school districts for formal review too. School district review is separate from IRB review, although a school district’s criteria for reviewing research protocols are normally akin to IRB criteria. Nevertheless, it is yet another required process that can really put the brakes on a project. For instance, one school district took five months to review our project—much to the chagrin of our client and its funder (understandably so).
At times, IRB and school district reviews can feel like ridiculous hoops that we have to jump through, or bureaucracy at its worst. As Don Mayne’s cartoon portrays, sometimes the IRB feels like a bunch of nitpicky people who exist solely to make our lives more difficult when we and our museum clients simply want to improve experiences for museum visitors. So as I justify our sampling procedures for the fifteenth time in the required paperwork, I may shake my head and curse under my breath, but I truly do appreciate the work that IRB and school districts do (I swear there aren’t IRB reviewers holding my feet to the fire as I type!).
When I take a moment and step back, I realize that the process of submitting to IRB forces me to think through all the nitty-gritty details of the research process, which ultimately improves the research and protects museum visitors as research participants. When any given IRB makes extreme assumptions about our research (no, I will not be injecting anyone with an unknown substance), I try not to take them personally and simply respond as clearly and concisely as possible. And I have gotten pretty good at navigating the system at this point. Then, I hold our client’s hand, try to protect them from as much of the paperwork and tedium as I can, and tell them, ever so gently, that their research may be delayed.
Posted in Uncategorized | Tagged 25 years, children, IRB, museum, research, school | Leave a Comment »
Intentional Museum is pleased to announce its first student blogging competition! In honor of RK&A’s 25th anniversary, we want to hear from tomorrow’s museum professionals about their intentional practice and the impact it can have on the visitor experience.
For our first blogging competition, we ask you to reflect on the following question: Through your intentional practice, how do you help museums enrich the lives of others? Perhaps you are focusing on collections or exhibitions, using objects and artifacts to tell stories. Maybe your love is museum education or visitor services, ensuring visitors have positive museum experiences. Whether front-of-house or behind-the-scenes, we want to hear from you (check the guidelines for more information)!
We often reflect on our professional experiences on Intentional Museum, but we appreciate the personal connection. We want your blogs to tell a story, to speak about your experience, and to highlight your unique insight into the museum field.
- Bloggers must be currently enrolled in a museum studies, museum education, library science, or similar degree or certificate program. Undergraduate and graduate students are welcome to apply.
- Blogs should be no longer than 500 words and written in a conversational style. Avoid jargon and academic language to ensure clarity.
- You are welcome to share how the work of others has influenced your practice, but this isn’t required. If you include quotes, be sure to include citations.
- We have no idea what the winning blogs will look like – if you look through our past posts, you will see that we tell stories, share academic insights, and are sometimes funny. We want to hear your story, so let your passion show.
- Check your work carefully for spelling and accuracy. While no one is perfect, winning blogs will be error-free.
- Email your entry to firstname.lastname@example.org by 5:00pm (EST), Friday, April 4, 2014.
RK&A staff will review all entries, pick the top 3 and publish them on the Intentional Museum blog. Winners will be notified and announced at the end of April. Winning blog posts will be shared with our readers in May and June 2014. Winners will also receive a copy of one of our favorite museum books, Stephen Weil’s Making Museums Matter, with a personalized note from Randi.
How to Enter:
- One (1) entry per blogger, please.
- Send your blog as a Word document attached to an email.
- Include your name, school, degree program and expected graduation date in the body of the email, with the subject line “Intentional Museum Blog Competition.”
- Please do not include your name/identifying information as a header to your blog entry. Each entry will be assigned a number to ensure unbiased review.
- Email your entry to email@example.com by 5:00pm (EST), Friday, April 4, 2014.
Other Important Information:
- RK&A reserves the right to edit winning blog entries for content and length.
- Winners will be notified via email and will have 48 hours to respond with their contact information for book delivery.
- Books will only be mailed to those in the United States and will be sent via the US Postal Service no later than May 30, 2014.
- If a winner does not respond in the allotted timeframe, an alternate winner will take his/her place.
- Winners will be asked to submit a picture of themselves for publication with their blog.
Still have questions? Contact us at firstname.lastname@example.org, or post a comment in response to this post on our blog!
Posted in Uncategorized
Welcome to our new Throwback Thursday series, where we take a moment to look back at projects from our archives. Today we’ll be sharing a case study about our planning and evaluation work with the Science Museum of Virginia and their Sphere Corps Program. You might recall this particular Science On a Sphere program from one of our prior posts, Learning to Embrace Failure, and today we’ll share a bit more about how we approached the study, what we learned, and the implications of those findings.
Sphere Corps Program 
For this planning and evaluation project with the Science Museum of Virginia (SMV), RK&A evaluated Sphere Corps, a Science on a Sphere program about climate change developed by SMV with funding from the National Oceanic and Atmospheric Administration (NOAA).
How did we approach this study?
The study was designed around RK&A’s belief that organizations must be intentional in their practice by continually clarifying purpose, aligning practices and resources to achieve purpose, measuring outcomes, and learning from practice to strengthen ongoing planning and actions. To this end, the Sphere Corps project included five phases of work—a literature review, a workshop to define intended program outcomes, two rounds of formative evaluation, and two reflection workshops. Formative evaluation data were collected using naturalistic observations and in-depth interviews. Each phase of work allowed staff to explore their vision for the Sphere Corps program and how it changed over time as they learned from and reflected on evaluation findings.
What did we learn?
SMV staff’s goal was to create a facilitated, inquiry-based Science on a Sphere program about climate change. RK&A first completed a literature review that revealed a facilitated Sphere experience was in keeping with best practices and that using inquiry methods in a 20-minute program would be challenging but worth exploring further. Staff then brainstormed and honed the outcomes they hoped to achieve in Sphere Corps, which guided planning and script development. The first round of formative evaluation identified implementation barriers and an overabundance of iClicker questions, all of which created a challenging environment for educators to effectively use inquiry. Upon reflection, staff reduced the number of iClicker questions and added visualizations and questions that required close observation of the Sphere. Following a second round of formative evaluation, staff made additional changes to the program script and began to reflect on the reality of using inquiry in a single 20-minute program. Since the script covered a range of topics related to climate change, staff wondered if they should instead go deeper with one topic while encouraging more visitor observation and interpretation of Sphere data. Out of this discussion arose the idea of “mini-programs”—a series of programs that would focus on communicating one key idea about climate change, such as helping people understand the difference between weather and climate.
What are the implications of the findings?
Central to the idea of the “mini-program” is the idea of doing less to achieve more. Impact and outcomes are incredibly difficult to achieve and trying to achieve too much often results in accomplishing very little. Through a reflection workshop and staff discussion, the SMV team was able to prioritize and streamline the outcomes and indicators originally written for the Sphere Corps program. Staff also recognized that their primary goal with the Sphere Corps program is to encourage visitors to think more critically about the science behind climate change. By scaling down the number of topics covered in the presentation, each program could intentionally focus on: (1) one key idea or question related to climate change; (2) achievement of only a few intended outcomes; and (3) implementation of specific facilitation strategies to achieve those outcomes. Intentionally covering less content also opens up opportunities to more effectively use inquiry methods.
Posted in Uncategorized | Tagged align, audience, evaluation, failure, formative, impact, indicators, intentional, outcome, planning, reflect, science, Science Museum of Virginia, science on a sphere, Throwback Thursday, workshop
So often we evaluators are asked to measure outcomes or results, which of course align with our expectations. When we conduct an evaluation and the results are positive, an organization can wave its flag; and ideally the whole museum field benefits from learning why a particular exhibition or program is so successful at achieving its outcomes. During my time as an evaluator, I have learned that there is enormous value in walking before running. Because measuring results sounds compelling to museums and their funders, museums often jump over important evaluation processes and rush into measuring results. Accordingly, staff, in a moment of passion, forgo front-end and formative evaluation—those early stages of concept testing, prototyping, and piloting a program—that help staff understand the gaps between the intended outcomes for their audience and the successes and challenges of implementing a new project.
So, when we are asked to measure results, we always ask the client if the project has ever been evaluated. Even then, we may pull back on the reins to help slow our clients down enough to consider the benefits of first understanding what is and is not working about a particular program or exhibition. More often than not, slowing down and using front-end and formative evaluation to improve the visitor experience increases the likelihood that staff will be rewarded with positive results when they measure outcomes later. In fact, when an organization’s evaluation resources are limited, we often advocate for conducting a front-end and/or formative evaluation because we believe that is where all of us will learn the most. It is human nature to want to jump right into the good stuff and eat our dessert first. We, too, get excited by our clients’ passion and have to remind ourselves of the value of taking baby steps. So, one of the many lessons I’ve learned (and am still learning) is that when it comes to evaluation, encouraging practitioners to walk before they run (or test before they measure) is key to a successful project and their own personal learning.
Posted in Uncategorized | Tagged 25 years, audience, evaluation, formative, front-end, impact, learning, organizational learning, outcome, passion, results, value
RK&A’s Stephanie Downey will moderate a session, Getting the Most from Evaluation, for the New York City Museum Educators Roundtable on Wednesday, February 12th at 6:00 pm. Panelists from The Bruce Museum of Arts and Science, The Wildlife Conservation Society, and The Metropolitan Museum of Art will join her to share their lessons learned and best practices from evaluation projects.
For more information, please click HERE.
Posted in Uncategorized
Sometimes when learning surfaces slowly, it is barely visible, until one day the world looks different. Responding to that difference is the first layer of that complex process often labeled as learning. The Cycle of Intentional Practice was a long time coming—emerging from many years of conducting evaluations, where I worked closely with museum staff and leadership as well as with visitors. The Cycle of Intentional Practice is an illustration of an ideal work cycle that started to form when I was writing “A Case for Holistic Intentionality”. I am visually oriented and often have to draw my ideas before I write about them; in this case, I was writing about my ideas and then felt the need to create a visualization to depict what I was thinking—in part to help me understand what I was thinking, but also to help others. I included the first iteration of the cycle in the manuscript I submitted to Curator, but the editor said the journal does not usually publish that kind of illustration, so I put it aside.
That original cycle differs from the one I use today—it was simpler (it included “Plan,” “Act,” and “Evaluate”), and while I didn’t know it at the time, it was a draft. There have been several more iterations over time (one was “Plan,” “Act,” and “Evaluate & Reflect,” for example); as I continue to learn and improve my practice, I change the cycle accordingly. Most stunning to me was that the first draft of the cycle showed nothing in the center—nothing! I feel a little embarrassed by my omission and I am not entirely sure what I was thinking at the time, but I hope my oversight was short-lived. At some point I placed the word “Intentions” in the center, and as I clarified my ideas, with the hope of applying the cycle to our evaluation and planning work, I eventually replaced “intentions” with “impact.” I recall how difficult it was to explain the concept of “intentions,” so I eventually needed to remove the word from the center (as much as I loved having it there). If my goal was to have museums apply the cycle to their daily and strategic work, the cycle needed to represent an idea people found comfortable and doable. Soon I realized that intentionality was the larger concept of the cycle and what needed to be placed in the center was the result of a museum’s work on its publics—impact. So was born our intentionality work with museums. Then I realized the true power of intentionality—mission could go in the center as well as outcomes, or anything for that matter. The artist’s rendition below demonstrates the versatility of intentionality as a concept.
An artistic rendering of the Cycle of Intentional Practice by artist Andrea Herrick
What I find most amazing is that two crucial ideas—reflection and impact—were not present in the first iterations of the cycle, although they were discussed when I talked about intentionality. Our intentional planning work (which we refer to as impact planning) would be rudderless without the presence of impact and our ability to learn from our work would be weakened without reflection. And that brings me to another realization, which I am reminded of daily—the never-ending pursuit of achieving clarity of thought, followed by writing a clear expression of that thought.
Today I talk about the Cycle of Intentional Practice as a draft—it will always be on the verge of becoming. These days I am more comfortable with the idea of the Cycle being a draft—an idea in process—than I was a decade ago. In fact, I have come to realize that all work is a draft: if one is serious about learning and applying new ideas to work and life, then all ideas, all products, all knowledge are mere drafts, because learning is continuous, right?
Humbling? Yes indeed.
Posted in Uncategorized | Tagged 25 years, act, align, Cycle of Intentional Practice, evaluate, evaluation, impact, intentionality, learning, plan, planning, reflect
Over the years there have been pivotal moments in which we at RK&A tried something out-of-the-ordinary to meet the needs of a particular project that then became a staple in how we do things. It wasn’t always clear at the time that these were “pivotal moments,” but in retrospect I can see that these were times of concentrated learning and change. For me, one of these pivotal moments was the first time we used rubrics as an assessment tool for a museum-based project. I had been introduced to rubrics in my previous position, where I conducted educational research in the public school system, which sometimes included student assessment. Rubrics are common practice among classroom teachers and educators who are required to assess individual student performance.
Rubrics had immediate appeal to me because they utilize qualitative research methods (like in-depth interviews, written essays, or naturalistic observations) to assess outcomes in a way that remains authentic to complicated, nuanced learning experiences while, at the same time, being rigorous and responsive to the need to measure and quantify outcomes, an increasing demand from funders. They are also appealing because they respect the complexity of learning—we know from research and evaluation that the impact of a learning experience may vary considerably from person to person. These often very subtle differences in impact can be difficult to detect and measure.
To illustrate what a rubric is, I have an example below from the Museum of the City of New York, where we evaluated the effect of one of its field trip programs on fourth-grade students (read the report HERE). As shown here, a rubric is a set of indicators linked to one outcome. It is used to assess a performance of knowledge, skills, attitudes, or behaviors—in this example, we were assessing “historical thinking,” more specifically students’ ability to recognize and not judge cultural differences. As you can see, rubrics include a continuum of understandings (or skills, attitudes, or behaviors) ranging from 1 (“below beginning understanding”) to 4 (“accomplished understanding”). The continuum captures the gradual, nuanced differences one might expect to see.
The first time we used rubrics was about 10 years ago, when we worked with the Solomon R. Guggenheim Museum, which had just been awarded a large research grant from the U.S. Department of Education to study the effects of its long-standing Learning Through Art program on third grade students’ literacy skills. This was a high-stakes project, and we needed to provide measurable, reliable findings to demonstrate complex outcomes, like “hypothesizing,” “evidential reasoning,” and “schema building.” I immediately thought of using rubrics, especially since my past experience had been with elementary school students. Working with an advisory team, we developed the rubrics for a number of literacy-based skills, as shown in the example below (and note the three-point scale in this example as opposed to the four-point scale above—the evolution in our use of rubrics included the realization that a four-point scale allows us to be more exact in our measurement). To detect these skills we conducted one-on-one standardized, but open-ended, interviews with over 400 students, transcribed the interviews, and scored them using the rubrics. We were then able to quantify the qualitative data and run statistics. Because rubrics are precise, specific, and standardized, they allowed us to detect differences between treatment and control groups—differences that may have gone undetected otherwise—and to feel confident about the results. For results, you can find the full report HERE.
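The workflow described above—assigning each transcript a rubric score and then comparing groups statistically—can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the rubric level descriptions, the scores, and the group sizes are all invented for the example, not data or instruments from the studies discussed here.

```python
from statistics import mean

# A hypothetical four-point rubric for a single outcome; the middle level
# descriptions are illustrative placeholders, not RK&A's actual wording.
RUBRIC = {
    1: "below beginning understanding",
    2: "beginning understanding",
    3: "developing understanding",
    4: "accomplished understanding",
}

def summarize(scores):
    """Summarize rubric scores (1-4) assigned to interview transcripts."""
    if not all(s in RUBRIC for s in scores):
        raise ValueError("every score must be a level on the rubric scale")
    return {"n": len(scores), "mean": mean(scores)}

# Hypothetical scores from trained raters for treatment (program
# participants) and control (non-participant) groups.
treatment = summarize([4, 3, 3, 4, 2, 3])
control = summarize([2, 3, 1, 2, 2, 3])

# Quantifying the qualitative data lets us compare the groups directly.
difference = treatment["mean"] - control["mean"]
print(f"Mean rubric score difference (treatment - control): {difference:.2f}")
```

In practice, each transcript is scored by trained raters against the rubric’s indicator descriptions; the numeric scale is simply what makes the qualitative data amenable to statistical comparison between treatment and control groups.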
Fast forward ten years to today and we use rubrics regularly for summative evaluation, especially when measuring the achievement of complicated and complex learning outcomes. So far, the two examples I’ve mentioned involved students participating in a facilitated program, but we also use rubrics, when appropriate, for regular walk-in visitors to exhibitions. For instance, we used rubrics for two NSF-funded exhibitions, one about the social construction of race (read report HERE) and another about conservation efforts for protecting animals of Madagascar (read report HERE). Rubrics were warranted in both cases—both required a rigorous summative evaluation, and both intended for visitors to learn complicated and emotionally-charged concepts and ideas.
While rubrics were not new to me 10 years ago (and certainly not new in the world of assessment and evaluation), they were new for us at RK&A. What started out as a necessity for the Guggenheim project has become common practice for us. Our use of rubrics has informed the way we approach and think about evaluation and furthered our understanding of the way people learn in museums. This is just one example of the way we continually learn at RK&A.
Posted in Uncategorized | Tagged 25 years, change, control / treatment groups, evaluation, funder, grant, impact, indicators, learning, Museum of the City of New York, outcome, qualitative, research, rubrics, school, Solomon R. Guggenheim Museum