Posts Tagged ‘intentionality’

At the start of this year, I started writing about the principles of intentional practice, and to date, I have shared three principles (#1, #2, and #3).  For this post, I feature the next two principles of intentional practice, and I present them together because they are both critically important for achieving the museum’s intended impact, and yet, they are very different in character.

#4. Staff know the impact the museum hopes to achieve on audiences served

Principle #4, “Staff know the impact the museum hopes to achieve on audiences served,” may seem like an unnecessary principle to state; after all, staff participated in the crafting of the impact statement, and certainly they know the museum’s heartfelt intentions.  Yet stating the obvious reinforces the important role staff have in the museum’s intentional practice, and omitting it as a principle would be a serious oversight.  To “know” is not taken lightly among museum professionals.  In the context of intentional practice, to “know” leads staff to internalize the impact the museum hopes to achieve, and such knowing enables staff to carry out its work. Oddly, the principle also feels static, which is the antithesis of how work tends to happen in museums—where there is always an abundance of activity.  However static the statement feels, impact statements are never still, and neither is staff’s knowledge.

#5. Staff align its work to achieve its intended impact

Knowing the intended impact of the museum on audiences should affect and determine the work that staff do; however, realizing what one could do and carrying out those actions are two very different things.  Innumerable tensions unfold as museums pursue the fifth principle: “Staff align its work to achieve its intended impact.” Alignment is about exploring whether a museum’s processes and products can deliver the museum’s intended impact within the resources the museum has to expend (staff and dollars), and within that, alignment can also be about course-correcting work to strengthen the fit between a program and the museum’s intended impact.  Taken a step further, alignment can also become part of a strategy for reducing a museum’s workload if the museum is doing too much, as is often the case.

How can a museum use alignment to reduce its workload?  One approach might be to analyze various programs from two perspectives: 1) a program’s ability to achieve the museum’s intended impact; and 2) the amount of resources required to implement the program—in terms of staff time and dollars.



If, through discussion, staff ascertain that a program has relatively low impact (compared to other programs) and requires considerable resources, does it make sense to continue the program?  There are two options: a) the museum can change the program to strengthen its alignment with the museum’s intended impact and reduce its cost; or b) the museum can stop doing the program altogether, freeing up resources it could put to better use.  However, some museum programs are sacred cows, such as a holiday program or other long-running public programs.  Sometimes programs become tradition, and those are often the most vulnerable under this kind of scrutiny, in part because they were created before the museum began pursuing impact-driven planning.  Some programs continue year after year simply because the museum has always done them—and sometimes for no other reason.
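The two-perspective analysis above can be sketched as a simple classifier. This is a hypothetical illustration, not a tool from the post: it scores each program on impact and on resource cost and assigns it a tentative quadrant recommendation. The program names, the 1-to-5 scales, and the cutoffs are all invented.

```python
# Hypothetical sketch of the alignment grid: each program is placed on the
# grid by its impact score and its resource cost (both invented 1-5 scales).

def quadrant(impact, cost, impact_cutoff=3, cost_cutoff=3):
    """Return a tentative recommendation for a program on the grid."""
    if impact >= impact_cutoff and cost < cost_cutoff:
        return "keep: high impact, low cost"
    if impact >= impact_cutoff:
        return "keep, but seek efficiencies: high impact, high cost"
    if cost < cost_cutoff:
        return "improve alignment: low impact, low cost"
    return "candidate to change or stop: low impact, high cost"

# Invented example programs: (impact score, resource cost).
programs = {
    "Holiday program": (2, 5),
    "School tours": (5, 2),
    "Lecture series": (2, 2),
}

for name, (impact, cost) in programs.items():
    print(f"{name}: {quadrant(impact, cost)}")
```

Plotting the same (impact, cost) pairs on a scatter chart would reproduce the grid that staff can discuss together; the point, as the post argues, is the conversation rather than the placement.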

When a museum chooses to engage in impact-driven planning, logic suggests that the museum wants to change in some way.  If nothing changes during or after the planning process, something is amiss.  While the thought of change is inspiring, change is extremely difficult to actualize.  For example, things could remain the same if someone takes offense when their program’s effectiveness is questioned, or if those sacred cows are left intact.  Alignment analyses are intended to be honest reflections on whether a program achieves the museum’s intended impact and uses resources responsibly.  Without honesty, change is elusive and alignment is futile.  Knowing that honest analysis and discussion are vital to alignment, convene with your colleagues to discuss your museum’s programs and plot each one on an impact-versus-resources grid to help you determine what your museum can improve or stop doing.  Regardless of where each program lands on the grid, the objective is to have the conversation—which is the beginning of the alignment process.


I recently had the pleasure of participating in an online forum called Interactive Café, sponsored by the National Art Education Association (NAEA) Research Commission.  For a week, I exchanged ideas virtually with my co-hosts, Olga Hubard of Teachers College Columbia University, Michelle Grohe of the Isabella Stewart Gardner Museum, and Benjamin Tellie of the Charles E. Smith Jewish Day School in Rockville, Maryland, about assessing students’ responses to works of art.  Olga began the forum by posing the provocative question, “What is worth assessing in students’ responses to works of art?”  For me, the answer lies in another question: “For what purpose are you assessing students?”  As a professional evaluator, I find the purpose is usually to help a museum understand the impact it has on the students it serves.  As Olga noted, there are many possible outcomes or benefits for students when they look at and respond to works of art, and it is my job to help a museum articulate its unique intentions for students.  Is the program designed to increase students’ critical thinking skills, curiosity or creativity, personal connections, or something else?  Once I truly understand a museum’s intent, the work of developing the assessment can begin.  In this post, I describe my work with one museum to illustrate intentionality in the process of developing a student assessment.

For the last eight months I have been working with the Katonah Museum of Art in Westchester County, New York, to assess the impact that the program ArteJuntos/ArtTogether has on the bilingual preschool students it serves.  The program is a partnership with a nearby preschool that serves immigrant families.  Staff from the Museum visit the children (and their parents) at their school once a week to look at, talk about (through inquiry), and make art; the program also includes two visits to the Museum and parent training (an important part of the program that I have to leave out here for the sake of brevity).  I feel honored to be working with such a unique program and with people who understand that quality assessment takes time.  Fortunately, a full year of assessment (and other program activities) was generously funded by the National Endowment for the Arts.  As mentioned previously, I began by asking the very basic question, “How is your program designed to affect students?” and we continued from there.  To illustrate the intentional approach we took to developing the assessment, below I outline and explain the steps we have taken thus far.

  • The Museum described its intent for students primarily around literacy, especially emergent literacy. Remember, these children are very young (on average 3 years old).  Museum staff believed (and had witnessed) that through facilitated, inquiry-based discussions about works of art, students have a unique opportunity to use and develop rich, descriptive language. Furthermore, Pre-K teachers had told them that students who participated in ArteJuntos in previous years seemed more verbal and better prepared for Pre-K.  The staff was eager to find out what was happening.
  • To hone and better understand the idea of literacy as it manifests in the context of ArteJuntos, we assembled a team of experts including Museum staff, teachers from the preschool, school administrators, and representatives from another community organization to talk about what literacy looks like for these young bilingual children and how the program affects their literacy. By the end of the day, I had a list of key indicators that would serve as evidence of students’ literacy in relation to looking at and talking about art.
  • I refined the list of student outcomes and indicators and drafted a rubric that would be used to assess students’ literacy. The draft rubric included relatively simple indicators such as “The child names shapes (triangles, circles) to describe the work of art” and “The child names colors to describe the work of art” as well as more complex indicators like “The child names two objects that are similar and/or two that are different and accurately describes similarities or differences.”  Museum staff, preschool teachers, and the principal reviewed the rubric and provided feedback.
  • I developed a protocol for the assessment.  In this protocol, each student sits one-on-one with a bilingual educator and a reproduction of a work of art (see the work of art by Carmen Lomas Garza below) and is asked a series of open-ended and close-ended questions that closely mirror the kinds of questions they are asked in the program.  For example, questions include: “What do you see in this picture?” “What can you tell me about that?” and “What colors can you find in this picture?”
“Oranges” by Carmen Lomas Garza

  • We tested the protocol by trying it out with some of the Museum staff’s children. As a result, we identified problem areas in the line of questioning and revised it as necessary.
  • In early fall, before ArteJuntos began, we did the first set of one-on-one student assessments with 12 children who would participate in the program that year.  These assessments serve as the pre-program assessment.  We videotaped the assessments, as shown here in this still clip from one of the videos:

Video still clip

  • Museum staff, preschool teachers, and I watched the videos together, discussing the emerging literacy we saw from the children. As a result of that meeting I revised the rubric. It remains focused on literacy, but now more closely aligns with what we saw happening among students.

We are about three-quarters of the way through the project.  This spring, once ArteJuntos ends, we will conduct a second round of assessments with the same children; these will serve as the post-program assessments.  At that point, we will score all of the videotaped assessments using the rubric, compare the pre- and post-program results, and draw some preliminary conclusions about the way ArteJuntos affects students’ literacy.  Our sample is small, and we recognize the problems inherent in pre/post comparisons with preschool children, given how rapidly they develop; nevertheless, because of our intentional process of developing the assessment tool, we feel confident that we will capture an accurate measure of students’ responses to works of art in the context of this unique program, and we hope the assessment can continue to be used in years to come.
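The scoring step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the project's actual instrument: the indicator names echo examples from the draft rubric, and the rating scale (0 = not observed, 1 = emerging, 2 = secure) and all scores are invented.

```python
# Hypothetical sketch of comparing pre- and post-program rubric scores
# for one child; indicator names and ratings are invented for illustration.

def score_change(pre, post):
    """Per-indicator change from the pre- to the post-program assessment."""
    return {indicator: post[indicator] - pre[indicator] for indicator in pre}

# Invented ratings for one child, scored from the videotaped assessments.
pre  = {"names shapes": 1, "names colors": 2, "describes similarities": 0}
post = {"names shapes": 2, "names colors": 2, "describes similarities": 1}

change = score_change(pre, post)
print(change)  # positive values suggest growth on that indicator
```

With only 12 children and no comparison group, such differences would be descriptive rather than conclusive, which matches the post's own caveat about rapid preschool development.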


With a new year upon us and all sorts of possibilities—most of them unknown at this time—our blog entries will take on a slightly different flavor. We intend to remain true to the name of this blog, the Intentional Museum, by presenting a monthly series on intentional practice in museums. Throughout this series, we’ll reflect on how we see intentional practice emerging in our work with clients, and we’ll interview people from a range of museums and areas of the profession about how they infuse intentional practice into their thinking, actions, and aspirations. Perhaps we will be reaching out to some of you!

As we embark on 2015, it might be useful to recall the meaning of intentionality. When my interest in intentionality surfaced, I did a little background research to learn about its origins. Little known to me at the time, the word “intentionality” has deep philosophical roots. Timothy Crane, a professor of philosophy at Cambridge who is best known for his work on intentionality, credits Franz Brentano with reintroducing the concept in 1874; the word derives from Medieval Latin. Brentano’s Thesis, as it is known, “can be expressed by saying that one cannot believe, wish, or hope without believing or wishing something.” Webster’s Dictionary defines “intend” as “to direct the mind on,” which harks back to Brentano’s original explanation of intentionality as “the direction of the mind on an object.” These are considered scholastic definitions, although to me they suggest that when the mind focuses its attention on something (a concept, or even an intended impact), it is possible to move mountains—which doesn’t sound very scholastic. I can feel the intensity of Brentano’s Thesis, and it is with that intensity that I have come to appreciate intentionality and its power to help museums achieve their aspirations.

After digging a bit deeper, I discovered that the way I apply the term in my practice with museums is closer to how the field of social cognition defines and uses it. In 2001 Bertram F. Malle, Louis J. Moses, and Dare A. Baldwin edited a book called Intentions and Intentionality: Foundations of Social Cognition. According to the Psychology Wiki, which catalogs different uses of the term, the following use is offered under social cognition: “Human perceivers consider a behavior intentional when it appears purposeful or done intentionally—that is, based on reasons (beliefs, desires) and performed with skill and awareness.” The social cognition use of the term aligns well with all that I have attached to intentionality, for example, the notion of intentional practice, the four quadrants (all composed of actions), and (of course) Impact. I believe intentionality is required practice if museums are going to make a difference in the quality of people’s lives, which is how I define “impact.” Without an intense focus on taking actions to achieve a well-defined end result, the end result will be difficult to achieve; ahhhh, if only impact would just magically appear . . . .

We’ll publish a new post in our Intentional Practice series once a month, on the third Wednesday of each month. Stay tuned for the next post on January 21st!


For me, intentionality, a concept I view as essential to museum planning, emerged from two core experiences: results from hundreds of exhibition and program evaluations; and observing museum staff wanting to put too many concepts into an exhibition. Intuitively I knew there was a connection between exhibitions that didn’t fare too well (at least according to the evaluations) and staff not letting go of ideas that are near and dear to their hearts—regardless of whether those ideas supported the thesis of the exhibition.

When I have the good fortune to attend planning meetings, I always find myself thinking critically about what should be included in the exhibition under discussion and what could be saved for another time. My consideration always includes the big idea of the exhibition, what the museum would like to achieve with the exhibition vis-à-vis the public, humans’ capacity to process new ideas when in unfamiliar environments (like that of an exhibition hall), evaluation results from other projects that show what leads to quality visitor experiences and what might move visitors away from having quality experiences, and my utmost respect for scholars’ knowledge and passions. While passionate individuals love their subject matter (and really, I love their subject matter, too), recognizing that not all good ideas (or even great ones) belong in an exhibition, and then following through on that recognition, are traits of intentional practice.

Embedded in intentional practice is the concept of alignment—ensuring that project concepts, components, and elements are present because they support the impact the team wants to achieve. If there are concepts, components, or elements that do not contribute to the core idea of the exhibition and its potential impact on audiences, they need to be omitted. I certainly don’t mean to sound ruthless, but I am acutely aware of how easy it is to keep putting more and more into an exhibition plan and how painfully difficult it is to take anything away. I am also aware of how challenging it is to stay focused on the exhibition’s big idea and have the discipline to say no to ideas that do not support the intended impact of the exhibition. Learning to say “no” is a necessary survival skill, and saying “no” is deeply connected to intentional practice. When practitioners are intentional, they are focused on the impact they want to achieve; they exercise discipline and restraint when determining how best to move forward; and their decision making is egoless, done for the sake of achieving the results the team envisions.

Intentional practice represents the culmination of my experiences to date, and my passion for it is directly tied to my evaluation experience. Over the years I started to realize that when exhibitions tried to do too much, visitors’ experiences didn’t amount to much—from their perspective; their heads were full, but their descriptions of their experiences were nebulous. Sense-making seemed futile. Nudging me was my memory of exhibition development discussions and tensions about what to put in (everything) and what to take out (nothing). Clarifying the intent of an exhibition and then staying focused on that intent is hard work—work not likely to end soon, which is okay. Seeking clarity—whether in thought or action—is a never-ending pursuit.


We have been thinking about intentional practice a lot lately.  The article below, written by Randi, appeared in the May/June 2008 issue of ASTC Dimensions.  If you would like to read more of Randi’s thoughts on intentional practice, be sure to read her 2007 Curator article, “The Case for Holistic Intentionality.”

At museum conferences these days, people are talking about accountability, public impact, and relevance. These ideas are not new. A decade ago, in a 1997 keynote address for the Mid-Atlantic Association of Museums’ 50th anniversary, the late Smithsonian scholar Stephen Weil spoke of the “in-your-face, bottom-line, hard-nosed questions”—the ones that museums often hope to keep under wraps: “Do museums really matter? Can and do museums make a difference?” In arguing that some museums do make a difference, and that all should strive to do so, Weil supported the notion that “the very things that make a museum good are its intent to make a ‘positive difference in the quality of people’s lives.’” He borrowed this last phrase from the United Way of America, which was then challenging its grantees to document the benefits a given program had made in the lives of the people it served.

Today, museums face accountability questions from many directions. In response to the Government Performance and Results Act of 1993, U.S. federal agencies began to articulate the kinds of outcomes they expected grantees to document. Private foundations followed suit, reexamining their own evaluation practices, as well as those of grantees. The effort continues. The National Science Foundation recently published its Framework for Evaluating Impacts of Informal Science Education Projects, outlining five categories of impact it expects grantees to assess. And for those who object that “you can’t measure mission-centered work,” current United Way CEO Brian Gallagher, as reported in the Wall Street Journal, has a succinct reply: “You most certainly can. The question is, ‘Are you committed to do it?’ And then, ‘Are you committed to report on it?’”

As museums begin to grapple with their intent to make a positive difference, they can start by reexamining their missions. Weil believed, as many do still, that a mission is key to an institution’s success. A museum’s mission should be a declaration of its core purpose—clarifying what the museum values, reflecting what the museum embodies, and describing its intent to affect its public and community. Establishing a clear institutional purpose, Weil believed, is the first step toward being able to assess effectiveness in achieving public impact.

From my own experience as an evaluator, I would add this observation: Museums do not, in and of themselves, value, reflect, or intend. People do.

An institution’s mission will not be within reach unless everyone who works in that institution is mission-focused and mission-driven. Before museums can assess their impact, staff must collectively clarify their intent. Public impact, relevance, and value grow from what I have called “intentional practice”—the willingness of everyone in the museum to examine all operational activities through three mission-based filters: clarity of intent, alignment of practice and resources, and reflective inquiry.

  • Clarity of intent. Opportunities for all staff to come together to discuss the core values of their museum are vital. Colleagues should encourage one another to explore their passions and also challenge one another’s thinking as a way of clarifying what is truly important.  In the spirit of thoughtful inquiry, why not ask a colleague to defend his or her position? Most people appreciate being asked to explain why they think the way they do. This kind of exploration allows practitioners to voice the passion behind their ideas and learn what they, as a group, really care about. Reexamining the essence of the museum together can reinvigorate the collaborative spirit, enabling staff to further their practice with intent.
  • Alignment of practices and resources. Unless the work of the museum is aligned with its intent, staff may spend time and resources on activities that are good in themselves but may not support the museum’s intent. Perhaps staff should determine—through evaluation—which programs yield the highest impact, keep those programs, and either improve or discontinue those that do not deliver impact. Aligning practice—the activities a museum does and how it does them—and resources so they support the museum’s intent requires thinking about what you should be doing and what you need not do any more. Conversations about realignment will deepen staff members’ understanding of the museum’s intent and the ways in which their work supports it.
  • Reflective inquiry. As an evaluator, I frequently see front-end and formative evaluation being used effectively to shape a final visitor experience. The same cannot be said of summative evaluation. By the time a mandated final report is done, practitioners may have little time or motivation to review it. This is unfortunate because much can be learned through reflecting on past work.

I see a strong relationship between taking the time to think about the work you have done and learning from the work you have done. Practitioners who want to be intentional in their practice can use summative evaluation as a way to gain insight and knowledge about visitors’ perspectives and experiences. The outcome of such reflective inquiry is learning about the ways in which their museum is achieving impact. I would encourage all museums to routinely set aside time for staff to use inquiry as a reflection strategy and to discuss their practice in the context of the institution’s intent.

In conclusion, accountability questions are not likely to disappear, but even if they did, museum practitioners would still need to respond to the “To what end?” question. The sustainable health of the museum depends on it.

Most of the workers I encounter in museums are passionate about their work and want to make a positive difference in people’s lives. If practitioners begin collaborating with colleagues to clarify their museum’s intent, realign their practices and resources to support that intent, and engage in reflective inquiry to learn how they can improve their efforts, they will be on their way to achieving that goal.


You can’t escape technology in museums. Visitors use smartphones to take pictures. Exhibits use touch screens and high-tech interactives to share stories and information. Programs use technology to help visitors engage. Everywhere you look there is a screen . . . until you encounter an evaluator armed with a clipboard and a pencil. I don’t think this is because evaluators are Luddites. I think this is because, much like exhibit and program designers, we want our use of technology to make sense. Technology has to make data collection easier not only for us, but for the visitor, too.

I have been using technology for data collection since I was in grad school. We had grant money to spend, so iPads were purchased and students were encouraged to try new things. We approached the task with gusto, certain that this would make things easier; after all, if we enter visitor data as we collect it, we will have eliminated the need to enter data later! It sounded like the perfect plan, until we realized we were collecting data outside—in the elements. In Seattle. In November. We learned a few lessons that day: iPads don’t do well in the cold and neither do the cold, bare fingers we needed for the touch screens. Perhaps in this case, using iPads wasn’t the best plan, given our data collection environment.

Since I joined RK&A two years ago, we have experimented with technology as well. Each time we elect to use technology, we think about how it will affect the project. In some cases, the decision is simple: if we take interview notes on a laptop or tablet while talking to visitors, we can record more of what the visitor is saying and eliminate some work on the back end. This is a low-risk decision that we frequently make. In other cases, we rely much more heavily on technology, using tablets to collect survey data at museums. This is a higher-risk decision because, while there are many positive aspects to paperless data collection, there are also challenges.

Tablet data collection requires me to think about survey presentation in a different way. The survey is designed mostly for the data collector, since we often administer surveys verbally, so it has to be easy to manipulate. Some question formats, such as the non-traditional scales that RK&A often creates, don’t always translate well from paper to digital. I have to think about how the question is presented to the data collector and what information, if any, they have to present to the visitor (e.g., a visual representation of the scale), then find a way to balance the two. We also ask visitors to enter their own demographic information, formerly a single page of questions. When collecting data on a tablet, we need to balance how much the visitor has to scroll against the number of pages they have to click through. This can be tricky when skip logic directs visitors to the appropriate questions. Regardless of the demographic information the visitor inputs, the survey has to be easy for them to complete. Each time, I learn something I need to change in future surveys to make it easier for visitors.
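The skip logic mentioned above can be sketched as a small routing table in which each question decides, based on the answer, which question comes next. This is a hypothetical illustration of the general technique, not any survey platform's actual API; the question IDs, wording, and routing rules are all invented.

```python
# Hypothetical sketch of survey skip logic: each question maps an answer to
# the ID of the next question (None ends the survey). All content invented.

QUESTIONS = {
    "visited_before": {
        "text": "Have you visited this museum before?",
        "next": lambda answer: "last_visit" if answer == "yes" else "first_impression",
    },
    "last_visit": {
        "text": "When did you last visit?",
        "next": lambda answer: "zip",
    },
    "first_impression": {
        "text": "What brought you in today?",
        "next": lambda answer: "zip",
    },
    "zip": {
        "text": "What is your ZIP code?",
        "next": lambda answer: None,
    },
}

def route(answers, start="visited_before"):
    """Return the ordered list of question IDs a respondent actually sees."""
    path, current = [], start
    while current is not None:
        path.append(current)
        current = QUESTIONS[current]["next"](answers.get(current))
    return path

print(route({"visited_before": "yes"}))  # → ['visited_before', 'last_visit', 'zip']
```

Because the routing is explicit, a designer can count the questions on each path and weigh scrolling against page clicks for every branch a visitor might take.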

There are huge advances taking place in digital data collection as new software and platforms are created, and we researchers and evaluators develop best practices for their use. For me, every project that uses technology to help with data collection teaches me something new and makes me a better practitioner. What experiences have you had and what have you learned? The field is changing and I can’t wait to see what we think of next.


Effective and clear communication is a skill that all evaluators must master, but sometimes those of us in the evaluation field forget that we may be speaking a foreign language to our clients. We become comfortable with acronyms like IRB, or throw around names for data collection methods like surveys, focus groups, naturalistic observations, and ethnography, to name a few—not realizing that those words can have different meanings to different people. Precision of language is vitally important, and it goes hand-in-hand with another skill that, over time, I have come to respect: active listening. As evaluators, we need to listen to ourselves, visitors, our colleagues, and our clients. To me, active listening is listening first to understand and then responding, and I have found it difficult to do, especially when I feel that I have something important to say. We’ve all been there, I think: those moments when you start to tune out someone because you are searching for the right opening to say what’s been on your mind for the last several minutes. It’s fairly easy to spot when someone is not actively listening, because their comments result in non sequiturs.

One of the challenges with active listening is that it is mentally exhausting. It’s much easier to let your brain relax at regular intervals than to be constantly aware of what every other person is saying. When we conduct in-depth interviews, asking visitors open-ended questions about an exhibition or program that are intended to result in visitor-centered conversations, we are “on” the entire time and do very little talking ourselves. We are very careful to train our interviewers to actively listen to visitors’ responses so they can discern whether visitors are answering the questions or whether their responses require follow-up questions; simultaneously, they have to make sure they understand what the visitor means by the words he or she uses—which is the essence of understanding. At the end of a day of interviewing, I warn my data collectors that they will feel mentally exhausted, because I have experienced this kind of fatigue many times before. This example points to a key tenet of active listening: you, as the listener, really do not say much at all; rather, your primary job is to listen to understand and to ask questions to seek clarity when you don’t.

With our clients, active listening is important, too. It’s one of the many things I’ve learned to do while working at RK&A. I learned to actively listen first when I was an interviewer (which I still like to do when the project calls for it because it is a great reminder of the importance of active listening). Then, I learned the key role active listening plays in client meetings and intentional planning workshops we facilitate. Most of what we do as evaluators is ask questions, listen, and then ask more questions to seek further clarity. The mental exhaustion I feel at the end of the day is a good sign that I have done my job as an active listener. I’d be concerned if that feeling were to ever disappear.

