Archive for May, 2015

As a staff, we have noticed a slow but steady upswing in the number of museums doing and requesting evaluation over the years.  While evaluation was uncommon in the museum world 15 or 20 years ago, today many, many museum professionals are enthusiastic advocates for evaluation and view it as essential to their work.  Ultimately, we are thrilled about this trend because we truly believe that evaluation can be used as a learning tool to reflect on and improve practice.  This has to be good for the museum world, right?  But there is a part of me that worries about this trend.  As someone who values quality, I have to ask: how can the field be sure the results produced are reliable and useful?  Just because someone says they are doing evaluation, should we take at face value that the evaluation they are doing is “good evaluation”?  No.  Like most things in the world, there is a continuum of quality when it comes to evaluation: some approaches lead to results you can feel confident about and make decisions from, while others lack purpose and produce piles of data that are meaningless and therefore never acted upon.

All of this hit home for me recently when I worked with a museum’s education department to build staff capacity for evaluation.  The education department in this museum had been doing evaluation on its own for years, and while much of it had been useful, staff felt they were sometimes collecting data for the sake of collecting data and were not quite able to decide what to evaluate and what not to evaluate.  None of them was trained in evaluation, but they all had great respect for it and wanted to learn how to do it better.  So I stepped in.  Gulp.  I was honored that they wanted me to teach them about evaluation.  As a trained evaluator with many, many years of experience, it should be easy, right?  I quickly realized that teaching others how to do what you do is anything but easy.  In the process of preparing my teaching materials, I did something I hadn’t done in a while: I looked very critically at what I do as my chosen profession and asked myself how I do it.  I broke down what I do every day into pieces that I could explain and teach.  And in the process I came to a new appreciation for how hard it is to do evaluation well without genuine training and experience.  I have to admit, I feel really good about what I have been able to teach this museum’s education department about evaluation, but it hasn’t been easy by any means.

All this is to say that we would like to start a conversation about how to conduct high-quality evaluation so that evaluation efforts result in findings you can feel confident about and use.  High quality isn’t about size: good evaluation can be a very small study (with small sample sizes) done in-house or a large study with mixed methods and complex analysis strategies.  Quality evaluation hinges mostly on having a purpose when planning a study and staying true to that purpose when implementing it.  As a result of all this thinking, we are planning to host our first Twitter Chat, where we will invite museum professionals to think critically about evaluation through a series of questions we will pose.  Stay tuned for more details!


I recently had the pleasure of participating in an online forum called Interactive Café, sponsored by the National Art Education Association (NAEA) Research Commission.  For a week, I exchanged ideas virtually with my co-hosts, Olga Hubard of Teachers College, Columbia University; Michelle Grohe of the Isabella Stewart Gardner Museum; and Benjamin Tellie of the Charles E. Smith Jewish Day School in Rockville, Maryland, about assessing students’ responses to works of art.  Olga began the forum by posing the provocative question, “What is worth assessing in students’ responses to works of art?”  For me, the answer lies in another question: “For what purpose are you assessing students?”  As a professional evaluator, I find the purpose is usually to help a museum understand the impact it has on the students it serves.  As Olga noted, there are many possible outcomes or benefits for students when they look at and respond to works of art, and it is my job to help a museum articulate its unique intentions for students.  Is the program designed to increase students’ critical thinking skills, curiosity, creativity, personal connections, or something else?  Once I truly understand a museum’s intent, the work of developing the assessment can begin.  In this post, I describe my work with one museum to illustrate intentionality in the process of developing a student assessment.

For the last eight months I have been working with the Katonah Museum of Art in Westchester County, New York, to assess the impact that its program ArteJuntos/ArtTogether has on the bilingual preschool students it serves.  The program is a partnership with a nearby preschool that serves immigrant families.  Staff from the Museum visit the children (and their parents) at their school once a week to look at, talk about (through inquiry), and make art; the program also includes two visits to the Museum and parent training (an important part of the program that I have to leave out here for the sake of brevity).  I feel honored to be working with such a unique program and with people who understand that quality assessment takes time.  Fortunately, a full year of assessment (and other program activities) was generously funded by the National Endowment for the Arts.  As mentioned above, I began by asking the very basic question, “How is your program designed to affect students?” and we continued from there.  To illustrate the intentional approach we took to developing the assessment, below I outline and explain the steps we have taken thus far.

  • The Museum described its intent for students primarily around literacy, especially emergent literacy.  Remember, these children are very young (on average, 3 years old).  Museum staff believed (and had witnessed) that through facilitated, inquiry-based discussions about works of art, students have a unique opportunity to use and develop rich, descriptive language.  Furthermore, Pre-K teachers had told them that students who participated in ArteJuntos in previous years seemed more verbal and better prepared for Pre-K.  The staff was eager to find out what was happening.
  • To hone and better understand the idea of literacy as it manifests in the context of ArteJuntos, we assembled a team of experts, including Museum staff, teachers from the preschool, school administrators, and representatives from another community organization, to discuss what literacy looks like for these young bilingual children and how the program affects it.  By the end of the day, I had a list of key indicators that would serve as evidence of students’ literacy in relation to looking at and talking about art.
  • I refined the list of student outcomes and indicators and drafted a rubric that would be used to assess students’ literacy.  The draft rubric included relatively simple indicators such as “The child names shapes (triangles, circles) to describe the work of art” and “The child names colors to describe the work of art,” as well as more complex indicators such as “The child names two objects that are similar and/or two that are different and accurately describes the similarities or differences.”  Museum staff, preschool teachers, and the principal reviewed the rubric and provided feedback.
  • I developed a protocol for the assessment.  In this protocol, each student sits one-on-one with a bilingual educator and a reproduction of a work of art (see the work of art by Carmen Lomas Garza below) and is asked a series of open-ended and closed-ended questions that closely mirror the kinds of questions asked in the program.  For example, questions include: “What do you see in this picture?” “What can you tell me about that?” and “What colors can you find in this picture?”
"Oranges" by Carmen Lomas Garza

“Oranges” by Carmen Lomas Garza

  • We tested the protocol by trying it out with some of the Museum staff’s children. As a result, we identified problem areas in the line of questioning and revised it as necessary.
  • In early fall, before ArteJuntos began, we conducted the first set of one-on-one assessments with the 12 children who would participate in the program that year; these serve as the pre-program assessments.  We videotaped the assessments, as shown in this still from one of the videos:

Video still clip

  • Museum staff, preschool teachers, and I watched the videos together, discussing the emergent literacy we saw in the children.  As a result of that meeting, I revised the rubric.  It remains focused on literacy but now aligns more closely with what we saw happening among students.

We are about three-quarters of the way through the project.  This spring, once ArteJuntos ends, we will conduct a second round of assessments with the same children; these will serve as the post-program assessments.  At that point, we will score all of the videotaped assessments using the rubric, compare the pre- and post-program assessments, and draw some preliminary conclusions about the way ArteJuntos affects students’ literacy.  Our sample is small, and we realize there are problems inherent in comparing preschool children from pre- to post-program given how rapidly children develop at this age.  Nevertheless, given our intentional process for developing the assessment tool, we feel confident that we will capture an accurate measure of students’ responses to works of art in the context of this unique program, and we hope the assessment can continue to be used in years to come.
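For readers who want to picture the scoring step, here is a minimal sketch in Python of how rubric-based pre/post tallies might work.  The indicator names, the 0–2 rating scale, the child IDs, and all scores are invented for illustration; this shows the general shape of the comparison, not the Museum’s actual instrument or data.

```python
# Minimal sketch of rubric-based pre/post scoring.
# Indicators, the 0-2 scale, and all scores below are hypothetical;
# the actual ArteJuntos rubric and ratings differ.

# Each indicator is rated per child: 0 = not observed,
# 1 = emerging, 2 = consistently observed.
INDICATORS = [
    "names shapes to describe the work of art",
    "names colors to describe the work of art",
    "names similar/different objects and describes the comparison",
]

# Hypothetical ratings from the videotaped assessments:
# {child_id: {indicator: score}}
pre_scores = {
    "child_01": {INDICATORS[0]: 1, INDICATORS[1]: 2, INDICATORS[2]: 0},
    "child_02": {INDICATORS[0]: 0, INDICATORS[1]: 1, INDICATORS[2]: 0},
}
post_scores = {
    "child_01": {INDICATORS[0]: 2, INDICATORS[1]: 2, INDICATORS[2]: 1},
    "child_02": {INDICATORS[0]: 1, INDICATORS[1]: 2, INDICATORS[2]: 1},
}

def total(ratings: dict) -> int:
    """Sum one child's ratings across all indicators."""
    return sum(ratings.values())

# Compare each child's pre- and post-program totals.
for child in sorted(pre_scores):
    pre, post = total(pre_scores[child]), total(post_scores[child])
    print(f"{child}: pre={pre}, post={post}, change={post - pre:+d}")
```

With only 12 children, any change scores would of course be read cautiously and alongside the qualitative evidence in the videos, not as a definitive measure on their own.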
