Position Opening: Research Associate

Randi Korn & Associates, Inc., a consulting firm specializing in evaluating museum programs and exhibitions, is seeking a Research Associate in its Alexandria, VA office. The Research Associate will be responsible for implementing diverse evaluation projects and services, coordinating contractor data collection teams, collecting and analyzing qualitative and quantitative data, and preparing reports and presentations.

The ideal candidate will have 3-5 years of experience conducting evaluation in informal learning settings and a desire to work in a client-centered environment. A master’s degree in social sciences, education, museum studies, or a related field is required. Qualitative data analysis experience is required, and quantitative data analysis experience is a plus. The qualified candidate must have excellent writing skills and be able to juggle multiple projects, working both independently and as part of a team. A passion for museums or other kinds of informal learning environments is preferred.

RK&A offers a competitive compensation and benefits package. For information on RK&A, please visit randikorn.com. Interested applicants should forward a resume with cover letter, salary requirements, and two independently written and edited writing samples to: info@randikorn.com. Please include your last name in the file name of all the documents you send. Closing date for applications is August 24.

Data visualization is a very hot topic in the research and evaluation world right now. As a student of art history and art education, I believe wholeheartedly in visual communication and, like many evaluators, have been honing my skills in visual presentation. In reading about data viz, the tip “keep it simple” is ubiquitous. I mostly agree, but to say you must “simplify” really oversimplifies the problem. I think data visualization expert Stephen Few’s company, Perceptual Edge, highlights the issue well with these three quotes:


Simplicity, simplicity, simplicity. Henry David Thoreau

Simplicity is the ultimate sophistication. Anonymous

Seek simplicity and distrust it. Alfred North Whitehead


Simplicity deserves distrust when it is not pursued with clear intentions. Having clear intentions for data visualization requires a critical knowledge of methodology and analysis to understand what is being lost in the simplification process. What value is a simple graphic if it does not accurately represent the data? This tension between simplicity and accuracy reminds me of a children’s book by Lois Ehlert in which a tiger becomes a mouse as shapes are incrementally stripped away. The process of deconstructing a tiger into a mouse is exciting…but when the data are really a tiger and you are making decisions from a representation of a mouse, there is a problem.


So while simple is better, or as we like to say here “less is more,” readers and data viz creators must question simplicity. No one wants to be surprised by a hidden tiger.
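To make the point concrete, here is a toy example in Python with entirely made-up visitor-satisfaction ratings (the exhibit names and numbers are invented for illustration, not drawn from any real study). A single “simple” summary number can hide the tiger:

```python
# Hypothetical visitor-satisfaction ratings (1-5) from two made-up exhibits.
ratings = {
    "Exhibit A": [5, 5, 5, 5],              # a few very happy visitors
    "Exhibit B": [2, 2, 2, 2, 2, 2, 2, 2],  # many unhappy visitors
}

# The "simplified" view: one overall average across all responses.
all_ratings = [r for rs in ratings.values() for r in rs]
overall = sum(all_ratings) / len(all_ratings)
print(f"Overall average: {overall:.1f}")  # 3.0 -- looks middling

# The fuller view: per-exhibit averages tell a very different story.
for exhibit, rs in ratings.items():
    print(f"{exhibit}: {sum(rs) / len(rs):.1f}")
```

The overall average of 3.0 looks unremarkable, yet not a single visitor actually gave either exhibit a 3: one exhibit is delighting people and the other is failing them. The simplified number is accurate arithmetic but a mouse-shaped picture of a tiger.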



As RK&A’s first Research Fellow and a doctoral candidate in sociology at Northwestern University, I’m delighted to write the first of my Intentional Museum posts exploring the relationship of sociological research and museum evaluation. As it turns out, the timing of this writing is pretty fortuitous: I’m currently preparing a presentation for this year’s Leadership Exchange in Arts and Disability (LEAD) Conference at the Kennedy Center, which has me thinking about how sociological research on museums can benefit practitioners. So I thought I’d start, as I’ll start in that presentation, with the big picture.

First things first: What do sociologists have to say about museums? In general, our research has spoken to three distinct themes. We’ve shown how visitor demographics in art museums reflect broader systems of social inequality by explaining how people’s education and class background shape their familiarity with art. This helps explain patterns of visitor attendance, while also identifying the societal barriers that may leave some people out of the conversations museums try to foster.  We’ve shown how broader social changes (e.g., funding structures or political conditions) can impact what happens inside museums. This work illustrates how the environments in which organizations operate define what counts as legitimate operations, which in turn influences what museums do. Most recently, we’ve lifted the hood to look inside museums and focus on practice: what people do within these organizations, and how. Because museums offer a particularly apposite case for examining how people interact with objects, some of this work has examined how different objects and environments can structure interpretation and shape organizational goals.

Recently, I’ve grown increasingly curious about why so little has been said about the intersections of sociology and evaluation. In part this is because throughout my doctoral fieldwork on museum education, people regularly confused sociological research with evaluative practice. It was easy at the time to point out differences. Perhaps the most foundational one concerns the role of theory. Sociologists study specific things to tell a more general story about the social world, entering museums to answer a theoretically motivated question (for example, about inequality, legitimacy, or practice) that can speak beyond a single museum, exhibit, or program. In contrast, evaluators concentrate on how to help particular museums articulate their practical objectives (for exhibition development, for program assessment, and so forth), and then develop research designs to assess them. They may even make formal recommendations, which is not typically within the purview of sociology.

However, when reflecting – as I often do, and as I’ve been doing for LEAD – on what people in museums can do with broader sociological ideas, I inevitably find myself asking how, if at all, sociology and evaluation are akin in helping museum practitioners. For one, the best work in sociology and in evaluation rests on carefully prescribed methods. A sociologist’s ability to specify the method by which he or she arrives at a particular theory is what ensures it is sound. Evaluators, similarly, help museum practitioners identify their guiding questions at the outset of a project (just like sociologists must do for themselves) and select the methods (surveys, focus groups, observations, interviews) that will guide that client to results that are both reliable and useful, an important aim our Twitter chat explored on June 9th. In this way, both the sociologist and the evaluator practice with intention.

Perhaps most importantly, both professions can also aid museum practitioners in becoming more intentional. In reassuring museum educators that I was not doing program evaluation, I often explained that my research could instead give practitioners the language to talk about and reflect upon how they interpret the broader social conditions and institutional environments in which museums operate. Or, paraphrasing Max Weber – one of sociology’s founders – scientists can through their work promote clarity about choices by showing people the results of their actions. But now it seems false to me to distinguish this guiding philosophy too starkly from the aims of evaluation. High quality evaluations present museum staff with systematic research in efforts to help them make more informed choices about what they do, how they do it, and with what impact. This objective, I’ve come to understand, is central to how RK&A understands the role of evaluation in museum settings, and what we here call “intentional” practice.

Last week, Stephanie wrote a thoughtful piece about the recent upswing in museum professionals who are conducting evaluation and the importance of thinking critically about evaluation. Among other things, she asked “how can the field be sure the results produced are reliable and useful?” You can read the full post here. In an effort to open up discussion and conversation on thinking critically about evaluation, we’re excited to announce RK&A’s first Twitter chat. We hope you will join us!

Join the Conversation

From 2-3pm EDT on Tuesday, June 9th, RK&A’s Stephanie Downey, Amanda Krantz, and Cathy Sigmond will host RK&A’s first Twitter chat on thinking critically about evaluation, using the hashtag #RKAchat.  To join the conversation make sure your tweets include this hashtag. 

Why this topic?

Many museum professionals are enthusiastic advocates for evaluation and view it as essential to their work.  As evaluators, we’re ecstatic about this! But for an evaluation to be truly useful, museum professionals need to think evaluatively about evaluation.  In other words, museum professionals must think critically about how evaluations are planned and conducted to fully make sense of evaluation results within the context of the reliability and validity of the study.

To that end, we’re hosting a discussion on thinking critically about evaluation.  We’ll pose some questions so we can hear your thoughts and experiences, and we will share our thoughts on how museum professionals can position themselves to be critical consumers of evaluation.

Twitter Chat Questions

During the Twitter chat, @IntentionalMuse will tweet numbered questions (for example, “Q1: What do you evaluate, and how? #RKAchat”).  Your response tweet should reference the question (for example, “A1: We talk to visitors about what they took away from an exhibition to evaluate the exhibition’s learning goals #RKAchat”).

Q1: What do you evaluate, and how?

Q2: What do you think characterizes a “good” quality evaluation?

Q3: What characterizes a “bad” quality evaluation?

Q4: What are the challenges in conducting high quality evaluation that will provide meaningful results?

Q5: With quality in mind, what is one way you might think about or approach evaluation differently in the future?

How to Participate

If you do not already have one, create a Twitter account.  On Tuesday, June 9th from 2-3pm EDT, tweet using the hashtag #RKAchat.  You can monitor the tweets related to the chat by searching for #RKAchat on Twitter.

As a staff, we have noticed the slow but steady upswing in the number of museums doing and requesting evaluation over the years.  While evaluation was uncommon in the museum world 15 or 20 years ago, today many, many museum professionals are enthusiastic advocates for evaluation and view it as essential to their work.  Ultimately, we are thrilled about this trend because we truly believe that evaluation can be used as a learning tool to reflect on and improve practice.  This has to be good for the museum world, right?  But there is a part of me that worries about this trend.  As someone who values quality, how can the field be sure the results produced are reliable and useful?  Just because someone says they are doing evaluation, should we take at face value that the evaluation they are doing is “good evaluation”?  No; like most things in the world, there is a continuum of quality when it comes to evaluation.  There are ways of doing evaluation that will lead to results you can feel confident about and make decisions from, and there are ways of doing evaluation that lack purpose and will result in piles of meaningless data that are never acted upon.

All of this hit home for me recently when I worked with a museum’s education department to build staff capacity for evaluation.  The education department in this museum had been doing evaluation on their own for years, and while much of it had been useful, they felt they were sometimes collecting data for the sake of collecting data and not quite able to make decisions about what to evaluate and what not to evaluate.  None of them are trained in evaluation, but they all have a great respect for it and wanted to learn how to do it better.  Thus, I stepped in.  Gulp.  I was honored that they wanted me to teach them about evaluation.  As a trained evaluator with many, many years of experience, it should be easy, right?  I quickly realized that teaching others how to do what you do is anything but easy.  And in the process of preparing my teaching materials I did something I hadn’t done in a while.  I looked very critically at what I do as my chosen profession and asked myself how I do it—I broke down what I do every day into pieces that I could explain and teach.  And in the process I came to a new appreciation for how hard it is to do evaluation well unless you truly have the training and the experience.  I have to admit, I feel really good about what I have been able to teach the education department of this museum about evaluation, but it hasn’t been easy by any means.

All this is to say that we would like to start a conversation about how to conduct high-quality evaluation so that evaluation efforts will result in findings you can feel confident about and use.  High quality isn’t about size—good evaluation can be a very small evaluation study (with small sample sizes) done in-house or a large evaluation study with mixed methods and complex analysis strategies.  Quality evaluation hinges mostly on having a purpose when planning and staying true to that purpose when implementing a study.  As a result of all this thinking we have been doing, we are planning to host our first Twitter chat, where we will invite museum professionals to think critically about evaluation through a series of questions we will pose. Stay tuned for more details!

I recently had the pleasure of participating in an online forum called Interactive Café, sponsored by the National Art Education Association (NAEA) Research Commission.  For a week, I exchanged ideas virtually with my co-hosts, Olga Hubard of Teachers College Columbia University, Michelle Grohe of the Isabella Stewart Gardner Museum, and Benjamin Tellie of the Charles E. Smith Jewish Day School in Rockville, Maryland, about assessing students’ responses to works of art.  Olga began the forum by posing the provocative question, “What is worth assessing in students’ responses to works of art?”  For me, the answer lies in another question: “For what purpose are you assessing students?” As a professional evaluator, the purpose is usually to help a museum understand the impact it has on the students it serves.  As Olga noted, there are many possible outcomes or benefits for students when they look at and respond to works of art, and it is my job to help a museum articulate its unique intentions for students.  Is the program designed to increase students’ critical thinking skills, curiosity or creativity, personal connections, or something else?  Once I truly understand a museum’s intent, the work of developing the assessment can begin.   In this post, I describe my work with one museum to illustrate intentionality in the process of developing a student assessment.

For the last eight months I have been working with the Katonah Museum of Art in Westchester County, New York, to assess the impact that the program, ArteJuntos/ArtTogether, has on the bilingual preschool students it serves.  The program is a partnership with a local preschool that serves immigrant families.  Staff from the Museum visit the children (and their parents) at their school once a week to look at, talk about (through inquiry), and make art; the program also includes two visits to the Museum and parent training (an important part of the program that I have to leave out here for the sake of brevity).  I feel honored to be working with such a unique program and with people who understand that quality assessment takes time.  Fortunately, a full year of assessment (and other program activities) was generously funded by the National Endowment for the Arts.  As mentioned previously, I began by asking the very basic question, “How is your program designed to affect students?” and we continued from there.  To illustrate the intentional approach we took to developing the assessment, below I outline and explain the steps we have taken thus far.

  • The Museum described its intent for students primarily around literacy, especially emergent literacy. Remember, these children are very young (on average 3 years old).  Museum staff believed (and had witnessed) that through facilitated, inquiry-based discussions about works of art, students have the unique opportunity to use and develop rich, descriptive language. Furthermore, they had heard from Pre-K teachers of students who had participated in ArteJuntos in previous years that these students seemed more verbal and better prepared for Pre-K.  The staff was eager to find out what was happening.
  • To hone and better understand the idea of literacy as it manifests in the context of ArteJuntos, we assembled a team of experts including Museum staff, teachers from the preschool, school administrators, and representatives from another community organization to talk about what literacy looks like for these young bilingual children and how the program affects their literacy. By the end of the day, I had a list of key indicators that would serve as evidence of students’ literacy in relation to looking at and talking about art.
  • I refined and honed the list of student outcomes and indicators and drafted a rubric that would be used to assess students’ literacy. The draft rubric included relatively simple indicators such as “The child names shapes (triangles, circles) to describe the work of art” and “The child names colors to describe the work of art” as well as more complex indicators like, “The child names two objects that are similar and/or two that are different and accurately describes similarities or differences.”  Museum staff, preschool teachers, and the principal reviewed the rubric and provided feedback.
  • I developed a protocol for the assessment.  In this protocol, each student sits one-on-one with a bilingual educator and a reproduction of a work of art (see the work of art by Carmen Lomas Garza below) and is asked a series of open-ended and closed-ended questions that closely mirror the kinds of questions they are asked in the program.  For example, questions include: “What do you see in this picture?” “What can you tell me about that?” and “What colors can you find in this picture?”
"Oranges" by Carmen Lomas Garza


  • We tested the protocol by trying it out with some of the Museum staff’s children. As a result, we identified problem areas in the line of questioning and revised it as necessary.
  • In early fall, before ArteJuntos began, we did the first set of one-on-one student assessments with 12 children who would participate in the program that year.  These assessments serve as the pre-program assessment.  We videotaped the assessments, as shown here in this still clip from one of the videos:

[Video still from one of the student assessments]

  • Museum staff, preschool teachers, and I watched the videos together, discussing the emerging literacy we saw from the children. As a result of that meeting I revised the rubric. It remains focused on literacy, but now more closely aligns with what we saw happening among students.

We are about three-quarters of the way finished with the project.  This spring, once ArteJuntos ends, we will do a second round of assessments with the same children.  These will represent the post-program assessments.  At that point, we will score all of the videotaped assessments using the rubric, compare and contrast the pre- and post-program assessments, and draw some preliminary conclusions about the way ArteJuntos impacts students’ literacy.  Our sample is small, and we realize there are problems inherent in comparing preschool children from pre to post given how rapidly they develop; nevertheless, given our intentional process of developing the assessment tool, we feel confident that we will capture an accurate measure of students’ responses to works of art in the context of this unique program, and we hope that the assessment can continue to be utilized in years to come.
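For readers curious what the scoring-and-comparison step might look like mechanically, here is a minimal sketch in Python. The indicator names, children’s names, and scores below are all invented for illustration; this is not the actual ArteJuntos rubric or data.

```python
# Hypothetical rubric scores (0 = not observed, 1 = emerging, 2 = consistent)
# for three made-up children, before and after the program.
PRE = {
    "names_colors":     {"Ana": 1, "Ben": 2, "Cara": 1},
    "names_shapes":     {"Ana": 0, "Ben": 1, "Cara": 1},
    "compares_objects": {"Ana": 0, "Ben": 0, "Cara": 1},
}
POST = {
    "names_colors":     {"Ana": 2, "Ben": 2, "Cara": 2},
    "names_shapes":     {"Ana": 1, "Ben": 2, "Cara": 1},
    "compares_objects": {"Ana": 1, "Ben": 1, "Cara": 2},
}

def mean_change(pre, post):
    """Average pre-to-post change on each rubric indicator."""
    changes = {}
    for indicator in pre:
        deltas = [post[indicator][c] - pre[indicator][c] for c in pre[indicator]]
        changes[indicator] = sum(deltas) / len(deltas)
    return changes

for indicator, delta in mean_change(PRE, POST).items():
    print(f"{indicator}: {delta:+.2f}")
```

Even a small table like this makes the comparison systematic: every child is scored against the same indicators at both time points, so any movement you report is movement on the rubric rather than an impression from the videos.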


Our winning entry for our student blog competition reminded me that so many of us find our way into the museum field through other avenues, led by our passion for connecting people with art, science, history, you name it. For example, I started out studying non-human primate behavior, which led to educating the public about non-human primates, and now I study human primate behavior in museums and other informal learning environments! Intentionally following our passion for learning in and experiencing museums is often what unites us. Our student blogger, Kwasi, also reminded me of the passion that emerging museum professionals have for ensuring museums are accessible to many publics. Kwasi explains how he applied intentional practice—uniting his actions around the single goal of museum accessibility—to develop an app (www.thetravelsee.com) that helps people align their interests with cultural offerings in Cooperstown, NY. Read below for more about his journey.

Through your intentional practice, how do you help museums enrich the lives of others?

A few years ago, I ran a tour guide business in D.C. that focused on introducing visitors to the wonders of D.C.’s local museums. Every time I walked a tour group up to the entrance of a historic house or a national park site, someone in the group would always ask, “Can we really go inside?” I began to realize there was a disconnect between the way potential visitors viewed museums and how museums perceived themselves within communities.

I wanted to do something to help solve this issue and decided to close my tour guide business to pursue my master’s in Museum Studies. A graduate course titled “Digital Technologies” pushed me to think about how technology can be used to attract, engage, and diversify museum audiences. The class spurred my intentional practice of creating a web application that redefines the way people engage with museums. I developed a prototype app called Travelsee, which gives users an aggregate list of available cultural activities (guided tours, seminars, or exhibitions) in a given area based on the user’s keywords and GPS location.
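Travelsee’s actual implementation isn’t shown here, but the matching Kwasi describes, keyword interests plus GPS proximity, might be sketched roughly like this (the listing names, tags, and coordinates are all made up for illustration):

```python
# Sketch of keyword + GPS matching for cultural activity listings.
# Not Travelsee's actual code; data below is invented.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def find_activities(activities, keywords, lat, lon, radius_km=10):
    """Return activities within radius whose tags match any keyword."""
    kws = {k.lower() for k in keywords}
    return [
        a for a in activities
        if kws & {t.lower() for t in a["tags"]}
        and distance_km(lat, lon, a["lat"], a["lon"]) <= radius_km
    ]

# Made-up listings near Cooperstown, NY (approx. 42.70 N, 74.92 W).
listings = [
    {"name": "Guided gallery tour", "tags": ["art", "tour"], "lat": 42.70, "lon": -74.93},
    {"name": "History seminar", "tags": ["history"], "lat": 42.71, "lon": -74.92},
    {"name": "City walking tour", "tags": ["tour"], "lat": 40.71, "lon": -74.01},  # NYC: too far
]
for match in find_activities(listings, ["tour"], 42.70, -74.92):
    print(match["name"])
```

A search for “tour” from Cooperstown returns only the nearby gallery tour: the seminar fails the keyword filter, and the NYC walking tour fails the distance filter. Aggregating listings behind one simple query like this is the core idea: visitors pick by interest, not by prior familiarity with any one museum.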

Travelsee was born of a need to show the general public that museums are fun and fascinating spaces. I decided to take a step outside of the museum advocate world and devised a simple way to use technology to strengthen the public’s awareness of museums and increase visitor engagement. I figured that if I could gather all the local museum activities in one place, visitors would be able to choose an activity based on their interests without any prior experience of visiting that specific museum.

I believe that museum activities should not be limited to a specific race, gender, class or any other social construct; unfortunately, most potential museum visitors look at museums as spaces for the elite. That is why redefining how museums engage potential visitors is important. As a museum advocate, I am using my web application Travelsee to engage new audiences. What methods are you using to engage new audiences?

