
This year, I was lucky to receive a full scholarship to attend the 42nd annual Museum Computer Network (MCN) conference in Dallas, TX. For those who don’t know, MCN’s a fantastic organization that focuses on digital engagement in the cultural sector. Here’s a video of some of the highlights from the conference:

[Video: highlights from MCN2014]

I’d wanted to attend MCN for a long time after hearing many friends and colleagues rave about the amazing energy and talents of MCN-ers.  Before I left, I set two (very broad) goals for myself for MCN2014:

  1. Deepen my own understanding of how digital is transforming museums
  2. Think about new ways to apply this understanding to my work as an evaluator

Luckily, this year’s conference theme—“Think Big, Start Small”—aligned perfectly with these goals. I figured I would “start small” by going to the conference, all the while remembering to “think big” about the relationship between evaluation and digital transformation in the cultural sector.  And with that in mind, I dove headfirst into the MCN2014 madness.

Most of the MCN scholarship recipients (I’m sitting on the bench on the left).

I quickly realized that I was one of the only full-time evaluators there. Despite the high energy of Wednesday’s activities (including a workshop on dabbling with microcontrollers and a series of inspiring Ignite talks), I couldn’t shake the feeling that I was going to be out of my element among all of these “tech” people as I headed to the first official sessions on Thursday. Would anyone understand why an evaluator was at a conference that’s all about digital? Worries aside, I was curious to learn how other attendees were thinking about and creating digital experiences, and how, if at all, they were working to evaluate their impacts (even if we might use different vocabulary to talk about the evaluation process).

My worries were quickly alleviated.  While there may not have been many full-time evaluators at MCN2014, I was blown away by the amount of evaluative thinking I observed in nearly every session I attended (and, frankly, in all of the side conversations I had throughout the conference). Not only are those who work at the intersection of the cultural and digital sectors a highly energetic and creative group of people, but they are also working hard to set realistic goals for their projects and thinking seriously about how to measure progress toward them. I was both inspired and amazed by the extent to which evaluation was part of the other attendees’ thinking and planning processes. This evaluative thinking showed up throughout the conference tweets (#MCN2014 was actually trending on Twitter during the conference), so I figured I’d use a few of my favorites to talk about some of the many ideas I’ve been thinking about ever since I got back from Dallas:

[Embedded tweets from Bollwerk and Allen-Greil]

These two perfectly sum up a few ideas that we are constantly thinking about at RK&A. To echo Simon Tanner, there’s no point in gathering data just for the sake of having information. It’s essential to think about why you want to gather data and to outline how you plan to use the data you collect. Without articulating a plan for using the data in the long run, it becomes difficult to ensure that you’re gathering the types of data that will be most useful to you. And having a plan for how you will use the data will ensure that, when it’s time for analysis, you can align your analysis with your long-term goals. At RK&A, we want to make sure our clients clearly understand that data are there to be used so that when it comes time to make changes based on the data, they are already prepared to do so.

That said, I think data are primarily there to test assumptions rather than to confirm them. The word “confirm” is misleading because it presupposes positive assumptions, i.e., that a project is working well. If that’s the case, then it’s understandable that people want to see those positive assumptions confirmed (which in turn would mean having to make few changes). Learning to accept what the data tell you, even when the results are negative, is no simple task.  It’s very easy to become so attached to a project that you ignore the problems and only see what’s working well. But remaining “open to surprise” and letting the data shine new light on a project is the best way to develop a true understanding of what’s happening so you can adapt and make changes to help achieve your goals.

[Embedded tweet from Birch]

I have mixed feelings about drawing distinctions between testing for user experience and testing for content.  In my opinion, the two are separated by a fine line—at what point in any museum interactive, mobile app, game, or other digital experience does the user experience become entirely separate from the content?  All content matters in terms of the user experience because the content itself, no matter the particular subject, dictates the experience that visitors/users expect to have.  In other words, visitors’/users’ expectations of a (digital) experience before use and their opinions of it afterward inherently depend on the subject matter presented to them.  While there are always smaller usability issues that can be addressed without much regard to content (the size of a button, for example), I ultimately think that the user experience can never be truly separated from the content that supports it.  If you change the content, you can’t help but change the experience.

 

Those are just a few of the ideas discussed at MCN2014 that I am still thinking about weeks later. The conference raised so many interesting issues and questions that I couldn’t possibly go into all of them in one post. Suffice it to say that I left MCN2014 feeling silly for ever being nervous about whether others would perceive the overlaps between the worlds of evaluation and digital. MCN turned out to be a fantastic experience that greatly expanded my own thinking on these issues, and I’m excited to put these new ideas to use in my work and to (hopefully) explore them further at MCN2015 in Minneapolis!

Didn’t make it to MCN2014 but want to view the sessions? Check out MCN’s YouTube channel. And don’t miss the amazing (and short—9 minutes!) Ignite talks.  You can also find all of the conference tweets using the hashtag #MCN2014.

Read Full Post »

In June, the Association of Science and Technology Centers (ASTC) invited professionals to respond to these questions for an upcoming issue of Dimensions magazine: When are evaluation and other visitor feedback strategies the most useful for helping advance a science center’s mission?  When are such strategies less successful?  We pondered this at a staff meeting and decided that a small but important tweak may be needed to begin addressing the questions.  First, let’s clarify that mission describes what a museum does and impact describes the result of what a museum does—on the audiences it serves.  We believe that anything a museum does—collect, exhibit, educate—is meaningless unless it is done in the pursuit of impact.  So, when is evaluation most useful for advancing a science center’s mission?  When it is done to advance impact, not mission.  It’s a little like that old adage: If a tree falls in the forest and no one is around to hear it, does it make a sound?  With regard to mission and impact, we take a slightly different angle—if a museum does work or evaluation that does not lead to impact, is it really doing the work?

Evaluators are in the same boat as some museum practitioners.  Evaluation is a means to an end, just as a museum’s collections are a means to an end.  Unless evaluation is placed in a meaningful context, such as helping a museum pursue impact, evaluation doesn’t serve a purpose.  As an evaluator, I suppose I should say evaluation is always valuable.  But, that’s just not true.  I’m a self-proclaimed data nerd.  I love the minutia of evaluation—poring over pages and pages of interview transcripts and pulling out those five key visitor trends.  I can get lost in data for days and find myself pulled in many seemingly fruitful directions.  “Oh, how interesting!” I will say to no one in particular.  I often find myself lost in the visitors’ world, chuckling to myself about a quirky response to an exhibit or wondering who someone is and why he or she responded to a museum experience in a particular way.  Getting lost in your work can be fun and, lucky me, happens to those of us who are passionate about what we do.  So, while pursuing tangents in evaluation data is fun for me, there is a flip side to this coin—a lack of focus that can be detrimental to the pursuit of a larger goal.  This is why we, as evaluators, push our clients to articulate what it is they want to achieve to keep us (and them) on track.

We consistently find museum practitioners to be among those most passionate about their work.  Thus, these moments of losing oneself in one’s work, whether researching or examining an object, designing an exhibition, or creating a program, are frequent occurrences.  When it comes to pursuing impact, this passion is both a joy and a burden.  It is a joy because most practitioners can easily articulate what they do for their audiences.  But they often get lost in what they do and may not think about why they do what they do.  A practitioner articulating the “why” is similar to the entire museum articulating its intended impact.  Articulating impact provides a laser focus for all the work that museum practitioners do and helps keep them on track toward pursuing that larger goal.  So, our response to ASTC’s second question (“When are evaluation strategies less successful in helping advance a science center’s mission?”) is this: when a science center and its collective staff have yet to articulate the impact they hope to have on the audiences they serve.  Otherwise, we can all do evaluation until we are blue in the face, but those reports will continue to collect dust on hundreds of science centers’ shelves.  Of this I am certain—just like death and taxes.

Read Full Post »

I love a good story.  Who doesn’t?  It’s how we humans make meaning—we construct narratives to explain and interpret events both to ourselves and for others.  Think about the number of stories you tell or hear in a day, even the mundane ones.  It’s a way to form and sustain connections with others and to understand ourselves.  So I was intrigued to see that this year’s AAM theme was “The Power of Story.”  I remembered that the 2012 AAM keynote address included a couple of storytellers from The Moth (its tagline is “True Stories Told Live”; it features everyday people telling very personal stories on stage and is broadcast on National Public Radio).  The Moth was one of the highlights of the 2012 AAM conference for me, so I was especially disappointed that I was unable to attend AAM this year.  But I talked to several people who did attend and read some blogs, and, not really surprisingly, it sounds like panelists wove the theme into their presentations in interesting and appropriate ways (which certainly isn’t always the case with conference themes).

It got me thinking.  Storytelling isn’t something I consider on a daily basis in my work, at least not in a literal, explicit way.  But the more I think about it, the more I realize storytelling permeates my work in nearly every way and has even become a tool for helping museums think about and define their impact.

To begin with, I am a qualitative researcher.  I was drawn to the field as a way to understand the world, in particular, people and groups of people—how they live, experience life, make meaning, and why and how they do what they do.  Of course one can study all this through quantitative research as well, but I am interested in the messiness and ambiguities inherent in qualitative research.  Qualitative data are narrative, and more specifically, I’ve noticed the best data often emerge when, for example, an interviewee or focus group participant tells a story to illustrate an idea.  In fact, a strand of qualitative research called narrative research explicitly uses storytelling as a methodology.  Stories as data are powerful because they resonate and illuminate truths about the human experience.

Bed Curtain: England (1690-1710), artist unknown, V&A Museum.

Secondly, I was drawn to work in museums because of my love of objects—whether art, natural history specimens, or historical artifacts; to me, objects embody stories.  Objects are the physical evidence demonstrating that something was here; something happened here!  Objects stir the imagination and stimulate storytelling, whether fantastical stories (just listen to a child explain a work of art) or stories based on interpretation and deductive reasoning.  And, based on all my years of conducting research and evaluation in museums, I can tell you visitors feel the same way I do about objects.  Authentic objects evoke stories for visitors, and, as I mentioned earlier, stories are how we construct meaning and connect with others.  Objects help us bridge the gap between ourselves and another (whether an artist, a dinosaur, or the mysterious person who used the 17th-century bed curtain pictured here).

The final, perhaps more subtle, way that stories are important to my work is when we help our museum clients clarify and define impact (which Stephen Weil defined as “making a difference in the quality of people’s lives”).  Randi has written a lot about impact on our blog, so I won’t say too much about it here.  But I will say that one of the best ways for museums to begin thinking about impact is by telling stories about their work and why they love it.  I’ve never explicitly sat down with a client and said, “Okay, tell me a story about your visitors’ experiences.”  But invariably, that is what happens when we ask questions to help museums articulate their impact—they start telling stories (and at least twice, I’ve watched the telling of those stories lead to tears).  These stories are a starting place for museums to think authentically about the effect they have on their audiences.  As I discussed in my previous blog post, Explaining the Unexplainable, it is daunting to sit down and try to articulate impact and outcomes (particularly if you worry about measuring them), but starting with stories grounds you in what’s real and meaningful and can lay the foundation for articulating a distinct and authentic impact statement.

Read Full Post »

This week I’d like to share thoughts about evaluative thinking, in part because two weeks ago I was part of a session at the American Alliance of Museums (AAM) annual conference in Baltimore titled “Evaluation as Learning” (so named because learning is the ultimate result of evaluative thinking).  We took a risk:  I set the stage by presenting the Cycle of Intentional Practice (see our first blog post) with a distinct focus on the reflection quadrant, and the three panelists were each allotted five minutes to present a “story”; we used the remaining time to ask the audience questions (rather than having them ask us questions).  Over the years, AAM has inadvertently trained session attendees to expect 60 minutes of panelists’ presentations (and sometimes more) and 5 or 10 minutes of Q&A in which the audience poses questions to panelists.  Rarely is a session intentionally flipped so that the bulk of its time (50 of 75 minutes) is used to ask attendees questions.  We all wondered if we should ask our friends to attend the session so our queries wouldn’t be met with silence.

We didn’t surprise the audience with this strategy; we were transparent and gave them a heads-up by saying: “Our intention today is to share brief stories about how we have used evaluation as a learning tool (rather than a judgment tool).  Along the way we will be highlighting and clarifying evaluative thinking, and each presenter will spend 5 minutes doing this.  Our intention is also this: we will spend the remaining time asking you questions, in a sense, to model the kind of inquiry that organizations can use to engage in evaluative thinking.  We want to hear your thoughts and reflections, and we welcome you to challenge our thoughts and push us beyond where we are; then all of us will be using inquiry and reflection to pursue learning—the ultimate goal of evaluative thinking.”

Evaluative thinking (ET) is an intentional, enduring process of questioning, reflecting, thinking critically, learning, and adapting.  While learning is the essence of ET, adapting (one’s thinking or behaviors) is the challenge.  An underlying thread of our presentation was a core premise of evaluative thinking: it is effective and meaningful when it is ingrained in an organization’s culture and is the responsibility of everyone—leadership and staff.

Evaluative thinking is embedded in intentional practice, and the reflection quadrant is essential: learning is not likely to happen unless people take the time to ask the tough questions and reflect on reality (e.g., evidence of performance) and practice.  When evaluation is pursued as learning, it can be a catalyst for personal learning, interpersonal learning, project learning, organizational learning, and field-wide learning.

For more on evaluative thinking, check out:

Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.

Read Full Post »