In my last post I detailed how I knitted together thematic connections across different collections and what effect in-gallery labels have on object engagement, but I wasn’t yet able to get any insight into what users’ full conversations looked like. We met with the Tech team to talk through the potential needs and issues involved in effectively analyzing chat data. Since one of my main goals was to learn more about how visitors are using the in-gallery ASK labels during their visit, we decided that a search function would be most useful, similar to the search for snippets. Our incredible web developer Jacki Williams implemented the chat search into the dashboard so I could pull complete visitor chats based on what I was looking for.
The chat search function offers three ways to search through chats. The (seemingly) easiest way is to search for a particular object via its accession number. However, not all chats have their objects tagged with accession numbers. That is why being able to search words or phrases in the chats (the second option) is so useful. I might not be able to search the accession number, 83.84, and pull up all of the chats for it. However, I can search “East River View with Brooklyn Bridge” or “Yvonne Jacquette” to pull up more chats where individuals were asking about this particular work. There is a third option to search by Chat ID, which is useful if I need to reference a particular chat.
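To make the three search modes concrete, here is a minimal sketch of how they might work. The actual dashboard implementation isn’t public; the chat record structure and field names below are my assumptions, not the real schema.

```python
# Hypothetical chat records; the real dashboard data surely differs.
chats = [
    {"id": 101, "accession_numbers": ["83.84"],
     "text": "How did the artist get this view of the Brooklyn Bridge?"},
    {"id": 102, "accession_numbers": [],  # object was never tagged
     "text": "Tell me about Yvonne Jacquette's East River View painting."},
]

def search_by_accession(chats, number):
    """Mode 1: find chats explicitly tagged with an accession number."""
    return [c for c in chats if number in c["accession_numbers"]]

def search_by_phrase(chats, phrase):
    """Mode 2: free-text search, which catches untagged chats."""
    return [c for c in chats if phrase.lower() in c["text"].lower()]

def get_by_id(chats, chat_id):
    """Mode 3: look up a single chat by its ID."""
    return next((c for c in chats if c["id"] == chat_id), None)

# The accession-number search misses the untagged chat (102),
# while the phrase search finds it.
print(len(search_by_accession(chats, "83.84")))          # 1
print(len(search_by_phrase(chats, "Yvonne Jacquette")))  # 1
print(get_by_id(chats, 102)["id"])                       # 102
```

This illustrates the gap described above: tag-based search only works where tagging happened, so phrase search is the reliable fallback.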
Jacki also created a function to filter the search results. This has been especially useful as of late. A significant number of visitors recently have been using ASK for Frida Kahlo themed tours or quote hunts. If I want to look at an object that was incorporated in the Frida Kahlo activities, I can simply filter the past few months out of my results rather than manually combing through the chats to remove them.
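A date-range filter like the one described might look something like this sketch. Again, the field names and dates are assumptions for illustration only.

```python
from datetime import date

# Hypothetical chats with start dates; field names are assumptions.
chats = [
    {"id": 1, "started": date(2019, 3, 10),
     "text": "Frida Kahlo quote hunt, stop 3"},
    {"id": 2, "started": date(2018, 11, 2),
     "text": "How did the artist get this view?"},
]

def exclude_date_range(chats, start, end):
    """Drop chats that began inside [start, end), e.g. the months
    dominated by themed tour activity."""
    return [c for c in chats if not (start <= c["started"] < end)]

# Filter out a hypothetical tour period; only the older chat remains.
filtered = exclude_date_range(chats, date(2019, 2, 1), date(2019, 6, 1))
print([c["id"] for c in filtered])  # [2]
```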
There is also the option to export desired chats from the results into a JSON file. The JSON export is super useful because the file format allows me to read the full chat conversations and is a great record of what I have already analyzed. This is a huge step up from copy/pasting into Google docs and will likely have future benefits that the next Fellow or researcher can explore!
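Once chats are in JSON, reading full conversations is a few lines of code. The export structure below is guessed for illustration; the real dashboard’s format surely differs.

```python
import json

# A guessed export structure with one chat; not the real schema.
export = """
[
  {"chat_id": 101,
   "messages": [
     {"sender": "visitor", "text": "How did the artist get this view?"},
     {"sender": "expert",  "text": "Great question! Let me tell you more."}
   ]}
]
"""

# Parse the export and print each conversation in readable form.
chats = json.loads(export)
for chat in chats:
    print(f"--- chat {chat['chat_id']} ---")
    for msg in chat["messages"]:
        print(f"{msg['sender']}: {msg['text']}")
```

Because the file is structured data rather than pasted text, the same export can be re-parsed later for counts, ordering, or tagging analysis without redoing the manual work.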
The ability to search words or phrases was also useful for searching the in-gallery ASK label text. The object 83.84, East River View with Brooklyn Bridge, has an in-gallery label that says: “How did the artist get this view? Download our app or text … to learn more from our experts.” I used the search to pull up any chats that contain the label text to see whether visitors were using the language verbatim and what else they were looking at.
I used the various search features to start compiling a table organized by object, looking at things like whether visitors used the in-gallery label text verbatim and what else they asked about during their chats.
This process took a lot longer than I anticipated for one object alone. Unfortunately there was no way to streamline gathering this information from each chat. The most time consuming aspect was having to look up accession numbers and/or titles of different works that visitors asked about when the object was not tagged in the chat or the conversation did not explicitly include the title. This highlights an overarching issue with the chat data: not everything has been tagged consistently over time, especially the objects being asked about.
Given how long it took me to go through just one object/in-gallery label, and with time running out on my fellowship, I wouldn’t have time to go through as many objects and chats as I would have liked. I decided to focus on a few objects that have in-gallery label questions, objects with generic in-gallery labels, and popular objects with no labels at all. The downside is that the more popular or asked about an object is, the longer it takes to analyze, further limiting how many I could get through.
From my very brief overview of a few objects and roughly 200 chats, only one visitor used the in-gallery labels more than once. This visitor used two question labels but also asked about two objects that did not have in-gallery labels. Additionally, from what I did look at, visitors who used the in-gallery labels most often used them for the first object they asked about. My hypothesis is that visitors are using the in-gallery labels to get an introduction to the app and then proceed to ask questions about objects they are interested in. This is great news, since that was the original goal of these labels: to get people using the app. More chats and objects will definitely need to be looked at to confirm this, but the foundation for going through chat data has now been established.
Over the past year I have learned a lot not just about the ASK app and its users but also that big learnings can still come from small tools. I’m going to miss working with this challenging but incredibly interesting data and I wish the best of luck to next year’s Fellow!
Sydney Stewart is the 2018/19 Pratt Visitor Experience & Engagement Fellow at the Brooklyn Museum. She is currently pursuing her M.S. in Museums & Digital Culture from Pratt’s School of Information and has previous experience in collections management, exhibitions, visitor research and digital media. Her primary interests are in creating and evaluating the ways visitors can digitally interact with museum collections. Sydney’s current research focus is analyzing user data from ASK Brooklyn Museum.