In the Gallery vs. Online: How a Split Second Can Differ

One of the questions people always ask me is how the web audience differs from what happens in the building, and that's a difficult thing to get metrics on. With Split Second, we are in a unique position to answer that question because we've been running the same online activity on kiosks in the gallery. In this final Split Second blog post, I'm going to compare these two sets of data.

Kiosks in Split Second

Visitors were invited to take the online activity using kiosks in the gallery, so the two data sets could be compared. In all, 2,600 visitors sat down at the kiosks to take the activity for a spin.

As you may remember from an earlier post, even though part of the project took place online, we were surprised to see a mostly local audience taking part. Overall, that local audience spent an average of 15 minutes completing the online activity (as opposed to the general average of 7 minutes). In the gallery, our visitors spent an average of 4 minutes 18 seconds completing the activity at the kiosks. Even though they spent less time doing the activity, the average ratings per person were quite similar: online – 39.1 vs. gallery – 36.7. The in-gallery and online completion rates were also very similar, which suggests highly focused visitors consuming content at the kiosks very quickly. Here are a few charts to show off some of the online vs. in-gallery differences.

Gender (Online)

Gender (Online): Online, women showed up to take the activity in almost twice the numbers that men did.

Gender (Gallery)

Gender (Gallery): In the gallery, we had a more even male-to-female ratio of participants. Also, in the gallery, female participants tended to be younger than male participants.

Age (Online)

Age (Online): Online participants were a bit older than gallery participants.

Age (Gallery)

Age (Gallery): In the gallery, the participants tended to be younger. This may suggest younger visitors to the show overall or, perhaps, younger visitors were more attracted to the in-gallery technology.

Experience (Online)

Experience (Online): Online participants tended to self-identify in the "some," "more than a little," and "above average" categories.

Experience (Gallery)

Experience (Gallery): In the gallery, participants self-identified with a lower experience level. As Beau mentions below, older people tended to self-identify with higher experience levels. Given that most participants at the kiosks tended to be young, this flip in the metrics seems to be on target.

Completion (Online)

Completion (Online): Completion rates were similar, with online being slightly higher.

Completion (Gallery)

Completion (Gallery): Even though completion rates were similar, in the gallery there was a slight uptick in participants aborting the activity at stage two (10.3% in-gallery vs. 6.3% online), which focused on engagement. The abort rate in stage three was almost equal, suggesting that both sets of participants were equally engaged with the adding-information part of the experiment.

When it came to some of the data that Beau’s been delving into, he ran a comparison of in-gallery versus online data and found his original findings still held:

  • No correlation between experience and time spent.
  • Slight negative correlation between rating and birth year, i.e., older people give slightly higher ratings.
  • Women rank things slightly higher than men.
  • Slight positive correlation between rating and experience, but women consistently rate themselves as being more experienced, so it’s hard to tell whether the aforementioned correlation is caused by experience or gender or what.
  • Older people tend to self-identify as slightly more experienced.
  • Complexity and information findings still hold.
  • Engagement and rating variance: the finding also still holds, though there is an interesting change. In the gallery, rating variance tended to be much higher than online. For the control task, online variance was 520.6, while in-gallery variance was 668.5. For the free task, online variance was 459.1, while in-gallery variance was 510.1. So we're still seeing massive reductions in variance, but the variance in the gallery was higher to begin with.
  • Adding information: the finding still holds, though in the gallery the increase in ratings was not quite as big. (The muting of this effect might be related to the age/mean rating issue discussed above.)
Intoxicated Lady at a Window

Intoxicated Lady at a Window, late 18th century. Opaque watercolor and gold on paper, sheet: 13 3/4 x 11 3/8 in. (34.9 x 28.9 cm). Brooklyn Museum, Gift of Dr. and Mrs. Robert Walzer, 79.285.

Beau also took a look at the rankings data and found, for the most part, the same works win and lose.  As he notes, “There are some minor upsets, and a few things which might be worth a story. In particular, Intoxicated Lady at a Window seemed to always do quite a bit worse in the gallery than online.”  While we are not totally sure why this painting didn’t do so well in the gallery, it’s interesting to note that this was the image that the New York Times used when the project was first announced.  It’s very possible that we had an information cascade happen online with participants rating this work higher because they might have been more familiar with it. This is one case where the in-gallery metrics might actually be more accurate and it shows just how delicate subconscious effects may be.

As Joan mentioned in one of her posts, Split Second closes at the end of the year.  If you have not managed to see it in the gallery, we hope you can come take a visit because the show will be gone in the blink of an eye.