How to Transform Pittsburgh’s Gulf Tower Beacon into a Mood Ring



Here at CMOA we’re gearing up for a new exhibition, Antoine Catala: Distant Feel, one of the more challenging yet rewarding shows to come through our Forum Gallery. Catala is interested in how images provoke feelings, notably empathy. But, he asks, how should we express empathy online, to strangers? Now that we’re able to see thousands of images per day through Internet-connected devices, what are the emotional ramifications?

It’s a potentially limitless line of inquiry. It’s also difficult to communicate. “So,” several of us thought, “what if we plan some sort of live demonstration for the whole city of Pittsburgh?”

The beacon atop Gulf Tower came to mind immediately. Six stories tall and pyramid-shaped, the Art Deco-inspired structure has had one lighting scheme or another since the building opened in 1932. In 2012, a new set of LED bulbs made it possible to change the lighting drastically, and its weather program now feeds directly from KDKA through an Internet connection. We could do something similar, except keyed to emotional responses to images. It seemed perfect. Hence the Gulf Tower Project was born.

I tracked down Larry Walsh, the COO of Rugby Realty, which manages the building, and he agreed to the idea. I sat rather slack-jawed at my desk as he explained the beacon system’s capabilities and told me that only two other organizations have the power to change the colors: the Pirates and the Penguins (of course). CMOA would be the third. Using Instagram and a direct VPN connection to the beacon, we would feed the tower a constant stream of data about Pittsburgh’s emotional state. That left only the question of how we’d read the emotional content of images shared online.
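Neither the beacon’s protocol nor the project’s exact palette is public, but the basic plumbing is easy to picture: score the day’s images, then translate that score into a color to push over the VPN. Here’s a minimal sketch in Python, with an invented mood_to_rgb mapping standing in for whatever palette the project actually used:

```python
# Hypothetical sketch: translate an aggregate sentiment score in [-1, 1]
# into an RGB color for the beacon. The blue-to-red palette below is an
# assumption, not the project's actual color scheme.

def mood_to_rgb(score: float) -> tuple[int, int, int]:
    """Blend from blue (negative) through white (neutral) to red (positive)."""
    score = max(-1.0, min(1.0, score))
    if score < 0:
        t = 1.0 + score  # 0.0 at fully negative, 1.0 at neutral
        return (int(255 * t), int(255 * t), 255)
    t = 1.0 - score      # 1.0 at neutral, 0.0 at fully positive
    return (255, int(255 * t), int(255 * t))

print(mood_to_rgb(0.5))   # (255, 127, 127): a warm, mostly red color
print(mood_to_rgb(-0.8))  # (51, 51, 255): a deep blue
```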

We approached independent developer David Newbury, who is already working with CMOA to code a system that will visualize artworks’ provenance as they travel around the world. He had some ideas. I sat down with him for a few questions about how it all came together.

What got you interested in this project?

I took this project because it uses the skills that I have. I worked for Iontank for three years, and I’ve collaborated with Deeplocal. I enjoy technology that uses the web but also brings it into the real world, technology that feels magical. I don’t really like tech for tech’s sake; it’s just a tool. I don’t geek out about new tech and gadgets like so many people do. They’re all just tools, useless unless you apply them to something. I don’t care about the tool, but about finding the right tool and what can be built with it.

Were you familiar with these kinds of social media hacks before? Do you have favorites?

Yes. I did a project last spring with Laser Lab Studio and MAYA Design to promote Oreo cookies at South by Southwest. We produced a 3-D printer that made custom cream filling. We could print multiple layers of different kinds of frosting, flavors, colors, etc.

With Iontank, for VICE Media’s new music channel and the rollout of the Samsung Galaxy S4, I worked with Pyrotecnico, a Pittsburgh fireworks company, on a concert in NYC. We set up a system of S4 phones around the audience, and people could use the phones to actually choose and launch fireworks: colors, timing, etc. The challenge was building an automated launching system that operated safely and didn’t give the crowd too much control, i.e., one that kept everything from going off at once if people mashed the controls.
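Newbury doesn’t describe Iontank’s safety logic in detail, so treat this as a hypothetical sketch of the one constraint he names: a rate limiter that refuses launch requests arriving faster than a minimum interval, so button-mashing can’t fire everything at once.

```python
import time

class LaunchThrottle:
    """Toy rate limiter: at most one shell per min_interval seconds."""

    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self.last_launch = float("-inf")

    def request(self, shell_id: str) -> bool:
        now = time.monotonic()
        if now - self.last_launch < self.min_interval:
            return False  # too soon; ignore the mashed button
        self.last_launch = now
        print(f"launching {shell_id}")
        return True

throttle = LaunchThrottle(min_interval=2.0)
throttle.request("red-peony")   # launches
throttle.request("blue-comet")  # refused: arrives within 2 seconds
```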

Please describe the sentiment readings for the images.

Luckily, I didn’t need to start from scratch on text-based sentiment reading; several bodies of research already exist. But it’s still challenging. The idea is to make sure that we’re reading the comments on images. Instagram messages are even more complicated to analyze than tweets: they’re short and slangy. I was getting mediocre results from some existing tools, but there are enough different techniques for sentiment analysis that I’m running several and averaging the results. It doesn’t always work, but for the most part, it does. It’s bad at sarcasm, for example, or inside jokes. People are much better at emotion than computers are. Imagine a photo of a guy with a huge slice of cake and a big grin, and the caption is: “Oh, sad that I have to eat this entire cake by myself.”
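Newbury doesn’t name his exact mix of tools, but the run-several-and-average idea is easy to sketch. Here’s a minimal version assuming two common open-source analyzers, VADER and TextBlob, stand in for whatever he actually combined:

```python
# pip install nltk textblob
# then: python -c "import nltk; nltk.download('vader_lexicon')"
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from textblob import TextBlob

vader = SentimentIntensityAnalyzer()

def mood_score(caption: str) -> float:
    """Average several analyzers' scores; each returns a value in [-1, 1]."""
    scores = [
        vader.polarity_scores(caption)["compound"],
        TextBlob(caption).sentiment.polarity,
    ]
    return sum(scores) / len(scores)

# The failure mode described above: both tools latch onto "sad" and
# score this joking caption as negative.
print(mood_score("Oh, sad that I have to eat this entire cake by myself."))
```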

Case in point: The Walking Dead premiered on Sunday night, and Pittsburgh’s measured mood swung 50% more negative because everything people posted was about zombie attacks. Of course, the actual mood was positive and excited.

This isn’t the only work you’re doing for the museum. Can you please give a brief overview of what you’re doing for Art Tracks?

For Art Tracks, we’re machine-reading art provenance data and creating new ways of searching and visualizing provenance (who has owned a work, where it’s been, etc.). It’s different for me, in a very good way: it involves a lot more research. It’s not so much a “stunt” with high-pressure parameters, where you’ve done something cool that tells a story but has no lasting effect.
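For a concrete sense of what “machine-reading provenance” means: museum provenance is conventionally written as a semicolon-delimited chain of ownership events, which a parser can split into structured records. A toy sketch with an invented record (Art Tracks’ actual parser handles far messier real-world text):

```python
import re

# Invented example in the conventional semicolon-delimited format.
record = ("John Smith, London, by 1890; "
          "sold to Jane Doe, New York, 1923; "
          "gift to Example Museum, Pittsburgh, 1960.")

for event in record.rstrip(".").split(";"):
    event = event.strip()
    # Pull out a four-digit year, if one is present.
    year = re.search(r"\b(1[5-9]\d{2}|20\d{2})\b", event)
    print({"event": event, "year": year.group() if year else None})
```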

With Art Tracks, the results are meaningful and lasting. If we keep it going, we’ll make museum research and the visitor experience better, maybe even the whole field of art history. It’s not as glamorous: when I talk about parsing art provenance, 95% of the world’s eyes glaze over. But for the other 5%, it’s extremely useful. And they can use that data to make compelling stories for the other 95%.

The Gulf Tower Project runs from February 11–13. Antoine Catala: Distant Feel, part of the Hillman Photography Initiative’s Orphaned Images project, opens February 14, 2015, and runs through May 18 in the Forum Gallery here at Carnegie Museum of Art.