-
Developing the Proposal
06/28/2023 at 13:56 • 0 comments
Joe and I were applying to the Downtown Brooklyn Public Art + Placemaking Fund and needed to put together a proposal that explained:
- How our project fit into Downtown Brooklyn
- Which site we wanted to use and why
- The schedule for implementation
- How much money we needed and what we would spend it on
Ideas
The starting point was a list of locations, provided by The Downtown Brooklyn + Dumbo Art Fund, where the funded artworks could be installed. The anchorage space under the Manhattan Bridge immediately jumped out at us as an especially interesting place for a work.
In the past the archway was closed off to public access and used to store materials for repair work on the bridge, but it was restored and opened to the public in 2008. Nowadays it's used for occasional events but is mostly a quick way for people to get under the bridge, a landmark to meet at, or a place to sit for a bit. It's a relatively dark space with a lot of noise from passing trains and nearby construction.
We thought it might suit a sound installation. The varying noise in the space would let our audio “hide” and then reemerge in the moments when the space quiets down. Mostly that’s at night after the trains start spacing out and the construction work has stopped for the day. We wanted to introduce a light component as well because that would let us direct the attention of passers-by to where the audio is playing. And the light could make the space generally more welcoming and highlight the installation.
Joe suggested the idea of reaching out to the Endangered Language Alliance (ELA) to see if we could use their interview audio for the installation. The ELA is a non-profit that was founded in 2010 to document Indigenous, minority, and endangered languages. Over the years they have conducted hundreds of interviews and collected various forms of data, which they share via maps and other projects. Thinking about how we could share that content in the form of an installation led us to the idea for our project.
We would highlight the varied backgrounds and lives of New Yorkers by playing audio of them speaking and singing in the many languages spoken around the city. Ideally the installation would remind city dwellers passing by that they live immersed in a vast wealth of human stories and culture. Unfortunately, that purpose only grew more urgent as the pandemic progressed while we built the project.
Initial Proposal
With a workable concept for the art in our heads, our next thoughts were about implementation. How would we approach building an installation that could play back hundreds of voices and produce lighting effects across the fairly large space under the anchorage (~140' x 45')? How could we do that affordably? We would need hardware distributed across the space and a structure to support it. Some careful thinking would be needed to do this within a reasonable budget and in a way we could actually execute.
Structure
Initially we wanted to minimize the structural engineering work, both because we thought it could incur a lot of cost and because neither of us had much experience with that kind of engineering. We went looking for off-the-shelf structures that would meet our needs and be affordable. The "safe" option was a truss structure of the type used to support lights and other equipment on stage at music shows. Our first rough 3D models of what the installation could look like used those.
(By Joe in Rhinoceros)
We didn’t entirely like the look of the frame structure so we kept thinking. It occurred to us that a common structure you see all over NYC could be repurposed to provide the structural support for our art installation. That structure is the common scaffold walkway put in place to protect pedestrians from falling debris when work is being done on the facade of a building. Because of legal and logistical factors these are omnipresent around the city. Our first proposal used a segment of this scaffolding as the host structure for the electronics.
(By me in Blender)
On one hand this was a simple way to get a structure that we could be confident would work, and it would leverage a feature of the city that any resident would recognize to share aspects of their fellow city dwellers' lives that they might not know. On the other hand it would funnel people through a small, enclosed path that would not make good use of all the space available under the arch. And it ran the risk of people not understanding that it was an art installation rather than just another building project in progress. Despite the issues, we initially pushed forward with this approach because it saved money that we could use to improve the audio experience.
Audio Units
Aside from the structure we needed to figure out how we would deliver the audio. We started from the idea of hanging our sound and light devices on cables, as Joe had done with Space within Spaces. We wanted to play back hundreds of different snippets of audio without having them all blend together. So I started thinking about a reflector that would guide the audio and provide a surface to diffuse and spread the light from each device.
I started researching parabolic reflectors and trying to figure out whether we could use them to direct the sound in specific directions. At the time I was experimenting with a visual effects tool called Houdini for parametric geometry creation on various projects, so I used it to create a parametric model of the reflector geometry. Then I played with the parameters to get a sense of the physical configurations that might be possible.
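The underlying relationship is simple: a parabolic cross-section follows y = r²/(4f), so the focal length and rim diameter fully determine the dish. Here's a minimal Python sketch of the same parametric idea (not the actual Houdini network, and the dimensions are illustrative):

```python
def parabola_profile(focal_len, diameter, samples=50):
    """Cross-section of a parabolic dish: y = r^2 / (4 * f).
    Returns (radius, depth) pairs from the vertex out to the rim."""
    points = []
    for i in range(samples):
        r = (diameter / 2) * i / (samples - 1)
        points.append((r, r**2 / (4 * focal_len)))
    return points

# Sweep focal lengths for a 30 cm dish to compare possible configurations.
for f in (0.05, 0.10, 0.15):                      # focal lengths in meters
    depth = max(y for _, y in parabola_profile(f, 0.30))
    print(f"f = {f:.2f} m -> dish depth = {depth * 100:.1f} cm")
```

Short focal lengths make deep, narrow dishes; long ones make shallow, wide ones. Sweeping that one parameter gives a quick feel for the design space.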
To let each unit be unique while still being feasible to manufacture, I considered vacuum-forming a thermoplastic on a reconfigurable form, potentially with robotic automation to churn through the hundreds of reflectors we would need. Above you can see the tweaked version of the design tool that I used to think about that possibility.
To make the parabolic reflector work well, the speaker needed to face inward and sit at a specific offset from the center. My initial thought for accomplishing this cheaply was to have two PCBs separated by stiff wire. One PCB would sit on the back of the reflector and carry a connector for data and power. The elevated PCB would have everything else, including the main processor, speaker, and LEDs, and would receive power and data over the stiff wire.
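That offset is the focal distance of the dish, which follows directly from its rim diameter and depth. A quick sketch with illustrative dimensions (not our final geometry):

```python
def speaker_offset(diameter, depth):
    """Focal length of a paraboloid from rim diameter D and depth d:
    f = D^2 / (16 * d). The speaker sits this far out from the vertex,
    facing back into the dish."""
    return diameter**2 / (16 * depth)

# E.g. a 30 cm dish that is 6 cm deep (made-up numbers):
print(f"speaker offset = {speaker_offset(0.30, 0.06) * 100:.1f} cm")
```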
Continuing with the parametric design exercise in Houdini, I built up a full model with units hanging off of suspended wires. It was fun to learn the math for catenary (slack, suspended) wires, and how to instance geometry with variations in Houdini.
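For the curious, a slack wire hangs in the shape y = a·cosh(x/a), and the parameter a can be solved numerically from the span and sag you want. A small Python sketch of that math (my actual tooling lived in Houdini, and the numbers here are illustrative):

```python
import math

def catenary_param(span, sag, lo=0.01, hi=1e6):
    """Solve for the catenary parameter a in y = a*cosh(x/a) such that
    a wire over `span` droops by `sag` at its midpoint. The midpoint
    sag, a*(cosh(span/(2a)) - 1), shrinks as a grows, so we can bisect."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * (math.cosh(span / (2 * mid)) - 1) > sag:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def hang_points(span, sag, n):
    """Positions of n units spaced evenly (in x) along the wire,
    measured as droop below the anchor height."""
    a = catenary_param(span, sag)
    y0 = a * math.cosh(span / (2 * a))      # wire height at the anchors
    xs = [-span / 2 + span * i / (n - 1) for i in range(n)]
    return [(x, y0 - a * math.cosh(x / a)) for x in xs]

# 12 units on a 10 m wire that sags 0.5 m (made-up numbers).
for x, droop in hang_points(10.0, 0.5, 12):
    print(f"x = {x:+5.2f} m, droop = {droop:.3f} m")
```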
One detail I tried playing with was the diffusive properties of the plastic used for the reflectors. I set up a test scene in Blender to try to see if I could get a sense of the different possibilities virtually. After trying a bunch of renders I couldn’t quite get the effects I wanted with my limited CG experience, but the attempt provided some food for thought.
Sensing Visitors
Instead of running all of the audio at once or having the units run on preset loops, we thought it would be more interesting for them to respond to visitors. But how could we allow for interaction without increasing the per-unit cost too much? My first thought was to use some kind of distance sensor that would point downward and detect when someone passed below. For example, the VL53L3CX, a really cool miniature time-of-flight sensor that I had been looking for an excuse to use. But a more traditional ultrasonic distance sensor would also likely suffice.
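To give a flavor of the logic, detection is just edge-triggering on a distance threshold. This is a sketch only: read_distance_mm() stands in for whatever driver the sensor uses (the ST API for the VL53L3CX, or an ultrasonic ping), and the mounting height is an assumption.

```python
import time

FLOOR_MM = 3500                 # assumed sensor-to-floor distance
TRIGGER_MM = FLOOR_MM - 800     # anything this much closer is probably a person

def read_distance_mm():
    """Placeholder for the actual sensor driver. Returns millimeters."""
    raise NotImplementedError

def watch_for_visitors(on_visitor):
    present = False
    while True:
        d = read_distance_mm()
        if d < TRIGGER_MM and not present:
            present = True       # rising edge: someone walked under
            on_visitor()
        elif d >= TRIGGER_MM:
            present = False      # they moved on; re-arm the trigger
        time.sleep(0.05)         # ~20 Hz polling is plenty for walking speed
```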
Power
I expected that we would run cabling to hang the units in the installation, and this would also carry power and control signals to them. For the bus voltage I chose 48V: it's commonly used, so power supplies would be cheap and easy to get; it's not so high that I'd need to worry much about direct shock hazards; and it's still high enough to keep resistive losses in the wiring at an acceptable level. I wasn't sure how much power each unit would use, but I figured they might average at least a few watts. With 1000 units that meant we would be drawing at least a few kilowatts.
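Here's the back-of-envelope version of that reasoning. Every number below is an assumption for illustration (3 W per unit, ten feeder runs of 14 AWG wire), not a final design value:

```python
# Back-of-envelope bus sizing (illustrative numbers only).
units = 1000
watts_per_unit = 3.0                        # assumed average draw per unit
bus_voltage = 48.0

total_power = units * watts_per_unit        # 3 kW across the whole array
total_current = total_power / bus_voltage   # 62.5 A total at 48 V

# Loss in one feed run: assume 20 m of 14 AWG copper (~8.3 mohm/m)
# carrying a tenth of the array, out and back.
run_current = total_current / 10
run_resistance = 2 * 20 * 0.0083
loss = run_current**2 * run_resistance

print(f"total: {total_power:.0f} W at {total_current:.1f} A")
print(f"per-run loss: {loss:.1f} W on a {total_power / 10:.0f} W run")
# At 12 V the same run would carry 4x the current and burn 16x the power.
```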
Control
For control I wanted something simple but reliable, with optionally high bandwidth. I knew I could get those properties from RS-485, a serial communications standard implemented with differential signaling (meaning common-mode noise can be rejected). I didn't know exactly what the serial protocol running over it would be, but I knew the general properties it would need. Mostly the data would flow one way, from a central controller out to the units in the array, though the system would require duplex communication for certain operations, like updating the firmware on the units. I wanted to keep as much flexibility as possible to support experiences we hadn't thought of at the start of the project, so for the proposal I left the options open.
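Purely as an illustration of the kind of framing I had in mind at this stage (every field here is hypothetical; the actual protocol was deliberately left open):

```python
import struct

# A hypothetical frame layout for the RS-485 bus:
#   sync byte | destination address | command | payload length | payload | checksum
SYNC = 0x7E

def make_frame(dest: int, command: int, payload: bytes) -> bytes:
    header = struct.pack("<BHBB", SYNC, dest, command, len(payload))
    body = header + payload
    checksum = sum(body) & 0xFF        # simple additive checksum
    return body + bytes([checksum])

# Broadcast (dest 0xFFFF) a "set brightness" command to every unit:
frame = make_frame(0xFFFF, 0x01, struct.pack("<H", 512))
print(frame.hex(" "))
```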
Budgeting
To estimate the project costs I worked out a detailed sketch of the implementation. I expected the exact details would change as we solved problems along the way, but the basic cost structure depended on constraints I thought would be durable. We wanted to make 1000 units, so the main driver of the overall cost would be the price per unit. The other main costs would be the material for the structure, structural engineering services, equipment rental for the installation, and the power supplies, wiring, connectors, etc. To reach the final estimate I researched the cost of options for each system component and any labor or services associated with them (e.g. PCB assembly).
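The shape of that estimate looked roughly like this. Every figure below is a placeholder to show the structure of the calculation, not a number from our actual budget:

```python
# Sketch of the cost-structure reasoning (all figures are placeholders).
unit_cost = 12.00        # per-unit BOM + assembly: the dominant driver
units = 1000

fixed_costs = {
    "structure material": 8000,
    "structural engineering": 5000,
    "install equipment rental": 3000,
    "power supplies / wiring / connectors": 4000,
}

total = unit_cost * units + sum(fixed_costs.values())
print(f"units: ${unit_cost * units:,.0f}")
print(f"fixed: ${sum(fixed_costs.values()):,.0f}")
print(f"total estimate: ${total:,.0f}")
```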
Feedback and Final Proposal
Joe and I collected our thoughts, submitted the proposal, and waited for a response. Eventually we heard back with some follow-up questions about the details of our proposal, which we took as a good sign. Relatively soon after that we were told we had made it to the interview round of the selection process.
The interview produced a lot of useful feedback, the most important of which was about the structure. We had considered that the scaffolding might confuse visitors, but weren't sure how significant that issue would be. In the interview, though, the main critique was that the scaffolding would give the wrong impression of the piece: it wouldn't look very good, would restrict movement through the space, and might not look finished to visitors. Hearing that made Joe and me decide that a custom structure was worth whatever extra complexity it would bring. We adjusted our proposal and shared it with the Downtown Brooklyn Public Art + Placemaking Fund, and they accepted it.
We were green-lit to start working on the project. I immediately started working on a more detailed hardware design, which we will look at in the next post.
-
Prelude - Space within Spaces
03/30/2023 at 02:11 • 1 comment
Space within Spaces
Back in 2019 I worked on the software for another project with Joe, one of his installations called Space within Spaces. In that piece Joe had 180 lights hanging in a grid in the Juliana Curran Terian Design Center Atrium at Pratt Institute, to be animated based on particle collisions recorded by a muon detector. Each light was controlled by an ESP8266 over WiFi. The intent was to run the animations on a central computer connected to the detector and then stream them out to the array. Due to the large number of devices and the crowded airwaves on campus, getting enough performance to achieve the intended effect was a real technical challenge. In the end I did find a solution, and it inspired the general architecture that I ended up implementing for Babel in Reverse.
Fast WiFi Streaming to 180 Lights
I joined the project near the end, after the hardware had been installed in the Atrium and the behavior was being programmed. There was already a first pass at the control software, but unfortunately it updated the array of lights extremely slowly: about one update every 10-20 seconds. This version of the software had each ESP8266 join a WiFi network and run an HTTP server exposing a REST API to control its light. A Processing sketch generated the effects based on the incoming muon detections and then called the API of each light in turn to write an updated brightness value. Because several back-and-forths over the network happened for each light in the array, there was a lot of radio traffic and consequently high congestion that slowed communication.
We needed to reduce the number and size of messages being sent per frame. Because we were using HTTP over TCP, each bulb update took three packets for the handshake to open the connection, one for the HTTP request, and one for the reply: five packets per bulb. Multiplied across all 180 bulbs, that meant at least 900 packets per frame, or 27,000 packets per second at 30 frames per second. Even setting aside the WiFi congestion issues that wasn't workable, so I went looking for a way to reduce the number of packets being sent.
Each ESP8266 could control the brightness of the bulb in 1024 increments between off and full brightness. Since 1024 is 2^10, a bulb's brightness can be expressed in exactly 10 bits, which we would need 2 bytes to transmit. Multiplied by the 180 bulbs in the array we get 360 bytes for brightness, which is well under the 1500 byte MTU for the network. So we could make a single packet update the brightness for every bulb in the array at the same time. But how do we get that packet to every bulb without repeating it? And how does a bulb know what part of the packet to use to update its brightness?
The first question can be answered with a network facility called a "broadcast address." These addresses are treated specially by a network: if the network is configured to allow it, a packet with a broadcast address as its destination will be sent to every other client (read more in RFC 919). So we just take our update, put it in a UDP packet, and send it off to the broadcast address. The router sees our packet and retransmits it to every other client on the network, including all of our ESP8266s. Each bulb looks at the packet, pulls out the brightness that it should have, and uses that value to update the duty cycle of the PWM signal controlling the LED brightness.
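Here's a minimal sketch of the sending side in Python (the real control server was a Processing sketch; the broadcast address and port are assumptions):

```python
import socket
import struct

BULBS = 180
BROADCAST = ("192.168.1.255", 7777)   # assumed subnet broadcast address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # allow broadcast sends

def send_frame(brightness):
    """Pack one 0-1023 value per bulb into a single 360-byte datagram
    and broadcast it; every ESP8266 on the network receives it."""
    assert len(brightness) == BULBS
    sock.sendto(struct.pack(f"<{BULBS}H", *brightness), BROADCAST)

# Example: everything at half brightness, in one packet.
send_frame([512] * BULBS)
```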
How does the bulb know what value in the packet to use? If each bulb had a unique identity, and we had a map of where each one was in the array, then we could pick an arbitrary order for the values in the packet and flash the map into the firmware so each bulb could look up its value in the right place. For the identity we can use the 32-bit chip ID in the ESP8266. In general it's not guaranteed to be unique, but it was for the ESP8266s we were using. I figured out where each bulb was by asking each one to flash over the HTTP API and then asking for its ID once I had found it. You can see the resulting map in the firmware here.
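The receiving side, sketched in Python for clarity (the real code is ESP8266 firmware in the repos linked below; the chip IDs and port here are made up):

```python
import socket
import struct

# Map from ESP8266 chip ID to the bulb's slot in the packet, built by
# flashing each bulb over the old HTTP API and noting its position.
# (IDs here are invented for illustration.)
ID_TO_INDEX = {0x00A1B2C3: 0, 0x00D4E5F6: 1}  # ...plus 178 more entries

MY_CHIP_ID = 0x00A1B2C3   # read from the chip at boot in the real firmware
my_index = ID_TO_INDEX[MY_CHIP_ID]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 7777))     # same port the server broadcasts to

while True:
    packet, _ = sock.recvfrom(1500)
    # Each bulb reads only its own 2-byte slot out of the shared frame.
    (level,) = struct.unpack_from("<H", packet, offset=2 * my_index)
    # set_pwm_duty(level)  # hypothetical: 0-1023 -> LED PWM on real hardware
```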
At that point the control problem was basically solved. We were able to update the array about 100 times per second. The control server was implemented in Processing so we could do the effects there. I also set it up to be controlled from TouchDesigner, just for fun.
You can take a look at the implementation code in these git repos:
Towards Babel
This basic approach, where every device in an array receives the same UDP broadcast packet containing an update for the entire array, is simple, flexible, and powerful. As long as you don't need the units to talk back and can fit everything you want to send within one or a few MTUs, it will let you control many devices at a high rate over WiFi while remaining somewhat robust to airwave congestion.
I made many improvements and extensions for Babel in Reverse but the installation uses that same approach. I’ll elaborate on those details in a future post on the firmware.
-
It's Up!
01/23/2023 at 23:58 • 0 comments
(Photo by Hassan Mokaddam)
“Babel in Reverse” is an art installation under the Manhattan Bridge in the Dumbo neighborhood of Brooklyn. It features 178 hanging lamp-like custom devices that illuminate the archway space and play audio consisting of interviews, songs, lessons, and other types of recordings of people speaking some of the more than 700 languages spoken around New York City. Each unit in the array is assigned a language. The installation cycles through several dynamic behaviors highlighting different sets of units over time. For example, playing only units with recordings of music or playing a section of units that moves like a wave from one end of the arch to the other.
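As a rough illustration of what one such behavior might look like in code, here is my sketch of a sweeping wave mask (a guess at the idea, not the installation's actual effect engine):

```python
UNITS = 178

def wave_levels(t, speed=0.15, width=0.15):
    """Brightness mask (0..1) that sweeps a soft band across the array
    from one end to the other, restarting when it runs off the end."""
    center = (t * speed) % 1.0               # normalized wave position
    levels = []
    for i in range(UNITS):
        pos = i / (UNITS - 1)                # unit's normalized position
        levels.append(max(0.0, 1 - ((pos - center) / width) ** 2))
    return levels

# Units near the wave's center play at full level; the rest stay quiet.
print(max(wave_levels(t=2.0)))
```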
My collaborator, Joseph Morris, and I partnered with the Endangered Language Alliance (ELA) for the project. They have been working since 2010 to document Indigenous, minority, and endangered languages around New York City and beyond. Without audio from their extensive collection of interviews (most are on their YouTube channel, but also take a look at their language map) it would not have been possible to create our installation.
It took us more than two years to realize the project. Pandemic supply chain issues and work disruptions significantly impacted us, greatly complicating fabrication of our custom hardware and generally messing up our timeline.
In this blog my aim is to retroactively document the work that went into making this art installation a reality and how we overcame the challenges along the way, and hopefully to inspire others to take on creating their own public art. I'll cover the technical bits and the logistics, and expand on some of the side quests I went on while trying to get this built within our budget and on time. Many mistakes were made along the way. I'll dig into some of those and distill the lessons I'm taking away for my next projects.
The installation is up right now and will be until about May 2023. If you happen to live nearby feel free to stop by for a visit! The address is 1 Anchorage Place, Brooklyn, NY.