If people have ghosts, sound has reverb – the lingering, echoic presence of a sonic event that has occurred and passed on. Clapping your hands in a large, empty, hard-surfaced space readily produces the bell-like footprint of the transient as its energy fades.
Life without reverb would be annoying, at the very least. Reverberation adds ambiance and context to sound; a minute or two in an anechoic chamber, used to measure the performance of speakers and microphones in a reflection-free environment, produces a pronounced feeling of imbalance. The aural cues that we unconsciously use to navigate the physical plane lose a dimension that most of the time we don’t even realize we have.
On the other hand, too much reverb and you'll never know what you missed. Reverb is the enemy of intelligibility, cloaking spoken information in a fog of cascading first, second and third reflections that ultimately overwhelm the source. Trying to control ambient sound has taken on new importance in recent years, part of a larger trend toward improved audio that's become more pervasive in personal and professional lives, from cinemas to pulpits to airports. Yet it sometimes appears that the technology of sound reproduction has moved forward faster than the ability to control the environments into which that sound is poured.
We’ve been trying long enough: In the late 19th century, physicist Wallace Clement Sabine, widely credited with founding the field of architectural acoustics, conducted experiments at Harvard University to investigate the impact of absorption on reverberated sound. Using a portable wind chest (which stores the air from a bellows) and organ pipes as a sound source, a stopwatch and his ears, he measured the time from the moment the source stopped to the point of inaudibility, a decay of approximately 60 dB on the logarithmic decibel scale. (Sabine’s methodology would establish a reverb-time measurement standard going forward: RT60 is the time required for reflections of a direct sound to decay to 60 dB below the level of the direct sound.)
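As a back-of-the-envelope illustration (not from the article), the 60 dB threshold corresponds to the sound pressure falling to one-thousandth of its direct level, since decibels relate to pressure ratios logarithmically:

```python
import math

def pressure_ratio_to_db(ratio):
    """Decibel change for a given sound-pressure ratio (20 * log10)."""
    return 20 * math.log10(ratio)

# A drop to 1/1000 of the direct sound's pressure is -60 dB,
# Sabine's practical threshold of inaudibility.
print(pressure_ratio_to_db(1 / 1000))  # -60.0
```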
His basic findings – that reverberation time is proportional to the volume of a room and inversely proportional to the amount of absorption present – formed the fundamental building blocks of the science of acoustics. Those building blocks led to various measurement models, such as impulse (transient) measurements and random-noise (white or pink) generation, commonly known as the interrupted response, and ultimately to software-based solutions, like the EASE acoustical modeling program and the SMAART acoustical analysis program, that let us not only measure reverberation but also predict the behavior of sound in hypothetical spaces.
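Sabine's proportionality is usually written as the Sabine equation, RT60 = 0.161 V / A in metric units, where V is room volume and A is the total absorption (surface area times absorption coefficient, summed over all surfaces). A minimal sketch, with hypothetical room dimensions and textbook-style absorption coefficients chosen for illustration:

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time via the Sabine equation (metric form).

    surfaces: list of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical hard-surfaced atrium: 5,000 m^3 of stone and glass.
hard_room = [(1500, 0.02),   # stone: nearly totally reflective
             (500, 0.05)]    # glass
print(round(sabine_rt60(5000, hard_room), 1))   # ~14.6 s: unusable for speech

# Adding absorptive treatment slashes the reverb time.
treated = hard_room + [(300, 0.85)]  # acoustic panels
print(round(sabine_rt60(5000, treated), 1))     # ~2.6 s
```

The two results show the inverse relationship directly: roughly a sixfold increase in absorption cuts the estimated decay time by the same factor, while the room's volume is unchanged.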
Nonetheless, in the real world, these principles are not always applied. Steve Haas, owner of acoustical consulting firm SH Acoustics in Milford, Conn., recalls a Carly Simon concert performed in New York City’s Grand Central Terminal years ago. “It was just awful,” Haas says of the sound. “They didn’t have the same kind of sophisticated line arrays and other equipment then that we do now, but it likely wouldn’t have mattered.” The energy of the sound simply overwhelmed the ability of the space to naturally process it.
One of the ways acousticians deal with this is to create zones for the sources of sound. For the Newseum in Washington, D.C., Haas needed to keep the sound in the building’s large atrium from bleeding out into adjacent spaces. Typical line array PA systems are extremely good at letting users dial in narrow and predictable dispersion patterns and are often used to keep sound within certain boundaries. But even with significant acoustic treatment implemented in the design, the remaining large amounts of glass and stone in the Newseum’s atrium would still produce intelligibility-defeating reflections.