Out of Auto-Tune

Over at Pitchfork, Simon Reynolds charts the rise of Auto-Tune (and its much more diabolical twin, Melodyne):

Some speculate that it features in 99 percent of today’s pop music. Available as stand-alone hardware but more commonly used as a plug-in for digital audio workstations, Auto-Tune turned out—like so many new pieces of music technology—to have unexpected capacities. In addition to selecting the key of the performance, the user must also set the ‘retune’ speed, which governs the slowness or fastness with which a note identified as off-key gets pushed towards the correct pitch. Singers slide between notes, so for a natural feel—what [makers Antares Audio Technologies] assumed producers would always be seeking—there needed to be a gradual (we’re talking milliseconds here) transition. …
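The "retune speed" idea Reynolds describes can be sketched in a few lines of code. This is a toy illustration only, not Antares' actual algorithm — the function names and parameters here are my own assumptions for the sake of the example: the detected pitch is mapped to the nearest equal-tempered semitone, then nudged toward it a little on each (millisecond-scale) time step.

```python
import math

def nearest_semitone(freq_hz):
    # Map a frequency to the closest equal-tempered pitch (A4 = 440 Hz).
    n = round(12 * math.log2(freq_hz / 440.0))
    return 440.0 * 2 ** (n / 12)

def retune(freq_hz, target_hz, retune_ms, dt_ms=1.0):
    # Slide the detected pitch toward the target. retune_ms controls how
    # gradual the correction is: 0 snaps instantly (the robotic effect),
    # larger values give the natural glide Antares assumed producers wanted.
    if retune_ms <= 0:
        return target_hz
    alpha = min(1.0, dt_ms / retune_ms)
    return freq_hz + alpha * (target_hz - freq_hz)

# A slightly flat A4 (435 Hz) gets pulled toward 440 Hz over ~20 ms:
pitch = 435.0
target = nearest_semitone(pitch)
for _ in range(20):
    pitch = retune(pitch, target, retune_ms=10.0)
```

With `retune_ms=0` the note jumps straight to 440 Hz, which is the audible, deliberately robotic setting; with a longer retune time the correction is spread across many small steps and the slide stays inaudible.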

Chances are that any vocal you hear on the radio today is a complex artifact that’s been subjected to an overlapping array of processes. Think of it as similar to the hair on a pop star’s head, which has probably been dyed, then cut and layered, then plastered with grooming products, and possibly had extensions woven into it. The result might have a natural feel to it, even a stylized disorder, but it is an intensely cultivated and sculpted assemblage. The same goes for the singing we hear on records. But because at some deep level we still respond to the voice in terms of intimacy and honesty—as an outpouring of the naked self—we don’t really like to think of it as being doctored and denatured as a neon green wig.

Reynolds provides a lot of info on how voice-editing tools like Auto-Tune actually work–and it’s sort of terrifying. Take “comping,” for example:

Comping started back in the analog era, with producers painstakingly stitching the best lines of singing from multiple renditions into a superior final performance that never actually occurred as a single event. But Melodyne can take the expressive qualities of one take (or fraction thereof) by mapping its characteristics and pasting those attributes into an alternative take that is preferable for other reasons. As the Celemony tutorial puts it, the newly created blob ‘inherits the intonation’ of the first but also the timing of the second take. And that’s just one example of Melodyne superpowers: It can also work with polyphonic material, shifting a note within, say, a guitar chord, and it can change the timbre and harmonics of a voice to the point of altering its apparent gender.

Auto-Tune may have gotten its big break with Cher’s 1998 “Believe,” but, as Reynolds points out, its greatest leap into the limelight came with the arrival of “the T-Pain effect” in 2005: “If Lil Wayne and Kanye West had reacted like Jay-Z and spurned the effect rather than embraced it as a creative tool, it’s unlikely that Antares would be catering to the appetite for vocal distortion and estrangement.” These early experiments with Auto-Tune were almost always done after the fact, in the editing booth. But then came rappers like Future, Chief Keef and Migos, who use Auto-Tune from the moment the producer hits “Record.” “[They] are almost literally cyborgs,” says Reynolds, “inseparable from the vocal prosthetics that serve as their bionic superpowers.”

And trap music isn’t the only genre to heavily rely on the technology:

Rihanna is the dominant singer of our era, in no small part because the Barbados grain of her voice interacts well with Auto-Tune’s nasal tinge, making for a sort of fire-and-ice combination. Voice effects have been prominent in many of her biggest hits, from the ‘eh-eh-eh-eh-eh’ pitch descents in ‘Umbrella’ to the melodious twinkle-chime of the chorus in ‘Diamonds.’ Then there’s Katy Perry, whose voice is so lacking in textural width that Auto-Tune turns it into a stiletto of stridency that—on songs like ‘Firework’ and ‘Part of Me’—seems to pierce deep into the listener’s ear canal.

Reynolds also touches on the backlash against voice-sculpting tools like Auto-Tune, a backlash I'm generally part of. I've never been a fan of Auto-Tune, though I tolerate it in rap because the sound of a rapper's voice isn't really that crucial to a track–though the sound of a rapper's voice sometimes has a lot to do with his or her popularity, and it at least adds to the overall sound of the music he or she puts out (imagine the bars on Illmatic rapped by someone other than Nas).

But singers are a different story. Call me old-fashioned but, when it comes to singers, the integrity of the natural voice is what matters most, because the unique abilities of his or her voice are what a singer presents to the world. We criticize models who look nothing like the person in their edited photographs, especially if we hear they've had a ton of surgical work done on their faces and bodies, because we expect models to be natural beauties. We feel betrayed when star athletes test positive for performance-enhancing drugs because we expect athletes to be exemplars of what the human body can do given enough time, dedication, and natural talent. The same goes for singers.

Good singers, the best singers, can sing a song without any accompanying music whatsoever and still have us listening. There’s something about the raw timbre and feel of a singer’s voice that seems to reverberate from within the listener’s own chest, and which, at least for me, usually makes live or stripped-down albums much more pleasing to the ear than the highly produced versions. It’s the art and the craft behind the performance-enhancing technology that we prize most. Remarking on how beautiful a singer’s voice sounds after it’s been Auto-Tuned is like being amazed by how tall a man is when he’s standing on stilts.

Still, Reynolds makes a good point in showing how voice alteration has been around since the early days of recorded music (even since the invention of the microphone, which I think is a bit much): Elvis and especially the Beatles were constantly experimenting with the sound of their voices. Yet, and maybe I’m just overhyping artists of the past here (though I don’t think I am), I tend to believe that Otis Redding, Sam Cooke, Frank Sinatra and Aretha Franklin sounded much better live and in person, and that even the vinyl recordings don’t do their voices justice. Singing, after all, is a physical interaction between the singer and the audience–the singer’s vocal cords vibrate, sending waves of vibrating molecules through the air, which then reach the listener’s eardrums, which pick up the vibrations. Good singing is all about good vibrations, and while a recording can imitate those vibrations, it isn’t the same as the real thing.

A few years back, when we were still living in Chicago, my wife took my stepdaughter to a Selena Gomez concert at the Allstate Arena. They were both big fans and I expected them to have a good time, but when they came back all they talked about was how crappy little Selena sounded live. “You could barely hear her!” my wife complained. “And when you could hear her singing, she sucked!” (Not an exact quote, but something like that.) Since moving to Vegas we’ve seen Britney Spears and Jennifer Lopez perform at Planet Hollywood. Britney’s show was much more engaging and involved a lot more production than J-Lo’s, though J-Lo’s choreography was way more impressive than the basic pom-pom moves Britney kept repeating over and over again. I barely remember how their singing sounded–amateurish. The Shakira concert we just went to didn’t have nearly the production value or choreography that either the Britney or J-Lo shows had, but Shakira’s show was way better than both because Shakira can actually sing. (And Santana’s residency show at the House of Blues tops them all.)

I guess it all comes down to what kind of listener you are, though. Maybe all you care about is the finished product, what the performer and the producer can create in the booth and send to the radio stations. If that’s true, then producers are as important as, and many times even more important than, the singers’ actual talents and skills, because the producer can take whatever the singer does in the studio and, like a lump of clay, shape it into whatever he or she wants.

I’m not about that. I always want to see what performers can do all on their own, away from all the electronics. I still expect singers to be able to sing. But again, maybe I’m just old-school.


Featured image: Pop superstar Rihanna during her Last Girl on Earth Tour in 2011 (Eva Rinaldi/Flickr)

Hector is the editor and publisher of Enclave. A Chicago writer now floating on the edge of Las Vegas, he is also the former deputy editor for Latino Rebels, as well as the former managing editor for Gozamos, a Latino art-activism site based in his home town. He has contributed to RedEye, a Chicago daily geared toward millennials, and La Respuesta, a New York-based site for the Puerto Rican Diaspora, plus a number of publications, including The Huffington Post. He studied history (for some reason) at the University of Illinois-Chicago, where his focus was on ethnic relations in the United States.
