World Monitor magazine, March 1991:

        
         Travis Charbeneau  3421 Hanover Ave., Richmond, VA 23221
            travischarbeneau@gmail.com    Phone: 804 358 0417
                         www.travischarbeneau.com

                           Is Anything for Real?
                            Travis Charbeneau
                               slug "mef"
                               2417 words
     A few years ago "Saturday Night Live" did a takeoff in which Elvis'
gold lamé suit toured on a coat hanger "playing" to packed houses long
after his death. Today, following the Milli Vanilli scandal, in which two
Grammy winners were caught not only miming to their records in ostensibly
"live" concerts but never having sung on the original recordings at all,
this doesn't seem so far-fetched. And technological
"intrusions" are hardly restricted to modern pop.
     George Gershwin recorded a number of exquisite piano rolls early in his
career which have been laboriously transcribed for "MIDI," "the Musical
Instrument Digital Interface," a sort of player piano for computers. The
results are extraordinary.
     But, can art be automated?
     _Should_ art be automated? As futurists have long observed, with most
technological innovation, the "should" only gets asked long after the "can"
has been answered in the positive, and this has certainly been the case in
the world of music.
     Today, when you purchase an audio recording, attend a concert, a
Broadway show, even an opera or ballet, do you know how much of the music
is automated by sophisticated synthesis and computers? Do you care? Should
you? Or is "the play the thing," whatever the machinations going on in the
studio or backstage?
     Even before Milli Vanilli, the pop music world had been shaken by a
wide variety of artists singing along, or "lip-synching" to their studio
recordings during "live" shows. Older audiences and critics seethe at the
notion of forking over $50 for what amounts to a celebrity pantomime of a
$10 recording. Younger audiences, used to MTV's lip-synched videos and
arena-sized "concerts" where spectacle has long reigned supreme over music,
seem less indignant. From the bleachers you can't tell if the singer's lips
are moving at all -- in synch or out.
                    FROM LIP-SYNCH TO ROBOTICS
     The lip-synch flap is merely the most recent and visible aspect of an
old and ornery squabble over technology's role in music. In recent years,
even many non-electric, "acoustic" artists have at least part of their live
act "sequenced." Sequencing merely employs a computer driving synthesizers
and digital samplers (on which more later) for, say, incidental Latin
percussion fills, a cowbell clank here, a maraca shake there. It sure beats
hiring a full-time, touring (and unionized) human percussionist.
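     For the technically curious, a sequencer is no mystery. Stripped to
its essentials, it steps through a stored pattern and fires timed note
messages at a synthesizer or sampler. The sketch below, in Python, is my
own illustration of the idea rather than any product's actual design; the
tempo, the pattern and the General MIDI percussion note numbers (56 for
cowbell, 70 for maracas) are assumptions chosen for the example.

     import time

     TEMPO_BPM = 104
     SECONDS_PER_BEAT = 60.0 / TEMPO_BPM

     # One bar of 4/4: (beat, General MIDI percussion note, velocity).
     PATTERN = [
         (0.0, 56, 100),   # cowbell clank on the downbeat
         (1.5, 70, 80),    # maraca shake off the beat
         (2.0, 56, 100),
         (3.5, 70, 80),
     ]

     def note_on(note, velocity, channel=9):
         # Raw MIDI note-on bytes: status byte, note number, velocity.
         return bytes([0x90 | channel, note, velocity])

     def play_bar(send):
         # Fire each event at its scheduled moment; "send" stands in for
         # whatever delivers the three bytes to a synth or sampler.
         start = time.monotonic()
         for beat, note, velocity in PATTERN:
             due = start + beat * SECONDS_PER_BEAT
             time.sleep(max(0.0, due - time.monotonic()))
             send(note_on(note, velocity))

     play_bar(lambda message: print(message.hex()))

     Loop that bar all night and you have your tireless, non-union
percussionist.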
     No big deal so far, but it's only a small step from this to having the
computer play the string section for occasional "sweetening," instead of
hiring a group of (similarly unionized) humans. "Techno-pop" brings perhaps
a majority of the music under direct control of the computer. Finally, in
the ragingly popular realm of rap music, it's almost Japanese karaoke
time: an out-and-out sing-along with pre-recorded music, much of that
itself _re-recorded_ (some would say "stolen") from old records!
Lip-synching merely closes the circle.
     And these examples apply just to live concerts. In the recording
studio, the computer has long been king.
     Is this something to be tolerated, not just legally but aesthetically?
There's a stiff whiff of fraud in a lip-synched "concert." But what about
that sequenced Latin percussion? What about studio recordings made by one
human and a computer that sound like a full band? How far can technology
intrude into music and still make "music"?
     Perhaps we first really crossed swords on the matter in the 12th
century when the earliest cathedrals invited huge pipe organs into the
sanctum. These wonderful machines are quite literally synthesizers, taking
a variety of what are called "sine waves," the basic elements of all
pitched sound, and mixing them together to achieve a variety of tones, from
a thunderous, bone-shaking roar to a bell-like, flutish toodle. Today's
aesthetic controversies regarding technology and music no doubt derive some
of their near-theological ferocity from the medieval argument:
     "It is a sacrilege for anything but the human voice (male!) to sound in
church!"
     "Ah, but listen to my artificial monstrosity ..."
     It took nearly 200 years for the organ to get from monastery to
cathedral, but the devout were seduced. The hi-tech crowd won.
     And set the pattern. When Sébastien Érard introduced the first modern
pianoforte in 1780, purists howled. Today, we
see the "classic" concert grand as a wonderful, _natural_ acoustic
instrument. But well into the 19th century the cumbersome beast was widely
derided as tinny, dissonant and, worst of all, "mechanical." "Electrical"
was not yet a relevant pejorative, but it soon would be.
                       ENTER CHARLIE CHRISTIAN
     Long after the piano was happily integrated into the music community, a
man named Charlie Christian came from Oklahoma at the behest of jazz
impresario John Hammond to "sit in" with a very reluctant Benny Goodman.
This was the late '30s and Mr. Christian had one of those newfangled
electrified guitars. Mr. Goodman thought it "a sacrilege [!] to electrify a
guitar." He was hardly alone in his opinion, and the opening notes were not
auspicious.
     By the time the chorus came 'round, however, things were looking up. A
full _16_ choruses later, Benny and Charlie had to forcibly refrain from
their mutual magic if they were ever going to finish the tune. The music
world was seduced. Electrified guitarists could play single note lead
lines, right along with the sax, horn -- or clarinet -- and be heard above
the din of the orchestra. Heretofore, they had been restricted to quietly
"comping" chords in the background. It was an incredible breakthrough for
artistic expression, courtesy of the vacuum tube. The technology of the
electrified guitar -- an acoustic guitar with a microphone on it -- was
accepted, just like the piano.
     But then came the _electric_ guitar. Merely sticking a microphone
onto a hollow-body, acoustic guitar invites ear-piercing feedback. It's
just the nature of acoustics. In 1941-2, innovator Les Paul was the first
to stick a microphone on what he unapologetically called "The Log," a solid
slab of wood (with curvy cutouts stuck on the sides to make it at least
_look_ like a guitar). The "solid body" electric guitar was born. It
resisted feedback even if turned up quite loud.
                            READY TO ROCK
     Which is exactly what happened next. When rock and roll replaced big
band music in the '50s, "the older generation" had more to object to than
that overtly Africanized beat: those awful electric guitars! "Do you have
to play that loud!?" Well, no, as it turned out.
     By the '60s, thanks to the solid body electric Les Paul had invented
for jazz, heavy rock and rollers found they could play even _louder_.
There is a perhaps apocryphal but telling anecdote concerning amplifier
maker Jim Marshall's first encounter in the late '60s with guitarist Eric
Clapton of the super rock group "Cream." Mr. Marshall had designed a new
line of powerful, efficient amplifiers with Chet Atkins and other low-key
musical stylists in mind. Inquiring as to what settings on the
exquisitely-calibrated new Marshall Amp the British rock guitar hero might
be finding best for any given occasion, the engineer was dismayed to hear
Clapton say he just twisted all the knobs to "10" and left 'em there.
     The resulting sound was _loud_. So loud it could actually produce
ear-piercing (if expertly controlled) feedback even with the solid body Les
Paul guitar. But, worst of all, it was distorted! Vacuum tubes driven to
the point of "clipping," or distortion, populated the nightmares of every
electronics expert. But Clapton, who had started out as a traditional blues
artist with more of a B. B. King, "clean" guitar sound, was playing larger
and larger clubs and halls. He, too, wanted to be heard. Cream, after all,
was only a trio: guitar, bass and drums. Along with other (mostly British)
guitarists of the period, he found that by turning both his Les Paul and
his Marshall (soon to be "a wall of Marshalls") to "10," he not only rocked
the hall, but the resulting distortion enabled him to sustain, or hold, a
note -- again, just like a sax or horn player -- a note that could be bent,
modulated like a violin, or chorded and held as long as an organ. The "Les
Paul/Marshall on Ten" combination also produced sounds never before heard
on this planet. "Everything On Ten" was, in fact, an entirely new timbre,
or instrumental tone color, to be added to the world's musical palette,
just like the piano and "electrified" guitar. And it was due again to
technological intrusion, this time a "perverse" intrusion at that.
                           SEGOVIA SAID NO
     Echoing the feelings of the music establishment, classical guitar
virtuoso Andrés Segovia was disgusted. "Sacrilege" would have
conveyed insufficient scorn: to his dying day The Master would not even
allow himself to be miked to a standard PA system, no matter how large the
hall. It was just he, his classical guitar and many in the audience
straining to hear.
     But by the late '60s new technologies were helping artists create new
timbres every day. Jimi Hendrix took electric guitar to heights still
universally acknowledged as unequalled.
     Then, in 1970, Robert Moog marketed the Minimoog, the first
commercially viable performance synthesizer, and the dam truly burst. It
could crank out a new color with the mere crank of a knob.
     In 1983 Yamaha introduced its famous DX7 synthesizer, which not only
produced sounds from Mars but proved very adept at simulating the Hammond
organ and the Fender Rhodes piano, as well as synthesized but fairly
convincing varieties of brass, strings, reeds and other acoustic
instruments.
     Within another couple of years came digital samplers, which played
actual digital recordings of these instruments and proved more than fairly
convincing: drums, human choirs and, yes, to the horror of purists,
the concert grand piano. A digital sampler is like a CD player with a
keyboard attached. Press a key and the sampler will play back a CD-quality
recording of the real instrument playing that note.
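     In rough outline, and only in outline, the machinery behind that idea
looks like the Python sketch below. The note-to-recording table and the
file names are placeholders invented for illustration; real samplers add
refinements such as looping and shifting pitch to cover unrecorded notes,
but the core is just this playback.

     import wave

     # Each key number points to a pre-recorded file of a real piano
     # playing that note.  The file names are hypothetical.
     SAMPLE_FILES = {
         60: "piano_C4.wav",   # middle C
         64: "piano_E4.wav",
         67: "piano_G4.wav",
     }

     def load_samples(files=SAMPLE_FILES):
         # Read every recording into memory, keyed by note number.
         samples = {}
         for note, path in files.items():
             with wave.open(path, "rb") as wav:
                 samples[note] = wav.readframes(wav.getnframes())
         return samples

     def key_pressed(note, samples):
         # On a key press, hand the stored recording to the audio
         # hardware; here we only report what would be played.
         frames = samples.get(note)
         if frames is None:
             print("no recording stored for note", note)
         else:
             print("note", note, "plays", len(frames), "bytes, verbatim")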
                          THE BLINDFOLD TEST
     Today, if blindfolded, the average listener, and many a professional,
cannot tell the difference between a digital grand piano and a _miked_
acoustic playing in the same hall. On record, there is virtually no
instrument, possibly excepting the acoustic guitar, which cannot be
convincingly simulated -- especially now that the computer has entered the
loop, providing microscopic, surgically-precise control of every aspect of
sound creation, composition and performance.
     It's a long way from the 400-pipe organ of Winchester Monastery (980) to
the computer-driven digital piano, but "parts is parts," and only the
"parts" distinguish these technologies. If the bottom line is music, if
indeed "the play's the thing," if the fans are happy, why all the fuss?
Even regarding lip-synch, we should recall that in the '50s Les Paul had
Mary Ford sing along with herself on tape for harmony parts,
surreptitiously -- in concert. Audiences were delighted. To his credit, Mr.
Paul teased early audiences by challenging, "I'll give ten bucks to anyone
who can guess how Mary did this." He was eventually caught out by a little
girl who simply asked, "Where's the other lady?" That was good enough for
Les.
     Obviously, in the case of "live" concerts that aren't, some "truth in
labeling" might be appropriate. But new music technologies raise far
broader issues. Not least among them is the displacement of traditional
musicians, striking not only the soul, but the pocketbook. Dedicated,
talented people who have trained all their lives to master difficult
instruments increasingly find "one human and a computer" taking over the
world of film, television and advertising, dominating both rehearsal and
recording, elbowing them out of the Broadway orchestra pit and even off the
concert stage.
                     IF ROBOTS DO IT, IS IT ART?
     This raises unprecedented cultural questions. "Robots" were never
supposed to be able to make "art," remember?  This was the forever safe
domain of real, live humans. In addition to everything else I've described,
today you can purchase algorithmic compositional software which not only
orchestrates and produces sounds, but literally composes tunes, reducing
even the _composer_ to the status of "editor": keep the good stuff, delete
the junk. And yet: Bach and Mozart experimented, no one knows exactly how
extensively, with then-available musical algorithms and mathematical
equations which generated melodies, counterpoint, chords and retrograde
and/or inverted successions of notes. "Parts is parts?" Is it merely a
matter of degree?
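     To make the "composer as editor" concrete, here is a toy of my own
devising in Python, not a description of any commercial package: a rule
walks up and down a scale proposing phrases, and the human merely keeps
the ones that suit the editor's taste.

     import random

     C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes, C4 to C5

     def propose_phrase(length=8, seed=None):
         # Compose by stepwise random walk through the scale.
         rng = random.Random(seed)
         index = rng.randrange(len(C_MAJOR))
         phrase = []
         for _ in range(length):
             step = rng.choice([-1, 0, 1])
             index = max(0, min(len(C_MAJOR) - 1, index + step))
             phrase.append(C_MAJOR[index])
         return phrase

     def compose(candidates=12, keep=lambda phrase: phrase[-1] == 60):
         # The editor's "taste," reduced to a stand-in rule: keep only
         # phrases that come to rest on C.
         proposals = [propose_phrase(seed=i) for i in range(candidates)]
         return [phrase for phrase in proposals if keep(phrase)]

     for phrase in compose():
         print(phrase)

     Replace the random walk with something subtler and the editor's chair
starts to feel crowded.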
     Music technology, like technology generally, is now advancing at such a
furious pace that we are increasingly confronted with philosophical issues.
It's no accident that Benny Goodman chose the word "sacrilege." Where do we
draw the line? Can we? _Should_ we?
     This is, of course, where we came in, focusing on the discomforting
aspects of technological change, especially discomforting when it impacts
something as emotional as music, which Plato called "the language of the
soul."
     But, for all the discomfort of adapting appropriately to the new
technology, we might take comfort in its long-term, empowering aspects.
                           A NEW FOLK MUSIC
     I played guitar for 20 years before losing the required dexterity of my
hands. I am now an electronic composer; "a human and a computer." In the
most vital respects, plus many I simply never imagined, music has been
given back to me.
     And what of the millions who love music, who hear new music every day
in their heads, but simply lack the physical skills, time or money to
realize their creations?
     Today, an inexpensive computer, the right music sequencing software and
a few black boxes are midwifing what I have called the birth of a "new folk
music," enabling common "folks," without access to music attorneys or big
record companies, to explore their talents.
     And what about children, who seem to take to computers as though born
for it?  What will succeeding generations do with the ancient instinct to
sing, given _these_ vocal chords?
     The technology of fire was a two-edged sword. (So was the technology of
the two-edged sword, now that I think about it.) The joys of the computer
music workstation and tribulations like the "is it live or is it
lip-synch?" concert performance, likewise cut two ways.
     Still, the human has always been the _real_ ghost in every machine, and
our _real_ job is to keep it that way. As technology continues to advance
and challenge in every department of human endeavor, awareness is the only
lasting tool we can develop to maintain humanity; wisdom the only
continuing craft that can save us from genius.