1. In his 2006 dissertation, The Effects of Music Notation Software on Compositional Practices and Outcomes, Chris Watson uses the acronym MNS (music notation software). For all intents and purposes, MNS and MEPS (music engraving and playback software) refer to the same thing (software such as Finale and Sibelius), but I believe that it is crucial to underscore two points. 1) “Notation” versus “Engraving.” These programs are designed principally for the production of scores of Western classical music (with all of the rather specific assumptions and conventions this implies: time being represented from left to right, pitches of lesser frequency being notated lower on the page than those of greater frequency, octaves being divided into 12 equal intervals, meters subdividing in powers of two, and so on). For that reason, the term “engraving,” which I believe evokes this particular apparatus, seems more appropriate than the vaguer term “notation.” “Engraving” accounts well for the struggles faced by composers who wish to break any of these conventions when using these programs to notate their music, struggles they would most certainly not face were they composing with pencil and paper. 2) “Playback.” For many composers, playback is a vital feature of these programs, of equal if not greater importance than the ease of cut-and-paste, transposition, or the ability to print professional-looking scores.
3. Chris Watson, The Effects of Music Notation Software on Compositional Practices and Outcomes (Ph.D. diss., Victoria University of Wellington, 2006). Available online at http://www.chriswatsoncomposer.com/chris_watson_phd.pdf
4. For his study, Watson sent a lengthy questionnaire to all of the composers currently active in New Zealand, and analyzed the responses in a careful, scientific manner. My study is based on interviews with a far smaller sample: two professors and five students. (Watson’s study doesn’t include students.) My conclusions are based on their observations as well as my own experience composing music using Finale.
5. Frank J. Oteri, ed. “How does using music notation software affect your music?” NewMusicBox, August 1, 2002. Available online at http://newmusicbox.com/article.nmbx?id=1810
7. Robert Morris, “How does using music notation software affect your music?,” NewMusicBox, August 1, 2002. Available online at http://newmusicbox.com/page.nmbx?id=40hf04
As one of this paper’s reviewers pointed out, MEPS implicitly shares a bias held by an older generation of academic (/modernist) composers, whereby all the elements of the music in performance are assumed to be fully predictable (i.e. they can be fully imagined by the composer) from the written page. For such composers, any form of computer playback would be at best redundant, at worst inaccurate and misleading. The severe limitations inherent to early playback (most notably the poor rendition of timbre and dynamics) added fuel to this negative opinion.
8. That is not to say, however, that these younger composers only use one computer program or have discarded pencil and paper altogether. Many young composers still compose music by hand, and when their music does not lend itself to being easily copied into Finale (and when Finale would not be able to play it back properly anyway), they see little advantage in spending time doing so. My interviewees also use a variety of computer programs besides MEPS, including recording and sequencing software such as Digital Performer, or software such as MAX.
10. Regarding the ability to hear scores in one’s head, one of the comments I received seems very pertinent and deserves mention here: “I know of a few composers who think that composing at the piano or computer is bad because we should ‘hear it in our heads’ but they’re few and far between. It seems to me that most of the composers who say these types of things tend to write either tonal, largely homophonic music which is relatively easy to imagine without any other tools on hand, or very conceptual music. Since I don’t like being limited to those options, I tend not to listen to their advice.”
11. For instance, at this point, Finale/Sibelius simply cannot produce Braille musical notation, and cannot play back vocal or improvisatory scores. These are, for all intents and purposes, unachievable using these programs.
13. Composers who only use pencil and paper need to leave a lot of space on their scores to allow for potential insertions. If that space is used up, they need to literally cut up their score, paste in flaps of staff paper, or simply copy out a brand-new page.
14. Chris Watson’s research indicates, however, that this hasn’t enabled composers to be more prolific. Instead, they spend more time focusing on other aspects, such as weighing competing options using the playback function. (Watson, 69 and 116.)
15. There are other MIDI instruments in existence, such as the MIDI guitar or MIDI violin. These are comparatively much rarer than MIDI keyboards, but they could conceivably be used to input music into Finale, and similar issues would apply (e.g., guitar-like chord spacings, violin-like gestures). Finale also has the ability to “listen” to a non-MIDI, single-line instrument (e.g. a saxophone, but not a guitar) as it plays into the computer’s microphone. Finale attempts to determine pitch and duration, and notates this information on a staff. (Again, the same set of issues could arise here.)
17. Just as journalists use portrait monitors (instead of the far more common landscape monitors), nothing technically prevents composers from purchasing display systems that correspond to their needs. Large screens are expensive, however, and composers may not experience the inconvenience to such a degree that they would decide to procure one for themselves.
18. Say, for instance, the composer is working on a highly polyphonic piece for large orchestra (really not such an exotic proposition), and wants the first flute part to double that of the violins. In order to cut-and-paste the string music, the composer needs to travel down to the part of the score where the string parts are notated, which involves scrolling through all the woodwinds (12 staff lines), all the brass (11 lines), and the percussion and harp (7 lines) before finally reaching the violins (30 staff lines in all). If the screen can only legibly accommodate 10 lines, the strings are four screens removed from the flutes. If the composer first zooms out, the score is so crunched up that the string parts are no longer legible. On paper, the experience is nowhere near as cumbersome.
20. If one starts by inputting a quarter note, the computer assumes that the desired rhythmic division is three quarter notes in the space of a half note, instead of three eighth notes in the space of two quarters.
21. Textbooks disagree regarding whether the duplet unit in this case should be the eighth or the quarter. Should the space of a dotted quarter be filled with a duplet containing two quarter notes (as in Figure 2c), or with a duplet containing two eighth notes? The latter appears to have more currency, and Finale is geared toward that solution, making it more difficult to use the quarter-note notation style used here.
22. The full version of the Vienna Symphonic Library (“Vienna Super Package”) is currently listed at 14,980 euros. http://vsl.co.at/en/211/442/484/490/305.htm (accessed March 9, 2009).
23. This is likely to change in years to come. Vocaloid currently markets sound samples that enable a composer to produce quite convincing backup vocals. (These are particularly effective if they remain in the background while an actual human simultaneously sings the same words in the foreground!)
27. The Garritan instrumental patches, unlike earlier patches, do not extend beyond the range of an actual instrument. So although a note that lies outside the range of an instrument can be notated, playback will not emit any sound for that note. This leads to a new potential pitfall: if the composer is not paying attention, he or she may not notice (say, in a complex texture) that certain notes are not sounding. The composer may then print out parts that include these notes, which later results in confusion and lost time at the first rehearsal.
28. These are of course possible, for instance when pianists use their arms to play clusters. Nonetheless, for a beginner who is not familiar with the piano (or is simply not thinking about how the notation translates into a performance), the playback’s willingness to perform may be misleading.
29. One last example (one that qualifies more as an oversight by the programmers and will no doubt soon be remedied): trills can involve the principal note and either the note a half step or a whole step above it. Finale allows a composer to specify this using the appropriate trill-with-accidental notation. The playback, however, doesn’t take this into account, and alternates between the principal note and the one with the next letter name, using whatever accidentals (or lack thereof) happen to be in the key signature.
30. Robert Morris wryly remarks that early versions of the program were best suited to notating hymns and lead sheets. Chris Watson claims that the problem persists, “as [Sibelius’s] core clients [are] primarily band leaders in the United States.” (Watson, 66) This emphasis on marketing the program to music educators is manifest in the features (“Educator Tools” and “Exercise Wizards”) that are advertised with each new version, features that composers will not even install on their computer.
31. Joseph Pehrson argues otherwise, describing how he devised a special font to get Finale to notate his particular microtonal system, then was able to program each symbol to play back properly using pitch bend. I would argue that, like Kathryn Alexander’s, this is a very work-intensive approach that even a seasoned user of Finale might be loath to implement.
32. While computer engineers such as USC’s Elaine Chew are currently at work on programs that can detect pitch center, many pieces do not actually lend themselves to a cut-and-dried analysis of this sort.
33. Anders Friberg and Giovanni Umberto Battel, “Structural Communication,” in The Science & Psychology of Music Performance, Richard Parncutt and Gary McPherson, eds. (Oxford: Oxford University Press, 2002), 199.
34. Watson, however, points to the reverse mechanism whereby composers feel uncomfortable not constantly updating their software to the most recent version, for fear of being left behind. (Watson, 103)
38. In this case, the only difference between a composer and an engineer is the degree of remoteness. Perhaps Finale’s software engineer and the composer can be compared to an architect (at his desk, away from the construction site) and a mason (laying every stone of the building). The locus of creativity shifts, but a human creator remains at the center of the process. This unemotional approach can be perceived as unsettling and inauthentic. Yet this is a direction actively explored by modernist composers.
39. See Babbitt’s famous essay “The Composer as Specialist.” (Milton Babbitt, “Who Cares if You Listen?” in Contemporary Composers on Contemporary Music, Elliott Schwartz, Barney Childs, and James Fox, eds. (Cambridge, MA: Da Capo Press, 1998), 246.)
40. One must note, however, that the USC composition department requires students to submit a portfolio when they apply for admission to the undergraduate program. They must therefore have had a number of years of experience prior to the end of high school (during which they most likely already encountered warnings about the use of MEPS from their teachers). Students at other schools who may have only started composing in earnest after entering college can therefore expect to receive such reminders throughout their entire undergraduate career!
43. As is frequently pointed out, without even starting to worry about rhythm, there are 12^5 = 248,832 (about a quarter million) possible 5-note melodies to be made from a set of 12 notes. (Granted, this figure includes many melodies that are merely transpositions of other melodies, but the number remains quite large, and grows geometrically as new parameters are factored in.)
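The arithmetic behind this figure can be verified directly; the division by 12 below, which treats transpositionally equivalent melodies as a single melody, is my own illustrative simplification rather than part of Watson’s argument:

```python
# 5-note melodies drawn from the 12 chromatic pitch classes,
# rhythm ignored, repeated notes allowed: 12 choices per note.
melodies = 12 ** 5
print(melodies)  # 248832, roughly a quarter million

# Collapsing each melody with its 11 transpositions into one
# equivalence class divides the count by 12 -- still a large number,
# and adding rhythm or dynamics multiplies it again.
print(melodies // 12)  # 20736
```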
44. Although, indeed, recordings also demonstrate that one generation’s or one population’s favorite music may be beyond the appreciation of the audience in a different time or place. So perhaps the competition is not as great as one might at first fear.
45. Grant-giving organizations and award panels shape a composer’s career by providing financial support and performance opportunities. Universities view the prizes, grants, and performances as testimonies of the quality of the composer’s work, and base part of their hiring decisions on this. It is thus in a composer’s interest to glean as many of these as possible.
Prizes are often decided by panels. The members of a panel base their choices on roughly two factors: (1) the intrinsic aesthetic value of the music and (2) its general appeal to laymen (be they potential concert-goers or philanthropists), which ensures sufficient financial income to sustain the whole enterprise. Panel members tend to be successful composers themselves, meaning that their music was once recognized as worthy by a previous generation of composers. While this presumably ensures a degree of consistency and quality, it also creates a potential for stylistic bias that shrewd applicants might have the savvy to decode. Once this bias is determined, the composer can find a way of incorporating the elements favored by the judges into a product appealing to laymen (lively, loud, and rhythmic often does the trick – it also serves to make one’s music stand out over the long days and nights during which judges must pore over literally hundreds and hundreds of scores, that is, unless everyone else has resorted to the same stratagem).