Algorithmic composition from text
How well can a computer-generated song express emotion?
Jacob Sievers and Rex Wagenius
Degree Project in Computer Science, DD143X
KTH, Computer Science and Communication
Tutor: Robert Bresin
Examiner: Örjan Ekeberg
Abstract
Algorithmic composition has long been a subject of research in computer science. In this report we investigate how well this knowledge can be applied to generating an emotional song from a piece of text, such as a text message, a tweet or a poem. The algorithm described in this paper uses Markov chains to generate a melody from just a sentence and a mood (happy or sad). The song can then be rendered by singing-voice software such as Vocaloid, which is used here. The results show that even a simple algorithm can be used to generate songs that communicate the intended emotion.
Referat
Algorithmic composition has long been a research topic within computer science. In this report we investigate how this knowledge can be applied to generate a song from a piece of text such as a text message, a tweet or a poem. The algorithm described uses Markov chains to generate melodies from just a sentence and an emotion (happy or sad). The song can then be played through voice software such as Vocaloid, which is used here. The results show that a simple algorithm can be used to generate songs that communicate the desired emotion.
Table of Contents
1. Introduction
1.1. Algorithmic composition
1.2. Purpose
2. Objective and scope
3. Background
3.1. Emotion in Music
3.1.1. Minor vs. Major
3.1.2. Melodic factors
3.1.3. Vocal factors
3.2. Speech Synthesis
3.3. Algorithmic composition
3.4. Composition using Markov chains
4. Method
4.1. Software
4.1.1. Syllabification
4.1.2. Generating a melody
4.1.3. The tables
4.1.4. Vocaloid
4.2. Survey
5. Results
6. Discussion
6.1. Method
6.2. Results
7. Conclusion
References
A. Appendix
a. Download link
b. singme.py
c. filecreator.py
1. Introduction
1.1 Algorithmic composition
Algorithmic composition has long been a notable application of computer science[7], and although it has often been more of a “pet project” driven by researchers’ interest in music, great progress has been made. There are many known algorithms that generate music in the style of well-known composers, such as Beethoven[1], but there are few practical applications for them.
1.2 Purpose
In the last several years, communication through text messages on platforms such as Twitter, SMS and chat has exploded in popularity. While these are great means of communication for most people in most situations, they create problems for blind people, and for anyone who needs to take in urgent information while, for example, driving. Voice synthesis solves part of this problem. It works well when only the information itself matters, but it is common knowledge that a great deal of communication happens through how the message is read: if something being read to us loses its emotion, it loses a great deal of its meaning.
Therefore the purpose of this report is to investigate how well a computer-generated song created from a piece of text can express emotion.
2. Objective and scope
The objective is to examine how well a song generated from text can express emotion to human listeners. This report is limited to the generation itself; it does not cover analyzing the text to identify which emotion to express. The factors examined are limited to musical factors, that is, the vocal factors are the same for all generated melodies. The problem statement is: How well can a computer-generated song express emotion?
3. Background
There are many parameters that affect our perception of emotion in music and song. The most well known are major, minor and dissonant chords and scales[2], but factors such as tempo, sound level, variability and tone attack also affect it a great deal[3].
3.1 Emotion in Music
There has been a great deal of research on emotion in music. One of the more well-known ways to communicate emotion is through major and minor chords[2], but there is also a substantial amount of research on vocal cues such as tempo, sound level and sound level variability[3]. This section covers the findings we have decided to use in our research.
3.1.1 Minor vs. Major
Minor chords communicate sadness, while major chords communicate happiness, especially to trained musicians[2]. Dissonant chords also communicate some measure of sadness, and do so to a greater extent for non-musicians[2]. When it comes to happiness, major chords tend to be rated as very happy-sounding. Both major and minor chords are rated as pleasant-sounding, while dissonant chords sound unpleasant, especially to musicians[2].
3.1.2 Melodic factors
Naturally, melody affects the perceived emotion to a great extent. Intervals can elicit emotions: low-pitched intervals and the minor 2nd tend to sound sad or melancholy[4], high-pitched intervals give a sense of happiness, and the perfect 4th, perfect 5th, major 6th and minor 7th give a sense of carefreeness[4]. Melodic direction communicates happiness when ascending and sadness when descending[4]. The general pitch of the melody also affects emotion: a high-pitched melody gives a sense of happiness, while a low-pitched melody gives a sense of sadness or seriousness[4].
3.1.3 Vocal factors
When it comes to tempo and sound level, a fast tempo and high sound level can give a sense of happiness, anger or fear, while a slow tempo and low sound level can give a sense of sadness or tenderness[3]. High variability in sound level gives a sense of anger or fear, while low variability gives a sense of happiness, sadness or tenderness[3]. Table 1 summarizes these cues.
Emotion      Acoustic cues (vocal expression / music performance)

Anger        Fast speech rate/tempo; high voice intensity/sound level; much voice intensity/sound level variability; much high-frequency energy; high F0/pitch level; much F0/pitch variability; rising F0/pitch contour; fast voice onsets/tone attacks; microstructural irregularity

Fear         Fast speech rate/tempo; low voice intensity/sound level (except in panic fear); much voice intensity/sound level variability; little high-frequency energy; high F0/pitch level; little F0/pitch variability; rising F0/pitch contour; a lot of microstructural irregularity

Happiness    Fast speech rate/tempo; medium-high voice intensity/sound level; medium high-frequency energy; high F0/pitch level; much F0/pitch variability; rising F0/pitch contour; fast voice onsets/tone attacks; very little microstructural regularity

Sadness      Slow speech rate/tempo; low voice intensity/sound level; little voice intensity/sound level variability; little high-frequency energy; low F0/pitch level; little F0/pitch variability; falling F0/pitch contour; slow voice onsets/tone attacks; microstructural irregularity

Tenderness   Slow speech rate/tempo; low voice intensity/sound level; little voice intensity/sound level variability; little high-frequency energy; low F0/pitch level; little F0/pitch variability; falling F0/pitch contour; slow voice onsets/tone attacks; microstructural regularity

Table 1: Summary of cross-modal patterns of acoustic cues for discrete emotions (Table 11 from Juslin and Laukka[3])
3.2 Speech Synthesis
The first speech synthesizers, mechanical models of the human vocal tract, were created in 1779 by the Danish scientist Christian Kratzenstein[5]. From then on, the development of speech synthesizers took off, and today they are used in many different services. Microsoft Sam, Apple’s Siri, and various devices that supplement or replace speech for people with speech impairments are some examples.
Speech synthesis can be done in several different ways; concatenative, formant and articulatory synthesis are some of them. Concatenative synthesis, which is what Vocaloid (a program we describe below) uses, can be done in more than one way, but the general technique can briefly be described as storing segments of human speech in a database and later concatenating these segments so that they form words.
In our research we use a singing-voice synthesizer called Vocaloid. When Yamaha started developing Vocaloid in March 2000, and three years later released the software for commercial use, it was the only one of its kind[6]. What makes it special is that the user enters lyrics and a melody, and the song is then generated from the vocal fragments of the chosen voice.
3.3 Algorithmic composition
There are many possible approaches to algorithmic composition[7]. One is to use deterministic models that generate the melody from some set of input parameters, for example cellular automata or Lindenmayer systems[7]; another is to use stochastic models that generate some kind of randomized melody, for example Markov chains. Many modern approaches use some combination of the two, fusing the randomness of stochastic composition with the predictability of deterministic algorithms[7].
3.4 Composition using Markov chains
The Markov chain is an inherently simple mathematical concept, but its usefulness in algorithmic composition is easy to see.
A Markov chain can be described as a memoryless stochastic process: the probability of being in some state depends only on the previous state and on the probability of a transition between the two states[8]. This can be used in algorithmic composition by letting the states represent single notes and the transition probabilities represent how likely it is that a particular note follows another. For example, in C major, the transition from C to D (the next tone in the scale) might be given a high probability, while the transition from C to F# (the tritone interval, which sounds highly dissonant) might be given a low or zero probability.
This approach to composition gives random-sounding, occasionally comical music, but it has many strengths, chief among them the ease with which it can be applied: it is simple to implement and to test different rules for generating a melody.
Figure 1: Example of a Markov chain. The numbers represent the probability of going from one state to another.
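To make this concrete, the following sketch encodes such a transition table directly as probabilities and samples a melody from it. It is a minimal illustration, not the implementation used in this report (which is listed in the appendix); the note names and probabilities are invented for the example.

import random

# Hypothetical first-order transition table for a C major melody: each note
# maps to the notes that may follow it, with probabilities (rows sum to 1).
transitions = {
    'C': [('D', 0.5), ('E', 0.3), ('G', 0.2)],   # C -> D is likely,
    'D': [('C', 0.4), ('E', 0.6)],               # C -> F# never happens
    'E': [('D', 0.5), ('F', 0.3), ('G', 0.2)],
    'F': [('E', 0.7), ('G', 0.3)],
    'G': [('C', 0.6), ('E', 0.4)],
}

def generate_melody(start, length):
    melody = [start]
    for _ in range(length - 1):
        # Weighted random choice of the next note, given only the current one
        r, acc = random.random(), 0.0
        for note, p in transitions[melody[-1]]:
            acc += p
            if r < acc:
                melody.append(note)
                break
        else:
            melody.append(note)   # guard against floating-point rounding
    return melody

print(generate_melody('C', 8))   # e.g. ['C', 'D', 'E', 'G', 'C', 'E', 'D', 'C']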
4. Method
Since the purpose is to explore how well a computer-generated song can express emotion, various texts are used as input to the algorithm to generate new melodies. How a person is affected by the song is not of interest here: subjects were not asked whether a song actually made them feel sad, but how the song was perceived.
The approach used in this report, described briefly, consists of first splitting a sentence into words, splitting the words into syllables, generating a melody with the same number of syllables, converting it to a .vsqx file, and finally generating a sound file with Vocaloid. These files were played to nine test subjects, and their perceived emotions for the different pieces were gathered.
4.1 Software
4.1.1 Syllabification
Syllabification, that is, splitting a given word into syllables, is a hard problem in computer science; no generally accepted algorithm for automatic syllabification exists[9]. Consequently, we have decided to use the PyHyphen[10] library to generate syllables from the words in a sentence. PyHyphen uses open-source hyphenation dictionaries to split words into syllables.
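As a small example of the library call we rely on (assuming PyHyphen and its en_US dictionary are installed; the exact splits depend on the dictionary version):

from hyphen import Hyphenator

hy = Hyphenator('en_US')          # the same dictionary used in the appendix code
print hy.syllables(u'melody')     # e.g. [u'mel', u'o', u'dy']
print hy.syllables(u'happiness')  # e.g. [u'hap', u'pi', u'ness']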
4.1.2 Generating a melody
Several techniques were considered for the algorithmic composition. In the end the decision fell on Markov chains, due to their simplicity and their suitability for generating varied music from a small set of rules. The implementation used can be found in the appendix.
Regarding the specific implementation of the Markov chain, we opted to use nested lists to represent the notes. The list at the index of a specific note is populated with the different notes that can come next, and probabilities are represented by a note occurring a different number of times in the list. To generate the melody, we start from a fixed note, generally the tonic (the first note of the scale). Each subsequent note is generated by randomly selecting a tone from the list at the index of the previous note.
The input to the algorithm is a number of syllables and a letter selecting which table to generate a melody from (in our case a happy table and a sad table). The tempo is set depending on which table is used.
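A simplified sketch of this representation follows. The real tables (see the appendix) span roughly two octaves and are built and pruned programmatically; here the lists are written out by hand, with notes given as semitone offsets from the tonic, purely for illustration. Duplicating an entry makes that transition proportionally more likely, since random.choice draws uniformly from the list.

import random

# table[i] lists the notes that may follow note i (semitones above the tonic).
# A note that appears twice is twice as likely to be chosen.
table = [
    [2, 2, 4],   # from the tonic (0): major 2nd is favoured over major 3rd
    [],          # 1 is not in the C major scale and is never reached
    [0, 4, 5],
    [],
    [2, 5, 7],
    [4, 7],
    [],
    [5, 9, 0],
    [],
    [7, 11],
    [],
    [9, 0],
]

def melody(n_syllables, start=0):
    notes = [start]
    for _ in range(n_syllables - 1):
        notes.append(random.choice(table[notes[-1]]))
    return notes

print(melody(8))   # one note per syllable, e.g. [0, 2, 4, 5, 7, 9, 7, 5]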
4.1.3 The tables
The tables used in the Markov chain are generated according to some simple rules. The happy table has an increased number of possibilities to go upward, with both the major second and the major third added to the table, and a perfect fourth and fifth added in the downward direction, making the jumps down bigger. For the sad table, the minor second and minor third are added along with the perfect fourth.
When the tables have been generated, all notes not belonging to the scale are removed. This has some consequences: in the sad table, some notes are left with only one natural progression. This is fixed by adding a perfect fifth to every note that has fewer than two notes in its list.
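In code, the pruning and the fallback look roughly like this. This is a compact sketch of the idea; the actual generateHappy and generateSad functions in the appendix differ in details such as thresholds and adding the fifth in both directions.

# Keep only transitions that stay in the C major pitch classes, then make
# sure every in-scale note keeps at least two possible continuations.
def in_major_scale(tone):
    return tone % 12 in (0, 2, 4, 5, 7, 9, 11)

def clean(table):
    for i, successors in enumerate(table):
        table[i] = [t for t in successors if in_major_scale(t)]
    for i in range(len(table) - 7):
        if in_major_scale(i) and len(table[i]) < 2:
            table[i].append(i + 7)   # add a perfect fifth as a fallback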
4.1.4 Vocaloid
Vocaloid is, as previously mentioned, a singing-voice synthesizer in which vocal fragments of different singers are used to generate a song[6]. The main reason for using Vocaloid is that every syllable can be assigned a note that is highly configurable. Vocaloid can be used either manually or automatically. If used manually, each note, with its assigned syllable, is entered by the user and can then be modified to fit the user’s preferences. To use Vocaloid automatically, a .vsqx file (the file format Vocaloid songs are saved in) can be created by a third-party program; the song, and every note in it, is represented by custom XML code, with tags for each configurable parameter of a note.
In this report such a tool has been used to export the generated melody to Vocaloid. Besides exporting the melody, the tool also decides the tempo of the song, which is higher for happy and lower for sad, since tempo is a known parameter that affects emotion in music, in addition to the musical qualities mentioned above. The source code of the tool can be found in the appendix.
Figure 2: An example of the custom XML code used by Vocaloid’s .vsqx files.
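For reference, a single note in such a file has the following shape, reconstructed from the writenote function in filecreator.py (Appendix A.c); the tick position, pitch and lyric shown are illustrative values:

<note>
<posTick>0</posTick>
<durTick>360</durTick>
<noteNum>60</noteNum>
<velocity>64</velocity>
<lyric><![CDATA[sing]]></lyric>
<phnms><![CDATA[]]></phnms>
<noteStyle>
<attr id="accent">50</attr>
<attr id="bendDep">8</attr>
<attr id="bendLen">0</attr>
<attr id="decay">50</attr>
<attr id="fallPort">0</attr>
<attr id="opening">127</attr>
<attr id="risePort">0</attr>
<attr id="vibLen">0</attr>
<attr id="vibType">0</attr>
</noteStyle>
</note>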
4.2 Survey
The test was performed by letting each participant listen to six song snippets, half of which try to communicate a sad feeling and the other half a happy feeling. Three different lyrics were used, so there is one sad and one happy version of each lyric. Every participant listened to the songs in the same order and was asked after each song to give it a rating between one and five, where one is Sad and five is Happy. The participants were only allowed to listen to each song once before rating it; this was to ensure that the ratings were spontaneous rather than comparisons against all the other songs.
The participants were also asked to rate their own musical training, to see whether it had any noticeable impact on the ratings given.
5. Results
The answers given by the test participants are compiled in Table 2. As the table shows, the sad songs were rated almost exclusively as ‘Sad’ or ‘Somewhat sad’, whereas the happy songs were rated as ‘Neutral’ and ‘Somewhat happy’ to a larger extent. The sad songs clearly communicated their intended emotion more strongly; some explanations for this are discussed later in this paper. What is obvious is that the test subjects were generally able to identify the intended emotion of a song. The only song they had significant problems with was number four, where over half rated the song as ‘Neutral’ although the intended emotion was happy. Some participants noted that the biggest factor in making a song communicate a sad feeling was the slower tempo; some even thought that the happy and sad versions of a song had the same melody and that the tempo was the only property that had been altered. Little to no difference was found between the ratings of those with high musical training and those with low musical training.
Desired emotion / Song no.   Sad   Somewhat sad   Neutral   Somewhat happy   Happy
Sad / 1                       0         8            1            0            0
Happy / 2                     0         0            2            5            2
Happy / 3                     0         0            4            3            2
Happy / 4                     0         0            5            2            2
Sad / 5                       7         2            0            0            0
Sad / 6                       4         5            0            0            0

Table 2: The test results from the study (number of participants giving each rating)
6. Discussion
6.1 Method
As mentioned in the background, there are a great many ways to compose music algorithmically. Ours was chosen primarily for its simplicity and perceived efficiency, since we could tailor the algorithm to play exactly what we wanted it to; choosing another technique would likely give different results and pose different challenges.
Regarding the choice of Vocaloid as the tool for rendering the songs, the main concern was getting something that sounds realistic, and in that regard Vocaloid was easily the best choice.
Something that probably did impact the results is the voice we used in Vocaloid. Qualities such as voice timbre affect the emotion of a song, so by using only one voice we may have skewed the results in one direction or the other. Paying greater attention to the voice when generating the songs would probably improve the results.
Finding a varied group of test subjects for the study was challenging; the participants’ ages range from 20 to 26, so the results can only be considered representative of that age group.
6.2 Results
As noted in the results, it was easier to make the computer-generated songs communicate a sad feeling than a happy one, possibly due to the slower tempo of the sad songs. This is not too surprising, since speech communicating a neutral feeling and speech communicating a happy feeling have rather similar tempos, whereas speech communicating sadness often has a slower tempo.
One thing that might have affected the results is that the songs were played in the keys of C major and A minor, respectively. Since these keys essentially share the same scale, a song in one key might be heard as a continuation of the previous one, so subjects may have been primed to hear a song in a particular key when the intended key was different. Anyone running similar tests might therefore want to vary the keys, or use parallel keys such as C major and C minor, to make sure the subjects hear a difference.
We also noted that there was little to no difference between how people with different musical experience rated the songs. This is surprising, since musical experience is known to affect how strongly modality influences the perceived emotion[2].
7. Conclusion
The study shows that computer-generated songs can express emotions even when a rather simple algorithm is used, although the emotion might not be communicated as clearly as it would be in a song composed by a human. With a more advanced algorithm, it is possible that the emotions could be expressed as well as those in a human-composed song.
References
1. David Cope, The Algorithmic Composer, 2000.
2. Karen Johanne Pallesen, Elvira Brattico, Christopher Bailey, Antti Korvenoja, Juha Koivisto, Albert Gjedde, Synnöve Carlson, “Emotion Processing of Major, Minor and Dissonant Chords”, Annals of the New York Academy of Sciences, 2005.
3. Patrik N. Juslin, Petri Laukka, “Communication of emotions in vocal expression and music performance: Different channels, same code?”, Psychological Bulletin, 2003.
4. Patrik N. Juslin, John A. Sloboda (eds.), Handbook of Music and Emotion, 2010.
5. Sami Lemmetty, Review of Speech Synthesis Technology, Master’s thesis, 30 March 1999.
6. Yamaha Corporation, Vocaloid, www.vocaloid.com/en/
7. Michael Edwards, “Algorithmic Composition: Computational Thinking in Music”, 2011.
8. Bruno Abrantes Basseto, João José Neto, “A stochastic musical composer based on adaptive algorithms”, 1999.
9. Yannick Marchand, Connie R. Adsett, Robert I. Damper, “Comparison of Automatic Syllabification Methods”.
10. PyHyphen 2.0.4, https://pypi.python.org/pypi/PyHyphen/
A. Appendix
A.a Download link
The songs generated and used for testing can be found at the link below:
https://dl.dropboxusercontent.com/u/13161065/Algorithmic%20Composition%20From%20Text/Songs.zip
A.b singme.py
from hyphen import Hyphenator
import random
import os.path
import filecreator

hy = Hyphenator('en_US')
happyTable = []
sadTable = []

# Generate empty tables, one successor list per note (about two octaves)
for i in range(23):
    happyTable.append([])
    sadTable.append([])

# Check if tone is in the major scale
def inMajorScale(tone):
    tone = (tone+1) % 12
    if tone in [2, 4, 7, 9, 11]:
        return False
    else:
        return True

# Make happy table
def generateHappy():
    # Adds major second and major third upward, and a perfect fourth downward
    for i in range(21):
        happyTable[i].append(i+2)
        if i < 19:
            happyTable[i].append(i+4)
        if i < 18:
            happyTable[i].append(i+5)
            happyTable[i+5].append(i)
    # Adds a perfect fifth where a note has too few continuations
    for i in range(16):
        if (len(happyTable[i]) < 2) or (len(happyTable[i+7]) < 2):
            happyTable[i].append(i+7)
            happyTable[i+7].append(i)
    # Removes tones not in the scale
    for i, el in enumerate(happyTable):
        happyTable[i] = filter(inMajorScale, el)

# Make sad table
def generateSad():
    # Adds minor second and minor third upward, and a perfect fourth downward
    for i in range(22):
        sadTable[i].append(i+1)
        if i < 20:
            sadTable[i].append(i+3)
        if i < 18:
            sadTable[i].append(i+5)
            sadTable[i+5].append(i)
    # Adds a perfect fifth where a note has too few continuations
    for i in range(16):
        if len(sadTable[i]) < 4:
            sadTable[i].append(i+7)
        if len(sadTable[i+7]) < 4:
            sadTable[i+7].append(i)
    # Removes tones not in the scale
    for i, el in enumerate(sadTable):
        sadTable[i] = filter(inMajorScale, el)

# Print a test melody of nSyl notes for the given mood
def write(mood, nSyl):
    if mood == 'h':
        prevNote = 12
        print prevNote
        for i in range(nSyl-1):
            thisNote = random.choice(happyTable[prevNote])
            print thisNote
            prevNote = thisNote
    elif mood == 's':
        prevNote = 9
        print prevNote
        for i in range(nSyl-1):
            thisNote = random.choice(sadTable[prevNote])
            print thisNote
            prevNote = thisNote
    else:
        print 'invalid input'

generateHappy()
generateSad()

# Open the file containing the lyrics
song = unicode(raw_input('Enter file to open: '))
songtext = ""
if os.path.isfile("Lyrics/" + song):
    songtext = unicode(open("Lyrics/" + song, 'r').read())
elif os.path.isfile("Lyrics/" + song + ".txt"):
    songtext = unicode(open("Lyrics/" + song + ".txt", 'r').read())
else:
    print "File '" + song + "' could not be found."
    raise SystemExit

# Set the mood; the tempo is stored in hundredths of BPM (12000 = 120 BPM)
mood = raw_input('Enter a mood ("h" or "s"): ')
if mood == "h":
    tempo = 12000
    nextindex = 12
elif mood == "s":
    tempo = 8000
    nextindex = 8
else:
    print "'" + mood + "' is not a valid mood"
    raise SystemExit

songlist = []

# TODO remove code duplicates and optimize
for word in songtext.split():
    if "," in word:
        word = word.replace(",", "")
    if len(word) > 3:
        # Longer words are split into syllables, one note per syllable
        for syl in hy.syllables(word):
            if mood == "h":
                note = random.choice(happyTable[nextindex])
            elif mood == "s":
                note = random.choice(sadTable[nextindex])
            songlist.append([syl, note + 48])   # +48 converts to a MIDI note number
            nextindex = note
    else:
        # Short words get a single note
        if mood == "h":
            note = random.choice(happyTable[nextindex])
        elif mood == "s":
            note = random.choice(sadTable[nextindex])
        songlist.append([word, note + 48])
        nextindex = note

# Create the song
songname = raw_input('Enter a name for the song: ')
filecreator.createfile(songname, songlist, tempo)
print "The song '" + songname + "' has been created."
A.c filecreator.py
# Creates a Vocaloid file with file name 'filename' and lyrics/notes from 'notelist'.
# If the file already exists it is overwritten.

notelength = 360

def createfile(filename, notelist, tempo):
    target = open("Vocaloid Songs/" + filename + ".vsqx", 'w')
    writestart(target, filename, tempo)
    posttick = 0
    for entry in notelist:
        writenote(target, entry, posttick)
        posttick = posttick + notelength
    writeend(target)
    target.close()

# Writes a single note: entry[0] is the syllable, entry[1] the MIDI note number
def writenote(target, entry, posttick):
    target.write("<note>\n")
    target.write("<posTick>" + str(posttick) + "</posTick>\n")
    target.write("<durTick>" + str(notelength) + "</durTick>\n")
    target.write("<noteNum>" + str(entry[1]) + "</noteNum>\n")
    target.write("<velocity>64</velocity>\n")
    target.write("<lyric><![CDATA[" + entry[0] + "]]></lyric>\n")
    target.write("<phnms><![CDATA[]]></phnms>\n")
    target.write("<noteStyle>\n")
    for attr, value in [("accent", 50), ("bendDep", 8), ("bendLen", 0),
                        ("decay", 50), ("fallPort", 0), ("opening", 127),
                        ("risePort", 0), ("vibLen", 0), ("vibType", 0)]:
        target.write("<attr id=\"" + attr + "\">" + str(value) + "</attr>\n")
    target.write("</noteStyle>\n")
    target.write("</note>\n")

# Writes the fixed file header: voice, mixer and track settings
def writestart(target, filename, tempo):
    target.write("<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n")
    target.write("<vsq3 xmlns=\"http://www.yamaha.co.jp/vocaloid/schema/vsq3/\"\n")
    target.write("xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n")
    target.write("xsi:schemaLocation=\"http://www.yamaha.co.jp/vocaloid/schema/vsq3/ vsq3.xsd\">\n")
    target.write("<vender><![CDATA[Yamaha corporation]]></vender>\n")
    target.write("<version><![CDATA[3.0.0.11]]></version>\n")
    target.write("<vVoiceTable>\n")
    target.write("<vVoice>\n")
    target.write("<vBS>1</vBS>\n")
    target.write("<vPC>0</vPC>\n")
    target.write("<compID><![CDATA[BLCX9DC8G3RSWLBL]]></compID>\n")
    target.write("<vVoiceName><![CDATA[Oliver]]></vVoiceName>\n")
    target.write("<vVoiceParam>\n")
    target.write("<bre>0</bre>\n")
    target.write("<bri>0</bri>\n")
    target.write("<cle>0</cle>\n")
    target.write("<gen>0</gen>\n")
    target.write("<ope>0</ope>\n")
    target.write("</vVoiceParam>\n")
    target.write("</vVoice>\n")
    target.write("</vVoiceTable>\n")
    target.write("<mixer>\n")
    target.write("<masterUnit>\n")
    target.write("<outDev>0</outDev>\n")
    target.write("<retLevel>0</retLevel>\n")
    target.write("<vol>0</vol>\n")
    target.write("</masterUnit>\n")
    target.write("<vsUnit>\n")
    target.write("<vsTrackNo>0</vsTrackNo>\n")
    target.write("<inGain>0</inGain>\n")
    target.write("<sendLevel>-898</sendLevel>\n")
    target.write("<sendEnable>0</sendEnable>\n")
    target.write("<mute>0</mute>\n")
    target.write("<solo>0</solo>\n")
    target.write("<pan>64</pan>\n")
    target.write("<vol>0</vol>\n")
    target.write("</vsUnit>\n")
    target.write("<seUnit>\n")
    target.write("<inGain>0</inGain>\n")
    target.write("<sendLevel>-898</sendLevel>\n")
    target.write("<sendEnable>0</sendEnable>\n")
    target.write("<mute>0</mute>\n")
    target.write("<solo>0</solo>\n")
    target.write("<pan>64</pan>\n")
    target.write("<vol>0</vol>\n")
    target.write("</seUnit>\n")
    target.write("<karaokeUnit>\n")
    target.write("<inGain>0</inGain>\n")
    target.write("<mute>0</mute>\n")
    target.write("<solo>0</solo>\n")
    target.write("<vol>-129</vol>\n")
    target.write("</karaokeUnit>\n")
    target.write("</mixer>\n")
    target.write("<masterTrack>\n")
    target.write("<seqName><![CDATA[" + filename + "]]></seqName>\n")
    target.write("<comment><![CDATA[New VSQ File]]></comment>\n")
    target.write("<resolution>480</resolution>\n")
    target.write("<preMeasure>4</preMeasure>\n")
    target.write("<timeSig>\n")
    target.write("<posMes>0</posMes>\n")
    target.write("<nume>4</nume>\n")
    target.write("<denomi>4</denomi>\n")
    target.write("</timeSig>\n")
    target.write("<tempo>\n")
    target.write("<posTick>0</posTick>\n")
    target.write("<bpm>" + str(tempo) + "</bpm>\n")
    target.write("</tempo>\n")
    target.write("</masterTrack>\n")
    target.write("<vsTrack>\n")
    target.write("<vsTrackNo>0</vsTrackNo>\n")
    target.write("<trackName><![CDATA[Track]]></trackName>\n")
    target.write("<comment><![CDATA[Track]]></comment>\n")
    target.write("<musicalPart>\n")
    target.write("<posTick>7680</posTick>\n")
    target.write("<playTime>61440</playTime>\n")
    target.write("<partName><![CDATA[NewPart]]></partName>\n")
    target.write("<comment><![CDATA[New Musical Part]]></comment>\n")
    target.write("<stylePlugin>\n")
    target.write("<stylePluginID><![CDATA[ACA9C502-A04B-42b5-B2EB-5CEA36D16FCE]]></stylePluginID>\n")
    target.write("<stylePluginName><![CDATA[VOCALOID2 Compatible Style]]></stylePluginName>\n")
    target.write("<version><![CDATA[3.0.0.1]]></version>\n")
    target.write("</stylePlugin>\n")
    target.write("<partStyle>\n")
    for attr, value in [("accent", 50), ("bendDep", 8), ("bendLen", 0),
                        ("decay", 50), ("fallPort", 0), ("opening", 127),
                        ("risePort", 0)]:
        target.write("<attr id=\"" + attr + "\">" + str(value) + "</attr>\n")
    target.write("</partStyle>\n")
    target.write("<singer>\n")
    target.write("<posTick>0</posTick>\n")
    target.write("<vBS>1</vBS>\n")
    target.write("<vPC>0</vPC>\n")
    target.write("</singer>\n")

# Closes the open tags and writes the fixed footer
def writeend(target):
    target.write("</musicalPart>\n")
    target.write("</vsTrack>\n")
    target.write("<seTrack>\n")
    target.write("</seTrack>\n")
    target.write("<karaokeTrack>\n")
    target.write("</karaokeTrack>\n")
    target.write("<aux>\n")
    target.write("<auxID><![CDATA[AUX_VST_HOST_CHUNK_INFO]]></auxID>\n")
    target.write("<content><![CDATA[VlNDSwAAAAADAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=]]></content>\n")
    target.write("</aux>\n")
    target.write("</vsq3>\n")