In addition to functioning as a score player for field recordings, Lyrebird may have some application as a tool for feature analysis of electroacoustic works. Below are some sample mappings of Pierre Schaeffer's Étude aux Chemins de Fer, in which contours and timbral shifts are readily recognisable. The scroll-rate is slowed to 200ms/px so that the entire work can be seen in a single pane. Interestingly, although the mappings "YORVIB" and "YGBIV" appear to give the best illusion of gradation between light and dark, they don't necessarily provide the best results for this particular sound.
The eye contains only three kinds of colour-detecting cone cells: red, green and blue. Colours whose wavelengths fall between the peak responses of these cones appear perceptually brighter. The "height/lightness" of spectral colours is also inverted in comparison to the pitch spectrum: higher-frequency colours are perceived as darker and heavier. The picture below shows the response curves of the RGB cones mapped to the colour spectrum and illustrates how the "lighter" anomalies occur at the mid-points between them. Lyrebird at present allows for the following mappings of timbral brightness to hue. The spectra below show a test tone of increasing brightness, noisiness and bark scale, rendered with a variety of mappings.
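As a rough illustration of this perceptual asymmetry, the sketch below (an assumption for illustration, not Lyrebird's actual mapping code) maps a normalised brightness value onto a red-to-violet hue range and computes Rec. 709 relative luminance, which shows why hues between the cone peaks (green, yellow) read as "lighter" than red or blue:

```python
import colorsys

def brightness_to_hue(brightness):
    """Map a normalised timbral-brightness value (0.0-1.0) onto the hue
    wheel from red (0.0) to violet (~0.78 in HSV terms) and return RGB.
    The hue range is an illustrative assumption."""
    b = max(0.0, min(1.0, brightness))
    return colorsys.hsv_to_rgb(0.78 * b, 1.0, 1.0)

def perceived_luminance(rgb):
    """Rec. 709 relative luminance: green contributes far more than red,
    and red more than blue, which is why mid-spectrum hues look lighter."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# zero brightness -> pure red; luminance ordering is green > red > blue
red = brightness_to_hue(0.0)
```

For example, pure green has a relative luminance of about 0.715 against 0.213 for pure red and 0.072 for pure blue, so a hue-only mapping of timbre inevitably carries an uneven lightness profile with it.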
Ablinger: Phonorealism. Spectral analysis data from recordings is "reconstituted in various media: instrumental ensembles, white noise, or computer-controlled player piano" (The Speaking Piano). "The reproduction of 'phonographs' by instruments can be compared to photo-realist painting, or - what describes the technical aspect of the 'Quadraturen' more precisely - with techniques in the graphic arts that use grids to transform photos into prints. Using a smaller grain, e.g. 16 units per second, the original source approaches the border of recognition within the reproduction." Ablinger, P. (2011). Quadraturen 1995-2000. The thing that is interesting about sonification/visualisation/re-sonification/re-visualisation (etc.) is not only the distortion it creates, but also how, like the sponge that can be passed through a sieve but will reconstitute itself, the transfer from one modality to another (and back) retains elements of form. In Nature Forms I, the visual representation of natural shapes is passed through the distorting mirrors of three performers and a computer. Four contrasting forms of reading/sonification are presented for the audience: machine sonification, in which spatial position and colour are more or less precisely rendered; tablature, in which spatial position and colour are recast against the geography of a specific instrument; semantic reading, in which the performer's understanding of notational conventions informs the outcome; and aesthetic reading, in which the performer's understanding of the conventions of sonic representation of broader conceptual issues is drawn upon.
following conversations with michael terren, josten myburgh and dane yates - some ideas for representing drum kit and no-input mixer tablature notation spatially (so the notation is where the instruments are) and proportionally (so that horizontal space = time) - for scrolling scores. spatial/proportional drum kit notation; spatial/proportional no-input mixer notation
What is a "normal" (or at least average) reading rate for a score, and how does the rate affect the amount of sonic detail that can be represented? The table below compares the notional average rate at which the score progresses as the performer reads the work: its "scroll-rate". The scroll-rate is calculated by dividing the length of the score by its average duration. The works are varied: the first movement of Beethoven's Piano Sonata No. 17 in D minor Op. 31 No. 2 (1802) ("The Tempest") includes significant changes of tempo, during which the performer would be reading at different rates; the Chopin Waltz in D-flat major Op. 64 No. 1 (1847) ("Minute Waltz"), Ravel's Pavane pour une infante défunte (1899) and Debussy's Voiles (1909) might be considered examples at the high, low and centre of the range of scroll-rates. These rates give an indication of what is an acceptable, and perhaps even conventional, speed at which to read musical notation. The final five works in the table are "scrolling scores" by Cat Hope and Lindsay Vickery, in which the score moves past the performer at a constant rate on an iPad screen. There is, at the least, a psychological distinction between this paradigm, where the performer is forced to view only a portion of the score at any time, and the fixed score, where the performer directs their own gaze. The scrolling score, although highly useful for synchronising musical events in non-metrical music, has particular natural constraints based on the limitations of human visual processing: at scroll-rates greater than 3 cm/s the reader struggles to capture information; information-dense musical notation may significantly lower this threshold; and reading representations of fine metrical structures in a scrolling medium is problematic.
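The scroll-rate calculation described above is a simple division. The sketch below uses illustrative figures (not measurements taken from the actual editions) alongside the ~3 cm/s readability threshold mentioned:

```python
def scroll_rate_cm_per_s(score_length_cm, duration_s):
    """Average scroll-rate: physical length of the score divided by
    the average duration of the work."""
    return score_length_cm / duration_s

# Illustrative figures only: a 180 cm scrolling score lasting 90 seconds
# moves at 2 cm/s, comfortably under the ~3 cm/s readability threshold.
rate = scroll_rate_cm_per_s(180, 90)
readable = rate <= 3.0
```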
The problem may be caused by a conflict between the continuous movement of the score and the relatively slow (in comparison to the ear) fixation rate of the eye, or may simply be a by-product of unfamiliarity with the medium; the cause is currently unexplained. The approach to sonic visualisation used by the Lyrebird Environment Player also has application to the analysis of electroacoustic music. As Grill and Flexer have indicated, traditional spectrogram "visualizations are highly abstract, lacking a direct relationship to perceptual attributes of sound" (2012). The approach employed by Lyrebird goes some way toward alleviating the problem of "demonstrating coindexation and segmentation due to the difficulty in illustrating differences in timbre" (Adkins 2008) in a spectrogram, and provides an (almost) realtime feature analysis of the recording in which contours and timbral shifts are readily recognisable.
The image below shows a representation of Pierre Schaeffer's Étude aux Chemins de Fer, clearly delineating segments of the work created with varied source materials. The colour scaling in this reading consistently colours sound objects of the same timbre/material. The entire 170 seconds of the work was represented by slowing the scroll-rate of the lcd object. The inset shows the whistle that occurs approximately 112 seconds into the work and illustrates the "Doppler" effect that is heard, through a change of both vertical height (pitch) and colour (timbre). By coincidence(?) the Doppler shift is represented by a change from red(dish) to blue(ish) colours. A very small plot of the formal structure is shown below the Lyrebird analysis. One of the long-crescendo F#s from the clarinet part of Messiaen's Abîme des Oiseaux is shown represented as a spectrogram (using Chris Cannam's Sonic Visualiser software) and by the Lyrebird Environment Player. Lyrebird represents the pitch of the single strongest detected sinusoidal peak by vertical position and amplitude by the size of the rectangle; brightness, noisiness and bark scale data determine the luminance, hue and saturation of each rectangle.
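The mapping just described can be sketched as follows. The scalings and ranges here (pixel height, a 20 Hz log pitch axis over ten octaves, the size factor) are assumptions for illustration, not Lyrebird's actual constants:

```python
import colorsys
import math

def feature_to_rect(pitch_hz, amplitude, brightness, noisiness, bark,
                    height_px=400):
    """Map one analysis frame to a coloured rectangle, following the
    mapping described in the text: pitch -> vertical position,
    amplitude -> size, and brightness/noisiness/bark ->
    luminance/hue/saturation. All scalings are illustrative."""
    # log-frequency axis over ~10 octaves, 20 Hz at the bottom of the pane
    y = height_px * (1.0 - math.log2(pitch_hz / 20.0) / 10.0)
    size = max(1, round(20 * amplitude))  # amplitude -> size in pixels
    rgb = colorsys.hls_to_rgb(noisiness, brightness, bark)
    return {"y": round(y), "size": size, "rgb": rgb}
```

A pure 20 Hz tone lands at the bottom of the pane, and a frame with zero noisiness, mid brightness and full bark saturation comes out as a fully saturated red rectangle.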
The updated Environment Player for percussionist Vanessa Tomlinson adds an analysis and scaling panel, more finely grained audio analysis and a jitter lcd. Graphic notation is a term used, in Western Art music, to capture the broad spectrum of non-traditional approaches to notating music that began to emerge during the twentieth century. An important component of this evolution has been the capacity to present musical scores on screen, in colour and in motion. This trend is reflected both in increased academic activity [1] and in the appearance of documentation of numerous new works on video [2]. While traditional music notation evolved over a long period, the recent advances in media for the presentation of notation have been rapid, and therefore we should consider ourselves "still on the 'steep part of the curve' from the technology standpoint" [3]. It has been argued that "the language and notation we use exerts a large influence on what we think and create" [4]. Digital innovations provide an opportunity for an expansion of the possibilities of the musical score. Composers continue to explore an increasingly broad range of idiosyncratic approaches to creating music, and many of these novel approaches (for example: microtonality [5], pulseless music [6], algorithmically generated music [7], guided improvisation [8], interactivity [9] and mobile structure [10]) are, at best, cumbersome and, at worst, impossible to represent using traditional music notation. This practice-led project will develop a robust platform for the investigation of musical notation and its performance. It will enhance the capabilities of the existing Decibel Scoreplayer to include the industry-standard Open Sound Control [11] (OSC) communication protocol (to facilitate synchronisation with external devices), multi-touch interaction, a range of new models of score presentation (in addition to its current scrolling model) and integrated use of the iPad camera.
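OSC, mentioned above as the synchronisation protocol, has a simple binary wire format: a null-padded address pattern, a null-padded type-tag string, then big-endian arguments. A minimal encoder is sketched below; the "/score/position" address is hypothetical, not an actual Decibel Scoreplayer address:

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and zero-pad to a 4-byte boundary, per OSC 1.0."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all float32,
    big-endian as the spec requires."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for value in args:
        packet += struct.pack(">f", value)
    return packet

# hypothetical sync message: current score position in seconds
packet = osc_message("/score/position", 12.5)
```

Because every field is padded to four bytes, the resulting packet length is always a multiple of four, which is what makes OSC messages easy to parse on the receiving device.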
These developments will allow for the exploration of new paradigms for the musical score and provide a platform for the systematic exploration of their effectiveness through the collection of relevant data on performer eye-movement in relation to the score itself. In collaboration with Dr. Stuart Medley (SCA), I will explore the effectiveness of visual representation of sounds in the music score. In a 2011 paper, Medley discusses visual representation as a continuum ranging between photographic realism and textual description. Figure 1. An example of a realism continuum (Medley 2011). Scored forms of musical representation occupy a similar continuum, in this case between the spectrogram (a precise frequency/time/amplitude representation of sound) and text scores that simply describe the required sound. Figure 2. An example of a musical representation continuum. In collaboration with Dr. Medley, I will create a range of new notational approaches for scores exploring a similar continuum between literal representation of sound and figurative "evocative notation" (Figure 3 gives an indication of the direction this work might take).
In collaboration with Professor Craig Speelman, I will develop an experimental method to capture eye-movement data, measuring variations in eye fixations and saccades in performers reading from the scrolling score and a number of other novel scoring methods that I have developed, including realtime permutation, transformation and generation of scores. It is hoped to gain insight into the effectiveness of mappings of shape and colour to sound, as well as a range of scoring strategies. We will investigate the possibility of using the iPad's built-in camera as a means of collecting eye-movement data, allowing for precise synchronisation and time-coding of data. Notes
[1] Contemporary Music Review 29 (2010) was devoted to the discussion of virtual scores and real-time playing; Leonardo Journal 21 (2011), Beyond Notation: Communicating Music, included several significant discussions of the screenscore. See also: Winkler, G. E. (2004). "The Real Time-Score: The Missing Link in Computer-Music Performance." In Sound and Music Computing '04. IRCAM. Hajdu, G. and Didkovsky, N. (2009). "On the Evolution of Music Notation in Network Music Environments." Contemporary Music Review 28(4): 395-407. Kim-Boyle, D. (2010). "Real-time Score Generation for Extensible Open Forms." Contemporary Music Review 29(1): 3-15. McClelland, C. and Alcorn, M. (2008). "Exploring New Composer/Performer Interactions Using Real-Time Notation." In International Computer Music Conference '08. Belfast, Northern Ireland.
[2] See the Icelandic collective S.L.Á.T.U.R.'s site: http://animatednotation.blogspot.com.au/
[3] Dewar, J. (1998). The Information Age and the Printing Press: Looking Backward to See Ahead. Santa Monica, CA: RAND. p. 5.
[4] Dannenberg, R. (1996). "Extending Music Notation Through Programming." In Computer Music in Context, ed. C. Harris. Harwood Academic. 63-76, p. 63.
[5] Keislar, D., Blackwood, E. et al. (1991). "Six American Composers on Nonstandard Tunings." Perspectives of New Music 29(1).
[6] Burt, W. (1991). "Australian Experimental Music 1963-1990." Leonardo Music Journal 1(1): 5-10, pp. 5-6.
[7] Hudak, P., Makucevich, T. et al. (1993). "Haskore Music Notation - An Algebra of Music." Functional Programming 1(1): 1-18.
[8] Lock, G. (2008). "'What I Call a Sound': Anthony Braxton's Synaesthetic Ideal and Notations for Improvisers." Critical Studies in Improvisation / Études critiques en improvisation 4(1).
[9] Freeman, J. (2008). "Extreme Sight-reading, Mediated Expression and Audience Participation: Real-time Music Notation in Live Performance." Computer Music Journal 32: 25-41.
[10] Dubinets, E. (2007). "Between Mobility and Stability: Earle Brown's Compositional Process." Contemporary Music Review 26(3): 409-426.
[11] http://opensoundcontrol.org/introduction-osc
[12] Ramachandran, V. S. and Hubbard, E. M. (2001). "Synaesthesia: A Window Into Perception, Thought and Language." Journal of Consciousness Studies 8(12): 3-34, p. 19.
[13] Palmer, S. E. (2013). "Color, Music, and Emotion (In Synesthetes and Non-Synesthetes)." International Colour Association (AIC), University of Newcastle, Newcastle-upon-Tyne, July 12, 2013.
[14] Vickery, L. R. (2011). "The possibilities of novel formal structures through computer controlled live performance." In Organic Sounds in Live Electroacoustic Music, ed. John Coulter. Auckland, New Zealand: Australasian Computer Music Association. 112-117.
[15] Decibel ScorePlayer App, Apple iTunes, https://itunes.apple.com/us/app/decibel-scoreplayer/id622591851?mt=8. Accessed August 28, 2013.
[16] Madell, J. and Hébert, S. (2008). "Eye Movements and Music Reading: Where Do We Look Next?" Music Perception 26(2): 157-170, p. 167.