[1] J. Ganseman and W. D’haes, “Score-performance matching in practice:
Problems encountered and solutions proposed,” presented at the RMA
Research Students’ Conference, 2006.
[2] Steinberg Media Technologies GmbH, “VST (Virtual Studio Technology).”
[Online]. Available: http://www.steinberg.net/325_1.html
[3] Recordare LLC, “MusicXML.” [Online]. Available: http://www.musicxml.org/xml.html
[4] MIDI Manufacturers Association, Complete MIDI 1.0 Detailed Specification,
MIDI Manufacturers Association Std., Rev. 96.1, November 2001.
[5] W. D’haes, “Automatic estimation of control parameters for musical
synthesis algorithms,” Ph.D. dissertation, University of Antwerp, June
2004.
[6] J. L. Flanagan, “Parametric coding of speech spectra,” Journal of the
Acoustical Society of America, vol. 68, no. 2, pp. 412–419, August 1980.
[7] R. J. McAulay and T. F. Quatieri, “Speech analysis/synthesis based on a
sinusoidal representation,” IEEE Transactions on Acoustics, Speech,
and Signal Processing, vol. 34, no. 4, pp. 744–754, August 1986.
[8] G. Bailly, E. Bernard, and P. Coisnon, “Sinusoidal modelling,” 1998.
[9] A. Syrdal, Y. Stylianou, L. Garrison, A. Conkie, and J. Schroeter, “TD-PSOLA
versus harmonic plus noise model in diphone based speech synthesis,”
in Proceedings of the 1998 IEEE International Conference on
Acoustics, Speech and Signal Processing, vol. 1, May 1998, pp. 273–276.
[10] J. Paulus and A. Klapuri, “Measuring the similarity of rhythmic patterns,”
in Proceedings of the 3rd International Conference on Music Information
Retrieval, October 2002, pp. 150–156.
[11] T. Virtanen and A. Klapuri, “Separation of harmonic sound sources
using sinusoidal modeling,” in Proceedings of the IEEE International
Conference on Acoustics, Speech and Signal Processing, vol. 2, May 2000,
pp. 765–768.
[12] H. Ye and S. Young, “High quality voice morphing,” in Proceedings
of the IEEE International Conference on Acoustics, Speech, and Signal
Processing, vol. 1, May 2004, pp. 9–12.
[13] J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation
of complex Fourier series,” Mathematics of Computation, vol. 19,
pp. 297–301, April 1965.
[14] I. J. Good, “The interaction algorithm and practical Fourier analysis,”
Journal of the Royal Statistical Society, Series B (Methodological),
vol. 20, no. 2, pp. 361–372, 1958.
[15] S. G. Johnson and M. Frigo, “A modified split-radix FFT with fewer
arithmetic operations,” IEEE Transactions on Signal Processing, vol. 55,
no. 1, pp. 111–119, January 2007.
[16] C. Roads, The Computer Music Tutorial. MIT Press, 1996.
[17] W. H. Press, W. T. Vetterling, S. A. Teukolsky, and B. P. Flannery,
Numerical Recipes in C++: The Art of Scientific Computing, 2nd ed.
Cambridge University Press, 2002.
[18] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed.
Prentice Hall, 2002.
[19] H. A. Gaberson, “A comprehensive windows tutorial,” Sound and Vibration,
pp. 14–23, March 2006.
[20] F. J. Harris, “On the use of windows for harmonic analysis with the
discrete Fourier transform,” Proceedings of the IEEE, vol. 66, no. 1,
pp. 51–83, January 1978.
[21] A. H. Nuttall, “Some windows with very good sidelobe behavior,”
IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29,
no. 1, pp. 84–91, February 1981.
[22] H. Taube, “Automatic tonal analysis: Toward the implementation of a
music theory workbench,” Computer Music Journal, vol. 23, no. 4, pp.
18–32, 1999.
[23] D. R. Hofstadter, Gödel, Escher, Bach: an Eternal Golden Braid. Basic
Books, 1979.
[24] M. R. Portnoff, “Implementation of the digital phase vocoder using the
fast Fourier transform,” IEEE Transactions on Acoustics, Speech, and
Signal Processing, vol. 24, no. 3, pp. 243–248, June 1976.
[25] M. S. Puckette and J. C. Brown, “Accuracy of frequency estimates using
the phase vocoder,” IEEE Transactions on Speech and Audio Processing,
vol. 6, no. 2, pp. 166–176, March 1998.
[26] A. V. Oppenheim and R. W. Schafer, “From frequency to quefrency:
A history of the cepstrum,” IEEE Signal Processing Magazine, vol. 21,
no. 5, pp. 95–106, September 2004.
[27] T. Tolonen and M. Karjalainen, “A computationally efficient multipitch
analysis model,” IEEE Transactions on Speech and Audio Processing,
vol. 8, no. 6, pp. 708–716, November 2000.
[28] C. Yeh, A. Röbel, and X. Rodet, “Multiple fundamental frequency estimation
of polyphonic music signals,” in Proceedings of the IEEE International
Conference on Acoustics, Speech, and Signal Processing, vol. 3,
March 2005, pp. 225–228.
[29] C. Bishop, Neural Networks for Pattern Recognition. Oxford University
Press, 1995.
[30] Steinberg Media Technologies GmbH, “VST Software Development
Kit version 2.4 rev. 2,” November 2006. [Online]. Available:
http://www.steinberg.de/324_1.html
[31] Muse Research, Inc., “KVR Audio Plugin Resources.” [Online].
Available: http://www.kvraudio.com
[32] Apple Computer, Inc., “Audio Units.” [Online]. Available:
http://developer.apple.com/audio/audiounits.html
[33] Avid Technology, Inc., “Real Time Audio Suite.” [Online]. Available:
http://www.digidesign.com/
[34] Microsoft Corporation, “DirectX.” [Online]. Available:
http://msdn.microsoft.com/directx/
[35] R. Furse, “LADSPA.” [Online]. Available: http://www.ladspa.org
[36] FXpansion, “VST to AU and VST to RTAS adapters.” [Online].
Available: http://www.fxpansion.com/index.php?page=31
[37] F. Vanmol, “Cubase VST SDK for Delphi v2.4.2.1.” [Online]. Available:
http://www.axiworld.be/vst.html
[38] D. Martin, “jVSTwrapper.” [Online]. Available:
http://jvstwrapper.sourceforge.net/
[39] Steinberg Media Technologies GmbH, “VSTGUI: Graphical User
Interface Framework for VST plugins, version 3.5,” February 2007.
[Online]. Available: http://vstgui.sourceforge.net/
[40] Microsoft Corporation, “GDI+.” [Online]. Available:
http://msdn2.microsoft.com/en-us/library/ms533798.aspx
[41] The Open Group, “Motif 2.1.” [Online]. Available:
http://www.opengroup.org/motif/
[42] AudioNerdz, “Delay Lama,” May 2002. [Online]. Available:
http://www.audionerdz.com
[43] TrollTech, “Qt: Cross-Platform Rich Client Development Framework.”
[Online]. Available: http://trolltech.com/products/qt
[44] OpenGL Working Group, “OpenGL version 2.1,” August 2006. [Online].
Available: http://www.opengl.org/
[45] S. Thakkar and T. Huff, “The Internet Streaming SIMD Extensions,”
Intel Technology Journal, vol. Q2, pp. 1–8, May 1999.
[46] Intel Corporation, “Intel 64 and IA-32 Architectures Optimization
Reference Manual,” May 2007. [Online]. Available:
http://developer.intel.com/products/processor/manuals/index.htm
[47] F. Franchetti and M. Püschel, “SIMD Vectorization of non-two-powered
sized FFTs,” in Proceedings of the IEEE International Conference on
Acoustics, Speech and Signal Processing, vol. 2, April 2007, pp. 17–20.
[48] The MathWorks, “Matlab 7 for Windows,” 2006. [Online]. Available:
http://www.mathworks.com/products/matlab/
[49] H. Haas, “The influence of a single echo on the audibility of speech,”
Journal of the Audio Engineering Society, vol. 20, no. 2, pp. 146–159,
March 1972.
[50] J. R. Ashley, “Echoes, reverberation, speech intelligibility and musical
performance,” in Proceedings of the IEEE International Conference on
Acoustics, Speech and Signal Processing, vol. 6, April 1981, pp. 770–772.
[51] C. Chafe, M. Gurevich, G. Leslie, and S. Tyan, “Effect of time delay on
ensemble accuracy,” in Proceedings of the International Symposium on
Musical Acoustics, April 2004.
[52] L. de Soras, “Denormal numbers in floating point signal processing
applications,” April 2004. [Online]. Available: http://ldesoras.free.fr/
[53] S. N. Levine, T. S. Verma, and J. O. Smith III, “Multiresolution sinusoidal
modeling for wideband audio with modifications,” in Proceedings
of the International Conference on Acoustics, Speech, and Signal Processing,
May 1998.
[54] K.-H. Kim and I.-H. Hwang, “A multi-resolution sinusoidal model using
adaptive analysis frame,” in Proceedings of the 12th European Signal
Processing Conference (EUSIPCO 2004), September 2004, pp. 2267–2270.
[55] P. Herrera-Boyer, G. Peeters, and S. Dubnov, “Automatic classification
of musical instrument sounds,” Journal of New Music Research, vol. 32,
no. 1, pp. 3–21, 2003.
[56] P. Herrera, A. Yeterian, and F. Gouyon, “Automatic classification of
drum sounds: a comparison of feature selection methods and classification
techniques,” in International Conference on Music and Artificial
Intelligence, September 2002.
[57] H. Heijink, L. Windsor, and P. Desain, “Data processing in music
performance research: Using structural information to improve
score-performance matching,” Behavior Research Methods, Instruments &
Computers, vol. 32, no. 4, pp. 546–554, August 2000.
[58] H. Heijink, P. Desain, H. Honing, and L. Windsor, “Make me a match:
an evaluation of different approaches to score-performance matching,”
Computer Music Journal, vol. 24, no. 1, pp. 43–56, April 2000.
[59] R. B. Dannenberg, “An on-line algorithm for real-time accompaniment,”
in Proceedings of the International Computer Music Conference, 1984,
pp. 193–198.
[60] M. Puckette and C. Lippe, “Score following in practice,” in Proceedings
of the International Computer Music Conference, 1992, pp. 182–185.
[61] A. Lerch, G. Eisenberg, and K. Tanghe, “FEAPI: A low level feature
extraction plugin API,” in Proceedings of the 8th International Conference
on Digital Audio Effects, Madrid, Spain, September 2005.