Automatic Semantic Music Annotator
Creating a music search and recommendation engine and an optimized algorithm for annotating music.
PI: Gert Lanckriet | International Partner: KETI
Technology is revolutionizing the way in which music is produced, distributed, and consumed. As a result, millions of songs are instantly available to millions of people on the Internet. This has created the need for novel music search and discovery technologies to help users find a “mellow Beatles song” on a nostalgic night, “scary Halloween music” on October 31st, or satisfy a sudden desire for “romantic jazz with saxophone and deep male vocals” without knowing an appropriate artist or song title.
One important task in the realization of a music search engine is the automatic annotation of music with descriptive keywords, or tags, based on the audio content of the song. The goal of this project is to develop a music retrieval system consisting of a content-based music annotation algorithm and a text-based search engine.
The project is developing a text-based retrieval system consisting of: i) a database of 2,000 popular songs; ii) a content-based music annotation algorithm to automatically annotate and index all songs in this database; and iii) a text-based music search engine to retrieve songs from this database based on natural-language queries.
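The retrieval step described above can be illustrated with a minimal sketch: assume the annotation algorithm has produced, for each song, a vector of tag relevance scores, and that a natural-language query has been reduced to a set of tags. Songs can then be ranked by how strongly their scores match the query tags. The function name, data layout, and scoring rule here are illustrative assumptions, not the project's actual method.

```python
def rank_songs(annotations, query_tags):
    """Rank songs by the average relevance of the query's tags.

    annotations: dict mapping song title -> dict of tag -> relevance in [0, 1]
                 (hypothetically produced by a content-based annotation model)
    query_tags:  list of tags extracted from a natural-language query
    """
    scores = {}
    for song, tags in annotations.items():
        # Missing tags contribute zero relevance for this song.
        scores[song] = sum(tags.get(t, 0.0) for t in query_tags) / len(query_tags)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy annotated database (illustrative values, not real model output).
annotations = {
    "Song A": {"mellow": 0.9, "jazz": 0.1, "saxophone": 0.0},
    "Song B": {"mellow": 0.2, "jazz": 0.8, "saxophone": 0.7},
}

# A query like "romantic jazz with saxophone" might reduce to these tags;
# Song B, with high jazz and saxophone scores, ranks first.
print(rank_songs(annotations, ["jazz", "saxophone"]))
```

A production system would replace the toy dictionary with model-estimated tag distributions over the full 2,000-song database, but the ranking principle is the same.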
- Lead Korea Organization: KETI
- US organization: CalIT2 – UCSD (Gert Lanckriet firstname.lastname@example.org)
- PhD students: Emanuele Coviello email@example.com
Gert Lanckriet, Associate Professor, Electrical and Computer Engineering, Jacobs School of Engineering; PI, Computer Audition Laboratory; Ph.D. 2005 from UC Berkeley.
Research interests: machine learning, applied statistics, and convex optimization, with applications to music information retrieval, computer audition, computational genomics, and finance.