Opera Forever
Opera Forever is an online collaboration platform and social networking site for collectively exploring large collections of opera recordings.
The platform allows users to tag audio sequences with various types of semantics, such as personal preference, emotional reaction, specific musical features, or technical issues. By analysing personal preferences and/or emotional reactions to specific audio sequences, the platform can characterize personal listening tastes and match people with similar (or very dissimilar) tastes. The platform will also contain a recommendation system based on preference information and/or keyword search.
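As an illustration, here is a minimal sketch of how per-segment star ratings could be compared to match users with similar tastes. All data and names are hypothetical assumptions for this sketch, not the platform's actual schema; cosine similarity is just one plausible measure.

```python
from math import sqrt

# Hypothetical per-user preference data: segment id -> star rating (1-5).
ratings_anna = {"tosca_1957_act2_seg3": 5, "aida_1961_act1_seg1": 2}
ratings_ben = {"tosca_1957_act2_seg3": 4, "aida_1961_act1_seg1": 1}

def taste_similarity(a: dict, b: dict) -> float:
    """Cosine similarity over the segments both users have rated."""
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    dot = sum(a[s] * b[s] for s in shared)
    norm_a = sqrt(sum(a[s] ** 2 for s in shared))
    norm_b = sqrt(sum(b[s] ** 2 for s in shared))
    return dot / (norm_a * norm_b)

print(taste_similarity(ratings_anna, ratings_ben))  # ~0.99: very similar tastes
```

Values near 1 would indicate similar tastes, values near 0 unrelated ones; very dissimilar users could be matched deliberately as well.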
Background: The Bern University of the Arts has inherited a large collection of about 15'000 hours of bootleg live opera recordings. Most of these recordings are unique, and many individual recordings are rather long (up to 3-4 hours); hence the idea of segmenting the recordings so that semantic links can be created between segments, enhancing the possibilities for collectively exploring the collection.
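To make the segmentation idea concrete, here is a minimal sketch of a possible data model for segments and the semantic links between them. All names are assumptions made for this sketch, not the collection's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A time-bounded slice of one recording."""
    recording_id: str   # e.g. a catalogue number from the collection
    start_s: float      # segment start, in seconds
    end_s: float        # segment end, in seconds
    label: str = ""     # e.g. "Act 2, 'Vissi d'arte'"

@dataclass
class SemanticLink:
    """A typed link between two segments, e.g. the same aria in two performances."""
    source: Segment
    target: Segment
    relation: str       # e.g. "same_aria", "same_performer"

# Example: linking the same aria across two different performances.
a = Segment("rec_001", 3120.0, 3300.0, "Act 2, 'Vissi d'arte'")
b = Segment("rec_047", 2980.5, 3185.0, "Act 2, 'Vissi d'arte'")
link = SemanticLink(a, b, "same_aria")
```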
Core Idea: Users engaging in “active” listening leave behind semantic traces that can be used as a resource to guide further exploration of the collection, both by the users themselves and by third parties. The approach can serve an entire spectrum of users, ranging from occasional opera listeners, through opera amateurs, to interpretation researchers. The tool can be used as a collaborative tagging platform among research teams or in citizen science settings. By putting the focus on the listeners and their personal reactions to the audio segments, the perspective of analysis can be switched to the user, e.g. by creating typologies or clusterings of listening tastes or by using the approach for match-making in social settings.
Proof of Concept
Opera Forever (demo application)
A first proof of concept was developed at the Swiss Open Cultural Data Hackathon 2019 in Sion and contains the following features:
- The user can browse through and listen to the recordings of different performances of the same opera.
- The individual recordings are segmented into their different parts.
- Using simple swiping gestures, the user can navigate between the individual segments of the same recording (swiping left or right) or between different recordings (swiping up or down). Swiping is not yet implemented; for now, you can click on the respective arrows.
- For each segment, the user can indicate to what extent they like that particular segment (1 to 5 stars) (not implemented yet).
- Based on this information, individual preference lists and collective hit-parades are generated (not implemented yet; see the sketch after this list).
- It will also be possible to cluster users according to their musical taste, which opens up possibilities for match-making and recommendation systems (not implemented yet).
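A minimal sketch of how a collective hit-parade could be derived from such star ratings. The data and names are hypothetical; this feature does not exist yet.

```python
from collections import defaultdict

# Hypothetical ratings: (user, segment id, stars 1-5).
ratings = [
    ("anna", "tosca_1957_act2_seg3", 5),
    ("ben",  "tosca_1957_act2_seg3", 4),
    ("anna", "aida_1961_act1_seg1", 2),
]

def hit_parade(ratings):
    """Rank segments by average star rating across all users."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [sum of stars, count]
    for _user, segment, stars in ratings:
        totals[segment][0] += stars
        totals[segment][1] += 1
    return sorted(
        ((seg, s / n) for seg, (s, n) in totals.items()),
        key=lambda item: item[1],
        reverse=True,
    )

for segment, avg in hit_parade(ratings):
    print(f"{avg:.2f}  {segment}")
```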
A second proof of concept was developed in the context of the Master's thesis “Einbindung und Nutzung von Kulturdaten in Wikidata im Zusammenhang mit der Ehrenreich-Sammlung” (“Integration and Use of Cultural Data in Wikidata in Connection with the Ehrenreich Collection”, Johanna Hentze 2020), containing the following features:
- selecting audio recordings from the Ehrenreich Collection
- editing meta information about recordings/performances
- manual segmentation of audio files (adding, editing)
- visual editing of audio sequences (“Audacity” style)
- semantic linking to external resources (e.g. Wikidata; see the sketch after this list)
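To illustrate the Wikidata linking step, here is a minimal sketch that resolves a Wikidata item id to its English label via Wikidata's public Special:EntityData endpoint. The item id, user-agent string, and function name are example assumptions, not taken from the thesis implementation.

```python
import json
from urllib.request import Request, urlopen

def wikidata_label(qid: str, lang: str = "en") -> str:
    """Fetch an item's label from Wikidata's Special:EntityData JSON."""
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    req = Request(url, headers={"User-Agent": "OperaForeverSketch/0.1"})
    with urlopen(req) as response:
        data = json.load(response)
    return data["entities"][qid]["labels"][lang]["value"]

# Example item id only; a real link would point at the specific work or aria.
print(wikidata_label("Q1344"))
```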
Data
- Metadata: Ehrenreich Collection Database
- Audio Files: Digitized audio recordings from the Ehrenreich Collection (currently not available online; many of them present copyright issues)
- Photographs of artists: Taken from a variety of websites; most of them present copyright issues.