project:opera_forever — current revision 2021/04/09 10:15 by beat_estermann (corrected typos); previous revision 2019/10/31 17:35 by beat_estermann.
The platform allows users to tag audio sequences with various types of semantics, such as personal preference, emotional reaction, specific musical features, technical issues, etc. Through the analysis of personal preference and/or emotional reaction to specific audio sequences, a characterization of personal listening tastes will be possible, and people with similar (or very dissimilar) tastes can be matched. The platform will also contain a recommendation system based on preference information and/or keyword search.
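To illustrate the tagging model described above, here is a minimal sketch in Python. All class and field names (''SegmentTag'', ''preference_profile'', the tag kinds) are hypothetical and not the platform's actual schema:

```python
from dataclasses import dataclass

# Hypothetical model of a tagged audio sequence; the field names and tag
# kinds are illustrative assumptions, not the platform's real data schema.
@dataclass
class SegmentTag:
    user: str
    recording_id: str   # identifier of a recording, e.g. within a collection
    start_sec: float    # tagged span within the recording
    end_sec: float
    kind: str           # e.g. "preference", "emotion", "musical_feature", "technical_issue"
    value: str          # e.g. "favourite", "goosebumps", "rubato", "surface noise"

def preference_profile(tags):
    """Aggregate a user's preference tags into a simple taste profile."""
    profile = {}
    for t in tags:
        if t.kind == "preference":
            profile[t.value] = profile.get(t.value, 0) + 1
    return profile

tags = [
    SegmentTag("alice", "rec-001", 12.0, 95.5, "preference", "favourite"),
    SegmentTag("alice", "rec-002", 0.0, 40.0, "emotion", "goosebumps"),
    SegmentTag("alice", "rec-001", 12.0, 95.5, "preference", "favourite"),
]
print(preference_profile(tags))  # {'favourite': 2}
```

Such per-user profiles are one possible input to the taste characterization and matching the paragraph describes.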
**Background:**
**Core Idea:** Users engaging in "
==== Demo Video ====
{{vimeo>
[[https://
A first proof of concept contains the following features:
* The user can browse through and listen to the recordings of different performances of the same opera.
* The individual recordings are segmented into their different parts.
* Based on this information,
* Also, it will be possible to cluster users according to their musical taste, which opens up the possibility of matching users by taste or of building recommendation systems. //not implemented yet//
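The taste-matching idea in the last point can be sketched as follows. This is an illustrative example rather than the project's implementation; the user names and segment labels are invented, and a real system would feed such vectors into a proper clustering algorithm:

```python
from math import sqrt

# Represent each user's taste as a sparse vector of per-segment preference
# scores and compare users by cosine similarity. Values near 1 mean very
# similar tastes; values near -1 mean very dissimilar tastes, which the
# platform also considers an interesting match.
def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical per-user scores for opera segments: 1 = liked, -1 = disliked.
alice = {"aria_1": 1, "duet_3": 1, "finale": -1}
bob   = {"aria_1": 1, "duet_3": 1, "finale": -1}
carol = {"aria_1": -1, "duet_3": -1, "finale": 1}

print(cosine(alice, bob))    # 1.0  (very similar tastes)
print(cosine(alice, carol))  # -1.0 (very dissimilar tastes)
```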
A second proof of concept was developed in the context of the Master Thesis "
* selecting audio recordings from the Ehrenreich Collection
* editing meta information about recordings/
* manual segmentation of audio files (adding, editing)
* visual editing of audio sequence ("
* semantic linking to external resources (e.g. Wikidata)
[[https://
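The semantic linking to Wikidata listed above could, for instance, use Wikidata's public ''wbsearchentities'' API. The endpoint and its parameters below are Wikidata's real API; the helper function names and the sample response are an illustrative sketch, not the project's code:

```python
import json
from urllib.parse import urlencode

# Wikidata's real public search endpoint; the helpers around it are a sketch.
WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def search_url(term, language="en"):
    """Build a wbsearchentities query URL for a concept label."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    return WIKIDATA_API + "?" + urlencode(params)

def extract_entity_ids(response_text):
    """Pull entity IDs (Q-numbers) out of a wbsearchentities JSON response."""
    data = json.loads(response_text)
    return [hit["id"] for hit in data.get("search", [])]

# Offline example using the response shape the API returns:
sample = '{"search": [{"id": "Q1344", "label": "opera"}]}'
print(search_url("opera"))
print(extract_entity_ids(sample))  # ['Q1344']
```

Storing the returned Q-number alongside a recording or segment is what makes the link "semantic": the external resource can then be dereferenced for labels, descriptions, and further statements.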