tasty-musicgen-small
tasty-musicgen-small is a fine-tuned version of facebook/musicgen-small, trained on a patched version of the Taste & Affect Music Database. Drawing on multimodal research, it generates music intended to induce gustatory (taste-related) perceptions. The model outputs mono audio at 32 kHz.
Code and Dataset
Code and the dataset used to train this model are available at: https://osf.io/xs5jy/.
How to use
Here is an example of how to use the model with the Transformers library. Inference is also possible with the AudioCraft library (see the sketch after the example below); for a detailed explanation, we suggest reading the official MusicGen guide by Hugging Face.
from transformers import pipeline
import scipy.io.wavfile

# Load the text-to-audio pipeline with the fine-tuned checkpoint
synthesiser = pipeline("text-to-audio", "csc-unipd/tasty-musicgen-small")

# Generate audio from a taste-related prompt and save it as a WAV file
music = synthesiser("sweet music for fine restaurants", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
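For AudioCraft, the snippet below is a minimal sketch of the equivalent workflow. It assumes the checkpoint can be loaded directly from the Hugging Face Hub via MusicGen.get_pretrained with the repo ID; the prompt and output file names are illustrative.

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the fine-tuned checkpoint (assumes get_pretrained accepts the Hub repo ID)
model = MusicGen.get_pretrained("csc-unipd/tasty-musicgen-small")
model.set_generation_params(duration=10)  # generate 10 seconds of audio

# Generate one clip per text prompt; the result is a batch of mono waveforms
wav = model.generate(["sweet music for fine restaurants"])

# Write each waveform to disk with loudness normalization
for idx, one_wav in enumerate(wav):
    audio_write(f"musicgen_out_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")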
Citation
If you use this model, the code, or the data in your research, please cite the following article:
@misc{spanio2025multimodalsymphonyintegratingtaste,
  title={A Multimodal Symphony: Integrating Taste and Sound through Generative AI},
  author={Matteo Spanio and Massimiliano Zampini and Antonio Rodà and Franco Pierucci},
  year={2025},
  eprint={2503.02823},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2503.02823},
}