This paper explores voice and speech models from SpeechBrain, OpenAI, and CoquiTTS through a research-through-design-inspired, exploratory engagement with pre-trained speech synthesis models, investigating their musical affordances within an experimental vocal practice.