Here is my code:
// Uses dart:convert, dart:typed_data, package:web_socket_channel/io.dart
// and package:just_audio/just_audio.dart; _channel and _audioPlayer are fields.
try {
  _channel = IOWebSocketChannel.connect(
      'wss://api.elevenlabs.io/v1/text-to-speech/$voiceId/stream-input?model_id=$model');
  _channel!.stream.listen(
    (data) async {
      final response = jsonDecode(data);
      if (response['audio'] != null) {
        try {
          // Decode the base64 audio chunk into raw bytes.
          final audioChunk = response['audio'] as String;
          final uint8List = Uint8List.fromList(base64.decode(audioChunk));
          // Wrap the chunk in a data URI and set it as a brand new source each time.
          await _audioPlayer.setAudioSource(
            ConcatenatingAudioSource(
              children: [
                AudioSource.uri(
                  Uri.dataFromBytes(
                    uint8List,
                    mimeType: 'audio/mpeg',
                  ),
                ),
              ],
            ),
          );
          await _audioPlayer.play();
        } catch (e) {
          print("Error setting audio source and playing: $e");
        }
      }
    },
  );
} catch (e) {
  print("Error connecting: $e");
}
I am able to get the response and I'm converting it to a Uint8List and trying to play it, but it's not playing anything and I am getting these errors:
flutter: Error setting audio source and playing: (-11828) Cannot Open
flutter: Error setting audio source and playing:
MissingPluginException(No implementation found for method load on
channel
com.ryanheise.just_audio.methods.abaaa9b2-f8ed-4c88-9d89-4623ab523beb)
flutter: Error setting audio source and playing: (-11828) Cannot Open
How can I play it? Is there any other package that can do it? I am successfully getting the response but am just not able to play it.
2 Answers
This approach comes from the just_audio README: you can define your own custom subclass of StreamAudioSource that feeds audio data into just_audio. Since part of the returned StreamAudioResponse is a stream, you can just transform and redirect your websocket stream into that StreamAudioResponse. So instead of setting a new data-URI audio source for every chunk, you could use something like the sketch below. Since this audio source would continuously stream the audio data to just_audio, you would just call setAudioSource once and not repeatedly.
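As an illustration, a minimal sketch of such a subclass could look like this (the WebSocketAudioSource and addBase64Chunk names are placeholders, not the exact README code):

import 'dart:async';
import 'dart:convert';
import 'package:just_audio/just_audio.dart';

// Sketch only: a StreamAudioSource backed by a StreamController that the
// websocket listener pushes decoded chunks into.
class WebSocketAudioSource extends StreamAudioSource {
  final _controller = StreamController<List<int>>();

  // Call this from the websocket listener for every 'audio' payload.
  void addBase64Chunk(String chunk) => _controller.add(base64.decode(chunk));

  // Call this when the websocket closes so the audio stream can end.
  void close() => _controller.close();

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) async {
    // Lengths are unknown for a live stream, so they stay null and range
    // requests are reported as unsupported.
    return StreamAudioResponse(
      sourceLength: null,
      contentLength: null,
      offset: null,
      stream: _controller.stream,
      contentType: 'audio/mpeg',
      rangeRequestsSupported: false,
    );
  }
}

The websocket listener then only decodes and forwards each chunk, and setAudioSource is called a single time:

final source = WebSocketAudioSource();

// Forward every decoded chunk from the websocket into the source.
_channel!.stream.listen((data) {
  final response = jsonDecode(data);
  if (response['audio'] != null) {
    source.addBase64Chunk(response['audio'] as String);
  }
}, onDone: source.close);

// Set the streaming source once and start playback.
await _audioPlayer.setAudioSource(source);
_audioPlayer.play();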
I just made my account so I unfortunately don’t have enough reputation to leave a comment; however, I did also try the answer from Ryan Heise. What I found is that all the bytes need to be loaded into the StreamAudioSource before the player starts to buffer. I made a project to mimic my stream of bytes by using a random .mp3 file.
I have pasted my code and the logs below:
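A rough sketch of that kind of test, assuming a local sample.mp3, a 4 KB chunk size, and the hypothetical WebSocketAudioSource from the previous answer:

import 'dart:async';
import 'dart:convert';
import 'dart:io';
import 'dart:math';
import 'package:just_audio/just_audio.dart';

// Sketch only: feeds a local file to the controller-backed source in small
// chunks to mimic the websocket stream, and logs when playback starts.
Future<void> simulateChunkedPlayback() async {
  final player = AudioPlayer();
  final source = WebSocketAudioSource();

  // Log player state transitions to see when playback actually begins.
  player.playerStateStream.listen((state) =>
      print('player: ${state.processingState}, playing=${state.playing}'));

  // Loading may not complete until enough data has arrived, so kick off
  // setAudioSource and play without awaiting them here.
  unawaited(player.setAudioSource(source).then((_) => player.play()));

  // Base64-encode each slice so it travels the same path as the
  // websocket's 'audio' payload.
  final bytes = await File('sample.mp3').readAsBytes();
  const chunkSize = 4096;
  for (var offset = 0; offset < bytes.length; offset += chunkSize) {
    final end = min(offset + chunkSize, bytes.length);
    source.addBase64Chunk(base64.encode(bytes.sublist(offset, end)));
    print('fed bytes $offset..$end');
    await Future<void>.delayed(const Duration(milliseconds: 50));
  }
  source.close();
}

Comparing the timestamps of the "fed bytes" prints with the player state prints shows whether playback begins while chunks are still being fed or only after the stream completes.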
From the logs, you can see that the stream had to complete before the audio player actually played. I’m not too sure how we can get the bytes to play as each chunk is received. Any help would also be appreciated.
Logs: