pkgsrc-Users archive
Re: Adding native audio support to ports
Hello. How is the audio waveform getting passed from espeak-ng to the audio device? Is it going through pcaudiolib? If so, are you using a named pipe, a simple file-descriptor pipe, or something else? You're doing this on an RPi, right?

The trick is that it takes a while to take the ASCII text, turn it into a waveform, and then pass that waveform to the audio device driver. If you're using sockets to do that work, then you're not only flipping between userland and kernel space multiple times to generate your audio, you're also dragging a good chunk of the network stack into passing the data back and forth. If the audio ioctls are correct, then I'm guessing the bulk of the time is spent generating the waveform in espeak-ng, and then passing it up and down through the various programs on its way to the audio device.
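For comparison, the short path is only a handful of syscalls. Here's a minimal, hypothetical sketch in C against NetBSD's sys/audioio.h interface; the sample format is my assumption (16-bit signed mono at 22050 Hz), not something confirmed about your espeak-ng build:

/* Hypothetical sketch: the shortest path from a synthesized waveform
 * to NetBSD's audio driver: one open(), one AUDIO_SETINFO ioctl(),
 * then plain write()s.  Adjust the format to whatever espeak-ng
 * actually produces on your system. */
#include <sys/ioctl.h>
#include <sys/audioio.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

void
play_pcm(const int16_t *samples, size_t nsamples)
{
	audio_info_t info;
	int fd = open("/dev/audio", O_WRONLY);
	if (fd == -1)
		err(1, "open /dev/audio");

	AUDIO_INITINFO(&info);
	info.play.sample_rate = 22050;	/* assumed espeak-ng rate */
	info.play.channels = 1;
	info.play.precision = 16;
	info.play.encoding = AUDIO_ENCODING_SLINEAR_LE;
	if (ioctl(fd, AUDIO_SETINFO, &info) == -1)
		err(1, "AUDIO_SETINFO");

	/* write() may be partial; loop until the buffer is drained */
	const char *p = (const char *)samples;
	size_t left = nsamples * sizeof(int16_t);
	while (left > 0) {
		ssize_t n = write(fd, p, left);
		if (n == -1)
			err(1, "write");
		p += n;
		left -= (size_t)n;
	}
	close(fd);
}

Every extra hop in your setup (pcaudiolib, a pipe, a socket) adds buffering and userland/kernel crossings on top of that baseline, so it's worth knowing how much of the delay is the plumbing and how much is synthesis itself.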
Before you spend too much time banging your head on the length and complexity of the path between espeak-ng and the audio device, I'd suggest first measuring how much time espeak-ng spends just generating the waveforms. One way to do this is to run it under ktrace with timestamps enabled (kdump -R will show the elapsed time between entries) and see how much time passes between the moment espeak-ng receives the text you want rendered and the moment it emits the waveform to the sound device, or at least to the next link in the chain on the path to the audio device.
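If wading through ktrace output gets tedious, you can also bracket the synthesis call directly with the espeak-ng library. A rough sketch follows, assuming the C API declared in espeak-ng/speak_lib.h; the header path, the AUDIO_OUTPUT_SYNCHRONOUS callback behavior, and linking with -lespeak-ng are my assumptions about your install:

/* Hypothetical sketch: time how long espeak-ng spends purely on
 * waveform generation, with no audio device in the loop.  In
 * SYNCHRONOUS mode the samples go to a callback and espeak_Synth()
 * doesn't return until synthesis finishes, so the elapsed time
 * brackets synthesis alone. */
#include <espeak-ng/speak_lib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static long total_samples;

/* Count samples instead of playing them; return 0 to continue. */
static int
count_cb(short *wav, int numsamples, espeak_EVENT *events)
{
	(void)wav; (void)events;
	total_samples += numsamples;
	return 0;
}

int
main(void)
{
	const char *text = "This is a timing test for waveform generation.";
	struct timespec t0, t1;

	int rate = espeak_Initialize(AUDIO_OUTPUT_SYNCHRONOUS, 0, NULL, 0);
	espeak_SetSynthCallback(count_cb);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
	    espeakCHARS_AUTO, NULL, NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) +
	    (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%ld samples at %d Hz (%.2f s of audio) in %.3f s\n",
	    total_samples, rate, (double)total_samples / rate, secs);
	return 0;
}

Compare the synthesis time against the duration of the audio it produced; if synthesis already takes longer than real time on the RPi, no amount of shortening the path to the device will cure the lag.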
If memory serves correctly, I could run Yasr and Eflite, with the Flite engine built into the eflite binary, on a 486 processor, but it wasn't pretty. The eflite engine spent all of its time generating the waveforms for the audio device, and given that limitation it was hard to make it much faster on a 486. I'm not sure how an RPi compares with a 486 in compute cycles, but I imagine it's a bit slower than a modern Intel or AMD 64-bit processor.
-thanks
-Brian