On Wed, Apr 1, 2009 at 1:49 AM, malc wrote:

> Hmm.. please define low-powered..

No problem, people have been running the Android SDK on 700 MHz machines. I used to run it on a 900 MHz Celeron or something like that. Today, I'm doing all my "minimal performance" tests on a venerable 1 GHz Pentium III laptop with an integrated Intel audio chipset running Windows XP (and a Linux install under VMware). Just to be clear, I'm not suggesting that anyone fix anything here.

> And FWIW nowadays it's MPC7447A at 1.3Ghz which is not speed demon
> either. Not that i dont belive you, esp considering esd thrown into
> equation, but it would be interesting to know what people deem
> low-powered currently.

Regarding esd, I unfortunately have to support the emulator binary on Ubuntu 6.06 installations, where the sound server will frequently lock up when the Android emulator is run. In some cases this also totally freezes the whole desktop, though it is possible to recover by performing a "killall -9 esd" from a console.

I initially thought the main reason was that the esd client library could not handle the multiple SIGALRM-induced EINTR returns from write() and other system calls, leaving the sound server in a sorry state (maybe because it was stuck waiting for some data that would never arrive). However, even after protecting all calls into the esd backend (playing with the signal mask to avoid the interruptions), the problem persists. Fortunately, this doesn't seem to happen with later versions of Ubuntu. If anyone has an explanation for this behaviour, I'd be happy to share more info.

> Well.. I certainly fail to see how adding some other clock would
> overcome the fact that something can't keep up.. (rt priorties and
> suchlike?)

The main idea is to avoid filling up the buffers by dropping some frames when that kind of thing happens, or to introduce silence in the output under other conditions.
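To illustrate what I mean by dropping (a minimal sketch with hypothetical names, not the emulator's actual code): when queueing one more frame would push the backend buffer past a high-water mark, the frame is simply discarded so the queue level cannot grow without bound:

```c
#include <stddef.h>

/* Decide whether an incoming frame fits in the backend buffer.
 * If adding it would exceed the high-water mark, drop it (the host
 * side can't keep up) and leave the queue level unchanged; otherwise
 * account for the newly queued bytes. */
static size_t enqueue_or_drop(size_t queued_bytes, size_t high_water,
                              size_t frame_bytes, int *dropped)
{
    if (queued_bytes + frame_bytes > high_water) {
        *dropped = 1;                       /* host is behind: discard */
        return queued_bytes;
    }
    *dropped = 0;
    return queued_bytes + frame_bytes;      /* frame queued normally */
}
```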
The overall goal is to avoid increasing audio drift between the emulated system and the host. Again, I'm not suggesting implementing anything like that.

> > - adding dynamic linking support to the esd and alsa backends
> > (using dlopen/dlsym allows the emulator to run on platforms where
> > all the corresponding libraries / sound server are not available).
>
> I don't think this is worth it for QEMU, after all shipping binaries
> is not what it's best known for.

:-)

> > - modifying the sub-system to be able to use different backends for
> > audio input and output.
>
> Yeah i recall seeing this, and was wondering why Android needed this
> functionality.

This was to be able to take audio input from a wave file while sending the output to the host audio system. It turned out to be very useful for testing the VoiceRecorder application. I also played with it to debug esd/alsa-related problems. It's not exactly something that is worth upstreaming to QEMU.

> Complexity notwithstanding i happen to like the way things are done in
> DSound, the conceptually simple approach that is.

I must admit I totally fail to see any simplicity in DirectSound :-)

Regards