* hardware channel mixing
@ 2004-08-29 15:03 Patrick Dumais
2004-09-03 12:59 ` Clemens Ladisch
0 siblings, 1 reply; 33+ messages in thread
From: Patrick Dumais @ 2004-08-29 15:03 UTC (permalink / raw)
To: alsa-devel
When I read a description of what the SoundBlaster Live features, I notice
they say "131 hardware channels". Now, is this what I think it is? Does it
mean that I can actually write multiple (up to 131) sounds and let the card
mix them by itself?
If so, I haven't seen any documentation about this. Is there a
snd_pcm_writen(buffer, channel_where_to_write, size) or something like
that? I would really like to know where to get that documentation.
Thank you
Patrick Dumais
-------------------------------------------------------
This SF.Net email is sponsored by BEA Weblogic Workshop
FREE Java Enterprise J2EE developer tools!
Get your free copy of BEA WebLogic Workshop 8.1 today.
http://ads.osdn.com/?ad_id=5047&alloc_id=10808&op=click
* Re: hardware channel mixing
2004-08-29 15:03 hardware channel mixing Patrick Dumais
@ 2004-09-03 12:59 ` Clemens Ladisch
2004-09-03 13:15 ` Patrick Dumais
2004-09-03 23:30 ` Lee Revell
0 siblings, 2 replies; 33+ messages in thread
From: Clemens Ladisch @ 2004-09-03 12:59 UTC (permalink / raw)
To: Patrick Dumais; +Cc: alsa-devel
Patrick Dumais wrote:
> When I read a description of what the SoundBlaster Live features, I notice
> they say "131 hardware channels". Now, is this what I think it is? Does it
> mean that I can actually write multiple (up to 131) sounds and let the card
> mix them by itself?
Essentially, yes, but the real number is 64 voices, and each stereo
stream needs two of them, so you can play 32 streams simultaneously.
> If so, I haven't seen any documentation about this.
On the SB Live, the PCM device has 32 subdevices. The subdevice
number can be specified as the third number in the "hw:x,y,z" device
name when opening the device. If you don't specify a subdevice
number, the default is -1 which means "pick the first free one".
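As a sketch, the "hw:x,y,z" name could be built like this before handing it to snd_pcm_open() (the helper function is hypothetical; only the name format comes from the description above):

```c
#include <stdio.h>
#include <stddef.h>

/* Build an ALSA device name of the form "hw:card,device,subdevice".
 * A negative subdevice means "leave it out", which lets ALSA pick
 * the first free subdevice. Hypothetical helper for illustration. */
static int make_hw_name(char *buf, size_t len, int card, int dev, int subdev)
{
    if (subdev < 0)
        return snprintf(buf, len, "hw:%d,%d", card, dev);
    return snprintf(buf, len, "hw:%d,%d,%d", card, dev, subdev);
}
```

The resulting string (e.g. "hw:0,0,5" for subdevice 5 on the first card) would then be passed as the name argument to snd_pcm_open().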
HTH
Clemens
* Re: hardware channel mixing
2004-09-03 12:59 ` Clemens Ladisch
@ 2004-09-03 13:15 ` Patrick Dumais
2004-09-03 13:24 ` Clemens Ladisch
2004-09-03 13:58 ` Florian Schmidt
2004-09-03 23:30 ` Lee Revell
1 sibling, 2 replies; 33+ messages in thread
From: Patrick Dumais @ 2004-09-03 13:15 UTC (permalink / raw)
To: Clemens Ladisch; +Cc: alsa-devel
Thanks a lot, I've been looking for that information.
So this would mean that the only solution would be to snd_pcm_open() a
device for each sound file that I want to play simultaneously?
Patrick Dumais
****************************
http://www.dumaisnet.ca:242/
* Re: hardware channel mixing
2004-09-03 13:15 ` Patrick Dumais
@ 2004-09-03 13:24 ` Clemens Ladisch
2004-09-03 13:58 ` Florian Schmidt
1 sibling, 0 replies; 33+ messages in thread
From: Clemens Ladisch @ 2004-09-03 13:24 UTC (permalink / raw)
To: Patrick Dumais; +Cc: alsa-devel
Patrick Dumais wrote:
> So this would mean that the only solution would be to snd_pcm_open() a
> device for each sound file that I want to play simultaneously?
Yes.
Clemens
* Re: hardware channel mixing
2004-09-03 13:15 ` Patrick Dumais
2004-09-03 13:24 ` Clemens Ladisch
@ 2004-09-03 13:58 ` Florian Schmidt
2004-09-03 14:04 ` Patrick Dumais
1 sibling, 1 reply; 33+ messages in thread
From: Florian Schmidt @ 2004-09-03 13:58 UTC (permalink / raw)
To: pat; +Cc: Clemens Ladisch, alsa-devel
On Fri, 3 Sep 2004 09:15:06 -0400 (EDT)
Patrick Dumais <pat@dumaisnet.ca> wrote:
>
> Thanks a lot, I've been looking for that information.
>
> So this would mean that the only solution would be to snd_pcm_open() a
> device for each sound file that I want to play simultaneously?
If you are programming an app that needs to play several sounds at once,
it is a very bad idea to rely on the hardware providing hw mixing. Most
soundcards do not provide hw mixing.
So the solution really is to do the mixing yourself before sending the
sound to the soundcard [mixing sounds is pretty trivial -> add them;
just take care about the headroom].
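A minimal sketch of that "just add them" approach (the function name and layout are my own; the advice above only says to add the samples and watch the headroom): accumulate in a wider type so intermediate sums cannot overflow, then hard-clip back to 16 bits.

```c
#include <stdint.h>
#include <stddef.h>

/* Mix nstreams buffers of 16-bit samples by plain addition.
 * The 32-bit accumulator holds the intermediate sum without
 * wrapping; the result is hard-clipped to the 16-bit range.
 * "Headroom" means the caller scales the inputs beforehand
 * so the clip rarely triggers. */
static void mix_s16(const int16_t **in, size_t nstreams,
                    int16_t *out, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++) {
        int32_t acc = 0;
        for (size_t s = 0; s < nstreams; s++)
            acc += in[s][i];                   /* mixing is adding */
        if (acc > INT16_MAX) acc = INT16_MAX;  /* hard clip high */
        else if (acc < INT16_MIN) acc = INT16_MIN; /* hard clip low */
        out[i] = (int16_t)acc;
    }
}
```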
OTOH: if you write this app only for yourself and you know you will
always use a soundcard with hw mixing, feel free to use snd_pcm_open()
for every sound you play..
flo
* Re: hardware channel mixing
2004-09-03 13:58 ` Florian Schmidt
@ 2004-09-03 14:04 ` Patrick Dumais
2004-09-03 14:31 ` Florian Schmidt
2004-09-07 5:04 ` Glenn Maynard
0 siblings, 2 replies; 33+ messages in thread
From: Patrick Dumais @ 2004-09-03 14:04 UTC (permalink / raw)
To: Florian Schmidt; +Cc: Clemens Ladisch, alsa-devel
That's the thing. I'm writing a soft-sampler, meaning that I have multiple
samples playing at one time when the user presses the keys on his MIDI
keyboard. I don't like software mixing because I feel I could gain quality
by relying on the hardware (am I right?). And I also want my app to go
faster by letting the hardware do this job. But I realize that opening 16
devices (16-voice polyphony for my app) can be resource-consuming, so it's
a drawback. I'm not too sure what to do.
Mixing is a little bit more than adding the samples together: you have to
do clipping, and there is also a method shown on
http://www.vttoth.com/digimix.htm
to prevent one sound from dropping in volume relative to the other when one
of them contains silence. With that in mind I think I could get a
high-quality sound for my app, but would it be worth all the processing?
Should I still use more than one device instead, assuming that I would
include both functionalities for users who don't have a compatible sound card?
Your help is very appreciated
Patrick Dumais
****************************
http://www.dumaisnet.ca:242/
* Re: hardware channel mixing
2004-09-03 14:04 ` Patrick Dumais
@ 2004-09-03 14:31 ` Florian Schmidt
2004-09-03 14:34 ` Patrick Dumais
2004-09-07 5:04 ` Glenn Maynard
1 sibling, 1 reply; 33+ messages in thread
From: Florian Schmidt @ 2004-09-03 14:31 UTC (permalink / raw)
To: pat; +Cc: Clemens Ladisch, alsa-devel
On Fri, 3 Sep 2004 10:04:29 -0400 (EDT)
Patrick Dumais <pat@dumaisnet.ca> wrote:
>
> That's the thing. I'm writing a soft-sampler, meaning that I have
> multiple samples playing at one time when the user presses the keys on
> his MIDI keyboard. I don't like software mixing because I feel I could
> gain quality by relying on the hardware (am I right?).
No.
> And I also want my app to go
> faster by letting the hardware do this job. But I realize that opening
> 16 devices (16-voice polyphony for my app) can be resource-consuming,
> so it's a drawback. I'm not too sure what to do.
Also, this will not help you at all in avoiding latency [btw: what
are you referring to as "go faster"?]. And yes, opening 16 devices is
resource-consuming..
>
> Mixing is a little bit more than adding the samples together, you have
> to do clipping and there is also a method shown on
> http://www.vttoth.com/digimix.htm
That method sounds like it's inappropriate for a soft sampler. Mixing is
adding, period. The user will have to make sure not to bust the headroom
by adjusting the samples' gains. A method like in the link is maybe
useful for game sound systems, etc., but a professional audio app should
avoid such approaches like hell. If you want to make sure you don't
bust the headroom, use a hard limiter [and process everything in a
datatype that can hold the intermediate result], but I wouldn't want my
sampler to tinker with the dynamics at all unless explicitly requested.
> to prevent one sound from dropping in volume relative to the other when
> one of them contains silence. With that in mind I think I could get a
> high-quality sound for my app, but would it be worth all the processing?
> Should I still use more than one device instead, assuming that I would
> include both functionalities for users who don't have a compatible sound card?
flo
P.S.: You should come join the #lad channel on irc.freenode.org; there
we can discuss in real time. I usually hang around there in the evening [GMT].
P.P.S.: If you want to avoid all the ALSA PCM troubles, I strongly
recommend using JACK.
* Re: hardware channel mixing
2004-09-03 14:31 ` Florian Schmidt
@ 2004-09-03 14:34 ` Patrick Dumais
2004-09-03 15:23 ` Florian Schmidt
0 siblings, 1 reply; 33+ messages in thread
From: Patrick Dumais @ 2004-09-03 14:34 UTC (permalink / raw)
To: Florian Schmidt; +Cc: Clemens Ladisch, alsa-devel
Yeah, I didn't want to talk about latency because I'm not sure what causes my
app to run "not smooth" (sorry for my English). Maybe it's the sound
processing, or it might just be the fact that I'm running too many apps on
this computer at the same time. But I'll take your advice.
The thing is, I've never used a sampler in my life. I'm designing this because
I had an idea in my head and wanted to play samples by using my MIDI
controller. This is all experimental. So you're telling me that a real
sampler would not clip the sound? Would not adjust the volume of each
sample automatically (is this what a compressor is used for?)?
I will join the channel this weekend or during next week; you guys seem to
know where to guide me.
Thank you
Patrick Dumais
****************************
http://www.dumaisnet.ca:242/
* Re: hardware channel mixing
2004-09-03 14:34 ` Patrick Dumais
@ 2004-09-03 15:23 ` Florian Schmidt
0 siblings, 0 replies; 33+ messages in thread
From: Florian Schmidt @ 2004-09-03 15:23 UTC (permalink / raw)
To: pat; +Cc: Clemens Ladisch, alsa-devel
On Fri, 3 Sep 2004 10:34:00 -0400 (EDT)
Patrick Dumais <pat@dumaisnet.ca> wrote:
> The thing is, I've never used a sampler in my life. I'm designing this
> because I had an idea in my head and wanted to play samples by using
> my MIDI controller. This is all experimental. So you're telling me
> that a real sampler would not clip the sound? Would not adjust the
> volume of each sample automatically (is this what a compressor is used
> for?)?
Take a look at existing sampler projects like Hydrogen, LinuxSampler or
Specimen. No need to reinvent the wheel :)
flo
* Re: hardware channel mixing
2004-09-03 12:59 ` Clemens Ladisch
2004-09-03 13:15 ` Patrick Dumais
@ 2004-09-03 23:30 ` Lee Revell
2004-09-04 1:19 ` Manuel Jander
1 sibling, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-03 23:30 UTC (permalink / raw)
To: Clemens Ladisch; +Cc: Patrick Dumais, alsa-devel, perex
On Fri, 2004-09-03 at 08:59, Clemens Ladisch wrote:
> Patrick Dumais wrote:
> > When I read a description of what the SoundBlaster Live features, I notice
> > they say "131 hardware channels". Now, is this what I think it is? Does it
> > mean that I can actually write multiple (up to 131) sounds and let the card
> > mix them by itself?
>
> Essentially, yes, but the real number is 64 voices, and each stereo
> stream needs two of them, so you can play 32 streams simultaneously.
>
Not quite, it's 32 mono substreams, 21 stereo. Each mono substream
requires 2 voices, each stereo substream, three. There is an extra
voice allocated per playback substream that is silent, and is just used
to generate the period interrupts.
I have pored over the code, and I don't understand exactly why the extra
voice is needed; Jaroslav said something like "if you don't use the
extra voice the interrupts are going faster than the voice position". I
think it might be needed so that multiple substreams can play back, each
with a different period size.
There is currently no way to open an 8 or 16 channel playback
substream. There are the various plughw surround plugins, but these
seem to work by allocating several mono/stereo streams. This is
certainly wasteful - 5.1 surround wastes 2 voices.
I have been planning to add another playback device to the emu10k1
driver with one 16-63 channel substream, to correspond to the hw:0,2
capture device which can record up to 64 channels.
> > If so, I haven't seen any documentation about this.
>
> On the SB Live, the PCM device has 32 subdevices. The subdevice
> number can be specified as the third number in the "hw:x,y,z" device
> name when opening the device. If you don't specify a subdevice
> number, the default is -1 which means "pick the first free one".
There is some documentation, but nothing that gives a high-level
overview of the emu10k1's design and how the driver works; maybe I will
write one at some point. For now you have to use the source.
Lee
* Re: hardware channel mixing
2004-09-03 23:30 ` Lee Revell
@ 2004-09-04 1:19 ` Manuel Jander
2004-09-04 23:28 ` Lee Revell
2004-09-05 18:28 ` Lee Revell
0 siblings, 2 replies; 33+ messages in thread
From: Manuel Jander @ 2004-09-04 1:19 UTC (permalink / raw)
To: alsa-devel
Hi,
[snip]
> Not quite, it's 32 mono substreams, 21 stereo. Each mono substream
> requires 2 voices, each stereo substream, three. There is an extra
> voice allocated per playback substream that is silent, and is just used
> to generate the period interrupts.
>
> I have pored over the code, and I don't understand exactly why the extra
> voice is needed, Jaroslav said something like "if you don't use the
> extra voice the interrupts are going faster than the voice position". I
> think it might be needed so that multiple substreams can play back, each
> with a different period size.
Smells like an incomplete hardware spec. Wasting a DMA channel for timing
purposes sounds pretty stupid, even though it may be the only known way to
make it work right now.
> There is currently no way to open an 8 or 16 channel playback
> substream. There are the various plughw surround plugins, but these
> seem to work by allocating several mono/stereo streams. This is
> certainly wasteful - 5.1 surround wastes 2 voices.
>
> I have been planning to add another playback device to the emu10k1
> driver with one 16-63 channel substream, to correspond to the hw:0,2
> capture device which can record up to 64 channels.
There is one problem: if the DMA engine really can demultiplex that many
interleaved channels, the buffer/period *time* will be pretty low. I
think more than 4 or 6 interleaved channels per DMA is not very sane.
Using several DMA channels would work, but again that's the same as using
several ALSA PCM devices plugged together. The voice waste should be
eliminated by other means, in my opinion.
Wouldn't it be time to think about a resource manager? Something like
CheckoutVoice() / CheckinVoice()? Using a resource manager inside an
individual driver already relieves a lot of problems, but maybe using a
resource manager at the alsa-lib level would not be a bad idea. Just a
thought that came to my mind...
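A minimal sketch of what such a CheckoutVoice()/CheckinVoice() pair might look like for a card with 64 voices (a plain bitmask allocator; the names follow the suggestion above, and a real driver would have extra constraints, e.g. allocating adjacent voices for interleaved streams -- that detail is my assumption):

```c
#include <stdint.h>

static uint64_t voice_map; /* one bit per hardware voice; set = in use */

/* Grab the lowest-numbered free voice, or -1 if all 64 are taken. */
static int CheckoutVoice(void)
{
    for (int v = 0; v < 64; v++)
        if (!(voice_map & ((uint64_t)1 << v))) {
            voice_map |= (uint64_t)1 << v;
            return v;
        }
    return -1;
}

/* Return a voice to the free pool. */
static void CheckinVoice(int v)
{
    if (v >= 0 && v < 64)
        voice_map &= ~((uint64_t)1 << v);
}
```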
Best Regards
Manuel Jander
* Re: hardware channel mixing
2004-09-04 1:19 ` Manuel Jander
@ 2004-09-04 23:28 ` Lee Revell
2004-09-05 3:02 ` Manuel Jander
2004-09-05 18:28 ` Lee Revell
1 sibling, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-04 23:28 UTC (permalink / raw)
To: mjander; +Cc: alsa-devel
On Fri, 2004-09-03 at 21:19, Manuel Jander wrote:
> Hi,
>
> [snip]
> > Not quite, it's 32 mono substreams, 21 stereo. Each mono substream
> > requires 2 voices, each stereo substream, three. There is an extra
> > voice allocated per playback substream that is silent, and is just used
> > to generate the period interrupts.
> >
> > I have pored over the code, and I don't understand exactly why the extra
> > voice is needed, Jaroslav said something like "if you don't use the
> > extra voice the interrupts are going faster than the voice position". I
> > think it might be needed so that multiple substreams can play back, each
> > with a different period size.
>
> Smells like an incomplete hardware spec. Wasting a DMA channel for timing
> purposes sounds pretty stupid, even though it may be the only known way to
> make it work right now.
>
Doubtful. The OSS driver does not use the extra voice, so it can be
made to work otherwise. It looks more like a conscious design decision
to get better latency.
(This is a common knee-jerk reaction to any discussion re: Creative
hardware. I think people must still be angry with them for "embracing
open source" years ago, then releasing preprocessed, obfuscated drivers
with minimal docs.)
The type of interrupt being used is the channel loop interrupt.
Presumably this hardware feature is intended for MIDI synth use, and it
was discovered that you could get better output latency for PCM by using
an extra voice as a control channel per substream.
Jaroslav, care to comment?
> > There is currently no way to open an 8 or 16 channel playback
> > substream. There are the various plughw surround plugins, but these
> > seem to work by allocating several mono/stereo streams. This is
> > certainly wasteful - 5.1 surround wastes 2 voices.
> >
> > I have been planning to add another playback device to the emu10k1
> > driver with one 16-63 channel substream, to correspond to the hw:0,2
> > capture device which can record up to 64 channels.
>
> There is one problem: If the DMA engine really can demultiplex that many
> interleaved channels
It certainly can, works great for capture. I can record 8 channels at
32 frames, it works perfectly. I suspect I could record more but
there's some kind of mixer settings bug.
> , the buffer/period *time* will be pretty low
Yes, this is called 'low latency', and people generally want it.
> . I
> think more than 4 or 6 interleaved channels per DMA is not very sane.
Why? It's more efficient than handling them in chunks of 4. This would
be intended for use by the sound server, in this case jackd; you would
launch jackd with an 8 channel input and 8 channel output device,
corresponding to the analog ins and outs on the hardware. With 16 or 32
channels the rest are just FX buses. And this all lives in hardware.
>
> Using several DMA channels would work, but again thats the same as using
> several alsa pcm devices plugged together. The voice waste should be
> eliminated by other means in my opinion.
>
Not needed, the DMA engine can handle all 64 channels. 64 channels of
16-bit samples at 48000 Hz is 6.144 MB/s. If the emu10k3 increases the
DSP sample rate to 96 kHz (hopefully!), and enables 32-bit I/O (the DSP is
already 32 bits but the I/O path is 16), this is still only about 24.6 MB/s.
Old hard drives can do that easily, and they have to wait for data to arrive
from moving parts.
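The arithmetic above is just channels x bytes-per-sample x sample rate; a tiny (hypothetical) helper makes it easy to check:

```c
/* Raw PCM bandwidth in bytes per second. */
static long long pcm_bytes_per_sec(int channels, int bytes_per_sample, int rate)
{
    return (long long)channels * bytes_per_sample * rate;
}

/* 64 ch * 2 bytes * 48000 Hz = 6,144,000 bytes/s (~6.1 MB/s)
 * 64 ch * 4 bytes * 96000 Hz = 24,576,000 bytes/s (~24.6 MB/s) */
```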
> Would'nt it be time to think about a resource manager ? Something like
> CheckoutVoice() / CheckinVoice() ? Using a resource manager inside a
> individual driver already relieves a lot of problems, but maybe using a
> resource manager at ALSA-LIB level, would not be a bad idea. Just a
> thought that came to my mind...
For the general case, we have one, and it works great:
http://jackit.sf.net. This is the only thing that talks to the sound
hardware on any serious Linux audio setup. This notion of voices is
fairly specific to the emu10k1 hardware, so it wouldn't make much sense
to manage them anywhere but in the ALSA driver. Check out also
http://ld10k1.sf.net, which is the DSP patch loader.
The ALSA plugin layer is actually almost completely unnecessary on the
SB Lives, because they can do it all (and more) in hardware. I don't
know why so many people bash these cards; they are really an amazing
feat of engineering. The emu10k1 was originally designed to support an
early DAW, the EMU APS, in the days of Win95 and Pentium 90s. Computers
were not nearly fast enough, so they did it all in hardware.
Lee
* Re: hardware channel mixing
2004-09-04 23:28 ` Lee Revell
@ 2004-09-05 3:02 ` Manuel Jander
2004-09-05 5:06 ` Lee Revell
0 siblings, 1 reply; 33+ messages in thread
From: Manuel Jander @ 2004-09-05 3:02 UTC (permalink / raw)
To: alsa-devel
Hi Lee,
On Sat, 2004-09-04 at 19:28, Lee Revell wrote:
> On Fri, 2004-09-03 at 21:19, Manuel Jander wrote:
> > Hi,
> >
> > [snip]
> > > There is currently no way to open an 8 or 16 channel playback
> > > substream. There are the various plughw surround plugins, but these
> > > seem to work by allocating several mono/stereo streams. This is
> > > certainly wasteful - 5.1 surround wastes 2 voices.
> > >
> > > I have been planning to add another playback device to the emu10k1
> > > driver with one 16-63 channel substream, to correspond to the hw:0,2
> > > capture device which can record up to 64 channels.
> >
> > There is one problem: If the DMA engine really can demultiplex that many
> > interleaved channels
>
> It certainly can, works great for capture. I can record 8 channels at
> 32 frames, it works perfectly. I suspect I could record more but
> there's some kind of mixer settings bug.
Using one single DMA channel ?
> > , the buffer/period *time* will be pretty low
>
> Yes, this is called 'low latency', and people generally want it.
Yeah, but what if you don't even have enough time to handle IRQs? That
certainly leaves you very little CPU time for other tasks. Going to
extremes doesn't work; there must be a minimum of latency.
> > . I
> > think more than 4 or 6 interleaved channels per DMA is not very sane.
>
> Why? It's more efficient than handling them in chunks of 4. This would
> be intended for use by the sound server, in this case jackd; you would
> launch jackd with an 8 channel input and 8 channel output device,
> corresponding to the analog ins and outs on the hardware. With 16 or 32
> channels the rest are just FX buses. And this all lives in hardware.
No need to handle them in chunks of specifically 4, but one should use more
than a single DMA channel to relieve the CPU. Otherwise, it would be better
to use programmed I/O instead of DMA.
> >
> > Using several DMA channels would work, but again thats the same as using
> > several alsa pcm devices plugged together. The voice waste should be
> > eliminated by other means in my opinion.
> >
>
> Not needed, the DMA engine can handle all 64 channels. 64 channels of
> 16-bit samples at 48000 Hz is 6.144 MB/s. If the emu10k3 increases the
> DSP sample rate to 96 kHz (hopefully!), and enables 32-bit I/O (the DSP is
> already 32 bits but the I/O path is 16), this is still only about 24.6 MB/s.
> Old hard drives can do that easily, and they have to wait for data to arrive
> from moving parts.
Of course it can :D. That's not the question. The question is whether ONE
single DMA _channel_ (not the entire engine) can handle 64 PCM audio
streams. The problem is not bandwidth; the problem is that the size of
one single frame gets too close to the max period size.
A PCI bus always uses its full width (32 bits mostly), so there is no need
to worry about that.
> > Would'nt it be time to think about a resource manager ? Something like
> > CheckoutVoice() / CheckinVoice() ? Using a resource manager inside a
> > individual driver already relieves a lot of problems, but maybe using a
> > resource manager at ALSA-LIB level, would not be a bad idea. Just a
> > thought that came to my mind...
>
> For the general case, we have one, it works great.
> http://jackit.sf.net. This is the only thing talks to the sound
> hardware on any serious Linux audio setup. This notion of voices is
> fairly specific to the emu10k1 hardware, so it wouldn't make much sense
> to manage them anywhere but in the alsa driver. Check out also
> http://ld10k1.sf.net, which is the DSP patch loader.
No. Most modern audio cards have plenty of DMA channels:
Aureal Vortex: 96
NVidia SoundStorm: 256
Trident 4D: 64 (or more)
(those are the few I have read the specs for).
jackd seems great as a resource manager, but currently it is unable to
handle advanced per-channel parameters such as HRTF, filters, and so on
(maybe that could be added?). If you have a mix of different channels with
different capabilities, it's not a minor task to allocate them in an
appropriate fashion. You may want to have some kind of software fallback
too, and you start getting a headache. That's where a resource manager
could make a difference.
> The alsa plugin layer is actually almost completely unnecessary on the
> SBLives, because they can do it all (and more) in hardware. I don't
> know why so many people bash these cards, they are really an amazing
> feat of engineering. The emu10k1 was originally designed to support an
> early DAW, the EMU APS, in the days of Win95 and Pentium 90s. Computers
> were not nearly fast enough so they did it all in hardware.
It's not the only one of its kind, and not the first... Hopefully somebody
will implement something interesting.
Best Regards
Manuel Jander
* Re: hardware channel mixing
2004-09-05 3:02 ` Manuel Jander
@ 2004-09-05 5:06 ` Lee Revell
2004-09-05 18:12 ` Manuel Jander
0 siblings, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-05 5:06 UTC (permalink / raw)
To: mjander; +Cc: alsa-devel
On Sat, 2004-09-04 at 23:02, Manuel Jander wrote:
> Hi Lee,
>
> On Sat, 2004-09-04 at 19:28, Lee Revell wrote:
> > On Fri, 2004-09-03 at 21:19, Manuel Jander wrote:
> > > Hi,
> > >
> > > [snip]
> > > > There is currently no way to open an 8 or 16 channel playback
> > > > substream. There are the various plughw surround plugins, but these
> > > > seem to work by allocating several mono/stereo streams. This is
> > > > certainly wasteful - 5.1 surround wastes 2 voices.
> > > >
> > > > I have been planning to add another playback device to the emu10k1
> > > > driver with one 16-63 channel substream, to correspond to the hw:0,2
> > > > capture device which can record up to 64 channels.
> > >
> > > There is one problem: If the DMA engine really can demultiplex that many
> > > interleaved channels
> >
> > It certainly can, works great for capture. I can record 8 channels at
> > 32 frames, it works perfectly. I suspect I could record more but
> > there's some kind of mixer settings bug.
>
> Using one single DMA channel ?
>
Yes. Although, this is a unique feature of the FX8010 capture device.
For capture there is only a single DMA channel for the FX8010, and you
set bits in a hardware register to select which of the 64 FX8010
outputs you want to record. The hardware imposes some sanity: there is
a fixed table of allowed buffer sizes, and the number of channels recorded
must be a power of two.
The playback hardware does not work this way. You can only do one mono
PCM stream per DMA channel. My proposal would still use one DMA channel
per mono PCM, but where the driver currently will only allocate voices
one or two at a time (which is two or three with the extra voice), this
would use 17 voices - 16 "data channels" and the "control channel" aka
the extra voice.
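The capture constraints just described could be mirrored in code roughly like this. The structure, the channel-select bitmask layout, and the buffer-size table below are illustrative placeholders, not the real emu10k1 register definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the FX8010 capture setup: one DMA channel,
 * plus a bitmask selecting which of the 64 FX8010 outputs to record. */
struct fx_capture {
    uint64_t channel_mask;   /* bit n set = record FX8010 output n */
    unsigned buffer_bytes;
};

static bool is_pow2(unsigned n) { return n && !(n & (n - 1)); }

/* Placeholder for the hardware's fixed table of allowed buffer sizes. */
static const unsigned allowed_sizes[] = { 384, 448, 512, 640, 768, 1024 };

static bool fx_capture_config(struct fx_capture *c,
                              unsigned first_chan, unsigned nchans,
                              unsigned buffer_bytes)
{
    bool size_ok = false;

    /* Channel count must be a power of two and fit in the 64 outputs. */
    if (!is_pow2(nchans) || first_chan + nchans > 64)
        return false;
    /* Buffer size must come from the hardware's fixed table. */
    for (unsigned i = 0; i < sizeof(allowed_sizes)/sizeof(*allowed_sizes); i++)
        if (allowed_sizes[i] == buffer_bytes)
            size_ok = true;
    if (!size_ok)
        return false;

    c->channel_mask = ((nchans == 64) ? ~0ull
                                      : ((1ull << nchans) - 1)) << first_chan;
    c->buffer_bytes = buffer_bytes;
    return true;
}
```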
> > > , the buffer/period *time* will be pretty low
> >
> > Yes, this is called 'low latency', and people generally want it.
>
> Yeah, but what if you don't even have enough time to handle IRQs? That
> certainly leaves you very little CPU time for other tasks. Going to
> the extreme doesn't work. There has to be some minimum latency.
>
I agree, 32 frames is a bit extreme, this is what I use for stress
testing. I think 8 channels at 32 frames is exactly the limit of the
hardware. You actually can't record 4 channels at 32 frames, because
that would require a smaller buffer than the hardware supports, so you
have to record more channels to get lower latency.
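The arithmetic behind that limit is straightforward, assuming 16-bit samples; the hardware minimum used here is a hypothetical stand-in for the smallest entry in the fixed buffer-size table:

```c
/* Bytes in one period of interleaved 16-bit capture data.
 * 8 channels x 32 frames = 512 bytes; 4 channels x 32 frames = only
 * 256 bytes, which would fall below the hardware's smallest allowed
 * buffer -- so recording MORE channels is the way to reach a 32-frame
 * (low-latency) period. */
static unsigned period_bytes(unsigned channels, unsigned frames)
{
    return channels * frames * 2;
}
```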
> > > . I
> > > think more than 4 or 6 interleaved channels per DMA is not very sane.
> >
> > Why? It's more efficient than handling them in chunks of 4. This would
> > be intended for use by the sound server, in this case jackd; you would
> > launch jackd with an 8 channel input and 8 channel output device,
> > corresponding to the analog ins and outs on the hardware. With 16 or 32
> > channels the rest are just FX buses. And this all lives in hardware.
>
> No need to handle them in chunks of specifically 4. But using more than
> one single DMA channel to relieve the CPU. If not, it would be better
> using programmed I/O instead of DMA.
>
I am not sure I understand. How would it tax the CPU more heavily? You
end up doing more work at each interrupt, but you have fewer interrupts.
Lee
* Re: hardware channel mixing
2004-09-05 5:06 ` Lee Revell
@ 2004-09-05 18:12 ` Manuel Jander
2004-09-05 18:39 ` Lee Revell
0 siblings, 1 reply; 33+ messages in thread
From: Manuel Jander @ 2004-09-05 18:12 UTC (permalink / raw)
To: alsa-devel
Hi Lee,
On Sun, 2004-09-05 at 01:06, Lee Revell wrote:
> On Sat, 2004-09-04 at 23:02, Manuel Jander wrote:
> > Hi Lee,
> >
> > On Sat, 2004-09-04 at 19:28, Lee Revell wrote:
> > > On Fri, 2004-09-03 at 21:19, Manuel Jander wrote:
> > > > Hi,
> > > >
> > > > [snip]
> > > It certainly can, works great for capture. I can record 8 channels at
> > > 32 frames, it works perfectly. I suspect I could record more but
> > > there's some kind of mixer settings bug.
> >
> > Using one single DMA channel ?
> >
>
> Yes. Although, this is a unique feature of the FX8010 capture device.
> For capture there is only a single DMA channel for the FX8010 and you
> set bits in a hardware register to select which of the 64 FX8010
> outputs you want to record. The hardware imposes some sanity: there is
> a fixed table of allowed buffer sizes, and the number of channels recorded
> must be a power of two.
>
> The playback hardware does not work this way. You can only do one mono
> PCM stream per DMA channel. My proposal would still use one DMA channel
> per mono PCM, but where the driver currently will only allocate voices
> one or two at a time (which is two or three with the extra voice), this
> would use 17 voices - 16 "data channels" and the "control channel" aka
> the extra voice.
That sounds completely reasonable.
> > > > . I
> > > > think more than 4 or 6 interleaved channels per DMA is not very sane.
> > >
> > > Why? It's more efficient than handling them in chunks of 4. This would
> > > be intended for use by the sound server, in this case jackd; you would
> > > launch jackd with an 8 channel input and 8 channel output device,
> > > corresponding to the analog ins and outs on the hardware. With 16 or 32
> > > channels the rest are just FX buses. And this all lives in hardware.
> >
> > No need to handle them in chunks of specifically 4. But using more than
> > one single DMA channel to relieve the CPU. If not, it would be better
> > using programmed I/O instead of DMA.
> >
>
> I am not sure I understand. How would it tax the CPU more heavily? You
> end up doing more work at each interrupt, but you have fewer interrupts.
If the IRQs are fired at a lower rate (fewer IRQs per time unit), you
will have less processing overhead at the cost of higher latency.
Handling a period-elapsed event takes CPU time. If there are fewer of
those events, or fewer channels packed into each period, that means more
time for each period. Unfortunately we are limited by a maximum period
size of only 4KiB on x86. Making them bigger is out of the question.
It's a matter of trading off how many IRQs you can handle per second
against how low you want the latency to be.
I did some calculations, which yield that with 64 channels, a 4KiB
period size at 44100Hz/16 bits would trigger one IRQ approximately every
725us. That is not that bad after all. On a low-end machine this could
maybe cause xruns, but I guess that trying it out would give a definitive
answer.
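That figure checks out: 4096 bytes / (64 channels x 2 bytes) = 32 frames per period, and 32 frames at 44100Hz is about 725.6us (at the FX8010's fixed 48000Hz it would be about 666.7us). A minimal helper to verify the arithmetic:

```c
/* Microseconds between period interrupts for an interleaved stream:
 * frames per period = period_bytes / (channels * bytes_per_sample),
 * interval = frames / rate. */
static double period_usecs(unsigned period_bytes, unsigned channels,
                           unsigned bytes_per_sample, unsigned rate)
{
    unsigned frames = period_bytes / (channels * bytes_per_sample);
    return frames * 1e6 / rate;
}
```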
Best Regards
Manuel Jander
* Re: hardware channel mixing
2004-09-04 1:19 ` Manuel Jander
2004-09-04 23:28 ` Lee Revell
@ 2004-09-05 18:28 ` Lee Revell
2004-09-06 11:54 ` Jaroslav Kysela
1 sibling, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-05 18:28 UTC (permalink / raw)
To: mjander; +Cc: alsa-devel
On Fri, 2004-09-03 at 21:19, Manuel Jander wrote:
> Hi,
>
> [snip]
> > Not quite, it's 32 mono substreams, 21 stereo. Each mono substream
> > requires 2 voices, each stereo substream, three. There is an extra
> > voice allocated per playback substream that is silent, and is just used
> > to generate the period interrupts.
> >
> > I have pored over the code, and I don't understand exactly why the extra
> > voice is needed, Jaroslav said something like "if you don't use the
> > extra voice the interrupts are going faster than the voice position". I
> think it might be needed so that multiple streams can play back, each with
> different period sizes.
>
> Smells like an incomplete hardware spec. Wasting a DMA channel for timing
> purposes sounds pretty stupid, even though it may be the only known way to
> make it work right now.
I thought about this and you may be right. It seems like there is no
half loop interrupt for playback, only IPR_CHANNELLOOP. This seems
wrong.
If you look at the kX project header files, 8010.h is clearly derived
from emu10k1.h. But, there are several unknown values in the comments
that seem to be the result of reverse engineering, probably a PCI bus
capture with the Windows driver. One of these comments refers to a half
loop interrupt.
It seems like if this were the case, then it would require a workaround
similar to the extra voice hack. Does this seem plausible?
Lee
* Re: hardware channel mixing
2004-09-05 18:12 ` Manuel Jander
@ 2004-09-05 18:39 ` Lee Revell
0 siblings, 0 replies; 33+ messages in thread
From: Lee Revell @ 2004-09-05 18:39 UTC (permalink / raw)
To: mjander; +Cc: alsa-devel
On Sun, 2004-09-05 at 14:12, Manuel Jander wrote:
> Hi Lee,
>
> On Sun, 2004-09-05 at 01:06, Lee Revell wrote:
> > On Sat, 2004-09-04 at 23:02, Manuel Jander wrote:
> > > Hi Lee,
> > >
> > > On Sat, 2004-09-04 at 19:28, Lee Revell wrote:
> > > > On Fri, 2004-09-03 at 21:19, Manuel Jander wrote:
> > > > > Hi,
> > > > >
> > > > > [snip]
> > > > It certainly can, works great for capture. I can record 8 channels at
> > > > 32 frames, it works perfectly. I suspect I could record more but
> > > > there's some kind of mixer settings bug.
> > >
> > > Using one single DMA channel ?
> > >
> >
> > Yes. Although, this is a unique feature of the FX8010 capture device.
> > For capture there is only a single DMA channel for the FX8010 and you
> > set bits in a hardware register to select which of the 64 FX8010
> > outputs you want to record. The hardware imposes some sanity: there is
> > a fixed table of allowed buffer sizes, and the number of channels recorded
> > must be a power of two.
> >
> > The playback hardware does not work this way. You can only do one mono
> > PCM stream per DMA channel. My proposal would still use one DMA channel
> > per mono PCM, but where the driver currently will only allocate voices
> > one or two at a time (which is two or three with the extra voice), this
> > would use 17 voices - 16 "data channels" and the "control channel" aka
> > the extra voice.
>
> That sounds completely reasonable.
>
I should add that this is not my idea, Jaroslav suggested it a while
back. But I agree that this smells like an incomplete hardware spec.
There are some world-class reverse engineers on this list... ;-)
> > > > > . I
> > > > > think more than 4 or 6 interleaved channels per DMA is not very sane.
> > > >
> > > > Why? It's more efficient than handling them in chunks of 4. This would
> > > > be intended for use by the sound server, in this case jackd; you would
> > > > launch jackd with an 8 channel input and 8 channel output device,
> > > > corresponding to the analog ins and outs on the hardware. With 16 or 32
> > > > channels the rest are just FX buses. And this all lives in hardware.
> > >
> > > No need to handle them in chunks of specifically 4. But using more than
> > > one single DMA channel to relieve the CPU. If not, it would be better
> > > using programmed I/O instead of DMA.
> > >
> >
> > I am not sure I understand. How would it tax the CPU more heavily? You
> > end up doing more work at each interrupt, but you have fewer interrupts.
>
> If the IRQs are fired at a lower rate (fewer IRQs per time unit), you
> will have less processing overhead at the cost of higher latency.
> Handling a period-elapsed event takes CPU time. If there are fewer of
> those events, or fewer channels packed into each period, that means more
> time for each period. Unfortunately we are limited by a maximum period
> size of only 4KiB on x86. Making them bigger is out of the question.
> It's a matter of trading off how many IRQs you can handle per second
> against how low you want the latency to be.
>
> I did some calculations, which yield that with 64 channels, a 4KiB
> period size at 44100Hz/16 bits would trigger one IRQ approximately every
> 725us. That is not that bad after all. On a low-end machine this could
> maybe cause xruns, but I guess that trying it out would give a definitive
> answer.
The FX8010 is fixed at 48000Hz so it's one IRQ every 666 usecs. This is
rock solid on my 600MHz C3 with Ingo's latest VP patches; I cannot
produce an xrun no matter how I punish the machine.
I don't think the machine breaks a sweat from an IRQ every 666
usecs; the timer interrupt fires every 1000, and there's very little
work to be done in the interrupt handler assuming you use mmap(). I
think too low a period size is bad due to cache thrashing, and there's no
point in using a latency too low to be perceptible, but the more
frequent interrupts do not seem to bother it at all. Right now I am
trying to determine the physical limits of the hardware; I will worry
about what is practical later ;-).
Lee
* Re: hardware channel mixing
2004-09-05 18:28 ` Lee Revell
@ 2004-09-06 11:54 ` Jaroslav Kysela
2004-09-06 20:41 ` Lee Revell
0 siblings, 1 reply; 33+ messages in thread
From: Jaroslav Kysela @ 2004-09-06 11:54 UTC (permalink / raw)
To: Lee Revell; +Cc: mjander, alsa-devel
On Sun, 5 Sep 2004, Lee Revell wrote:
> If you look at the kX project header files, 8010.h is clearly derived
> from emu10k1.h. But, there are several unknown values in the comments
> that seem to be the result of reverse engineering, probably a PCI bus
> capture with the Windows driver. One of these comments refers to a half
> loop interrupt.
>
> It seems like if this were the case, then it would require a workaround
> similar to the extra voice hack. Does this seem plausible?
Maybe. But it forces us to use only two periods per ring buffer. It would
be easy to add an extra check for this case and not allocate the extra
voice for it. It still might be that it won't work correctly (I think that
emu chips interrupt a bit early, before all samples have been transferred
over the PCI bus). Also note that the current code uses a ccis correction
for the extra voice (I don't know what this is - probably some cache or
interpolation correction). Without complete specs describing the exact
hardware behaviour, it's difficult to do a proper driver design.
Jaroslav
-----
Jaroslav Kysela <perex@suse.cz>
Linux Kernel Sound Maintainer
ALSA Project, SUSE Labs
* Re: hardware channel mixing
2004-09-06 11:54 ` Jaroslav Kysela
@ 2004-09-06 20:41 ` Lee Revell
2004-09-07 1:09 ` hardware channel mixing [EMU10K1 DMA] Manuel Jander
0 siblings, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-06 20:41 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Mon, 2004-09-06 at 07:54, Jaroslav Kysela wrote:
> On Sun, 5 Sep 2004, Lee Revell wrote:
>
> > If you look at the kX project header files, 8010.h is clearly derived
> > from emu10k1.h. But, there are several unknown values in the comments
> > that seem to be the result of reverse engineering, probably a PCI bus
> > capture with the Windows driver. One of these comments refers to a half
> > loop interrupt.
> >
> > It seems like if this were the case, then it would require a workaround
> > similar to the extra voice hack. Does this seem plausible?
>
> Maybe. But it forces us to use only two periods per ring buffer.
But this is how the hardware was designed to work. It is ALSA that is
unusual in supporting more than 2 periods per buffer after all. Is this
really the only reason for the extra voice? It is good that ALSA
supports this, but forcing it on hardware that was not designed for it
doesn't make sense here.
Supporting more than 2 periods per buffer is useless on the emu10k1
anyway because it only works in the playback direction, which means JACK
can only use 2 periods per buffer. Also I was under the impression that
more than 2 periods should only be used if 2 periods is problematic due
to buggy hardware or system latency issues. With the latest emu10k1
ALSA driver and Ingo's patches these are not an issue.
> It can be
> easy to add an extra check for this case and don't allocate extra voice
> for it. It still might be that it won't work correctly (I think that emu
> chips interrupt a bit earlier and not all samples are transferred via PCI
> bus at the time).
I have only heard of this causing problems in combination with certain
buggy VIA chipsets (KT266 and KT333). There is a workaround available
for Windows (google for "george breese pci latency"). It should be
simple enough to do the same if it becomes an issue.
> Also note that the current code uses a ccis correction
> for the extra voice (I don't know what this is - probably some cache or
> interpolation correction). Without complete specs describing the exact
> hardware behaviour, it's difficult to do a proper driver design.
The OSS driver apparently does not need the extra voice. Neither does
the kX driver. Presumably because both only support 2 periods per
buffer.
Anyway, I will take a shot at fixing this and see how it works. Worst
case scenario I can reverse engineer kX ASIO, which I would rather not
do because I think I have almost figured out how to get the same
functionality from the ALSA driver without any reverse engineering.
Eliminating the extra voice is the last major hurdle.
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-06 20:41 ` Lee Revell
@ 2004-09-07 1:09 ` Manuel Jander
2004-09-07 4:47 ` Lee Revell
0 siblings, 1 reply; 33+ messages in thread
From: Manuel Jander @ 2004-09-07 1:09 UTC (permalink / raw)
To: alsa-devel
Hi,
On Mon, 2004-09-06 at 16:41, Lee Revell wrote:
> > > It seems like if this were the case, then it would require a workaround
> > > similar to the extra voice hack. Does this seem plausible?
> >
> > Maybe. But it forces us to use only two periods per ring buffer.
>
> But this is how the hardware was designed to work. It is ALSA that is
> unusual in supporting more than 2 periods per buffer after all. Is this
> really the only reason for the extra voice? It is good that ALSA
> supports this, but forcing it on hardware that was not designed for it
> doesn't make sense here.
If the most effective way is to use 2 DMA subbuffers, there are software
means to emulate more periods. AFAIK, there is a Crystal CS42xx driver
using that scheme, and the Aureal driver does this too (even though the
latter supports up to 4 subbuffers).
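The emulation scheme mentioned here boils down to: the hardware interrupts only at (half-)buffer boundaries, but the handler reads the current DMA position and reports however many software periods have actually passed. A hypothetical sketch of that idea (in a real ALSA driver the reporting would be done by calling snd_pcm_period_elapsed(); the structure and names below are illustrative):

```c
/* Tracks emulated (software) periods on hardware that only interrupts
 * per half buffer. */
struct period_emu {
    unsigned buffer_frames;   /* whole ring buffer */
    unsigned period_frames;   /* emulated software period */
    unsigned last_period;     /* index of the last reported period */
};

/* Called from the half-buffer interrupt handler with the current DMA
 * position in frames.  Returns how many period-elapsed events the
 * driver should report to the PCM layer. */
static unsigned periods_elapsed(struct period_emu *e, unsigned hw_pos)
{
    unsigned cur = hw_pos / e->period_frames;
    unsigned nperiods = e->buffer_frames / e->period_frames;
    unsigned n = (cur + nperiods - e->last_period) % nperiods;

    e->last_period = cur;
    return n;
}
```

For example, with a 1024-frame buffer split into four 256-frame software periods, each half-buffer interrupt reports two elapsed periods.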
> Supporting more than 2 periods per buffer is useless on the emu10k1
> anyway because it only works in the playback direction, which means JACK
> can only use 2 periods per buffer. Also I was under the impression that
> more than 2 periods should only be used if 2 periods is problematic due
> to buggy hardware or system latency issues. With the latest emu10k1
> ALSA driver and Ingo's patches these are not an issue.
IMHO, 2 periods are necessary. With only one, things get tough, if not
impossible. Most hardware does not allow manipulating the DMA registers
of a subbuffer while it is running.
> > It can be
> > easy to add an extra check for this case and don't allocate extra voice
> > for it. It still might be that it won't work correctly (I think that emu
> > chips interrupt a bit earlier and not all samples are transferred via PCI
> > bus at the time).
>
> I have only heard of this causing problems in combination with certain
> buggy VIA chipsets (KT266 and KT333). There is a workaround available
> for Windows (google for "george breese pci latency"). It should be
> simple enough to do the same if it becomes an issue.
>
> > Also note that the current code uses a ccis correction
> > for the extra voice (I don't know what this is - probably some cache or
> > interpolation correction). Without complete specs describing the exact
> > hardware behaviour, it's difficult to do a proper driver design.
>
> The OSS driver apparently does not need the extra voice. Neither does
> the kX driver. Presumably because both only support 2 periods per
> buffer.
>
> Anyway, I will take a shot at fixing this and see how it works. Worst
> case scenario I can reverse engineer kX ASIO, which I would rather not
> do because I think I have almost figured out how to get the same
> functionality from the ALSA driver without any reverse engineering.
>
> Eliminating the extra voice is the last major hurdle.
Well, good luck. It never hurts to have some.
Best Regards
Manuel Jander
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 1:09 ` hardware channel mixing [EMU10K1 DMA] Manuel Jander
@ 2004-09-07 4:47 ` Lee Revell
2004-09-07 6:53 ` Lee Revell
2004-09-07 8:23 ` Jaroslav Kysela
0 siblings, 2 replies; 33+ messages in thread
From: Lee Revell @ 2004-09-07 4:47 UTC (permalink / raw)
To: mjander; +Cc: alsa-devel
On Mon, 2004-09-06 at 21:09, Manuel Jander wrote:
> On Mon, 2004-09-06 at 16:41, Lee Revell wrote:
> > > > It seems like if this were the case, then it would require a workaround
> > > > similar to the extra voice hack. Does this seem plausible?
> > >
> > > Maybe. But it forces us to use only two periods per ring buffer.
> >
> > But this is how the hardware was designed to work. It is ALSA that is
> > unusual in supporting more than 2 periods per buffer after all. Is this
> > really the only reason for the extra voice? It is good that ALSA
> > supports this, but forcing it on hardware that was not designed for it
> > doesn't make sense here.
>
> If the most effective way is to use 2 DMA subbuffers, there are software
> means to emulate more periods. AFAIK, there is a Crystal CS42xx driver
> using that scheme, and the Aureal driver does this too (despite the
> latter supports upto 4 subbuffers).
>
OK, I think I got it. The OSS driver uses the interval timer for
playback. The ALSA driver does not use the timer interrupt at all, and
the OSS driver does not seem to use the channel loop interrupt at all.
The interval timer seems to be intended exactly for this use; I am a bit
baffled as to why the channel loop interrupt, a relatively obscure
feature, was chosen as the playback interrupt source.
I should have a patch in a day or two to eliminate the extra voice.
Looks like this will also allow the driver to support real multichannel
PCM playback rather than the current kludge of alsa plugins using
multiple stereo and mono PCMs for 5.1, etc.
Lee
* Re: hardware channel mixing
2004-09-03 14:04 ` Patrick Dumais
2004-09-03 14:31 ` Florian Schmidt
@ 2004-09-07 5:04 ` Glenn Maynard
1 sibling, 0 replies; 33+ messages in thread
From: Glenn Maynard @ 2004-09-07 5:04 UTC (permalink / raw)
To: alsa-devel
On Fri, Sep 03, 2004 at 10:04:29AM -0400, Patrick Dumais wrote:
> That's the thing. I'm writing a soft-sampler, meaning that I have multiple
> samples playing at one time when the user presses the keys on his midi
> keyboard. I don't like software mixing because I feel I could gain quality
> by relying on the hardware (am I right?). And I also want my app to go
> faster by letting the hardware do this job. But I realize that opening 16
> devices (16-voice polyphony for my app) can be resource consuming, so it's
> a drawback. I'm not too sure what to do.
FYI, I did this in the sound system for StepMania[1]: if the underlying
sound system supports multiple hardware channels, it'll use them. If
it doesn't, it'll just start one stream and do it all in software. It
works very well--and this is a game, not a soft-sampler, so it's doing
a lot of other things, too (like decoding videos and rendering graphics).
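The open-as-many-hardware-streams-as-possible approach could be sketched like this; the opener callback stands in for real snd_pcm_open() calls on "hw:0,0" (which on an SB Live succeed once per free subdevice), and all names here are hypothetical:

```c
#include <stddef.h>

typedef int (*open_stream_fn)(int index, void *ctx);

/* Try to open up to `want` independent hardware playback streams.
 * Returns how many actually opened; if the answer is 1, the caller
 * falls back to mixing everything in software into that one stream. */
static int open_hw_streams(open_stream_fn open_fn, void *ctx, int want)
{
    int n = 0;
    while (n < want && open_fn(n, ctx) == 0)
        n++;
    return n;
}

/* Stub opener pretending the hardware has 3 free subdevices. */
static int stub_open(int index, void *ctx)
{
    (void)ctx;
    return index < 3 ? 0 : -1;
}
```

With a real opener the fallback is automatic: a hardware-mixing card yields many streams, anything else yields one.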
The original reason for this was improving latency in Windows: DirectSound
has a minimum latency of about ~40ms (which is a joke)--if you don't buffer
that far ahead, it croaks, but you can start playing new streams faster.
This isn't a problem with ALSA.
However, a bigger win is that SB cards do hardware resampling. Since sound
data for this game comes from any number of sources, there's no consistent
sample rate; announcer files are often 22khz, songs are usually 44khz with a
few 48khz oddballs, etc. With software mixing, we have to resample all of
this manually, and I havn't found a resampler that can handle it[2].
I can play a dozen stereo streams at once, all resampled, and keep plugging
along at 95FPS (in Windows). In practice, we probably never will really need
to resample more than two streams at once during play (we may need to play a
dozen or more, but most can be preloaded), though those gross cards that only
support 48kHz may need to do three.
Of course, none of this necessarily means hardware mixing is a good approach
for your application.
[1] www.stepmania.com
[2] FWIW, we need: reasonable quality resampling--good enough for use in a
game based entirely on music but not "high quality"; very low CPU use,
preferably under 5% per stereo stream on a typical low-end 1ghz PC; and
permissively licensed, 2-, 3-clause BSD or MIT, so it can be integrated
directly into our MIT-licensed code. I don't know if all of this is
possible, DSP's not my field. :)
--
Glenn Maynard
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 4:47 ` Lee Revell
@ 2004-09-07 6:53 ` Lee Revell
2004-09-07 8:23 ` Jaroslav Kysela
1 sibling, 0 replies; 33+ messages in thread
From: Lee Revell @ 2004-09-07 6:53 UTC (permalink / raw)
To: mjander; +Cc: alsa-devel
On Tue, 2004-09-07 at 00:47, Lee Revell wrote:
> OK, I think I got it. The OSS driver uses the interval timer for
> playback. The ALSA driver does not use the timer interrupt at all, and
> the OSS driver does not seem to use the channel loop interrupt at all.
>
> The interval timer seems to be intended exactly for this use; I am a bit
> baffled as to why the channel loop interrupt, a relatively obscure
> feature, was chosen as the playback interrupt source.
>
> I should have a patch in a day or two to eliminate the extra voice.
> Looks like this will also allow the driver to support real multichannel
> PCM playback rather than the current kludge of alsa plugins using
> multiple stereo and mono PCMs for 5.1, etc.
>
The above is almost right; this will eliminate the extra voice for
regular PCM playback, system sounds, etc., but will not give you
sample-accurate synchronized full-duplex operation like kX ASIO. The only
way to do this is to have all the period_elapsed callbacks for the linked
capture/playback streams run atomically in the same interrupt handler.
This can be achieved by adding another multichannel playback device,
corresponding to the hw:0,2 FXBus, and using the efx_capture interrupt for
playback. When these playback channels are opened, a voice is allocated and
added to the efx capture device's linked list of playback streams. This
_has_ to be how the ASIO drivers work.
I should have a patch fairly soon.
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 4:47 ` Lee Revell
2004-09-07 6:53 ` Lee Revell
@ 2004-09-07 8:23 ` Jaroslav Kysela
2004-09-07 18:26 ` Lee Revell
1 sibling, 1 reply; 33+ messages in thread
From: Jaroslav Kysela @ 2004-09-07 8:23 UTC (permalink / raw)
To: Lee Revell; +Cc: mjander, alsa-devel
On Tue, 7 Sep 2004, Lee Revell wrote:
> The interval timer seems to be intended exactly for this use; I am a bit
> baffled as to why the channel loop interrupt, a relatively obscure
> feature, was chosen as the playback interrupt source.
No, in this case you don't get an exact interrupt at the period boundary.
It seems a bigger problem (wrapping) than having an extra voice.
Jaroslav
-----
Jaroslav Kysela <perex@suse.cz>
Linux Kernel Sound Maintainer
ALSA Project, SUSE Labs
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 8:23 ` Jaroslav Kysela
@ 2004-09-07 18:26 ` Lee Revell
2004-09-07 19:16 ` Jaroslav Kysela
0 siblings, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-07 18:26 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Tue, 2004-09-07 at 04:23, Jaroslav Kysela wrote:
> On Tue, 7 Sep 2004, Lee Revell wrote:
>
> > The interval timer seems to be intended exactly for this use; I am a bit
> > baffled as to why the channel loop interrupt, a relatively obscure
> > feature, was chosen as the playback interrupt source.
>
> No, in this case you don't get an exact interrupt at the period boundary.
> It seems a bigger problem (wrapping) than having an extra voice.
>
Hmm. If this is the case, then it really seems like the OSS driver
should not work at all. You mentioned previously that removing the
extra voice would only allow 2 periods per buffer. Do you mean that the
interval timer could be used in this case, and that it's only not
reliable for more than 2 periods per buffer? Or would you still use the
channel loop interrupt, but on the playback voice rather than an extra
one?
How did you figure out the use of the channel loop interrupt, as this is
not used in the OSS driver at all?
Anyway, guess it's time to try it and see what happens. It still seems
like I can implement the kX ASIO functionality without needing the extra
voice because the efx capture device provides a very high resolution
timer.
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 18:26 ` Lee Revell
@ 2004-09-07 19:16 ` Jaroslav Kysela
2004-09-07 19:34 ` Lee Revell
2004-09-08 22:49 ` Lee Revell
0 siblings, 2 replies; 33+ messages in thread
From: Jaroslav Kysela @ 2004-09-07 19:16 UTC (permalink / raw)
To: Lee Revell; +Cc: mjander, alsa-devel
On Tue, 7 Sep 2004, Lee Revell wrote:
> On Tue, 2004-09-07 at 04:23, Jaroslav Kysela wrote:
> > On Tue, 7 Sep 2004, Lee Revell wrote:
> >
> > > The interval timer seems to be intended exactly for this use; I am a bit
> > > baffled as to why the channel loop interrupt, a relatively obscure
> > > feature, was chosen as the playback interrupt source.
> >
> > No, in this case you don't get an exact interrupt at the period boundary.
> > The wrapping seems a bigger problem than having an extra voice.
> >
>
> Hmm. If this is the case, then it really seems like the OSS driver
> should not work at all.
It works, but with higher latencies than the application requested. You
could probably write timer scheduler code, but it would probably be
a maintenance nightmare.
> You mentioned previously that removing the extra voice would only allow
> 2 periods per buffer. Do you mean that the interval timer could be used
Yes, if we can get a proper interrupt in the middle of the voice's buffer.
> How did you figure out the use of the channel loop interrupt, as this is
> not used in the OSS driver at all?
I don't remember exactly. Maybe from the old EMU8000 (because some things
are common) and the header file from the OSS driver.
> like I can implement the kX ASIO functionality without needing the extra
> voice because the efx capture device provides a very high resolution
> timer.
You can create a special playback PCM which shares the efx interrupt, of
course. But I don't know how you expect to synchronize multiple streams
with exact sample resolution, because you cannot start multiple playback
streams using one I/O transaction on emu10k? chips.
Jaroslav
-----
Jaroslav Kysela <perex@suse.cz>
Linux Kernel Sound Maintainer
ALSA Project, SUSE Labs
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:16 ` Jaroslav Kysela
@ 2004-09-07 19:34 ` Lee Revell
2004-09-07 19:41 ` Jaroslav Kysela
2004-09-08 22:49 ` Lee Revell
1 sibling, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-07 19:34 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Tue, 2004-09-07 at 15:16, Jaroslav Kysela wrote:
> On Tue, 7 Sep 2004, Lee Revell wrote:
>
> > On Tue, 2004-09-07 at 04:23, Jaroslav Kysela wrote:
> > > On Tue, 7 Sep 2004, Lee Revell wrote:
> > >
> > > > The interval timer seems to be intended exactly for this use; I am a bit
> > > > baffled as to why the channel loop interrupt, a relatively obscure
> > > > feature, was chosen as the playback interrupt source.
> > >
> > > No, in this case you don't get an exact interrupt at the period boundary.
> > > The wrapping seems a bigger problem than having an extra voice.
> > >
> >
> > Hmm. If this is the case, then it really seems like the OSS driver
> > should not work at all.
>
> It works, but with higher latencies than the application requested. You
> could probably write timer scheduler code, but it would probably be
> a maintenance nightmare.
>
Hmm, OK. Guess I will have to try it.
> > You mentioned previously that removing the extra voice would only allow
> > 2 periods per buffer. Do you mean that the interval timer could be used
>
> Yes, if we can get a proper interrupt in the middle of the voice's buffer.
>
OK.
> > How did you figure out the use of the channel loop interrupt, as this is
> > not used in the OSS driver at all?
>
> I don't remember exactly. Maybe from the old EMU8000 (because some things
> are common) and the header file from the OSS driver.
>
Hmm, I will have a look at that one.
> > like I can implement the kX ASIO functionality without needing the extra
> > voice because the efx capture device provides a very high resolution
> > timer.
>
> You can create a special playback PCM which shares the efx interrupt, of
> course. But I don't know how you expect to synchronize multiple streams
> with exact sample resolution, because you cannot start multiple playback
> streams using one I/O transaction on emu10k? chips.
>
I think you can start a stereo stream using one I/O transaction, but
right, I think you are limited to opening 2 at a time. Anyway, it works
great in Windows with the kX driver, so there must be a way. If I have
to, I will reverse engineer it. Thanks for the info.
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:34 ` Lee Revell
@ 2004-09-07 19:41 ` Jaroslav Kysela
2004-09-07 19:46 ` Lee Revell
2004-09-07 19:48 ` Lee Revell
0 siblings, 2 replies; 33+ messages in thread
From: Jaroslav Kysela @ 2004-09-07 19:41 UTC (permalink / raw)
To: Lee Revell; +Cc: mjander, alsa-devel
On Tue, 7 Sep 2004, Lee Revell wrote:
> I think you can start a stereo stream using one I/O transaction, but
No, voices are started independently, so even with stereo, there might be
no proper sample sync.
Jaroslav
-----
Jaroslav Kysela <perex@suse.cz>
Linux Kernel Sound Maintainer
ALSA Project, SUSE Labs
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:41 ` Jaroslav Kysela
@ 2004-09-07 19:46 ` Lee Revell
2004-09-07 19:48 ` Lee Revell
1 sibling, 0 replies; 33+ messages in thread
From: Lee Revell @ 2004-09-07 19:46 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Tue, 2004-09-07 at 15:41, Jaroslav Kysela wrote:
> On Tue, 7 Sep 2004, Lee Revell wrote:
>
> > I think you can start a stereo stream using one I/O transaction, but
>
> No, voices are started independently, so even with stereo, there might be
> no proper sample sync.
>
Are you saying that you can't even do stereo playback and have the
voices in sync?
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:41 ` Jaroslav Kysela
2004-09-07 19:46 ` Lee Revell
@ 2004-09-07 19:48 ` Lee Revell
2004-09-07 19:52 ` Jaroslav Kysela
1 sibling, 1 reply; 33+ messages in thread
From: Lee Revell @ 2004-09-07 19:48 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Tue, 2004-09-07 at 15:41, Jaroslav Kysela wrote:
> On Tue, 7 Sep 2004, Lee Revell wrote:
>
> > I think you can start a stereo stream using one I/O transaction, but
>
> No, voices are started independently, so even with stereo, there might be
> no proper sample sync.
>
Well even if this is the case, the ASIO drivers certainly seem to work
great. What kinds of problems could this cause?
Maybe it works because the capture channels are always in sync?
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:48 ` Lee Revell
@ 2004-09-07 19:52 ` Jaroslav Kysela
2004-09-07 20:06 ` Lee Revell
0 siblings, 1 reply; 33+ messages in thread
From: Jaroslav Kysela @ 2004-09-07 19:52 UTC (permalink / raw)
To: Lee Revell; +Cc: mjander, alsa-devel
On Tue, 7 Sep 2004, Lee Revell wrote:
> On Tue, 2004-09-07 at 15:41, Jaroslav Kysela wrote:
> > On Tue, 7 Sep 2004, Lee Revell wrote:
> >
> > > I think you can start a stereo stream using one I/O transaction, but
> >
> > No, voices are started independently, so even with stereo, there might be
> > no proper sample sync.
> >
>
> Well even if this is the case, the ASIO drivers certainly seem to work
> great. What kinds of problems could this cause?
>
> Maybe it works because the capture channels are always in sync?
Yes, there is only one efx device, so it is started via a single PCI
transaction.
Jaroslav
-----
Jaroslav Kysela <perex@suse.cz>
Linux Kernel Sound Maintainer
ALSA Project, SUSE Labs
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:52 ` Jaroslav Kysela
@ 2004-09-07 20:06 ` Lee Revell
0 siblings, 0 replies; 33+ messages in thread
From: Lee Revell @ 2004-09-07 20:06 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Tue, 2004-09-07 at 15:52, Jaroslav Kysela wrote:
> On Tue, 7 Sep 2004, Lee Revell wrote:
>
> > On Tue, 2004-09-07 at 15:41, Jaroslav Kysela wrote:
> > > On Tue, 7 Sep 2004, Lee Revell wrote:
> > >
> > > > I think you can start a stereo stream using one I/O transaction, but
> > >
> > > No, voices are started independently, so even with stereo, there might be
> > > no proper sample sync.
> > >
> >
> > Well even if this is the case, the ASIO drivers certainly seem to work
> > great. What kinds of problems could this cause?
> >
> > Maybe it works because the capture channels are always in sync?
>
> Yes, there is only one efx device, so it is started via a single PCI
> transaction.
>
OK, so you can't start them in sync, but it seems like once you are
rolling it would be OK. That makes sense considering the emu10k1 seems
designed more for MIDI wavetable synthesis than for PCM playback.
Lee
* Re: hardware channel mixing [EMU10K1 DMA]
2004-09-07 19:16 ` Jaroslav Kysela
2004-09-07 19:34 ` Lee Revell
@ 2004-09-08 22:49 ` Lee Revell
1 sibling, 0 replies; 33+ messages in thread
From: Lee Revell @ 2004-09-08 22:49 UTC (permalink / raw)
To: Jaroslav Kysela; +Cc: mjander, alsa-devel
On Tue, 2004-09-07 at 15:16, Jaroslav Kysela wrote:
> On Tue, 7 Sep 2004, Lee Revell wrote:
>
> > On Tue, 2004-09-07 at 04:23, Jaroslav Kysela wrote:
> > > On Tue, 7 Sep 2004, Lee Revell wrote:
> > >
> > > > The interval timer seems to be intended exactly for this use; I am a bit
> > > > baffled as to why the channel loop interrupt, a relatively obscure
> > > > feature, was chosen as the playback interrupt source.
> > >
> > > No, in this case you don't get an exact interrupt at the period boundary.
> > > The wrapping seems a bigger problem than having an extra voice.
> > >
> >
> > Hmm. If this is the case, then it really seems like the OSS driver
> > should not work at all.
>
> It works, but with higher latencies than the application requested. You
> could probably write timer scheduler code, but it would probably be
> a maintenance nightmare.
>
Hmm, it looks like the OSS driver sets the interval timer, then when this
goes off, uses a tasklet to schedule the work that ALSA does in the
pointer callback! Of course this will result in higher latencies than
the application requested; it seems like this would not work well at
all.
The timer seems to support latencies as low as 4 sample periods, so it
looks like this would work, since the ALSA driver would run the pointer
callback directly from the timer interrupt handler.
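The arithmetic behind "latencies as low as 4 sample periods" is easy to check. A minimal sketch (the function name is mine, and 48 kHz is assumed because it is the emu10k1's internal operating rate):

```c
/* Convert a timer period expressed in sample frames to microseconds.
 * Sketch only: integer truncation is assumed to be acceptable. */
static long timer_period_us(long frames, long rate_hz)
{
    return frames * 1000000L / rate_hz;
}
```

At 48 kHz, a 4-frame timer period works out to roughly 83 microseconds between interrupts, which is why running the pointer callback directly from the timer interrupt handler looks feasible for low-latency use.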
It doesn't seem like this would be too bad to maintain; the timer
handler code in the OSS driver is very simple.
However, the OSS driver seems to be suspiciously missing some things. For
example, the capture buffer interrupt handlers are not used at all; it
uses the interval timer for capture too, which seems broken.
Lee
Thread overview: 33+ messages
2004-08-29 15:03 hardware channel mixing Patrick Dumais
2004-09-03 12:59 ` Clemens Ladisch
2004-09-03 13:15 ` Patrick Dumais
2004-09-03 13:24 ` Clemens Ladisch
2004-09-03 13:58 ` Florian Schmidt
2004-09-03 14:04 ` Patrick Dumais
2004-09-03 14:31 ` Florian Schmidt
2004-09-03 14:34 ` Patrick Dumais
2004-09-03 15:23 ` Florian Schmidt
2004-09-07 5:04 ` Glenn Maynard
2004-09-03 23:30 ` Lee Revell
2004-09-04 1:19 ` Manuel Jander
2004-09-04 23:28 ` Lee Revell
2004-09-05 3:02 ` Manuel Jander
2004-09-05 5:06 ` Lee Revell
2004-09-05 18:12 ` Manuel Jander
2004-09-05 18:39 ` Lee Revell
2004-09-05 18:28 ` Lee Revell
2004-09-06 11:54 ` Jaroslav Kysela
2004-09-06 20:41 ` Lee Revell
2004-09-07 1:09 ` hardware channel mixing [EMU10K1 DMA] Manuel Jander
2004-09-07 4:47 ` Lee Revell
2004-09-07 6:53 ` Lee Revell
2004-09-07 8:23 ` Jaroslav Kysela
2004-09-07 18:26 ` Lee Revell
2004-09-07 19:16 ` Jaroslav Kysela
2004-09-07 19:34 ` Lee Revell
2004-09-07 19:41 ` Jaroslav Kysela
2004-09-07 19:46 ` Lee Revell
2004-09-07 19:48 ` Lee Revell
2004-09-07 19:52 ` Jaroslav Kysela
2004-09-07 20:06 ` Lee Revell
2004-09-08 22:49 ` Lee Revell