* [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
@ 2014-05-20 2:52 Lin, Mengdong
2014-05-20 8:10 ` Takashi Iwai
` (2 more replies)
0 siblings, 3 replies; 24+ messages in thread
From: Lin, Mengdong @ 2014-05-20 2:52 UTC (permalink / raw)
To: Vetter, Daniel, Takashi Iwai (tiwai@suse.de),
alsa-devel@alsa-project.org, intel-gfx@lists.freedesktop.org
Cc: Yang, Libin, Nikkanen, Kimmo, Koul, Vinod, Deak, Imre,
Babu, Ramesh, Li, Jocelyn, Shankar, Uma, Girdwood, Liam R
This RFC is based on previous discussion to set up a generic communication channel between display and audio driver and
an internal design of Intel MCG/VPG HDMI audio driver. It's still an initial draft and your advice would be appreciated
to improve the design.
The basic idea is to create a new avsink module and let both drm and alsa depend on it.
This new module provides a framework and APIs for synchronization between the display and audio driver.
1. Display/Audio Client
The avsink core provides APIs to create, register and lookup a display/audio client.
A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) can create a client, add some resources
objects (shared power wells, display outputs, and audio inputs, register ops) to the client, and then register this
client to avisink core. The peer driver can look up a registered client by a name or type, or both. If a client gives
a valid peer client name on registration, avsink core will bind the two clients as peer for each other. And we
expect a display client and an audio client to be peers for each other in a system.
int avsink_new_client ( const char *name,
int type, /* client type, display or audio */
struct module *module,
void *context,
const char *peer_name,
struct avsink_client **client_ret);
int avsink_free_client (struct avsink_client *client);
int avsink_register_client(struct avsink_client *client);
int avisink_unregister_client(int client_handle);
struct avsink_client *avsink_lookup_client(const char *name, int type);
struct avsink_client {
const char *name; /* client name */
int type; /* client type*/
void *context;
struct module *module; /* top-level module for locking */
struct avsink_client *peer; /* peer client */
/* shared power wells */
struct avsink_power_well *power_well;
int num_power_wells;
/* endpoints, display outputs or audio inputs */
struct avsink_endpoint * endpoint;
int num_endpints;
struct avsink_registers_ops *reg_ops; /* ops to access registers of a client */
void *private_data;
...
};
On system boots, the avsink module is loaded before the display and audio driver module. And the display and audio
driver may be loaded on parallel.
* If a specific display driver (eg. i915) supports avsink, it can create a display client, add power wells and display
outputs to the client, and then register the display client to the avsink core. Then it may look up if there is any
audio client registered, by name or type, and may find an audio client registered by some audio driver.
* If an audio driver supports avsink, it usually should look up a registered display client by name or type at first,
because it may need the shared power well in GPU and check the display outputs' name to bind the audio inputs. If
the display client is not registered yet, the audio driver can choose to wait (maybe in a work queue) or return
-EAGAIN for a deferred probe. After the display client is found, the audio driver can register an audio client with
the display client's name as the peer name, the avsink core will bind the display and audio clients to each other.
Open question:
If the display or audio driver is disabled by the black list, shall we introduce a time out to avoid waiting for the
other client registered endlessly?
2. Shared power wells (optional)
The audio and display devices, maybe only part of them, may share a common power well (e.g. for Intel Haswell and
Broadwell). If so, the driver that controls the power well should define a power well object, implement the get/put ops,
and add it to its avsink client before registering the client to avsink core. Then the peer client can look up this
power well by its name, and get/put this power well as a user.
A client can have multiple power well objects.
struct avsink_power_well {
const char *name; /* name of the power well */
void *context; /* parameter of get/put ops, maybe device pointer for this power well */
struct avsink_power_well_ops *ops
};
struct avsink_power_well_ops {
int (*get)(void *context);
int (*put)(void *context);
};
API:
int avsink_new_power(struct avsink_client *client,
const char *power_name,
void * power_context,
struct avsink_power_well_ops *ops,
struct avsink_power_well **power_ret);
struct avsink_power_well *avisnk_lookup_power(const char *name);
int avsink_get_power(struct avsink_power_well *power); /* Reqesut the power */
int avsink_put_power(struct avsink_power_well *power); /* Release the power */
For example, the i915 display driver can create a device for the shared power well in Haswell GPU, implement its PM
functions, and use the device pointer as the context when creating the power well object, like below
struct avsink_power_well_ops i915_power_well_ops = {
.get = pm_runtime_get_sync;
.put = pm_runtime_put_sync;
};
...
avsink_new_power ( display_client,
"i915_display_power_well",
pdev, /* pointer of the power well device */
&i915_power_well_ops,
...)
Power domain is not used here since a single device seems enough to represent a power well.
3. Display output and audio input endpoints
A display client should register the display output endpoints and its audio peer client should register the audio input
endpoints. A client can have multiple endpoints. The avsink core will bind an audio input and a display output as peer
to each other. This is to allow the audio and display driver to synchronize with each other for each display pipeline.
All endpoints should be added to a client before the client is registered to avsink core. Dynamic endpoints are not
supported now.
A display out here represents a physical HDMI/DP output port. And as long as it's usable in the system (i.e. physically
connected to the HDMP/DP port on the machine board), the display output should be registered not matter the port is
connected to an external display device or not. And if HW and display driver can support DP1.2 daisy chain (multiple DP
display devices can be connected to a single port), multiple static displays outputs should be defined for the DP port
according to the HW capability. The port & display device number can be indicated by the name (e.g. "i915_DDI_B",
"i915_DDI_B_DEV0", "i915_DDI_B_DEV1", or "i915_DDI_B_DEV2"), defined by the display driver.
The audio driver can check the endpoints of its peer display client and use an display endpoint's name, or a presumed
display endpoint name, as peer name when registering an audio endpoint, thus the avsink core will bind the two display
and audio endpoints as peers.
struct avsink_endpoint {
const char *name; /*name of the endpoint */
int type; /* DISPLAY_OUTPUT or AUDIO_INPUT */
void *context; /* private data, used as parameter of the ops */
struct avsink_endpoint_ops *ops;
struct avsink_endpoint *peer; /* peer endpoint */
};
struct avsink_endpoint_ops {
int (*get_caps) (enum had_caps_list query_element,
void *capabilities,
void *context);
int (*set_caps) (enum had_caps_list set_element,
void *capabilities,
void *context);
int (*event_handler) (enum avsink_event_type event_type, void *context);
};
API:
int avsink_new_endpoint (struct avsink_client *client,
const char *name,
int type, /* DISPLAY_OUTPUT or AUDIO_INPUT*/
void *context,
const char *peer_name, /* can be NULL if no clue */
avsink_endpoint_ops *ops,
struct avsink_endpoint **endpoint_ret);
int avsink_endpoint_get_caps(struct avsink_endpoint *endpoint,
enum avsink_caps_list t get_element,
void *capabilities);
int avsink_endpoint_set_caps(struct avsink_endpoint *endpoint,
enum had_caps_list set_element,
void *capabilities);
int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
enum avsink_event_type event_type);
4. Get/Set caps on an endpoint
The display or audio driver can get or set capabilities on an endpoint. Depending on the capability ID, the avsink core
will call get_caps/set_caps ops of this endpoint, or call get_caps/set_caps ops of its peer endpoint and return the
result to the caller.
enum avsink_caps_list {
/* capabilities for display output endpoints */
AVSINK_GET_DISPLAY_ELD = 1,
AVSINK_GET_DISPLAY_TYPE, /* HDMI or DisplayPort */
AVSINK_GET_DISPLAY_NAME, /* Hope to use display device name under /sys/class/drm, like "card0-DP-1", for user
* space to figure out which HDMI/DP output on the drm side corresponds to which audio
* stream device on the alsa side */
AVSINK_GET_DISPLAY_SAMPLING_FREQ, /* HDMI TMDS clock or DP link symbol clock, for audio driver to
* program N value
*/
AVSINK_GET_DISPLAY_HDCP_STATUS,
AVSINK_GET_DISPLAY_AUDIO_STATUS, /* Whether audio is enabled */
AVSINK_SET_DISPLAY_ENABLE_AUDIO, /* Enable audio */
AVSINK_SET_DISPLAY_DISABLE_AUDIO, /* Disable audio */
AVSINK_SET_DISPLAY_ENABLE_AUDIO_INT, /* Enable audio interrupt */
AVSINK_SET_DISPLAY_DISABLE_AUDIO_INT, /* Disable audio interrupt */
/* capabilities for audio input endpoints */
AVSINK_GET_AUDIO_IS_BUSY, /* Whether there is an active audio streaming */
OTHERS_TBD,
};
For example, the audio driver can query ELD info on an audio input endpoint by using caps AVSINK_GET_DISPLAY_ELD, and
avsink core will call get_caps() on the peer display output endpoint and return the ELD info to the audio driver.
Some audio driver may only use part of these caps. E.g. HD-Audio driver can use bus commands instead of the ops to
control the audio on gfx side, so it doesn't use caps like ENABLE/DISABLE_AUDIO or ENABLE/DISABLE_AUDIO.
When the display driver want to disable a display pipeline for hot-plug, mode change or power saving, it can use caps
AVSINK_GET_AUDIO_IS_BUSY to check if the audio input is busy (active streaming) on this display pipeline. And if audio
is busy, the display driver can choose to wait or go ahead to disable display pipeline anyway. For the latter case, the
audio input endpoint will be notified by an event and should abort audio streaming.
5. Event handling of endpoints
A driver can post events on an endpoint. Depending on the event type, the avsink core will call the endpoint's event
handler or pass the event to its peer endpoint and trigger the peer's event handler function if defined.
int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
enum avsink_event_type event_type);
Now we only defined event types which should be handled by the audio input endpoints. The event types can be extended
in the future.
enum avsink_event_type {
AVSINK_EVENT_DISPLAY_DISABLE = 1, /* The display pipeline is disabled for hot-plug, mode change or
* suspend. Audio driver should stop any active streaming.
*/
AVSINK_EVENT_DISPLAY_ENABLE, /* The display pipeline is enabled after hot-plug, mode change or
* resume. Audio driver can restore previously interrupted streaming
*/
AVSINK_EVENT_DISPLAY_MODE_CHANGE, /* Display mode change event. At this time, the new display mode is
* configured but the display pipeline is not enabled yet. Audio driver
* can do some configurations such as programing N value */
AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_DONE, /* Audio Buffer done interrupts. Only for audio drivers if DMA and
* interrupt are handled by GPU
*/
AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_UNDERRUN, /* Audio Buffer under run interrupts. Only for audio drivers if
* DMA and interrupt are handled by GPU
*/
};
So for a display driver, it can post an event on a display output endpoint and get processed by the peer audio input
endpoint. Or it can also directly post an event on a peer audio input endpoint, by using the 'peer' pointer on a
display output endpoint.
6. Display register operation (optional)
Some audio driver needs to access GPU audio registers. The register ops are provided by the peer display client.
struct avsink_registers_ops {
int (*read_register) (uint32_t reg_addr, uint32_t *data, void *context);
int (*write_register) (uint32_t reg_addr, uint32_t data, void *context);
int (*read_modify_register) (uint32_t reg_addr, uint32_t data, uint32_t mask, void *context);
int avsink_define_reg_ops (struct avsink_client *client, struct avsink_registers_ops *ops);
And avsink core provides API for the audio driver to access the display registers:
int avsink_read_display_register(struct avsink_client *client , uint32_t offset, uint32_t *data);
int avsink_write_display_register(struct avsink_client *client , uint32_t offset, uint32_t data);
int avsink_read_modify_display_register(struct avsink_client *client, uint32_t offset, uint32_t data, uint32_t mask);
If the client is an audio client, the avsink core will find it peer display client and call its register ops;
and if the client is a display client, the avsink core will just call its own register ops.
Thanks
Mengdong
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 2:52 [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM) Lin, Mengdong
@ 2014-05-20 8:10 ` Takashi Iwai
2014-05-20 10:37 ` Vinod Koul
2014-05-22 2:46 ` [alsa-devel] " Raymond Yau
2014-05-20 10:02 ` Daniel Vetter
2014-05-20 14:29 ` Imre Deak
2 siblings, 2 replies; 24+ messages in thread
From: Takashi Iwai @ 2014-05-20 8:10 UTC (permalink / raw)
To: Lin, Mengdong
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Koul, Vinod, intel-gfx@lists.freedesktop.org, Babu, Ramesh,
Li, Jocelyn, Shankar, Uma, Girdwood, Liam R, Vetter, Daniel,
Deak, Imre
At Tue, 20 May 2014 02:52:19 +0000,
Lin, Mengdong wrote:
>
> This RFC is based on previous discussion to set up a generic communication channel between display and audio driver and
> an internal design of Intel MCG/VPG HDMI audio driver. It's still an initial draft and your advice would be appreciated
> to improve the design.
>
> The basic idea is to create a new avsink module and let both drm and alsa depend on it.
> This new module provides a framework and APIs for synchronization between the display and audio driver.
Thanks, this looks like a good ground to start with.
Some comments below.
> 1. Display/Audio Client
>
> The avsink core provides APIs to create, register and lookup a display/audio client.
> A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) can create a client, add some resources
> objects (shared power wells, display outputs, and audio inputs, register ops) to the client, and then register this
> client to avisink core. The peer driver can look up a registered client by a name or type, or both. If a client gives
> a valid peer client name on registration, avsink core will bind the two clients as peer for each other. And we
> expect a display client and an audio client to be peers for each other in a system.
>
> int avsink_new_client ( const char *name,
> int type, /* client type, display or audio */
> struct module *module,
> void *context,
> const char *peer_name,
> struct avsink_client **client_ret);
>
> int avsink_free_client (struct avsink_client *client);
>
> int avsink_register_client(struct avsink_client *client);
> int avisink_unregister_client(int client_handle);
>
> struct avsink_client *avsink_lookup_client(const char *name, int type);
>
> struct avsink_client {
> const char *name; /* client name */
> int type; /* client type*/
> void *context;
> struct module *module; /* top-level module for locking */
>
> struct avsink_client *peer; /* peer client */
>
> /* shared power wells */
> struct avsink_power_well *power_well;
> int num_power_wells;
The "power well" is Intel-specific things. Better to use a more
generic term. (And, I'm always confused what "power well disable"
means :)
>
> /* endpoints, display outputs or audio inputs */
> struct avsink_endpoint * endpoint;
> int num_endpints;
>
> struct avsink_registers_ops *reg_ops; /* ops to access registers of a client */
Use const for ops pointers in general (also other cases below).
> void *private_data;
> ...
> };
>
> On system boots, the avsink module is loaded before the display and audio driver module. And the display and audio
> driver may be loaded on parallel.
For HD-audio HDMI, both controller and codec drivers would need the
avsink access. So, both drivers will register the own client?
> * If a specific display driver (eg. i915) supports avsink, it can create a display client, add power wells and display
> outputs to the client, and then register the display client to the avsink core. Then it may look up if there is any
> audio client registered, by name or type, and may find an audio client registered by some audio driver.
>
> * If an audio driver supports avsink, it usually should look up a registered display client by name or type at first,
> because it may need the shared power well in GPU and check the display outputs' name to bind the audio inputs. If
> the display client is not registered yet, the audio driver can choose to wait (maybe in a work queue) or return
> -EAGAIN for a deferred probe. After the display client is found, the audio driver can register an audio client with
> the display client's name as the peer name, the avsink core will bind the display and audio clients to each other.
There is already "component" framework, BTW. Can we integrate it into
avsink instead?
> Open question:
> If the display or audio driver is disabled by the black list, shall we introduce a time out to avoid waiting for the
> other client registered endlessly?
Yes, timeout sounds like a sensible option.
> 2. Shared power wells (optional)
>
> The audio and display devices, maybe only part of them, may share a common power well (e.g. for Intel Haswell and
> Broadwell). If so, the driver that controls the power well should define a power well object, implement the get/put ops,
> and add it to its avsink client before registering the client to avsink core. Then the peer client can look up this
> power well by its name, and get/put this power well as a user.
>
> A client can have multiple power well objects.
>
> struct avsink_power_well {
> const char *name; /* name of the power well */
> void *context; /* parameter of get/put ops, maybe device pointer for this power well */
> struct avsink_power_well_ops *ops
> };
>
> struct avsink_power_well_ops {
> int (*get)(void *context);
> int (*put)(void *context);
> };
>
> API:
> int avsink_new_power(struct avsink_client *client,
> const char *power_name,
> void * power_context,
> struct avsink_power_well_ops *ops,
> struct avsink_power_well **power_ret);
>
> struct avsink_power_well *avisnk_lookup_power(const char *name);
>
> int avsink_get_power(struct avsink_power_well *power); /* Reqesut the power */
> int avsink_put_power(struct avsink_power_well *power); /* Release the power */
>
> For example, the i915 display driver can create a device for the shared power well in Haswell GPU, implement its PM
> functions, and use the device pointer as the context when creating the power well object, like below
>
> struct avsink_power_well_ops i915_power_well_ops = {
> .get = pm_runtime_get_sync;
> .put = pm_runtime_put_sync;
> };
This needs function pointer cast, and it's not portable although it'd
work practically.
> ...
> avsink_new_power ( display_client,
> "i915_display_power_well",
> pdev, /* pointer of the power well device */
> &i915_power_well_ops,
> ...)
>
> Power domain is not used here since a single device seems enough to represent a power well.
>
> 3. Display output and audio input endpoints
>
> A display client should register the display output endpoints and its audio peer client should register the audio input
> endpoints. A client can have multiple endpoints. The avsink core will bind an audio input and a display output as peer
> to each other. This is to allow the audio and display driver to synchronize with each other for each display pipeline.
>
> All endpoints should be added to a client before the client is registered to avsink core. Dynamic endpoints are not
> supported now.
>
> A display out here represents a physical HDMI/DP output port. And as long as it's usable in the system (i.e. physically
> connected to the HDMP/DP port on the machine board), the display output should be registered not matter the port is
> connected to an external display device or not. And if HW and display driver can support DP1.2 daisy chain (multiple DP
> display devices can be connected to a single port), multiple static displays outputs should be defined for the DP port
> according to the HW capability. The port & display device number can be indicated by the name (e.g. "i915_DDI_B",
> "i915_DDI_B_DEV0", "i915_DDI_B_DEV1", or "i915_DDI_B_DEV2"), defined by the display driver.
>
> The audio driver can check the endpoints of its peer display client and use an display endpoint's name, or a presumed
> display endpoint name, as peer name when registering an audio endpoint, thus the avsink core will bind the two display
> and audio endpoints as peers.
>
> struct avsink_endpoint {
> const char *name; /*name of the endpoint */
> int type; /* DISPLAY_OUTPUT or AUDIO_INPUT */
> void *context; /* private data, used as parameter of the ops */
> struct avsink_endpoint_ops *ops;
>
> struct avsink_endpoint *peer; /* peer endpoint */
> };
>
> struct avsink_endpoint_ops {
> int (*get_caps) (enum had_caps_list query_element,
> void *capabilities,
> void *context);
> int (*set_caps) (enum had_caps_list set_element,
> void *capabilities,
> void *context);
> int (*event_handler) (enum avsink_event_type event_type, void *context);
> };
>
> API:
> int avsink_new_endpoint (struct avsink_client *client,
> const char *name,
> int type, /* DISPLAY_OUTPUT or AUDIO_INPUT*/
> void *context,
> const char *peer_name, /* can be NULL if no clue */
> avsink_endpoint_ops *ops,
> struct avsink_endpoint **endpoint_ret);
>
> int avsink_endpoint_get_caps(struct avsink_endpoint *endpoint,
> enum avsink_caps_list t get_element,
> void *capabilities);
> int avsink_endpoint_set_caps(struct avsink_endpoint *endpoint,
> enum had_caps_list set_element,
> void *capabilities);
>
> int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> enum avsink_event_type event_type);
>
> 4. Get/Set caps on an endpoint
>
> The display or audio driver can get or set capabilities on an endpoint. Depending on the capability ID, the avsink core
> will call get_caps/set_caps ops of this endpoint, or call get_caps/set_caps ops of its peer endpoint and return the
> result to the caller.
>
> enum avsink_caps_list {
> /* capabilities for display output endpoints */
> AVSINK_GET_DISPLAY_ELD = 1,
> AVSINK_GET_DISPLAY_TYPE, /* HDMI or DisplayPort */
> AVSINK_GET_DISPLAY_NAME, /* Hope to use display device name under /sys/class/drm, like "card0-DP-1", for user
> * space to figure out which HDMI/DP output on the drm side corresponds to which audio
> * stream device on the alsa side */
> AVSINK_GET_DISPLAY_SAMPLING_FREQ, /* HDMI TMDS clock or DP link symbol clock, for audio driver to
> * program N value
> */
> AVSINK_GET_DISPLAY_HDCP_STATUS,
> AVSINK_GET_DISPLAY_AUDIO_STATUS, /* Whether audio is enabled */
> AVSINK_SET_DISPLAY_ENABLE_AUDIO, /* Enable audio */
> AVSINK_SET_DISPLAY_DISABLE_AUDIO, /* Disable audio */
> AVSINK_SET_DISPLAY_ENABLE_AUDIO_INT, /* Enable audio interrupt */
> AVSINK_SET_DISPLAY_DISABLE_AUDIO_INT, /* Disable audio interrupt */
>
> /* capabilities for audio input endpoints */
> AVSINK_GET_AUDIO_IS_BUSY, /* Whether there is an active audio streaming */
> OTHERS_TBD,
> };
>
> For example, the audio driver can query ELD info on an audio input endpoint by using caps AVSINK_GET_DISPLAY_ELD, and
> avsink core will call get_caps() on the peer display output endpoint and return the ELD info to the audio driver.
>
> Some audio driver may only use part of these caps. E.g. HD-Audio driver can use bus commands instead of the ops to
> control the audio on gfx side, so it doesn't use caps like ENABLE/DISABLE_AUDIO or ENABLE/DISABLE_AUDIO.
>
> When the display driver want to disable a display pipeline for hot-plug, mode change or power saving, it can use caps
> AVSINK_GET_AUDIO_IS_BUSY to check if the audio input is busy (active streaming) on this display pipeline. And if audio
> is busy, the display driver can choose to wait or go ahead to disable display pipeline anyway. For the latter case, the
> audio input endpoint will be notified by an event and should abort audio streaming.
>
> 5. Event handling of endpoints
>
> A driver can post events on an endpoint. Depending on the event type, the avsink core will call the endpoint's event
> handler or pass the event to its peer endpoint and trigger the peer's event handler function if defined.
>
> int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> enum avsink_event_type event_type);
>
> Now we only defined event types which should be handled by the audio input endpoints. The event types can be extended
> in the future.
>
> enum avsink_event_type {
> AVSINK_EVENT_DISPLAY_DISABLE = 1, /* The display pipeline is disabled for hot-plug, mode change or
> * suspend. Audio driver should stop any active streaming.
> */
> AVSINK_EVENT_DISPLAY_ENABLE, /* The display pipeline is enabled after hot-plug, mode change or
> * resume. Audio driver can restore previously interrupted streaming
> */
> AVSINK_EVENT_DISPLAY_MODE_CHANGE, /* Display mode change event. At this time, the new display mode is
> * configured but the display pipeline is not enabled yet. Audio driver
> * can do some configurations such as programing N value */
> AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_DONE, /* Audio Buffer done interrupts. Only for audio drivers if DMA and
> * interrupt are handled by GPU
> */
> AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_UNDERRUN, /* Audio Buffer under run interrupts. Only for audio drivers if
> * DMA and interrupt are handled by GPU
> */
> };
>
> So for a display driver, it can post an event on a display output endpoint and get processed by the peer audio input
> endpoint. Or it can also directly post an event on a peer audio input endpoint, by using the 'peer' pointer on a
> display output endpoint.
Hm, one unclear thing to me is who handles this event by how.
Suppose you issue GET_ELD on an audio endpoint. Then what would
avsink does against the display client exactly?
>
> 6. Display register operation (optional)
>
> Some audio driver needs to access GPU audio registers. The register ops are provided by the peer display client.
>
> struct avsink_registers_ops {
> int (*read_register) (uint32_t reg_addr, uint32_t *data, void *context);
> int (*write_register) (uint32_t reg_addr, uint32_t data, void *context);
> int (*read_modify_register) (uint32_t reg_addr, uint32_t data, uint32_t mask, void *context);
Why an extra read_modify_register ops is needed?
> int avsink_define_reg_ops (struct avsink_client *client, struct avsink_registers_ops *ops);
>
> And avsink core provides API for the audio driver to access the display registers:
>
> int avsink_read_display_register(struct avsink_client *client , uint32_t offset, uint32_t *data);
> int avsink_write_display_register(struct avsink_client *client , uint32_t offset, uint32_t data);
> int avsink_read_modify_display_register(struct avsink_client *client, uint32_t offset, uint32_t data, uint32_t mask);
>
> If the client is an audio client, the avsink core will find it peer display client and call its register ops;
> and if the client is a display client, the avsink core will just call its own register ops.
>
> Thanks
> Mengdong
thanks,
Takashi
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 2:52 [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM) Lin, Mengdong
2014-05-20 8:10 ` Takashi Iwai
@ 2014-05-20 10:02 ` Daniel Vetter
2014-05-20 10:04 ` Daniel Vetter
2014-05-20 14:29 ` Imre Deak
2 siblings, 1 reply; 24+ messages in thread
From: Daniel Vetter @ 2014-05-20 10:02 UTC (permalink / raw)
To: Lin, Mengdong
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Greg KH,
intel-gfx@lists.freedesktop.org, Babu, Ramesh, Koul, Vinod,
Girdwood, Liam R, Vetter, Daniel
Adding Greg just as an fyi since we've chatted briefly about the avsink
bus. Comments below.
-Daniel
On Tue, May 20, 2014 at 02:52:19AM +0000, Lin, Mengdong wrote:
> This RFC is based on previous discussion to set up a generic communication channel between display and audio driver and
> an internal design of Intel MCG/VPG HDMI audio driver. It's still an initial draft and your advice would be appreciated
> to improve the design.
>
> The basic idea is to create a new avsink module and let both drm and alsa depend on it.
> This new module provides a framework and APIs for synchronization between the display and audio driver.
>
> 1. Display/Audio Client
>
> The avsink core provides APIs to create, register and lookup a display/audio client.
> A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) can create a client, add some resources
> objects (shared power wells, display outputs, and audio inputs, register ops) to the client, and then register this
> client to avisink core. The peer driver can look up a registered client by a name or type, or both. If a client gives
> a valid peer client name on registration, avsink core will bind the two clients as peer for each other. And we
> expect a display client and an audio client to be peers for each other in a system.
>
> int avsink_new_client ( const char *name,
> int type, /* client type, display or audio */
> struct module *module,
> void *context,
> const char *peer_name,
> struct avsink_client **client_ret);
>
> int avsink_free_client (struct avsink_client *client);
Hm, my idea was to create a new avsink bus and let vga drivers register
devices on that thing and audio drivers register as drivers. There's a bit
more work involved in creating a full-blown bus, but it has a lot of
upsides:
- Established infrastructure for matching drivers (i.e. audio drivers)
against devices (i.e. avsinks exported by gfx drivers).
- Module refcounting.
- power domain handling and well-integrated into runtime pm.
- Allows integration into componentized device framework since we're
dealing with a real struct device.
- Better decoupling between gfx and audio side since registration is done
at runtime.
- We can attach drv private date which the audio driver needs.
> int avsink_register_client(struct avsink_client *client);
> int avisink_unregister_client(int client_handle);
>
> struct avsink_client *avsink_lookup_client(const char *name, int type);
>
> struct avsink_client {
> const char *name; /* client name */
> int type; /* client type*/
> void *context;
> struct module *module; /* top-level module for locking */
>
> struct avsink_client *peer; /* peer client */
>
> /* shared power wells */
> struct avsink_power_well *power_well;
We need to have an struct power_domain here so that we can do proper
runtime pm. But like I've said above I think we actually want a full blown
struct device.
> int num_power_wells;
>
> /* endpoints, display outputs or audio inputs */
> struct avsink_endpoint * endpoint;
> int num_endpints;
>
> struct avsink_registers_ops *reg_ops; /* ops to access registers of a client */
> void *private_data;
> ...
> };
I think you're indeed implementing a full blown bus here ;-)
avsink->client = bus devices/childern
avsink->peer = driver for all this stuff
avsink->power_well = runtime pm support for the avsink bus
avsink->reg_ops = driver bind/unbind support
> On system boots, the avsink module is loaded before the display and audio driver module. And the display and audio
> driver may be loaded on parallel.
> * If a specific display driver (eg. i915) supports avsink, it can create a display client, add power wells and display
> outputs to the client, and then register the display client to the avsink core. Then it may look up if there is any
> audio client registered, by name or type, and may find an audio client registered by some audio driver.
>
> * If an audio driver supports avsink, it usually should look up a registered display client by name or type at first,
> because it may need the shared power well in GPU and check the display outputs' name to bind the audio inputs. If
> the display client is not registered yet, the audio driver can choose to wait (maybe in a work queue) or return
> -EAGAIN for a deferred probe. After the display client is found, the audio driver can register an audio client with
> the display client's name as the peer name, the avsink core will bind the display and audio clients to each other.
>
> Open question:
> If the display or audio driver is disabled by the black list, shall we introduce a time out to avoid waiting for the
> other client registered endlessly?
If the hdmi/dp side is a separate audio instance then we can just defer
forever I think. If that's not the case (i.e. other audio outputs are also
in the same alsa instance) then we need to be able to handle runtime
loading of the gfx driver.
Both cases would work easier I think if we have a real bus and
driver<->device matching.
> 2. Shared power wells (optional)
>
> The audio and display devices, maybe only part of them, may share a common power well (e.g. for Intel Haswell and
> Broadwell). If so, the driver that controls the power well should define a power well object, implement the get/put ops,
> and add it to its avsink client before registering the client to avsink core. Then the peer client can look up this
> power well by its name, and get/put this power well as a user.
>
> A client can have multiple power well objects.
>
> struct avsink_power_well {
> const char *name; /* name of the power well */
> void *context; /* parameter of get/put ops, maybe device pointer for this power well */
> struct avsink_power_well_ops *ops
> };
>
> struct avsink_power_well_ops {
> int (*get)(void *context);
> int (*put)(void *context);
> };
>
> API:
> int avsink_new_power(struct avsink_client *client,
> const char *power_name,
> void * power_context,
> struct avsink_power_well_ops *ops,
> struct avsink_power_well **power_ret);
>
> struct avsink_power_well *avisnk_lookup_power(const char *name);
>
> int avsink_get_power(struct avsink_power_well *power); /* Reqesut the power */
> int avsink_put_power(struct avsink_power_well *power); /* Release the power */
>
> For example, the i915 display driver can create a device for the shared power well in Haswell GPU, implement its PM
> functions, and use the device pointer as the context when creating the power well object, like below
>
> struct avsink_power_well_ops i915_power_well_ops = {
> .get = pm_runtime_get_sync;
> .put = pm_runtime_put_sync;
> };
> ...
> avsink_new_power ( display_client,
> "i915_display_power_well",
> pdev, /* pointer of the power well device */
> &i915_power_well_ops,
> ...)
>
> Power domain is not used here since a single device seems enough to represent a power well.
Imo the point of the avsink stuff is _not_ to reinvent the wheel again. A
real struct device per endpoint + runtime pm should be able to do
everything we want.
> 3. Display output and audio input endpoints
>
> A display client should register the display output endpoints and its audio peer client should register the audio input
> endpoints. A client can have multiple endpoints. The avsink core will bind an audio input and a display output as peer
> to each other. This is to allow the audio and display driver to synchronize with each other for each display pipeline.
>
> All endpoints should be added to a client before the client is registered to avsink core. Dynamic endpoints are not
> supported now.
>
> A display out here represents a physical HDMI/DP output port. And as long as it's usable in the system (i.e. physically
> connected to the HDMP/DP port on the machine board), the display output should be registered not matter the port is
> connected to an external display device or not. And if HW and display driver can support DP1.2 daisy chain (multiple DP
> display devices can be connected to a single port), multiple static displays outputs should be defined for the DP port
> according to the HW capability. The port & display device number can be indicated by the name (e.g. "i915_DDI_B",
> "i915_DDI_B_DEV0", "i915_DDI_B_DEV1", or "i915_DDI_B_DEV2"), defined by the display driver.
>
> The audio driver can check the endpoints of its peer display client and use an display endpoint's name, or a presumed
> display endpoint name, as peer name when registering an audio endpoint, thus the avsink core will bind the two display
> and audio endpoints as peers.
>
> struct avsink_endpoint {
> const char *name; /*name of the endpoint */
> int type; /* DISPLAY_OUTPUT or AUDIO_INPUT */
> void *context; /* private data, used as parameter of the ops */
> struct avsink_endpoint_ops *ops;
>
> struct avsink_endpoint *peer; /* peer endpoint */
> };
>
> struct avsink_endpoint_ops {
> int (*get_caps) (enum had_caps_list query_element,
> void *capabilities,
> void *context);
> int (*set_caps) (enum had_caps_list set_element,
> void *capabilities,
> void *context);
> int (*event_handler) (enum avsink_event_type event_type, void *context);
> };
Ok, this is confusing since get/set_caps are implemented by the gfx side.
The event handler otoh is implemented by the audio side. This needs to be
split up.
With a full device model the set/get stuff would be attached to the device
while the event handler would be part of the driver.
> API:
> int avsink_new_endpoint (struct avsink_client *client,
> const char *name,
> int type, /* DISPLAY_OUTPUT or AUDIO_INPUT*/
> void *context,
> const char *peer_name, /* can be NULL if no clue */
> avsink_endpoint_ops *ops,
> struct avsink_endpoint **endpoint_ret);
>
> int avsink_endpoint_get_caps(struct avsink_endpoint *endpoint,
> enum avsink_caps_list t get_element,
> void *capabilities);
> int avsink_endpoint_set_caps(struct avsink_endpoint *endpoint,
> enum had_caps_list set_element,
> void *capabilities);
>
> int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> enum avsink_event_type event_type);
>
> 4. Get/Set caps on an endpoint
>
> The display or audio driver can get or set capabilities on an endpoint. Depending on the capability ID, the avsink core
> will call get_caps/set_caps ops of this endpoint, or call get_caps/set_caps ops of its peer endpoint and return the
> result to the caller.
>
> enum avsink_caps_list {
> /* capabilities for display output endpoints */
> AVSINK_GET_DISPLAY_ELD = 1,
> AVSINK_GET_DISPLAY_TYPE, /* HDMI or DisplayPort */
> AVSINK_GET_DISPLAY_NAME, /* Hope to use display device name under /sys/class/drm, like "card0-DP-1", for user
> * space to figure out which HDMI/DP output on the drm side corresponds to which audio
> * stream device on the alsa side */
> AVSINK_GET_DISPLAY_SAMPLING_FREQ, /* HDMI TMDS clock or DP link symbol clock, for audio driver to
> * program N value
> */
> AVSINK_GET_DISPLAY_HDCP_STATUS,
> AVSINK_GET_DISPLAY_AUDIO_STATUS, /* Whether audio is enabled */
> AVSINK_SET_DISPLAY_ENABLE_AUDIO, /* Enable audio */
> AVSINK_SET_DISPLAY_DISABLE_AUDIO, /* Disable audio */
> AVSINK_SET_DISPLAY_ENABLE_AUDIO_INT, /* Enable audio interrupt */
> AVSINK_SET_DISPLAY_DISABLE_AUDIO_INT, /* Disable audio interrupt */
>
> /* capabilities for audio input endpoints */
> AVSINK_GET_AUDIO_IS_BUSY, /* Whether there is an active audio streaming */
> OTHERS_TBD,
> };
I really don't like caps based apis. It's imo much better to have specific
set/get functions. Also a lot of this could be passed to more specific
event handlers directly (like the eld or the sampling freq).
If you have a void* somewhere in your interface you're throwing out an
awful lot of safety checks gcc provides. Which is not good.
> For example, the audio driver can query ELD info on an audio input endpoint by using caps AVSINK_GET_DISPLAY_ELD, and
> avsink core will call get_caps() on the peer display output endpoint and return the ELD info to the audio driver.
>
> Some audio driver may only use part of these caps. E.g. HD-Audio driver can use bus commands instead of the ops to
> control the audio on gfx side, so it doesn't use caps like ENABLE/DISABLE_AUDIO or ENABLE/DISABLE_AUDIO.
>
> When the display driver want to disable a display pipeline for hot-plug, mode change or power saving, it can use caps
> AVSINK_GET_AUDIO_IS_BUSY to check if the audio input is busy (active streaming) on this display pipeline. And if audio
> is busy, the display driver can choose to wait or go ahead to disable display pipeline anyway. For the latter case, the
> audio input endpoint will be notified by an event and should abort audio streaming.
>
> 5. Event handling of endpoints
>
> A driver can post events on an endpoint. Depending on the event type, the avsink core will call the endpoint's event
> handler or pass the event to its peer endpoint and trigger the peer's event handler function if defined.
>
> int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> enum avsink_event_type event_type);
>
> Now we only defined event types which should be handled by the audio input endpoints. The event types can be extended
> in the future.
>
> enum avsink_event_type {
> AVSINK_EVENT_DISPLAY_DISABLE = 1, /* The display pipeline is disabled for hot-plug, mode change or
> * suspend. Audio driver should stop any active streaming.
> */
> AVSINK_EVENT_DISPLAY_ENABLE, /* The display pipeline is enabled after hot-plug, mode change or
> * resume. Audio driver can restore previously interrupted streaming
> */
> AVSINK_EVENT_DISPLAY_MODE_CHANGE, /* Display mode change event. At this time, the new display mode is
> * configured but the display pipeline is not enabled yet. Audio driver
> * can do some configurations such as programing N value */
> AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_DONE, /* Audio Buffer done interrupts. Only for audio drivers if DMA and
> * interrupt are handled by GPU
> */
> AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_UNDERRUN, /* Audio Buffer under run interrupts. Only for audio drivers if
> * DMA and interrupt are handled by GPU
> */
> };
>
> So for a display driver, it can post an event on a display output endpoint and get processed by the peer audio input
> endpoint. Or it can also directly post an event on a peer audio input endpoint, by using the 'peer' pointer on a
> display output endpoint.
Again I don't like the enumeration, much better to have a bunch of
specific callbacks. They can also supply interesting information to the
driver directly if instead of audio driver needing to jump through a few
get/set hooks.
> 6. Display register operation (optional)
>
> Some audio driver needs to access GPU audio registers. The register ops are provided by the peer display client.
>
> struct avsink_registers_ops {
> int (*read_register) (uint32_t reg_addr, uint32_t *data, void *context);
> int (*write_register) (uint32_t reg_addr, uint32_t data, void *context);
> int (*read_modify_register) (uint32_t reg_addr, uint32_t data, uint32_t mask, void *context);
>
> int avsink_define_reg_ops (struct avsink_client *client, struct avsink_registers_ops *ops);
>
> And avsink core provides API for the audio driver to access the display registers:
>
> int avsink_read_display_register(struct avsink_client *client , uint32_t offset, uint32_t *data);
> int avsink_write_display_register(struct avsink_client *client , uint32_t offset, uint32_t data);
> int avsink_read_modify_display_register(struct avsink_client *client, uint32_t offset, uint32_t data, uint32_t mask);
>
> If the client is an audio client, the avsink core will find it peer display client and call its register ops;
> and if the client is a display client, the avsink core will just call its own register ops.
Oh dear. Where do we need this? Imo this is really horrible, but if we
indeed need this then a struct device is better - we can attach mmio
resources to devices and let the audio side remap them as best as they see
fit.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 10:02 ` Daniel Vetter
@ 2014-05-20 10:04 ` Daniel Vetter
2014-05-20 12:43 ` Thierry Reding
0 siblings, 1 reply; 24+ messages in thread
From: Daniel Vetter @ 2014-05-20 10:04 UTC (permalink / raw)
To: Lin, Mengdong
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Greg KH,
intel-gfx@lists.freedesktop.org, Babu, Ramesh, Koul, Vinod,
DRI Development, Girdwood, Liam R, Vetter, Daniel, linux-media
Also adding dri-devel and linux-media. Please don't forget these lists for
the next round.
-Daniel
On Tue, May 20, 2014 at 12:02:04PM +0200, Daniel Vetter wrote:
> Adding Greg just as an fyi since we've chatted briefly about the avsink
> bus. Comments below.
> -Daniel
>
> On Tue, May 20, 2014 at 02:52:19AM +0000, Lin, Mengdong wrote:
> > This RFC is based on previous discussion to set up a generic communication channel between display and audio driver and
> > an internal design of Intel MCG/VPG HDMI audio driver. It's still an initial draft and your advice would be appreciated
> > to improve the design.
> >
> > The basic idea is to create a new avsink module and let both drm and alsa depend on it.
> > This new module provides a framework and APIs for synchronization between the display and audio driver.
> >
> > 1. Display/Audio Client
> >
> > The avsink core provides APIs to create, register and lookup a display/audio client.
> > A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) can create a client, add some resources
> > objects (shared power wells, display outputs, and audio inputs, register ops) to the client, and then register this
> > client to avisink core. The peer driver can look up a registered client by a name or type, or both. If a client gives
> > a valid peer client name on registration, avsink core will bind the two clients as peer for each other. And we
> > expect a display client and an audio client to be peers for each other in a system.
> >
> > int avsink_new_client ( const char *name,
> > int type, /* client type, display or audio */
> > struct module *module,
> > void *context,
> > const char *peer_name,
> > struct avsink_client **client_ret);
> >
> > int avsink_free_client (struct avsink_client *client);
>
>
> Hm, my idea was to create a new avsink bus and let vga drivers register
> devices on that thing and audio drivers register as drivers. There's a bit
> more work involved in creating a full-blown bus, but it has a lot of
> upsides:
> - Established infrastructure for matching drivers (i.e. audio drivers)
> against devices (i.e. avsinks exported by gfx drivers).
> - Module refcounting.
> - power domain handling and well-integrated into runtime pm.
> - Allows integration into componentized device framework since we're
> dealing with a real struct device.
> - Better decoupling between gfx and audio side since registration is done
> at runtime.
> - We can attach drv private date which the audio driver needs.
>
> > int avsink_register_client(struct avsink_client *client);
> > int avisink_unregister_client(int client_handle);
> >
> > struct avsink_client *avsink_lookup_client(const char *name, int type);
> >
> > struct avsink_client {
> > const char *name; /* client name */
> > int type; /* client type*/
> > void *context;
> > struct module *module; /* top-level module for locking */
> >
> > struct avsink_client *peer; /* peer client */
> >
> > /* shared power wells */
> > struct avsink_power_well *power_well;
>
> We need to have an struct power_domain here so that we can do proper
> runtime pm. But like I've said above I think we actually want a full blown
> struct device.
>
> > int num_power_wells;
> >
> > /* endpoints, display outputs or audio inputs */
> > struct avsink_endpoint * endpoint;
> > int num_endpints;
> >
> > struct avsink_registers_ops *reg_ops; /* ops to access registers of a client */
> > void *private_data;
> > ...
> > };
>
> I think you're indeed implementing a full blown bus here ;-)
>
> avsink->client = bus devices/childern
> avsink->peer = driver for all this stuff
> avsink->power_well = runtime pm support for the avsink bus
> avsink->reg_ops = driver bind/unbind support
>
> > On system boots, the avsink module is loaded before the display and audio driver module. And the display and audio
> > driver may be loaded on parallel.
> > * If a specific display driver (eg. i915) supports avsink, it can create a display client, add power wells and display
> > outputs to the client, and then register the display client to the avsink core. Then it may look up if there is any
> > audio client registered, by name or type, and may find an audio client registered by some audio driver.
> >
> > * If an audio driver supports avsink, it usually should look up a registered display client by name or type at first,
> > because it may need the shared power well in GPU and check the display outputs' name to bind the audio inputs. If
> > the display client is not registered yet, the audio driver can choose to wait (maybe in a work queue) or return
> > -EAGAIN for a deferred probe. After the display client is found, the audio driver can register an audio client with
> > the display client's name as the peer name, the avsink core will bind the display and audio clients to each other.
> >
> > Open question:
> > If the display or audio driver is disabled by the black list, shall we introduce a time out to avoid waiting for the
> > other client registered endlessly?
>
> If the hdmi/dp side is a separate audio instance then we can just defer
> forever I think. If that's not the case (i.e. other audio outputs are also
> in the same alsa instance) then we need to be able to handle runtime
> loading of the gfx driver.
>
> Both cases would work easier I think if we have a real bus and
> driver<->device matching.
>
> > 2. Shared power wells (optional)
> >
> > The audio and display devices, maybe only part of them, may share a common power well (e.g. for Intel Haswell and
> > Broadwell). If so, the driver that controls the power well should define a power well object, implement the get/put ops,
> > and add it to its avsink client before registering the client to avsink core. Then the peer client can look up this
> > power well by its name, and get/put this power well as a user.
> >
> > A client can have multiple power well objects.
> >
> > struct avsink_power_well {
> > const char *name; /* name of the power well */
> > void *context; /* parameter of get/put ops, maybe device pointer for this power well */
> > struct avsink_power_well_ops *ops
> > };
> >
> > struct avsink_power_well_ops {
> > int (*get)(void *context);
> > int (*put)(void *context);
> > };
> >
> > API:
> > int avsink_new_power(struct avsink_client *client,
> > const char *power_name,
> > void * power_context,
> > struct avsink_power_well_ops *ops,
> > struct avsink_power_well **power_ret);
> >
> > struct avsink_power_well *avisnk_lookup_power(const char *name);
> >
> > int avsink_get_power(struct avsink_power_well *power); /* Reqesut the power */
> > int avsink_put_power(struct avsink_power_well *power); /* Release the power */
> >
> > For example, the i915 display driver can create a device for the shared power well in Haswell GPU, implement its PM
> > functions, and use the device pointer as the context when creating the power well object, like below
> >
> > struct avsink_power_well_ops i915_power_well_ops = {
> > .get = pm_runtime_get_sync;
> > .put = pm_runtime_put_sync;
> > };
> > ...
> > avsink_new_power ( display_client,
> > "i915_display_power_well",
> > pdev, /* pointer of the power well device */
> > &i915_power_well_ops,
> > ...)
> >
> > Power domain is not used here since a single device seems enough to represent a power well.
>
> Imo the point of the avsink stuff is _not_ to reinvent the wheel again. A
> real struct device per endpoint + runtime pm should be able to do
> everything we want.
>
> > 3. Display output and audio input endpoints
> >
> > A display client should register the display output endpoints and its audio peer client should register the audio input
> > endpoints. A client can have multiple endpoints. The avsink core will bind an audio input and a display output as peer
> > to each other. This is to allow the audio and display driver to synchronize with each other for each display pipeline.
> >
> > All endpoints should be added to a client before the client is registered to avsink core. Dynamic endpoints are not
> > supported now.
> >
> > A display out here represents a physical HDMI/DP output port. And as long as it's usable in the system (i.e. physically
> > connected to the HDMP/DP port on the machine board), the display output should be registered not matter the port is
> > connected to an external display device or not. And if HW and display driver can support DP1.2 daisy chain (multiple DP
> > display devices can be connected to a single port), multiple static displays outputs should be defined for the DP port
> > according to the HW capability. The port & display device number can be indicated by the name (e.g. "i915_DDI_B",
> > "i915_DDI_B_DEV0", "i915_DDI_B_DEV1", or "i915_DDI_B_DEV2"), defined by the display driver.
> >
> > The audio driver can check the endpoints of its peer display client and use an display endpoint's name, or a presumed
> > display endpoint name, as peer name when registering an audio endpoint, thus the avsink core will bind the two display
> > and audio endpoints as peers.
> >
> > struct avsink_endpoint {
> > const char *name; /*name of the endpoint */
> > int type; /* DISPLAY_OUTPUT or AUDIO_INPUT */
> > void *context; /* private data, used as parameter of the ops */
> > struct avsink_endpoint_ops *ops;
> >
> > struct avsink_endpoint *peer; /* peer endpoint */
> > };
> >
> > struct avsink_endpoint_ops {
> > int (*get_caps) (enum had_caps_list query_element,
> > void *capabilities,
> > void *context);
> > int (*set_caps) (enum had_caps_list set_element,
> > void *capabilities,
> > void *context);
> > int (*event_handler) (enum avsink_event_type event_type, void *context);
> > };
>
> Ok, this is confusing since get/set_caps are implemented by the gfx side.
> The event handler otoh is implemented by the audio side. This needs to be
> split up.
>
> With a full device model the set/get stuff would be attached to the device
> while the event handler would be part of the driver.
>
> > API:
> > int avsink_new_endpoint (struct avsink_client *client,
> > const char *name,
> > int type, /* DISPLAY_OUTPUT or AUDIO_INPUT*/
> > void *context,
> > const char *peer_name, /* can be NULL if no clue */
> > avsink_endpoint_ops *ops,
> > struct avsink_endpoint **endpoint_ret);
> >
> > int avsink_endpoint_get_caps(struct avsink_endpoint *endpoint,
> > enum avsink_caps_list t get_element,
> > void *capabilities);
> > int avsink_endpoint_set_caps(struct avsink_endpoint *endpoint,
> > enum had_caps_list set_element,
> > void *capabilities);
> >
> > int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> > enum avsink_event_type event_type);
> >
> > 4. Get/Set caps on an endpoint
> >
> > The display or audio driver can get or set capabilities on an endpoint. Depending on the capability ID, the avsink core
> > will call get_caps/set_caps ops of this endpoint, or call get_caps/set_caps ops of its peer endpoint and return the
> > result to the caller.
> >
> > enum avsink_caps_list {
> > /* capabilities for display output endpoints */
> > AVSINK_GET_DISPLAY_ELD = 1,
> > AVSINK_GET_DISPLAY_TYPE, /* HDMI or DisplayPort */
> > AVSINK_GET_DISPLAY_NAME, /* Hope to use display device name under /sys/class/drm, like "card0-DP-1", for user
> > * space to figure out which HDMI/DP output on the drm side corresponds to which audio
> > * stream device on the alsa side */
> > AVSINK_GET_DISPLAY_SAMPLING_FREQ, /* HDMI TMDS clock or DP link symbol clock, for audio driver to
> > * program N value
> > */
> > AVSINK_GET_DISPLAY_HDCP_STATUS,
> > AVSINK_GET_DISPLAY_AUDIO_STATUS, /* Whether audio is enabled */
> > AVSINK_SET_DISPLAY_ENABLE_AUDIO, /* Enable audio */
> > AVSINK_SET_DISPLAY_DISABLE_AUDIO, /* Disable audio */
> > AVSINK_SET_DISPLAY_ENABLE_AUDIO_INT, /* Enable audio interrupt */
> > AVSINK_SET_DISPLAY_DISABLE_AUDIO_INT, /* Disable audio interrupt */
> >
> > /* capabilities for audio input endpoints */
> > AVSINK_GET_AUDIO_IS_BUSY, /* Whether there is an active audio streaming */
> > OTHERS_TBD,
> > };
>
> I really don't like caps based apis. It's imo much better to have specific
> set/get functions. Also a lot of this could be passed to more specific
> event handlers directly (like the eld or the sampling freq).
>
> If you have a void* somewhere in your interface you're throwing out an
> awful lot of safety checks gcc provides. Which is not good.
>
> > For example, the audio driver can query ELD info on an audio input endpoint by using caps AVSINK_GET_DISPLAY_ELD, and
> > avsink core will call get_caps() on the peer display output endpoint and return the ELD info to the audio driver.
> >
> > Some audio driver may only use part of these caps. E.g. HD-Audio driver can use bus commands instead of the ops to
> > control the audio on gfx side, so it doesn't use caps like ENABLE/DISABLE_AUDIO or ENABLE/DISABLE_AUDIO.
> >
> > When the display driver want to disable a display pipeline for hot-plug, mode change or power saving, it can use caps
> > AVSINK_GET_AUDIO_IS_BUSY to check if the audio input is busy (active streaming) on this display pipeline. And if audio
> > is busy, the display driver can choose to wait or go ahead to disable display pipeline anyway. For the latter case, the
> > audio input endpoint will be notified by an event and should abort audio streaming.
> >
> > 5. Event handling of endpoints
> >
> > A driver can post events on an endpoint. Depending on the event type, the avsink core will call the endpoint's event
> > handler or pass the event to its peer endpoint and trigger the peer's event handler function if defined.
> >
> > int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> > enum avsink_event_type event_type);
> >
> > Now we only defined event types which should be handled by the audio input endpoints. The event types can be extended
> > in the future.
> >
> > enum avsink_event_type {
> > AVSINK_EVENT_DISPLAY_DISABLE = 1, /* The display pipeline is disabled for hot-plug, mode change or
> > * suspend. Audio driver should stop any active streaming.
> > */
> > AVSINK_EVENT_DISPLAY_ENABLE, /* The display pipeline is enabled after hot-plug, mode change or
> > * resume. Audio driver can restore previously interrupted streaming
> > */
> > AVSINK_EVENT_DISPLAY_MODE_CHANGE, /* Display mode change event. At this time, the new display mode is
> > * configured but the display pipeline is not enabled yet. Audio driver
> > * can do some configurations such as programing N value */
> > AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_DONE, /* Audio Buffer done interrupts. Only for audio drivers if DMA and
> > * interrupt are handled by GPU
> > */
> > AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_UNDERRUN, /* Audio Buffer under run interrupts. Only for audio drivers if
> > * DMA and interrupt are handled by GPU
> > */
> > };
> >
> > So for a display driver, it can post an event on a display output endpoint and get processed by the peer audio input
> > endpoint. Or it can also directly post an event on a peer audio input endpoint, by using the 'peer' pointer on a
> > display output endpoint.
>
> Again I don't like the enumeration, much better to have a bunch of
> specific callbacks. They can also supply interesting information to the
> driver directly if instead of audio driver needing to jump through a few
> get/set hooks.
>
> > 6. Display register operation (optional)
> >
> > Some audio driver needs to access GPU audio registers. The register ops are provided by the peer display client.
> >
> > struct avsink_registers_ops {
> > int (*read_register) (uint32_t reg_addr, uint32_t *data, void *context);
> > int (*write_register) (uint32_t reg_addr, uint32_t data, void *context);
> > int (*read_modify_register) (uint32_t reg_addr, uint32_t data, uint32_t mask, void *context);
> >
> > int avsink_define_reg_ops (struct avsink_client *client, struct avsink_registers_ops *ops);
> >
> > And avsink core provides API for the audio driver to access the display registers:
> >
> > int avsink_read_display_register(struct avsink_client *client , uint32_t offset, uint32_t *data);
> > int avsink_write_display_register(struct avsink_client *client , uint32_t offset, uint32_t data);
> > int avsink_read_modify_display_register(struct avsink_client *client, uint32_t offset, uint32_t data, uint32_t mask);
> >
> > If the client is an audio client, the avsink core will find it peer display client and call its register ops;
> > and if the client is a display client, the avsink core will just call its own register ops.
>
> Oh dear. Where do we need this? Imo this is really horrible, but if we
> indeed need this then a struct device is better - we can attach mmio
> resources to devices and let the audio side remap them as best as they see
> fit.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 8:10 ` Takashi Iwai
@ 2014-05-20 10:37 ` Vinod Koul
2014-05-22 2:46 ` [alsa-devel] " Raymond Yau
1 sibling, 0 replies; 24+ messages in thread
From: Vinod Koul @ 2014-05-20 10:37 UTC (permalink / raw)
To: Takashi Iwai
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
intel-gfx@lists.freedesktop.org, Babu, Ramesh, Girdwood, Liam R,
Vetter, Daniel
On Tue, May 20, 2014 at 10:10:00AM +0200, Takashi Iwai wrote:
> > struct avsink_client {
> > const char *name; /* client name */
> > int type; /* client type*/
> > void *context;
> > struct module *module; /* top-level module for locking */
> >
> > struct avsink_client *peer; /* peer client */
> >
> > /* shared power wells */
> > struct avsink_power_well *power_well;
> > int num_power_wells;
>
> The "power well" is Intel-specific things. Better to use a more
> generic term. (And, I'm always confused what "power well disable"
> means :)
Given that runtime pm is a prevalent usage, wouldn't it make sense
to say that I am HDMI client so keep the resources on? This can be easily
managed if we are able to create the audio device as child of display
controller. That would be implementation agnostic and controller can do whatever
(power well or not) to keep it on/off
>
> >
> > /* endpoints, display outputs or audio inputs */
> > struct avsink_endpoint * endpoint;
> > int num_endpints;
> >
> > struct avsink_registers_ops *reg_ops; /* ops to access registers of a client */
>
> Use const for ops pointers in general (also other cases below).
>
>
> > void *private_data;
> > ...
> > };
> >
> > On system boots, the avsink module is loaded before the display and audio driver module. And the display and audio
> > driver may be loaded on parallel.
>
> For HD-audio HDMI, both controller and codec drivers would need the
> avsink access. So, both drivers will register the own client?
that sound logical here..
>
>
> > * If a specific display driver (eg. i915) supports avsink, it can create a display client, add power wells and display
> > outputs to the client, and then register the display client to the avsink core. Then it may look up if there is any
> > audio client registered, by name or type, and may find an audio client registered by some audio driver.
> >
> > * If an audio driver supports avsink, it usually should look up a registered display client by name or type at first,
> > because it may need the shared power well in GPU and check the display outputs' name to bind the audio inputs. If
> > the display client is not registered yet, the audio driver can choose to wait (maybe in a work queue) or return
> > -EAGAIN for a deferred probe. After the display client is found, the audio driver can register an audio client with
> > the display client's name as the peer name, the avsink core will bind the display and audio clients to each other.
>
> There is already "component" framework, BTW. Can we integrate it into
> avsink instead?
>
>
> > Open question:
> > If the display or audio driver is disabled by the black list, shall we introduce a time out to avoid waiting for the
> > other client registered endlessly?
>
> Yes, timeout sounds like a sensible option.
>
>
> > 2. Shared power wells (optional)
> >
> > The audio and display devices, maybe only part of them, may share a common power well (e.g. for Intel Haswell and
> > Broadwell). If so, the driver that controls the power well should define a power well object, implement the get/put ops,
> > and add it to its avsink client before registering the client to avsink core. Then the peer client can look up this
> > power well by its name, and get/put this power well as a user.
> >
> > A client can have multiple power well objects.
> >
> > struct avsink_power_well {
> > const char *name; /* name of the power well */
> > void *context; /* parameter of get/put ops, maybe device pointer for this power well */
> > struct avsink_power_well_ops *ops
> > };
> >
> > struct avsink_power_well_ops {
> > int (*get)(void *context);
> > int (*put)(void *context);
> > };
> >
> > API:
> > int avsink_new_power(struct avsink_client *client,
> > const char *power_name,
> > void * power_context,
> > struct avsink_power_well_ops *ops,
> > struct avsink_power_well **power_ret);
> >
> > struct avsink_power_well *avisnk_lookup_power(const char *name);
> >
> > int avsink_get_power(struct avsink_power_well *power); /* Reqesut the power */
> > int avsink_put_power(struct avsink_power_well *power); /* Release the power */
> >
> > For example, the i915 display driver can create a device for the shared power well in Haswell GPU, implement its PM
> > functions, and use the device pointer as the context when creating the power well object, like below
> >
> > struct avsink_power_well_ops i915_power_well_ops = {
> > .get = pm_runtime_get_sync;
> > .put = pm_runtime_put_sync;
> > };
>
> This needs function pointer cast, and it's not portable although it'd
> work practically.
>
>
> > ...
> > avsink_new_power ( display_client,
> > "i915_display_power_well",
> > pdev, /* pointer of the power well device */
> > &i915_power_well_ops,
> > ...)
> >
> > Power domain is not used here since a single device seems enough to represent a power well.
> >
> > 3. Display output and audio input endpoints
> >
> > A display client should register the display output endpoints and its audio peer client should register the audio input
> > endpoints. A client can have multiple endpoints. The avsink core will bind an audio input and a display output as peer
> > to each other. This is to allow the audio and display driver to synchronize with each other for each display pipeline.
> >
> > All endpoints should be added to a client before the client is registered to avsink core. Dynamic endpoints are not
> > supported now.
> >
> > A display out here represents a physical HDMI/DP output port. And as long as it's usable in the system (i.e. physically
> > connected to the HDMP/DP port on the machine board), the display output should be registered not matter the port is
> > connected to an external display device or not. And if HW and display driver can support DP1.2 daisy chain (multiple DP
> > display devices can be connected to a single port), multiple static displays outputs should be defined for the DP port
> > according to the HW capability. The port & display device number can be indicated by the name (e.g. "i915_DDI_B",
> > "i915_DDI_B_DEV0", "i915_DDI_B_DEV1", or "i915_DDI_B_DEV2"), defined by the display driver.
> >
> > The audio driver can check the endpoints of its peer display client and use an display endpoint's name, or a presumed
> > display endpoint name, as peer name when registering an audio endpoint, thus the avsink core will bind the two display
> > and audio endpoints as peers.
> >
> > struct avsink_endpoint {
> > const char *name; /*name of the endpoint */
> > int type; /* DISPLAY_OUTPUT or AUDIO_INPUT */
> > void *context; /* private data, used as parameter of the ops */
> > struct avsink_endpoint_ops *ops;
> >
> > struct avsink_endpoint *peer; /* peer endpoint */
> > };
> >
> > struct avsink_endpoint_ops {
> > int (*get_caps) (enum had_caps_list query_element,
> > void *capabilities,
> > void *context);
> > int (*set_caps) (enum had_caps_list set_element,
> > void *capabilities,
> > void *context);
> > int (*event_handler) (enum avsink_event_type event_type, void *context);
> > };
> >
> > API:
> > int avsink_new_endpoint (struct avsink_client *client,
> > const char *name,
> > int type, /* DISPLAY_OUTPUT or AUDIO_INPUT*/
> > void *context,
> > const char *peer_name, /* can be NULL if no clue */
> > avsink_endpoint_ops *ops,
> > struct avsink_endpoint **endpoint_ret);
> >
> > int avsink_endpoint_get_caps(struct avsink_endpoint *endpoint,
> > enum avsink_caps_list t get_element,
> > void *capabilities);
> > int avsink_endpoint_set_caps(struct avsink_endpoint *endpoint,
> > enum had_caps_list set_element,
> > void *capabilities);
> >
> > int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> > enum avsink_event_type event_type);
> >
> > 4. Get/Set caps on an endpoint
> >
> > The display or audio driver can get or set capabilities on an endpoint. Depending on the capability ID, the avsink core
> > will call get_caps/set_caps ops of this endpoint, or call get_caps/set_caps ops of its peer endpoint and return the
> > result to the caller.
> >
> > enum avsink_caps_list {
> > /* capabilities for display output endpoints */
> > AVSINK_GET_DISPLAY_ELD = 1,
> > AVSINK_GET_DISPLAY_TYPE, /* HDMI or DisplayPort */
> > AVSINK_GET_DISPLAY_NAME, /* Hope to use display device name under /sys/class/drm, like "card0-DP-1", for user
> > * space to figure out which HDMI/DP output on the drm side corresponds to which audio
> > * stream device on the alsa side */
> > AVSINK_GET_DISPLAY_SAMPLING_FREQ, /* HDMI TMDS clock or DP link symbol clock, for audio driver to
> > * program N value
> > */
> > AVSINK_GET_DISPLAY_HDCP_STATUS,
> > AVSINK_GET_DISPLAY_AUDIO_STATUS, /* Whether audio is enabled */
> > AVSINK_SET_DISPLAY_ENABLE_AUDIO, /* Enable audio */
> > AVSINK_SET_DISPLAY_DISABLE_AUDIO, /* Disable audio */
> > AVSINK_SET_DISPLAY_ENABLE_AUDIO_INT, /* Enable audio interrupt */
> > AVSINK_SET_DISPLAY_DISABLE_AUDIO_INT, /* Disable audio interrupt */
> >
> > /* capabilities for audio input endpoints */
> > AVSINK_GET_AUDIO_IS_BUSY, /* Whether there is an active audio streaming */
> > OTHERS_TBD,
> > };
> >
> > For example, the audio driver can query ELD info on an audio input endpoint by using caps AVSINK_GET_DISPLAY_ELD, and
> > avsink core will call get_caps() on the peer display output endpoint and return the ELD info to the audio driver.
> >
> > Some audio driver may only use part of these caps. E.g. HD-Audio driver can use bus commands instead of the ops to
> > control the audio on gfx side, so it doesn't use caps like ENABLE/DISABLE_AUDIO or ENABLE/DISABLE_AUDIO.
> >
> > When the display driver want to disable a display pipeline for hot-plug, mode change or power saving, it can use caps
> > AVSINK_GET_AUDIO_IS_BUSY to check if the audio input is busy (active streaming) on this display pipeline. And if audio
> > is busy, the display driver can choose to wait or go ahead to disable display pipeline anyway. For the latter case, the
> > audio input endpoint will be notified by an event and should abort audio streaming.
> >
> > 5. Event handling of endpoints
> >
> > A driver can post events on an endpoint. Depending on the event type, the avsink core will call the endpoint's event
> > handler or pass the event to its peer endpoint and trigger the peer's event handler function if defined.
> >
> > int avsink_endpoint_post_event(struct avsink_endpoint *endpoint,
> > enum avsink_event_type event_type);
what would be the context of callback. Anything less that atomic would tend to
cause timing issues...
> >
> > Now we only defined event types which should be handled by the audio input endpoints. The event types can be extended
> > in the future.
> >
> > enum avsink_event_type {
> > AVSINK_EVENT_DISPLAY_DISABLE = 1, /* The display pipeline is disabled for hot-plug, mode change or
> > * suspend. Audio driver should stop any active streaming.
> > */
> > AVSINK_EVENT_DISPLAY_ENABLE, /* The display pipeline is enabled after hot-plug, mode change or
> > * resume. Audio driver can restore previously interrupted streaming
> > */
> > AVSINK_EVENT_DISPLAY_MODE_CHANGE, /* Display mode change event. At this time, the new display mode is
> > * configured but the display pipeline is not enabled yet. Audio driver
> > * can do some configurations such as programing N value */
> > AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_DONE, /* Audio Buffer done interrupts. Only for audio drivers if DMA and
> > * interrupt are handled by GPU
> > */
> > AVSINK_EVENT_DISPLAY_AUDIO_BUFFER_UNDERRUN, /* Audio Buffer under run interrupts. Only for audio drivers if
> > * DMA and interrupt are handled by GPU
> > */
> > };
> >
> > So for a display driver, it can post an event on a display output endpoint and get processed by the peer audio input
> > endpoint. Or it can also directly post an event on a peer audio input endpoint, by using the 'peer' pointer on a
> > display output endpoint.
>
> Hm, one unclear thing to me is who handles this event by how.
> Suppose you issue GET_ELD on an audio endpoint. Then what would
> avsink does against the display client exactly?
I think above 4 examples are for async events reported by controller to clients.
For example buffer done, or hot plug which needs to be notified. Something like
GET_ELD doesn't need notification but should be a sync read.
--
~Vinod
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 10:04 ` Daniel Vetter
@ 2014-05-20 12:43 ` Thierry Reding
2014-05-20 13:40 ` [Intel-gfx] " Jaroslav Kysela
0 siblings, 1 reply; 24+ messages in thread
From: Thierry Reding @ 2014-05-20 12:43 UTC (permalink / raw)
To: Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Koul, Vinod, intel-gfx@lists.freedesktop.org, Babu, Ramesh,
DRI Development, Girdwood, Liam R, Greg KH, Vetter, Daniel,
linux-media
[-- Attachment #1.1: Type: text/plain, Size: 6179 bytes --]
On Tue, May 20, 2014 at 12:04:38PM +0200, Daniel Vetter wrote:
> Also adding dri-devel and linux-media. Please don't forget these lists for
> the next round.
> -Daniel
>
> On Tue, May 20, 2014 at 12:02:04PM +0200, Daniel Vetter wrote:
> > Adding Greg just as an fyi since we've chatted briefly about the avsink
> > bus. Comments below.
> > -Daniel
> >
> > On Tue, May 20, 2014 at 02:52:19AM +0000, Lin, Mengdong wrote:
> > > This RFC is based on previous discussion to set up a generic communication channel between display and audio driver and
> > > an internal design of Intel MCG/VPG HDMI audio driver. It's still an initial draft and your advice would be appreciated
> > > to improve the design.
> > >
> > > The basic idea is to create a new avsink module and let both drm and alsa depend on it.
> > > This new module provides a framework and APIs for synchronization between the display and audio driver.
> > >
> > > 1. Display/Audio Client
> > >
> > > The avsink core provides APIs to create, register and lookup a display/audio client.
> > > A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) can create a client, add some resources
> > > objects (shared power wells, display outputs, and audio inputs, register ops) to the client, and then register this
> > > client to avisink core. The peer driver can look up a registered client by a name or type, or both. If a client gives
> > > a valid peer client name on registration, avsink core will bind the two clients as peer for each other. And we
> > > expect a display client and an audio client to be peers for each other in a system.
> > >
> > > int avsink_new_client ( const char *name,
> > > int type, /* client type, display or audio */
> > > struct module *module,
> > > void *context,
> > > const char *peer_name,
> > > struct avsink_client **client_ret);
> > >
> > > int avsink_free_client (struct avsink_client *client);
> >
> >
> > Hm, my idea was to create a new avsink bus and let vga drivers register
> > devices on that thing and audio drivers register as drivers. There's a bit
> > more work involved in creating a full-blown bus, but it has a lot of
> > upsides:
> > - Established infrastructure for matching drivers (i.e. audio drivers)
> > against devices (i.e. avsinks exported by gfx drivers).
> > - Module refcounting.
> > - power domain handling and well-integrated into runtime pm.
> > - Allows integration into componentized device framework since we're
> > dealing with a real struct device.
> > - Better decoupling between gfx and audio side since registration is done
> > at runtime.
> > - We can attach drv private date which the audio driver needs.
I think this would be another case where the interface framework[0]
could potentially be used. It doesn't give you all of the above, but
there's no reason it couldn't be extended. Then again, adding too much
would end up duplicating more of the driver core, so if something really
heavy-weight is required here, then the interface framework is not the
best option.
[0]: https://lkml.org/lkml/2014/5/13/525
> > > On system boots, the avsink module is loaded before the display and audio driver module. And the display and audio
> > > driver may be loaded on parallel.
> > > * If a specific display driver (eg. i915) supports avsink, it can create a display client, add power wells and display
> > > outputs to the client, and then register the display client to the avsink core. Then it may look up if there is any
> > > audio client registered, by name or type, and may find an audio client registered by some audio driver.
> > >
> > > * If an audio driver supports avsink, it usually should look up a registered display client by name or type at first,
> > > because it may need the shared power well in GPU and check the display outputs' name to bind the audio inputs. If
> > > the display client is not registered yet, the audio driver can choose to wait (maybe in a work queue) or return
> > > -EAGAIN for a deferred probe. After the display client is found, the audio driver can register an audio client with
-EPROBE_DEFER is the correct error code for deferred probing.
> > > 6. Display register operation (optional)
> > >
> > > Some audio driver needs to access GPU audio registers. The register ops are provided by the peer display client.
> > >
> > > struct avsink_registers_ops {
> > > int (*read_register) (uint32_t reg_addr, uint32_t *data, void *context);
> > > int (*write_register) (uint32_t reg_addr, uint32_t data, void *context);
> > > int (*read_modify_register) (uint32_t reg_addr, uint32_t data, uint32_t mask, void *context);
> > >
> > > int avsink_define_reg_ops (struct avsink_client *client, struct avsink_registers_ops *ops);
> > >
> > > And avsink core provides API for the audio driver to access the display registers:
> > >
> > > int avsink_read_display_register(struct avsink_client *client , uint32_t offset, uint32_t *data);
> > > int avsink_write_display_register(struct avsink_client *client , uint32_t offset, uint32_t data);
> > > int avsink_read_modify_display_register(struct avsink_client *client, uint32_t offset, uint32_t data, uint32_t mask);
> > >
> > > If the client is an audio client, the avsink core will find it peer display client and call its register ops;
> > > and if the client is a display client, the avsink core will just call its own register ops.
> >
> > Oh dear. Where do we need this? Imo this is really horrible, but if we
> > indeed need this then a struct device is better - we can attach mmio
> > resources to devices and let the audio side remap them as best as they see
> > fit.
Can't this just be put behind an explicit API that does what the
register writes would do? If you share an MMIO region between drivers
you always need to make sure that they don't step on each others' toes.
An explicity API can easily take care of that.
Thierry
[-- Attachment #1.2: Type: application/pgp-signature, Size: 836 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Intel-gfx] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 12:43 ` Thierry Reding
@ 2014-05-20 13:40 ` Jaroslav Kysela
0 siblings, 0 replies; 24+ messages in thread
From: Jaroslav Kysela @ 2014-05-20 13:40 UTC (permalink / raw)
To: Thierry Reding, Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Koul, Vinod, Lin, Mengdong, intel-gfx@lists.freedesktop.org,
Babu, Ramesh, Shankar, Uma, DRI Development, Girdwood, Liam R,
Greg KH, Vetter, Daniel, linux-media
Date 20.5.2014 14:43, Thierry Reding wrote:
> On Tue, May 20, 2014 at 12:04:38PM +0200, Daniel Vetter wrote:
>> Also adding dri-devel and linux-media. Please don't forget these lists for
>> the next round.
>> -Daniel
>>
>> On Tue, May 20, 2014 at 12:02:04PM +0200, Daniel Vetter wrote:
>>> Adding Greg just as an fyi since we've chatted briefly about the avsink
>>> bus. Comments below.
>>> -Daniel
>>>
>>> On Tue, May 20, 2014 at 02:52:19AM +0000, Lin, Mengdong wrote:
>>>> This RFC is based on previous discussion to set up a generic communication channel between display and audio driver and
>>>> an internal design of Intel MCG/VPG HDMI audio driver. It's still an initial draft and your advice would be appreciated
>>>> to improve the design.
>>>>
>>>> The basic idea is to create a new avsink module and let both drm and alsa depend on it.
>>>> This new module provides a framework and APIs for synchronization between the display and audio driver.
>>>>
>>>> 1. Display/Audio Client
>>>>
>>>> The avsink core provides APIs to create, register and lookup a display/audio client.
>>>> A specific display driver (eg. i915) or audio driver (eg. HD-Audio driver) can create a client, add some resources
>>>> objects (shared power wells, display outputs, and audio inputs, register ops) to the client, and then register this
>>>> client to avisink core. The peer driver can look up a registered client by a name or type, or both. If a client gives
>>>> a valid peer client name on registration, avsink core will bind the two clients as peer for each other. And we
>>>> expect a display client and an audio client to be peers for each other in a system.
>>>>
>>>> int avsink_new_client ( const char *name,
>>>> int type, /* client type, display or audio */
>>>> struct module *module,
>>>> void *context,
>>>> const char *peer_name,
>>>> struct avsink_client **client_ret);
>>>>
>>>> int avsink_free_client (struct avsink_client *client);
>>>
>>>
>>> Hm, my idea was to create a new avsink bus and let vga drivers register
>>> devices on that thing and audio drivers register as drivers. There's a bit
>>> more work involved in creating a full-blown bus, but it has a lot of
>>> upsides:
>>> - Established infrastructure for matching drivers (i.e. audio drivers)
>>> against devices (i.e. avsinks exported by gfx drivers).
>>> - Module refcounting.
>>> - power domain handling and well-integrated into runtime pm.
>>> - Allows integration into componentized device framework since we're
>>> dealing with a real struct device.
>>> - Better decoupling between gfx and audio side since registration is done
>>> at runtime.
>>> - We can attach drv private date which the audio driver needs.
>
> I think this would be another case where the interface framework[0]
> could potentially be used. It doesn't give you all of the above, but
> there's no reason it couldn't be extended. Then again, adding too much
> would end up duplicating more of the driver core, so if something really
> heavy-weight is required here, then the interface framework is not the
> best option.
>
> [0]: https://lkml.org/lkml/2014/5/13/525
This looks like the right direction. I would go in this way rather than
create specific A/V grouping mechanisms. This seems to be applicable to
more use cases.
Jaroslav
--
Jaroslav Kysela <perex@perex.cz>
Linux Kernel Sound Maintainer
ALSA Project; Red Hat, Inc.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 2:52 [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM) Lin, Mengdong
2014-05-20 8:10 ` Takashi Iwai
2014-05-20 10:02 ` Daniel Vetter
@ 2014-05-20 14:29 ` Imre Deak
2014-05-20 14:35 ` Vinod Koul
2014-05-20 14:45 ` Daniel Vetter
2 siblings, 2 replies; 24+ messages in thread
From: Imre Deak @ 2014-05-20 14:29 UTC (permalink / raw)
To: Lin, Mengdong
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), intel-gfx@lists.freedesktop.org,
Babu, Ramesh, Koul, Vinod, Girdwood, Liam R, Vetter, Daniel
[-- Attachment #1.1: Type: text/plain, Size: 2196 bytes --]
On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> This RFC is based on previous discussion to set up a generic
> communication channel between display and audio driver and
> an internal design of Intel MCG/VPG HDMI audio driver. It's still an
> initial draft and your advice would be appreciated
> to improve the design.
>
> The basic idea is to create a new avsink module and let both drm and
> alsa depend on it.
> This new module provides a framework and APIs for synchronization
> between the display and audio driver.
>
> 1. Display/Audio Client
>
> The avsink core provides APIs to create, register and lookup a
> display/audio client.
> A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> driver) can create a client, add some resources
> objects (shared power wells, display outputs, and audio inputs,
> register ops) to the client, and then register this
> client to avisink core. The peer driver can look up a registered
> client by a name or type, or both. If a client gives
> a valid peer client name on registration, avsink core will bind the
> two clients as peer for each other. And we
> expect a display client and an audio client to be peers for each other
> in a system.
One problem we have at the moment is the order of calling the system
suspend/resume handlers of the display driver wrt. that of the audio
driver. Since the power well control is part of the display HW block, we
need to run the display driver's resume handler first, initialize the
HW, and only then let the audio driver's resume handler run. For similar
reasons we have to call the audio suspend handler first and only then
the display driver resume handler. Currently we solve this using the
display driver's late/early suspend/resume hooks, but we'd need a more
robust solution.
This seems to be a similar issue to the load time ordering problem that
you describe later. Having a real device for avsync that would be a
child of the display device would solve the ordering issue in both
cases. I admit I haven't looked into it if this is feasible, but I would
like to see some solution to this as part of the plan.
--Imre
[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 490 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 14:29 ` Imre Deak
@ 2014-05-20 14:35 ` Vinod Koul
2014-05-20 15:02 ` Imre Deak
2014-05-21 15:56 ` Babu, Ramesh
2014-05-20 14:45 ` Daniel Vetter
1 sibling, 2 replies; 24+ messages in thread
From: Vinod Koul @ 2014-05-20 14:35 UTC (permalink / raw)
To: Imre Deak
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), intel-gfx@lists.freedesktop.org,
Babu, Ramesh, Girdwood, Liam R, Vetter, Daniel
[-- Attachment #1.1: Type: text/plain, Size: 2534 bytes --]
On Tue, May 20, 2014 at 05:29:07PM +0300, Imre Deak wrote:
> On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> > This RFC is based on previous discussion to set up a generic
> > communication channel between display and audio driver and
> > an internal design of Intel MCG/VPG HDMI audio driver. It's still an
> > initial draft and your advice would be appreciated
> > to improve the design.
> >
> > The basic idea is to create a new avsink module and let both drm and
> > alsa depend on it.
> > This new module provides a framework and APIs for synchronization
> > between the display and audio driver.
> >
> > 1. Display/Audio Client
> >
> > The avsink core provides APIs to create, register and lookup a
> > display/audio client.
> > A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> > driver) can create a client, add some resources
> > objects (shared power wells, display outputs, and audio inputs,
> > register ops) to the client, and then register this
> > client to avisink core. The peer driver can look up a registered
> > client by a name or type, or both. If a client gives
> > a valid peer client name on registration, avsink core will bind the
> > two clients as peer for each other. And we
> > expect a display client and an audio client to be peers for each other
> > in a system.
>
> One problem we have at the moment is the order of calling the system
> suspend/resume handlers of the display driver wrt. that of the audio
> driver. Since the power well control is part of the display HW block, we
> need to run the display driver's resume handler first, initialize the
> HW, and only then let the audio driver's resume handler run. For similar
> reasons we have to call the audio suspend handler first and only then
> the display driver resume handler. Currently we solve this using the
> display driver's late/early suspend/resume hooks, but we'd need a more
> robust solution.
>
> This seems to be a similar issue to the load time ordering problem that
> you describe later. Having a real device for avsync that would be a
> child of the display device would solve the ordering issue in both
> cases. I admit I haven't looked into it if this is feasible, but I would
> like to see some solution to this as part of the plan.
If we are able create and mandate that HDMI display controller is parent and
audio is child device, then this wouldn't be an issue and PM frameowrk will
ensure parent is suspended last.
--
~Vinod
[-- Attachment #1.2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 14:29 ` Imre Deak
2014-05-20 14:35 ` Vinod Koul
@ 2014-05-20 14:45 ` Daniel Vetter
2014-05-20 14:57 ` [alsa-devel] " Thierry Reding
1 sibling, 1 reply; 24+ messages in thread
From: Daniel Vetter @ 2014-05-20 14:45 UTC (permalink / raw)
To: Imre Deak
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), intel-gfx@lists.freedesktop.org,
Babu, Ramesh, Koul, Vinod, Girdwood, Liam R, Vetter, Daniel
On Tue, May 20, 2014 at 4:29 PM, Imre Deak <imre.deak@intel.com> wrote:
> On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
>> This RFC is based on previous discussion to set up a generic
>> communication channel between display and audio driver and
>> an internal design of Intel MCG/VPG HDMI audio driver. It's still an
>> initial draft and your advice would be appreciated
>> to improve the design.
>>
>> The basic idea is to create a new avsink module and let both drm and
>> alsa depend on it.
>> This new module provides a framework and APIs for synchronization
>> between the display and audio driver.
>>
>> 1. Display/Audio Client
>>
>> The avsink core provides APIs to create, register and lookup a
>> display/audio client.
>> A specific display driver (eg. i915) or audio driver (eg. HD-Audio
>> driver) can create a client, add some resources
>> objects (shared power wells, display outputs, and audio inputs,
>> register ops) to the client, and then register this
>> client to avisink core. The peer driver can look up a registered
>> client by a name or type, or both. If a client gives
>> a valid peer client name on registration, avsink core will bind the
>> two clients as peer for each other. And we
>> expect a display client and an audio client to be peers for each other
>> in a system.
>
> One problem we have at the moment is the order of calling the system
> suspend/resume handlers of the display driver wrt. that of the audio
> driver. Since the power well control is part of the display HW block, we
> need to run the display driver's resume handler first, initialize the
> HW, and only then let the audio driver's resume handler run. For similar
> reasons we have to call the audio suspend handler first and only then
> the display driver resume handler. Currently we solve this using the
> display driver's late/early suspend/resume hooks, but we'd need a more
> robust solution.
>
> This seems to be a similar issue to the load time ordering problem that
> you describe later. Having a real device for avsync that would be a
> child of the display device would solve the ordering issue in both
> cases. I admit I haven't looked into it if this is feasible, but I would
> like to see some solution to this as part of the plan.
Yeah, this is a big reason why I want real devices - we have piles of
infrastructure to solve these ordering issues as soon as there's a
struct device around. If we don't use that, we need to reinvent all
those wheels ourselves.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 14:45 ` Daniel Vetter
@ 2014-05-20 14:57 ` Thierry Reding
2014-05-20 15:07 ` Daniel Vetter
0 siblings, 1 reply; 24+ messages in thread
From: Thierry Reding @ 2014-05-20 14:57 UTC (permalink / raw)
To: Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Girdwood, Liam R, Vetter, Daniel, intel-gfx@lists.freedesktop.org
[-- Attachment #1.1: Type: text/plain, Size: 3413 bytes --]
On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
> On Tue, May 20, 2014 at 4:29 PM, Imre Deak <imre.deak@intel.com> wrote:
> > On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> >> This RFC is based on previous discussion to set up a generic
> >> communication channel between display and audio driver and
> >> an internal design of Intel MCG/VPG HDMI audio driver. It's still an
> >> initial draft and your advice would be appreciated
> >> to improve the design.
> >>
> >> The basic idea is to create a new avsink module and let both drm and
> >> alsa depend on it.
> >> This new module provides a framework and APIs for synchronization
> >> between the display and audio driver.
> >>
> >> 1. Display/Audio Client
> >>
> >> The avsink core provides APIs to create, register and lookup a
> >> display/audio client.
> >> A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> >> driver) can create a client, add some resources
> >> objects (shared power wells, display outputs, and audio inputs,
> >> register ops) to the client, and then register this
> >> client to avisink core. The peer driver can look up a registered
> >> client by a name or type, or both. If a client gives
> >> a valid peer client name on registration, avsink core will bind the
> >> two clients as peer for each other. And we
> >> expect a display client and an audio client to be peers for each other
> >> in a system.
> >
> > One problem we have at the moment is the order of calling the system
> > suspend/resume handlers of the display driver wrt. that of the audio
> > driver. Since the power well control is part of the display HW block, we
> > need to run the display driver's resume handler first, initialize the
> > HW, and only then let the audio driver's resume handler run. For similar
> > reasons we have to call the audio suspend handler first and only then
> > the display driver resume handler. Currently we solve this using the
> > display driver's late/early suspend/resume hooks, but we'd need a more
> > robust solution.
> >
> > This seems to be a similar issue to the load time ordering problem that
> > you describe later. Having a real device for avsync that would be a
> > child of the display device would solve the ordering issue in both
> > cases. I admit I haven't looked into it if this is feasible, but I would
> > like to see some solution to this as part of the plan.
>
> Yeah, this is a big reason why I want real devices - we have piles of
> infrastructure to solve these ordering issues as soon as there's a
> struct device around. If we don't use that, we need to reinvent all
> those wheels ourselves.
To make the driver core's magic work I think you'd need to find a way to
reparent the audio device under the display device. Presumably they come
from two different parts of the device tree (two different PCI devices I
would guess for Intel, two different platform devices on SoCs). Changing
the parent after a device has been registered doesn't work as far as I
know. But even assuming that would work, I have trouble imagining what
the implications would be on the rest of the driver model.
I faced similar problems with the Tegra DRM driver, and the only way I
can see to make this kind of interaction between devices work is by
tacking on an extra layer outside the core driver model.
Thierry
[-- Attachment #1.2: Type: application/pgp-signature, Size: 836 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 14:35 ` Vinod Koul
@ 2014-05-20 15:02 ` Imre Deak
2014-05-21 15:56 ` Babu, Ramesh
1 sibling, 0 replies; 24+ messages in thread
From: Imre Deak @ 2014-05-20 15:02 UTC (permalink / raw)
To: Vinod Koul
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), intel-gfx@lists.freedesktop.org,
Babu, Ramesh, Girdwood, Liam R, Vetter, Daniel
[-- Attachment #1.1: Type: text/plain, Size: 2999 bytes --]
On Tue, 2014-05-20 at 20:05 +0530, Vinod Koul wrote:
> On Tue, May 20, 2014 at 05:29:07PM +0300, Imre Deak wrote:
> > On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> > > This RFC is based on previous discussion to set up a generic
> > > communication channel between display and audio driver and
> > > an internal design of Intel MCG/VPG HDMI audio driver. It's still an
> > > initial draft and your advice would be appreciated
> > > to improve the design.
> > >
> > > The basic idea is to create a new avsink module and let both drm and
> > > alsa depend on it.
> > > This new module provides a framework and APIs for synchronization
> > > between the display and audio driver.
> > >
> > > 1. Display/Audio Client
> > >
> > > The avsink core provides APIs to create, register and lookup a
> > > display/audio client.
> > > A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> > > driver) can create a client, add some resources
> > > objects (shared power wells, display outputs, and audio inputs,
> > > register ops) to the client, and then register this
> > > client to avisink core. The peer driver can look up a registered
> > > client by a name or type, or both. If a client gives
> > > a valid peer client name on registration, avsink core will bind the
> > > two clients as peer for each other. And we
> > > expect a display client and an audio client to be peers for each other
> > > in a system.
> >
> > One problem we have at the moment is the order of calling the system
> > suspend/resume handlers of the display driver wrt. that of the audio
> > driver. Since the power well control is part of the display HW block, we
> > need to run the display driver's resume handler first, initialize the
> > HW, and only then let the audio driver's resume handler run. For similar
> > reasons we have to call the audio suspend handler first and only then
> > the display driver resume handler. Currently we solve this using the
> > display driver's late/early suspend/resume hooks, but we'd need a more
> > robust solution.
> >
> > This seems to be a similar issue to the load time ordering problem that
> > you describe later. Having a real device for avsync that would be a
> > child of the display device would solve the ordering issue in both
> > cases. I admit I haven't looked into it if this is feasible, but I would
> > like to see some solution to this as part of the plan.
>
> If we are able create and mandate that HDMI display controller is parent and
> audio is child device, then this wouldn't be an issue and PM frameowrk will
> ensure parent is suspended last.
To my understanding we can't really do that since that's already fixed
by the physical bus topology. That is in the Intel case the parent of
both the audio and display device is the corresponding PCI bridge
device. But avsync could be a new virtual device and you could let that
be the child of the display device.
--Imre
[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 490 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 14:57 ` [alsa-devel] " Thierry Reding
@ 2014-05-20 15:07 ` Daniel Vetter
2014-05-20 15:15 ` Thierry Reding
2014-05-22 14:59 ` Lin, Mengdong
0 siblings, 2 replies; 24+ messages in thread
From: Daniel Vetter @ 2014-05-20 15:07 UTC (permalink / raw)
To: Thierry Reding, Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Girdwood, Liam R, intel-gfx@lists.freedesktop.org
On 20/05/2014 16:57, Thierry Reding wrote:
> On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
>> >On Tue, May 20, 2014 at 4:29 PM, Imre Deak<imre.deak@intel.com> wrote:
>>> > >On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
>>>> > >>This RFC is based on previous discussion to set up a generic
>>>> > >>communication channel between display and audio driver and
>>>> > >>an internal design of Intel MCG/VPG HDMI audio driver. It's still an
>>>> > >>initial draft and your advice would be appreciated
>>>> > >>to improve the design.
>>>> > >>
>>>> > >>The basic idea is to create a new avsink module and let both drm and
>>>> > >>alsa depend on it.
>>>> > >>This new module provides a framework and APIs for synchronization
>>>> > >>between the display and audio driver.
>>>> > >>
>>>> > >>1. Display/Audio Client
>>>> > >>
>>>> > >>The avsink core provides APIs to create, register and lookup a
>>>> > >>display/audio client.
>>>> > >>A specific display driver (eg. i915) or audio driver (eg. HD-Audio
>>>> > >>driver) can create a client, add some resources
>>>> > >>objects (shared power wells, display outputs, and audio inputs,
>>>> > >>register ops) to the client, and then register this
>>>> > >>client to avisink core. The peer driver can look up a registered
>>>> > >>client by a name or type, or both. If a client gives
>>>> > >>a valid peer client name on registration, avsink core will bind the
>>>> > >>two clients as peer for each other. And we
>>>> > >>expect a display client and an audio client to be peers for each other
>>>> > >>in a system.
>>> > >
>>> > >One problem we have at the moment is the order of calling the system
>>> > >suspend/resume handlers of the display driver wrt. that of the audio
>>> > >driver. Since the power well control is part of the display HW block, we
>>> > >need to run the display driver's resume handler first, initialize the
>>> > >HW, and only then let the audio driver's resume handler run. For similar
>>> > >reasons we have to call the audio suspend handler first and only then
>>> > >the display driver resume handler. Currently we solve this using the
>>> > >display driver's late/early suspend/resume hooks, but we'd need a more
>>> > >robust solution.
>>> > >
>>> > >This seems to be a similar issue to the load time ordering problem that
>>> > >you describe later. Having a real device for avsync that would be a
>>> > >child of the display device would solve the ordering issue in both
>>> > >cases. I admit I haven't looked into it if this is feasible, but I would
>>> > >like to see some solution to this as part of the plan.
>> >
>> >Yeah, this is a big reason why I want real devices - we have piles of
>> >infrastructure to solve these ordering issues as soon as there's a
>> >struct device around. If we don't use that, we need to reinvent all
>> >those wheels ourselves.
> To make the driver core's magic work I think you'd need to find a way to
> reparent the audio device under the display device. Presumably they come
> from two different parts of the device tree (two different PCI devices I
> would guess for Intel, two different platform devices on SoCs). Changing
> the parent after a device has been registered doesn't work as far as I
> know. But even assuming that would work, I have trouble imagining what
> the implications would be on the rest of the driver model.
>
> I faced similar problems with the Tegra DRM driver, and the only way I
> can see to make this kind of interaction between devices work is by
> tacking on an extra layer outside the core driver model.
That's why we need a new avsink device which is a proper child of the
gfx device, and the audio driver needs to use the componentized device
framework so that the suspend/resume ordering works correctly. Or at
least that's been my idea, might be we have some small gaps here and there.
-Daniel
Intel Semiconductor AG
Registered No. 020.30.913.786-7
Registered Office: Badenerstrasse 549, 8048 Zurich, Switzerland
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 15:07 ` Daniel Vetter
@ 2014-05-20 15:15 ` Thierry Reding
2014-05-20 15:22 ` Daniel Vetter
2014-05-22 14:59 ` Lin, Mengdong
1 sibling, 1 reply; 24+ messages in thread
From: Thierry Reding @ 2014-05-20 15:15 UTC (permalink / raw)
To: Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Girdwood, Liam R, intel-gfx@lists.freedesktop.org
[-- Attachment #1.1: Type: text/plain, Size: 4402 bytes --]
On Tue, May 20, 2014 at 05:07:51PM +0200, Daniel Vetter wrote:
> On 20/05/2014 16:57, Thierry Reding wrote:
> >On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
> >>>On Tue, May 20, 2014 at 4:29 PM, Imre Deak<imre.deak@intel.com> wrote:
> >>>> >On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> >>>>> >>This RFC is based on previous discussion to set up a generic
> >>>>> >>communication channel between display and audio driver and
> >>>>> >>an internal design of Intel MCG/VPG HDMI audio driver. It's still an
> >>>>> >>initial draft and your advice would be appreciated
> >>>>> >>to improve the design.
> >>>>> >>
> >>>>> >>The basic idea is to create a new avsink module and let both drm and
> >>>>> >>alsa depend on it.
> >>>>> >>This new module provides a framework and APIs for synchronization
> >>>>> >>between the display and audio driver.
> >>>>> >>
> >>>>> >>1. Display/Audio Client
> >>>>> >>
> >>>>> >>The avsink core provides APIs to create, register and lookup a
> >>>>> >>display/audio client.
> >>>>> >>A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> >>>>> >>driver) can create a client, add some resources
> >>>>> >>objects (shared power wells, display outputs, and audio inputs,
> >>>>> >>register ops) to the client, and then register this
> >>>>> >>client to avisink core. The peer driver can look up a registered
> >>>>> >>client by a name or type, or both. If a client gives
> >>>>> >>a valid peer client name on registration, avsink core will bind the
> >>>>> >>two clients as peer for each other. And we
> >>>>> >>expect a display client and an audio client to be peers for each other
> >>>>> >>in a system.
> >>>> >
> >>>> >One problem we have at the moment is the order of calling the system
> >>>> >suspend/resume handlers of the display driver wrt. that of the audio
> >>>> >driver. Since the power well control is part of the display HW block, we
> >>>> >need to run the display driver's resume handler first, initialize the
> >>>> >HW, and only then let the audio driver's resume handler run. For similar
> >>>> >reasons we have to call the audio suspend handler first and only then
> >>>> >the display driver resume handler. Currently we solve this using the
> >>>> >display driver's late/early suspend/resume hooks, but we'd need a more
> >>>> >robust solution.
> >>>> >
> >>>> >This seems to be a similar issue to the load time ordering problem that
> >>>> >you describe later. Having a real device for avsync that would be a
> >>>> >child of the display device would solve the ordering issue in both
> >>>> >cases. I admit I haven't looked into it if this is feasible, but I would
> >>>> >like to see some solution to this as part of the plan.
> >>>
> >>>Yeah, this is a big reason why I want real devices - we have piles of
> >>>infrastructure to solve these ordering issues as soon as there's a
> >>>struct device around. If we don't use that, we need to reinvent all
> >>>those wheels ourselves.
> >To make the driver core's magic work I think you'd need to find a way to
> >reparent the audio device under the display device. Presumably they come
> >from two different parts of the device tree (two different PCI devices I
> >would guess for Intel, two different platform devices on SoCs). Changing
> >the parent after a device has been registered doesn't work as far as I
> >know. But even assuming that would work, I have trouble imagining what
> >the implications would be on the rest of the driver model.
> >
> >I faced similar problems with the Tegra DRM driver, and the only way I
> >can see to make this kind of interaction between devices work is by
> >tacking on an extra layer outside the core driver model.
> That's why we need a new avsink device which is a proper child of the gfx
> device, and the audio driver needs to use the componentized device framework
> so that the suspend/resume ordering works correctly. Or at least that's been
> my idea, might be we have some small gaps here and there.
The component/master helpers don't allow you to do that. Essentially
what it does is provide a way to glue together multiple devices (the
components) to produce a meta-device (the master). What you get is a
pair of .bind()/.unbind() functions that are called on each of the
components when the master binds or unbinds the meta-device. I don't
see how that could be made to work for suspend/resume.
Thierry
[-- Attachment #1.2: Type: application/pgp-signature, Size: 836 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 15:15 ` Thierry Reding
@ 2014-05-20 15:22 ` Daniel Vetter
0 siblings, 0 replies; 24+ messages in thread
From: Daniel Vetter @ 2014-05-20 15:22 UTC (permalink / raw)
To: Thierry Reding
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Girdwood, Liam R, Daniel Vetter, intel-gfx@lists.freedesktop.org
On Tue, May 20, 2014 at 5:15 PM, Thierry Reding
<thierry.reding@gmail.com> wrote:
> The component/master helpers don't allow you to do that. Essentially
> what it does is provide a way to glue together multiple devices (the
> components) to produce a meta-device (the master). What you get is a
> pair of .bind()/.unbind() functions that are called on each of the
> components when the master binds or unbinds the meta-device. I don't
> see how that could be made to work for suspend/resume.
We'll we could add a pm_ops pointer to the master and auto-register a
pile of suspend/resume hooks to all the component devices. Then we'd
suspend the master as soon as the first component gets suspended and
resume it only when the last component is resumed. Should be doable
with a bunch of refcounts.
On top of that we should be able to use runtime pm to do fine-grained
pm control for each component. So in my naive world here (never used
the component stuff myself after all) this should all work out ;-)
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 14:35 ` Vinod Koul
2014-05-20 15:02 ` Imre Deak
@ 2014-05-21 15:56 ` Babu, Ramesh
2014-05-21 16:05 ` Daniel Vetter
1 sibling, 1 reply; 24+ messages in thread
From: Babu, Ramesh @ 2014-05-21 15:56 UTC (permalink / raw)
To: Koul, Vinod, Deak, Imre
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), intel-gfx@lists.freedesktop.org,
Girdwood, Liam R, Vetter, Daniel
> On Tue, May 20, 2014 at 05:29:07PM +0300, Imre Deak wrote:
> > On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> > > This RFC is based on previous discussion to set up a generic
> > > communication channel between display and audio driver and an
> > > internal design of Intel MCG/VPG HDMI audio driver. It's still an
> > > initial draft and your advice would be appreciated to improve the
> > > design.
> > >
> > > The basic idea is to create a new avsink module and let both drm and
> > > alsa depend on it.
> > > This new module provides a framework and APIs for synchronization
> > > between the display and audio driver.
> > >
> > > 1. Display/Audio Client
> > >
> > > The avsink core provides APIs to create, register and lookup a
> > > display/audio client.
> > > A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> > > driver) can create a client, add some resources objects (shared
> > > power wells, display outputs, and audio inputs, register ops) to the
> > > client, and then register this client to avisink core. The peer
> > > driver can look up a registered client by a name or type, or both.
> > > If a client gives a valid peer client name on registration, avsink
> > > core will bind the two clients as peer for each other. And we expect
> > > a display client and an audio client to be peers for each other in a
> > > system.
> >
> > One problem we have at the moment is the order of calling the system
> > suspend/resume handlers of the display driver wrt. that of the audio
> > driver. Since the power well control is part of the display HW block,
> > we need to run the display driver's resume handler first, initialize
> > the HW, and only then let the audio driver's resume handler run. For
> > similar reasons we have to call the audio suspend handler first and
> > only then the display driver resume handler. Currently we solve this
> > using the display driver's late/early suspend/resume hooks, but we'd
> > need a more robust solution.
> >
> > This seems to be a similar issue to the load time ordering problem
> > that you describe later. Having a real device for avsync that would be
> > a child of the display device would solve the ordering issue in both
> > cases. I admit I haven't looked into it if this is feasible, but I
> > would like to see some solution to this as part of the plan.
>
> If we are able create and mandate that HDMI display controller is parent and
> audio is child device, then this wouldn't be an issue and PM frameowrk will
> ensure parent is suspended last.
>
If there is a scenario where HDMI audio has to active but display has to go to low power, then
parent-child device is not optimal. There needs to be a mechanism to turn on/off individual hw blocks within
the controller.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-21 15:56 ` Babu, Ramesh
@ 2014-05-21 16:05 ` Daniel Vetter
2014-05-21 17:07 ` Imre Deak
0 siblings, 1 reply; 24+ messages in thread
From: Daniel Vetter @ 2014-05-21 16:05 UTC (permalink / raw)
To: Babu, Ramesh
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Koul, Vinod, Takashi Iwai (tiwai@suse.de), Girdwood, Liam R,
Vetter, Daniel, intel-gfx@lists.freedesktop.org
On Wed, May 21, 2014 at 5:56 PM, Babu, Ramesh <ramesh.babu@intel.com> wrote:
>> On Tue, May 20, 2014 at 05:29:07PM +0300, Imre Deak wrote:
>> > On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
>> > > This RFC is based on previous discussion to set up a generic
>> > > communication channel between display and audio driver and an
>> > > internal design of Intel MCG/VPG HDMI audio driver. It's still an
>> > > initial draft and your advice would be appreciated to improve the
>> > > design.
>> > >
>> > > The basic idea is to create a new avsink module and let both drm and
>> > > alsa depend on it.
>> > > This new module provides a framework and APIs for synchronization
>> > > between the display and audio driver.
>> > >
>> > > 1. Display/Audio Client
>> > >
>> > > The avsink core provides APIs to create, register and lookup a
>> > > display/audio client.
>> > > A specific display driver (eg. i915) or audio driver (eg. HD-Audio
>> > > driver) can create a client, add some resources objects (shared
>> > > power wells, display outputs, and audio inputs, register ops) to the
>> > > client, and then register this client to avisink core. The peer
>> > > driver can look up a registered client by a name or type, or both.
>> > > If a client gives a valid peer client name on registration, avsink
>> > > core will bind the two clients as peer for each other. And we expect
>> > > a display client and an audio client to be peers for each other in a
>> > > system.
>> >
>> > One problem we have at the moment is the order of calling the system
>> > suspend/resume handlers of the display driver wrt. that of the audio
>> > driver. Since the power well control is part of the display HW block,
>> > we need to run the display driver's resume handler first, initialize
>> > the HW, and only then let the audio driver's resume handler run. For
>> > similar reasons we have to call the audio suspend handler first and
>> > only then the display driver resume handler. Currently we solve this
>> > using the display driver's late/early suspend/resume hooks, but we'd
>> > need a more robust solution.
>> >
>> > This seems to be a similar issue to the load time ordering problem
>> > that you describe later. Having a real device for avsync that would be
>> > a child of the display device would solve the ordering issue in both
>> > cases. I admit I haven't looked into it if this is feasible, but I
>> > would like to see some solution to this as part of the plan.
>>
>> If we are able create and mandate that HDMI display controller is parent and
>> audio is child device, then this wouldn't be an issue and PM frameowrk will
>> ensure parent is suspended last.
>>
> If there is a scenario where HDMI audio has to active but display has to go to low power, then
> parent-child device is not optimal. There needs to be a mechanism to turn on/off individual hw blocks within
> the controller.
Our gfx runtime pm code is a _lot_ better than that. We track each
power domain individually and enable/disable them only when need.
armsoc drivers could do the same or make sure that the avsink device
is a child of the right block. Of course if your driver only has
binary runtime pm and fires up everything then we have a problem. But
imo that's a problem with that driver, not with making avsink real
devices as children of something.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-21 16:05 ` Daniel Vetter
@ 2014-05-21 17:07 ` Imre Deak
2014-05-22 14:29 ` Lin, Mengdong
0 siblings, 1 reply; 24+ messages in thread
From: Imre Deak @ 2014-05-21 17:07 UTC (permalink / raw)
To: Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Koul, Vinod, intel-gfx@lists.freedesktop.org, Babu, Ramesh,
Takashi Iwai (tiwai@suse.de), Girdwood, Liam R, Vetter, Daniel
[-- Attachment #1.1: Type: text/plain, Size: 4215 bytes --]
On Wed, 2014-05-21 at 18:05 +0200, Daniel Vetter wrote:
> On Wed, May 21, 2014 at 5:56 PM, Babu, Ramesh <ramesh.babu@intel.com> wrote:
> >> On Tue, May 20, 2014 at 05:29:07PM +0300, Imre Deak wrote:
> >> > On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> >> > > This RFC is based on previous discussion to set up a generic
> >> > > communication channel between display and audio driver and an
> >> > > internal design of Intel MCG/VPG HDMI audio driver. It's still an
> >> > > initial draft and your advice would be appreciated to improve the
> >> > > design.
> >> > >
> >> > > The basic idea is to create a new avsink module and let both drm and
> >> > > alsa depend on it.
> >> > > This new module provides a framework and APIs for synchronization
> >> > > between the display and audio driver.
> >> > >
> >> > > 1. Display/Audio Client
> >> > >
> >> > > The avsink core provides APIs to create, register and lookup a
> >> > > display/audio client.
> >> > > A specific display driver (eg. i915) or audio driver (eg. HD-Audio
> >> > > driver) can create a client, add some resources objects (shared
> >> > > power wells, display outputs, and audio inputs, register ops) to the
> >> > > client, and then register this client to avisink core. The peer
> >> > > driver can look up a registered client by a name or type, or both.
> >> > > If a client gives a valid peer client name on registration, avsink
> >> > > core will bind the two clients as peer for each other. And we expect
> >> > > a display client and an audio client to be peers for each other in a
> >> > > system.
> >> >
> >> > One problem we have at the moment is the order of calling the system
> >> > suspend/resume handlers of the display driver wrt. that of the audio
> >> > driver. Since the power well control is part of the display HW block,
> >> > we need to run the display driver's resume handler first, initialize
> >> > the HW, and only then let the audio driver's resume handler run. For
> >> > similar reasons we have to call the audio suspend handler first and
> >> > only then the display driver resume handler. Currently we solve this
> >> > using the display driver's late/early suspend/resume hooks, but we'd
> >> > need a more robust solution.
> >> >
> >> > This seems to be a similar issue to the load time ordering problem
> >> > that you describe later. Having a real device for avsync that would be
> >> > a child of the display device would solve the ordering issue in both
> >> > cases. I admit I haven't looked into it if this is feasible, but I
> >> > would like to see some solution to this as part of the plan.
> >>
> >> If we are able create and mandate that HDMI display controller is parent and
> >> audio is child device, then this wouldn't be an issue and PM frameowrk will
> >> ensure parent is suspended last.
> >>
> > If there is a scenario where HDMI audio has to active but display has
> > to go to low power, then > parent-child device is not optimal. There
> > needs to be a mechanism to turn on/off individual hw blocks within
> > the controller.
>
> Our gfx runtime pm code is a _lot_ better than that. We track each
> power domain individually and enable/disable them only when need.
> armsoc drivers could do the same or make sure that the avsink device
> is a child of the right block. Of course if your driver only has
> binary runtime pm and fires up everything then we have a problem. But
> imo that's a problem with that driver, not with making avsink real
> devices as children of something.
I would also add that at least in case of Haswell, there is really a
hard dependency between the display device and the HDMI audio
functionality: The power well required by HDMI is controlled via the
PWR_WELL_CTL2 register which is in turn part of the display power
domain. This domain is turned off when the display device is in D3
state, so to turn on audio we really have to first put the display
device into D0 state. Since the PM framework doesn't provide any way to
reorder the initialization of devices, we can only depend on the device
parent -> child relationship to achieve the above correct init order.
--Imre
[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 490 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 8:10 ` Takashi Iwai
2014-05-20 10:37 ` Vinod Koul
@ 2014-05-22 2:46 ` Raymond Yau
1 sibling, 0 replies; 24+ messages in thread
From: Raymond Yau @ 2014-05-22 2:46 UTC (permalink / raw)
To: Takashi Iwai
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Babu, Ramesh, Koul, Vinod, Girdwood, Liam R, Vetter, Daniel,
intel-gfx@lists.freedesktop.org
[-- Attachment #1.1: Type: text/plain, Size: 1244 bytes --]
> >
> > This RFC is based on previous discussion to set up a generic
communication channel between display and audio driver and
> > an internal design of Intel MCG/VPG HDMI audio driver. It's still an
initial draft and your advice would be appreciated
> > to improve the design.
> >
> > The basic idea is to create a new avsink module and let both drm and
alsa depend on it.
>
> > 1. Display/Audio Client
> >
> > The avsink core provides APIs to create, register and lookup a
display/audio client.
>
> For HD-audio HDMI, both controller and codec drivers would need the
> avsink access. So, both drivers will register the own client?
http://nvidia.custhelp.com/app/answers/detail/a_id/2544/~/my-nvidia-graphics-card-came-with-an-internal-spdif-pass-through-audio-cable-to
http://www.intel.com/support/motherboards/desktop/sb/CS-032871.htm
Does it mean that those grpahic card HDMI which use motherboard internal
spdif connector will not be supported any more when graphic card have no
way to communicate with audio driver ?
https://git.kernel.org/cgit/linux/kernel/git/tiwai/sound.git/commit/sound/pci/hda/hda_auto_parser.c?id=3f25dcf691ebf45924a34b9aaedec78e5a255798
Should alsa regard these kind of digital device as HDMI or SPDIF ?
[-- Attachment #1.2: Type: text/html, Size: 1828 bytes --]
[-- Attachment #2: Type: text/plain, Size: 159 bytes --]
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-21 17:07 ` Imre Deak
@ 2014-05-22 14:29 ` Lin, Mengdong
0 siblings, 0 replies; 24+ messages in thread
From: Lin, Mengdong @ 2014-05-22 14:29 UTC (permalink / raw)
To: Deak, Imre, Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Koul, Vinod, intel-gfx@lists.freedesktop.org, Babu, Ramesh,
Takashi Iwai (tiwai@suse.de), Girdwood, Liam R, Vetter, Daniel
> -----Original Message-----
> From: Intel-gfx [mailto:intel-gfx-bounces@lists.freedesktop.org] On Behalf Of
> Imre Deak
> Sent: Thursday, May 22, 2014 1:08 AM
>
> On Wed, 2014-05-21 at 18:05 +0200, Daniel Vetter wrote:
> > On Wed, May 21, 2014 at 5:56 PM, Babu, Ramesh <ramesh.babu@intel.com>
> wrote:
> > >> On Tue, May 20, 2014 at 05:29:07PM +0300, Imre Deak wrote:
> > >> > On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> > >> > > This RFC is based on previous discussion to set up a generic
> > >> > > communication channel between display and audio driver and an
> > >> > > internal design of Intel MCG/VPG HDMI audio driver. It's still
> > >> > > an initial draft and your advice would be appreciated to
> > >> > > improve the design.
> > >> > >
> > >> > > The basic idea is to create a new avsink module and let both
> > >> > > drm and alsa depend on it.
> > >> > > This new module provides a framework and APIs for
> > >> > > synchronization between the display and audio driver.
> > >> > >
> > >> > > 1. Display/Audio Client
> > >> > >
> > >> > > The avsink core provides APIs to create, register and lookup a
> > >> > > display/audio client.
> > >> > > A specific display driver (eg. i915) or audio driver (eg.
> > >> > > HD-Audio
> > >> > > driver) can create a client, add some resources objects (shared
> > >> > > power wells, display outputs, and audio inputs, register ops)
> > >> > > to the client, and then register this client to avisink core.
> > >> > > The peer driver can look up a registered client by a name or type, or
> both.
> > >> > > If a client gives a valid peer client name on registration,
> > >> > > avsink core will bind the two clients as peer for each other.
> > >> > > And we expect a display client and an audio client to be peers
> > >> > > for each other in a system.
> > >> >
> > >> > One problem we have at the moment is the order of calling the
> > >> > system suspend/resume handlers of the display driver wrt. that of
> > >> > the audio driver. Since the power well control is part of the
> > >> > display HW block, we need to run the display driver's resume
> > >> > handler first, initialize the HW, and only then let the audio
> > >> > driver's resume handler run. For similar reasons we have to call
> > >> > the audio suspend handler first and only then the display driver
> > >> > resume handler. Currently we solve this using the display
> > >> > driver's late/early suspend/resume hooks, but we'd need a more robust
> solution.
> > >> >
> > >> > This seems to be a similar issue to the load time ordering
> > >> > problem that you describe later. Having a real device for avsync
> > >> > that would be a child of the display device would solve the
> > >> > ordering issue in both cases. I admit I haven't looked into it if
> > >> > this is feasible, but I would like to see some solution to this as part of the
> plan.
> > >>
> > >> If we are able create and mandate that HDMI display controller is
> > >> parent and audio is child device, then this wouldn't be an issue
> > >> and PM frameowrk will ensure parent is suspended last.
> > >>
> > > If there is a scenario where HDMI audio has to active but display
> > > has to go to low power, then > parent-child device is not optimal.
> > > There needs to be a mechanism to turn on/off individual hw blocks
> > > within the controller.
> >
> > Our gfx runtime pm code is a _lot_ better than that. We track each
> > power domain individually and enable/disable them only when need.
> > armsoc drivers could do the same or make sure that the avsink device
> > is a child of the right block. Of course if your driver only has
> > binary runtime pm and fires up everything then we have a problem. But
> > imo that's a problem with that driver, not with making avsink real
> > devices as children of something.
>
> I would also add that at least in case of Haswell, there is really a hard
> dependency between the display device and the HDMI audio
> functionality: The power well required by HDMI is controlled via the
> PWR_WELL_CTL2 register which is in turn part of the display power domain.
> This domain is turned off when the display device is in D3 state, so to turn on
> audio we really have to first put the display device into D0 state. Since the PM
> framework doesn't provide any way to reorder the initialization of devices, we
> can only depend on the device parent -> child relationship to achieve the
> above correct init order.
>
> --Imre
So for Haswell, how about create a device for the 'power well' and make this power
device be a child of the display device?
And by any means (e.g. further extract the device as a power object and expose to audio driver),
the audio driver can finally trigger pm_runtime_get/put_sync() on this power device to solve the
power dependency on audio side, and the parent->child relationship will assure the order on drm side.
I feel it's a natural way for HD-Audio driver, which already binds to the HD-A controller.
And for MCG HDMI audio driver which directly feeds data from system memory to the display device,
I think it can either use pm_runtime_get/put_sync() on this power device (seems no necessary),
or just make the audio device as child of the display device.
Thanks
Mengdong
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-20 15:07 ` Daniel Vetter
2014-05-20 15:15 ` Thierry Reding
@ 2014-05-22 14:59 ` Lin, Mengdong
2014-05-22 19:58 ` Daniel Vetter
1 sibling, 1 reply; 24+ messages in thread
From: Lin, Mengdong @ 2014-05-22 14:59 UTC (permalink / raw)
To: Vetter, Daniel, Thierry Reding, Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Girdwood, Liam R, intel-gfx@lists.freedesktop.org
> -----Original Message-----
> From: Vetter, Daniel
> Sent: Tuesday, May 20, 2014 11:08 PM
>
> On 20/05/2014 16:57, Thierry Reding wrote:
> > On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
> >> >On Tue, May 20, 2014 at 4:29 PM, Imre Deak<imre.deak@intel.com>
> wrote:
> >>> > >On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> >>>> > >>This RFC is based on previous discussion to set up a generic
> >>>> > >>communication channel between display and audio driver and an
> >>>> > >>internal design of Intel MCG/VPG HDMI audio driver. It's still
> >>>> > >>an initial draft and your advice would be appreciated to
> >>>> > >>improve the design.
> >>>> > >>
> >>>> > >>The basic idea is to create a new avsink module and let both
> >>>> > >>drm and alsa depend on it.
> >>>> > >>This new module provides a framework and APIs for
> >>>> > >>synchronization between the display and audio driver.
> >>>> > >>
> >>>> > >>1. Display/Audio Client
> >>>> > >>
> >>>> > >>The avsink core provides APIs to create, register and lookup a
> >>>> > >>display/audio client.
> >>>> > >>A specific display driver (eg. i915) or audio driver (eg.
> >>>> > >>HD-Audio
> >>>> > >>driver) can create a client, add some resources objects (shared
> >>>> > >>power wells, display outputs, and audio inputs, register ops)
> >>>> > >>to the client, and then register this client to avisink core.
> >>>> > >>The peer driver can look up a registered client by a name or
> >>>> > >>type, or both. If a client gives a valid peer client name on
> >>>> > >>registration, avsink core will bind the two clients as peer for
> >>>> > >>each other. And we expect a display client and an audio client
> >>>> > >>to be peers for each other in a system.
> >>> > >
> >>> > >One problem we have at the moment is the order of calling the
> >>> > >system suspend/resume handlers of the display driver wrt. that of
> >>> > >the audio driver. Since the power well control is part of the
> >>> > >display HW block, we need to run the display driver's resume
> >>> > >handler first, initialize the HW, and only then let the audio
> >>> > >driver's resume handler run. For similar reasons we have to call
> >>> > >the audio suspend handler first and only then the display driver
> >>> > >resume handler. Currently we solve this using the display
> >>> > >driver's late/early suspend/resume hooks, but we'd need a more robust
> solution.
> >>> > >
> >>> > >This seems to be a similar issue to the load time ordering
> >>> > >problem that you describe later. Having a real device for avsync
> >>> > >that would be a child of the display device would solve the
> >>> > >ordering issue in both cases. I admit I haven't looked into it if
> >>> > >this is feasible, but I would like to see some solution to this as part of
> the plan.
> >> >
> >> >Yeah, this is a big reason why I want real devices - we have piles
> >> >of infrastructure to solve these ordering issues as soon as there's
> >> >a struct device around. If we don't use that, we need to reinvent
> >> >all those wheels ourselves.
> > To make the driver core's magic work I think you'd need to find a way
> > to reparent the audio device under the display device. Presumably they
> > come from two different parts of the device tree (two different PCI
> > devices I would guess for Intel, two different platform devices on
> > SoCs). Changing the parent after a device has been registered doesn't
> > work as far as I know. But even assuming that would work, I have
> > trouble imagining what the implications would be on the rest of the driver
> model.
> >
> > I faced similar problems with the Tegra DRM driver, and the only way I
> > can see to make this kind of interaction between devices work is by
> > tacking on an extra layer outside the core driver model.
> That's why we need a new avsink device which is a proper child of the gfx
> device, and the audio driver needs to use the componentized device
> framework so that the suspend/resume ordering works correctly. Or at least
> that's been my idea, might be we have some small gaps here and there.
> -Daniel
Hi Daniel,
Would you please share more info about your idea?
- What would be an avsink device represent here?
E.g. on Intel platforms, will the whole display device have a child avsink device or multiple avsink devices for each DDI port?
- And for the relationship between audio driver and the avsink device, which would be the master and which would be the component?
In addition, the component framework does not touch PM now.
And introducing PM to the component framework seems not easy since there can be potential conflict caused by parent-child relationship of the involved devices.
Thanks
Mengdong
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-22 14:59 ` Lin, Mengdong
@ 2014-05-22 19:58 ` Daniel Vetter
2014-06-03 1:42 ` Lin, Mengdong
0 siblings, 1 reply; 24+ messages in thread
From: Daniel Vetter @ 2014-05-22 19:58 UTC (permalink / raw)
To: Lin, Mengdong
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Thierry Reding, Girdwood, Liam R, Vetter, Daniel,
intel-gfx@lists.freedesktop.org
On Thu, May 22, 2014 at 02:59:56PM +0000, Lin, Mengdong wrote:
> > -----Original Message-----
> > From: Vetter, Daniel
> > Sent: Tuesday, May 20, 2014 11:08 PM
> >
> > On 20/05/2014 16:57, Thierry Reding wrote:
> > > On Tue, May 20, 2014 at 04:45:56PM +0200, Daniel Vetter wrote:
> > >> >On Tue, May 20, 2014 at 4:29 PM, Imre Deak<imre.deak@intel.com>
> > wrote:
> > >>> > >On Tue, 2014-05-20 at 05:52 +0300, Lin, Mengdong wrote:
> > >>>> > >>This RFC is based on previous discussion to set up a generic
> > >>>> > >>communication channel between display and audio driver and an
> > >>>> > >>internal design of Intel MCG/VPG HDMI audio driver. It's still
> > >>>> > >>an initial draft and your advice would be appreciated to
> > >>>> > >>improve the design.
> > >>>> > >>
> > >>>> > >>The basic idea is to create a new avsink module and let both
> > >>>> > >>drm and alsa depend on it.
> > >>>> > >>This new module provides a framework and APIs for
> > >>>> > >>synchronization between the display and audio driver.
> > >>>> > >>
> > >>>> > >>1. Display/Audio Client
> > >>>> > >>
> > >>>> > >>The avsink core provides APIs to create, register and lookup a
> > >>>> > >>display/audio client.
> > >>>> > >>A specific display driver (eg. i915) or audio driver (eg.
> > >>>> > >>HD-Audio
> > >>>> > >>driver) can create a client, add some resources objects (shared
> > >>>> > >>power wells, display outputs, and audio inputs, register ops)
> > >>>> > >>to the client, and then register this client to avisink core.
> > >>>> > >>The peer driver can look up a registered client by a name or
> > >>>> > >>type, or both. If a client gives a valid peer client name on
> > >>>> > >>registration, avsink core will bind the two clients as peer for
> > >>>> > >>each other. And we expect a display client and an audio client
> > >>>> > >>to be peers for each other in a system.
> > >>> > >
> > >>> > >One problem we have at the moment is the order of calling the
> > >>> > >system suspend/resume handlers of the display driver wrt. that of
> > >>> > >the audio driver. Since the power well control is part of the
> > >>> > >display HW block, we need to run the display driver's resume
> > >>> > >handler first, initialize the HW, and only then let the audio
> > >>> > >driver's resume handler run. For similar reasons we have to call
> > >>> > >the audio suspend handler first and only then the display driver
> > >>> > >resume handler. Currently we solve this using the display
> > >>> > >driver's late/early suspend/resume hooks, but we'd need a more robust
> > solution.
> > >>> > >
> > >>> > >This seems to be a similar issue to the load time ordering
> > >>> > >problem that you describe later. Having a real device for avsync
> > >>> > >that would be a child of the display device would solve the
> > >>> > >ordering issue in both cases. I admit I haven't looked into it if
> > >>> > >this is feasible, but I would like to see some solution to this as part of
> > the plan.
> > >> >
> > >> >Yeah, this is a big reason why I want real devices - we have piles
> > >> >of infrastructure to solve these ordering issues as soon as there's
> > >> >a struct device around. If we don't use that, we need to reinvent
> > >> >all those wheels ourselves.
> > > To make the driver core's magic work I think you'd need to find a way
> > > to reparent the audio device under the display device. Presumably they
> > > come from two different parts of the device tree (two different PCI
> > > devices I would guess for Intel, two different platform devices on
> > > SoCs). Changing the parent after a device has been registered doesn't
> > > work as far as I know. But even assuming that would work, I have
> > > trouble imagining what the implications would be on the rest of the driver
> > model.
> > >
> > > I faced similar problems with the Tegra DRM driver, and the only way I
> > > can see to make this kind of interaction between devices work is by
> > > tacking on an extra layer outside the core driver model.
>
> > That's why we need a new avsink device which is a proper child of the gfx
> > device, and the audio driver needs to use the componentized device
> > framework so that the suspend/resume ordering works correctly. Or at least
> > that's been my idea, might be we have some small gaps here and there.
> > -Daniel
>
> Hi Daniel,
>
> Would you please share more info about your idea?
>
> - What would be an avsink device represent here?
> E.g. on Intel platforms, will the whole display device have a child
> avsink device or multiple avsink devices for each DDI port?
My idea would be to have one for each output pipe (i.e. the link between
audio and gfx), not one per ddi. Gfx driver would then let audio know when
a screen is connected and which one (e.g. exact model serial from edid).
This is somewhat important for dp mst where there's no longer a fixed
relationship between audio pin and screen
>
> - And for the relationship between audio driver and the avsink device,
> which would be the master and which would be the component?
1:1 for avsink:alsa pin (iirc it's called a pin, not sure about the name).
That way the audio driver has a clear point for getting at the eld and
similar information.
> In addition, the component framework does not touch PM now.
> And introducing PM to the component framework seems not easy since there
> can be potential conflict caused by parent-child relationship of the
> involved devices.
Yeah, the entire PM situation seems to be a bit bad. It also looks like on
resume/suspend we still have problems, at least on the audio side since we
need to coordinate between 2 completel different underlying devices. But
at least with the parent->child relationship we have a guranatee that the
avsink won't be suspended after the gfx device is already off.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-05-22 19:58 ` Daniel Vetter
@ 2014-06-03 1:42 ` Lin, Mengdong
2014-06-03 7:44 ` Daniel Vetter
0 siblings, 1 reply; 24+ messages in thread
From: Lin, Mengdong @ 2014-06-03 1:42 UTC (permalink / raw)
To: Daniel Vetter
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Thierry Reding, Girdwood, Liam R, Vetter, Daniel,
intel-gfx@lists.freedesktop.org
> -----Original Message-----
> From: Daniel Vetter [mailto:daniel.vetter@ffwll.ch]
> > Hi Daniel,
> >
> > Would you please share more info about your idea?
> >
> > - What would be an avsink device represent here?
> > E.g. on Intel platforms, will the whole display device have a child
> > avsink device or multiple avsink devices for each DDI port?
>
> My idea would be to have one for each output pipe (i.e. the link between
> audio and gfx), not one per ddi. Gfx driver would then let audio know
> when a screen is connected and which one (e.g. exact model serial from
> edid).
> This is somewhat important for dp mst where there's no longer a fixed
> relationship between audio pin and screen
Thanks. But if we use avsink device, I prefer to have an avsink device per DDI or several avsink devices per DDI,
It's because
1. Without DP MST, there is a fixed mapping between each audio codec pin and DDI;
2. With DP MST, the above pin: DDI mapping is still valid (at least on Intel platforms),
and there is also a fixed mapping between each device (screen) connected to a pin/DDI.
3. HD-Audio driver creates a PCM (audio stream) devices for each pin.
Keeping this behavior can make audio driver works on platforms without implementing the sound/gfx sync channel.
And I guess in the future the audio driver will creates more than one PCM devices for a DP MST-capable pin, according how many devices a DDI can support.
4. Display mode change can change the pipe connected to a DDI even if the monitor stays on the same DDI,
If we have an avsink device per pipe, the audio driver will have to check another avsink device for this case. It seems not convenient.
> > - And for the relationship between audio driver and the avsink device,
> > which would be the master and which would be the component?
>
> 1:1 for avsink:alsa pin (iirc it's called a pin, not sure about the name).
> That way the audio driver has a clear point for getting at the eld and
> similar information.
Since the audio driver usually already binds to some device (PCI or platform device),
I think the audio driver cannot bind to the new avsink devices created by display driver, and we need a new driver to handle these device and communication.
While the display driver creates the new endpoint "avsink" devices, the audio driver can also create the same number of audio endpoint devices.
And we could let the audio endpoint device be the master and its peer display endpoint device be the component.
Thus the master/component framework can help us to bind/unbind each pair of display/audio endpoint devices.
Is it doable? If okay, I'll modify the RFC and see if there are other gaps.
> > In addition, the component framework does not touch PM now.
> > And introducing PM to the component framework seems not easy since
> > there can be potential conflict caused by parent-child relationship of
> > the involved devices.
>
> Yeah, the entire PM situation seems to be a bit bad. It also looks like on
> resume/suspend we still have problems, at least on the audio side since
> we need to coordinate between 2 completel different underlying devices.
> But at least with the parent->child relationship we have a guranatee that
> the avsink won't be suspended after the gfx device is already off.
> -Daniel
Yes. You're right.
And we could find a way to hide the Intel-specific display "power well" from the audio driver by using runtime PM API on these devices.
Thanks
Mengdong
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [alsa-devel] [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM)
2014-06-03 1:42 ` Lin, Mengdong
@ 2014-06-03 7:44 ` Daniel Vetter
0 siblings, 0 replies; 24+ messages in thread
From: Daniel Vetter @ 2014-06-03 7:44 UTC (permalink / raw)
To: Lin, Mengdong
Cc: Yang, Libin, alsa-devel@alsa-project.org, Nikkanen, Kimmo,
Takashi Iwai (tiwai@suse.de), Babu, Ramesh, Koul, Vinod,
Thierry Reding, Girdwood, Liam R, Vetter, Daniel,
intel-gfx@lists.freedesktop.org
On Tue, Jun 03, 2014 at 01:42:03AM +0000, Lin, Mengdong wrote:
> > -----Original Message-----
> > From: Daniel Vetter [mailto:daniel.vetter@ffwll.ch]
>
> > > Hi Daniel,
> > >
> > > Would you please share more info about your idea?
> > >
> > > - What would be an avsink device represent here?
> > > E.g. on Intel platforms, will the whole display device have a child
> > > avsink device or multiple avsink devices for each DDI port?
> >
> > My idea would be to have one for each output pipe (i.e. the link between
> > audio and gfx), not one per ddi. Gfx driver would then let audio know
> > when a screen is connected and which one (e.g. exact model serial from
> > edid).
> > This is somewhat important for dp mst where there's no longer a fixed
> > relationship between audio pin and screen
>
> Thanks. But if we use avsink device, I prefer to have an avsink device per DDI or several avsink devices per DDI,
> It's because
> 1. Without DP MST, there is a fixed mapping between each audio codec pin and DDI;
> 2. With DP MST, the above pin: DDI mapping is still valid (at least on Intel platforms),
> and there is also a fixed mapping between each device (screen) connected to a pin/DDI.
> 3. HD-Audio driver creates a PCM (audio stream) devices for each pin.
> Keeping this behavior can make audio driver works on platforms without implementing the sound/gfx sync channel.
> And I guess in the future the audio driver will creates more than one PCM devices for a DP MST-capable pin, according how many devices a DDI can support.
>
> 4. Display mode change can change the pipe connected to a DDI even if the monitor stays on the same DDI,
> If we have an avsink device per pipe, the audio driver will have to check another avsink device for this case. It seems not convenient.
All this can also be solved by making the connector/avsink/sound pcm known
to userspace and let userspace figure it out. A few links in sysfs should
be good enough, plus exposing the full edid on the sound pcm side (so that
userspace can compare the serial number in the edid).
> > > - And for the relationship between audio driver and the avsink device,
> > > which would be the master and which would be the component?
> >
> > 1:1 for avsink:alsa pin (iirc it's called a pin, not sure about the name).
> > That way the audio driver has a clear point for getting at the eld and
> > similar information.
>
> Since the audio driver usually already binds to some device (PCI or platform device),
> I think the audio driver cannot bind to the new avsink devices created by display driver, and we need a new driver to handle these device and communication.
>
> While the display driver creates the new endpoint "avsink" devices, the audio driver can also create the same number of audio endpoint devices.
> And we could let the audio endpoint device be the master and its peer display endpoint device be the component.
> Thus the master/component framework can help us to bind/unbind each pair of display/audio endpoint devices.
>
> Is it doable? If okay, I'll modify the RFC and see if there are other gaps.
Yeah, that should be doable. gfx creates avsink devices, audio binds to
them using the component framework.
> > > In addition, the component framework does not touch PM now.
> > > And introducing PM to the component framework seems not easy since
> > > there can be potential conflict caused by parent-child relationship of
> > > the involved devices.
> >
> > Yeah, the entire PM situation seems to be a bit bad. It also looks like on
> > resume/suspend we still have problems, at least on the audio side since
> > we need to coordinate between 2 completel different underlying devices.
> > But at least with the parent->child relationship we have a guranatee that
> > the avsink won't be suspended after the gfx device is already off.
> > -Daniel
>
> Yes. You're right.
> And we could find a way to hide the Intel-specific display "power well"
> from the audio driver by using runtime PM API on these devices.
Yeah, that's one of the goals a I have here.
Cheers, Daneil
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2014-06-03 7:44 UTC | newest]
Thread overview: 24+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2014-05-20 2:52 [RFC] set up an sync channel between audio and display driver (i.e. ALSA and DRM) Lin, Mengdong
2014-05-20 8:10 ` Takashi Iwai
2014-05-20 10:37 ` Vinod Koul
2014-05-22 2:46 ` [alsa-devel] " Raymond Yau
2014-05-20 10:02 ` Daniel Vetter
2014-05-20 10:04 ` Daniel Vetter
2014-05-20 12:43 ` Thierry Reding
2014-05-20 13:40 ` [Intel-gfx] " Jaroslav Kysela
2014-05-20 14:29 ` Imre Deak
2014-05-20 14:35 ` Vinod Koul
2014-05-20 15:02 ` Imre Deak
2014-05-21 15:56 ` Babu, Ramesh
2014-05-21 16:05 ` Daniel Vetter
2014-05-21 17:07 ` Imre Deak
2014-05-22 14:29 ` Lin, Mengdong
2014-05-20 14:45 ` Daniel Vetter
2014-05-20 14:57 ` [alsa-devel] " Thierry Reding
2014-05-20 15:07 ` Daniel Vetter
2014-05-20 15:15 ` Thierry Reding
2014-05-20 15:22 ` Daniel Vetter
2014-05-22 14:59 ` Lin, Mengdong
2014-05-22 19:58 ` Daniel Vetter
2014-06-03 1:42 ` Lin, Mengdong
2014-06-03 7:44 ` Daniel Vetter
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).