linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks
       [not found] <1321334342-3283-1-git-send-email-vwadekar@nvidia.com>
@ 2011-11-15  6:24 ` Varun Wadekar
  2011-11-17  5:13   ` Varun Wadekar
  2011-11-21 10:21   ` Ohad Ben-Cohen
  0 siblings, 2 replies; 5+ messages in thread
From: Varun Wadekar @ 2011-11-15  6:24 UTC (permalink / raw)
  To: Varun Wadekar; +Cc: ohad@wizery.com, linux-tegra@vger.kernel.org, LKML

+LKML ml

This is an idea that I wanted some review comments on (to check if I am
on the right path) before the TEGRA driver gets written.

On Tuesday 15 November 2011 10:49 AM, Varun Wadekar wrote:
> TEGRA has a hardware spinlock block which is used
> to get exclusive access to hardware blocks from
> either the CPU or the COP. The TEGRA hardware
> spinlock block has the capability to interrupt the
> requester on success. For this, the core need not
> disable preemption or interrupts before calling into
> the actual driver. Add lock_timeout to the ops
> structure to support this behaviour while maintaining
> backward compatibility.
>
> Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
> ---
>  Documentation/hwspinlock.txt             |   39 ++++++++++++++++++++++++++---
>  drivers/hwspinlock/hwspinlock_core.c     |   16 ++++++++++--
>  drivers/hwspinlock/hwspinlock_internal.h |    3 ++
>  3 files changed, 51 insertions(+), 7 deletions(-)
>
> diff --git a/Documentation/hwspinlock.txt b/Documentation/hwspinlock.txt
> index a903ee5..35730b8 100644
> --- a/Documentation/hwspinlock.txt
> +++ b/Documentation/hwspinlock.txt
> @@ -29,6 +29,18 @@ the remote processors, and access to it is synchronized using the hwspinlock
>  module (remote processor directly places new messages in this shared data
>  structure).
>  
> +The Tegra line of processors has a CPU and a COP, similar to the OMAP processors.
> +The Tegra SoC has a hardware block which arbitrates access to different hardware
> +modules on the SoC between the CPU and the COP. The hardware spinlock block
> +also has the capability to interrupt the caller once it has been granted access
> +to the requested hardware module. To support SoCs like Tegra, which do a lot more
> +than just share data structures between the CPU and the COP, a new callback,
> +int (*lock_timeout)(struct hwspinlock *lock, unsigned int to), has been added
> +to struct hwspinlock_ops. Drivers which support functionality similar to the
> +Tegra SoCs will use lock_timeout to grant access to the hardware spinlock
> +block on the SoCs. Such SoC drivers will not support all the APIs exposed
> +by the hwspinlock framework. More information below.
> +
>  A common hwspinlock interface makes it possible to have generic, platform-
>  independent, drivers.
>  
> @@ -58,8 +70,11 @@ independent, drivers.
>     - lock a previously-assigned hwspinlock with a timeout limit (specified in
>       msecs). If the hwspinlock is already taken, the function will busy loop
>       waiting for it to be released, but give up when the timeout elapses.
> -     Upon a successful return from this function, preemption is disabled so
> -     the caller must not sleep, and is advised to release the hwspinlock as
> +     If the underlying hardware driver supports lock_timeout in its ops structure,
> +     then upon a successful return from this function the caller is allowed
> +     to sleep, since we do not disable preemption or interrupts in this scenario.
> +     If not, upon a successful return from this function, preemption is disabled
> +     so the caller must not sleep, and is advised to release the hwspinlock as
>       soon as possible, in order to minimize remote cores polling on the
>       hardware interconnect.
>       Returns 0 when successful and an appropriate error code otherwise (most
> @@ -68,7 +83,10 @@ independent, drivers.
>  
>    int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);
>     - lock a previously-assigned hwspinlock with a timeout limit (specified in
> -     msecs). If the hwspinlock is already taken, the function will busy loop
> +     msecs).
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
> +     If the hwspinlock is already taken, the function will busy loop
>       waiting for it to be released, but give up when the timeout elapses.
>       Upon a successful return from this function, preemption and the local
>       interrupts are disabled, so the caller must not sleep, and is advised to
> @@ -80,7 +98,10 @@ independent, drivers.
>    int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
>  							unsigned long *flags);
>     - lock a previously-assigned hwspinlock with a timeout limit (specified in
> -     msecs). If the hwspinlock is already taken, the function will busy loop
> +     msecs).
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
> +     If the hwspinlock is already taken, the function will busy loop
>       waiting for it to be released, but give up when the timeout elapses.
>       Upon a successful return from this function, preemption is disabled,
>       local interrupts are disabled and their previous state is saved at the
> @@ -93,6 +114,8 @@ independent, drivers.
>    int hwspin_trylock(struct hwspinlock *hwlock);
>     - attempt to lock a previously-assigned hwspinlock, but immediately fail if
>       it is already taken.
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
>       Upon a successful return from this function, preemption is disabled so
>       caller must not sleep, and is advised to release the hwspinlock as soon as
>       possible, in order to minimize remote cores polling on the hardware
> @@ -104,6 +127,8 @@ independent, drivers.
>    int hwspin_trylock_irq(struct hwspinlock *hwlock);
>     - attempt to lock a previously-assigned hwspinlock, but immediately fail if
>       it is already taken.
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
>       Upon a successful return from this function, preemption and the local
>       interrupts are disabled so caller must not sleep, and is advised to
>       release the hwspinlock as soon as possible.
> @@ -114,6 +139,8 @@ independent, drivers.
>    int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);
>     - attempt to lock a previously-assigned hwspinlock, but immediately fail if
>       it is already taken.
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
>       Upon a successful return from this function, preemption is disabled,
>       the local interrupts are disabled and their previous state is saved
>       at the given flags placeholder. The caller must not sleep, and is advised
> @@ -130,6 +157,8 @@ independent, drivers.
>  
>    void hwspin_unlock_irq(struct hwspinlock *hwlock);
>     - unlock a previously-locked hwspinlock and enable local interrupts.
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
>       The caller should _never_ unlock an hwspinlock which is already unlocked.
>       Doing so is considered a bug (there is no protection against this).
>       Upon a successful return from this function, preemption and local
> @@ -138,6 +167,8 @@ independent, drivers.
>    void
>    hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);
>     - unlock a previously-locked hwspinlock.
> +     Not supported if lock_timeout is exported from the ops struct by the
> +     underlying driver.
>       The caller should _never_ unlock an hwspinlock which is already unlocked.
>       Doing so is considered a bug (there is no protection against this).
>       Upon a successful return from this function, preemption is reenabled,
> diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
> index 61c9cf1..520c2c9 100644
> --- a/drivers/hwspinlock/hwspinlock_core.c
> +++ b/drivers/hwspinlock/hwspinlock_core.c
> @@ -91,6 +91,10 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
>  
>  	BUG_ON(!hwlock);
>  	BUG_ON(!flags && mode == HWLOCK_IRQSTATE);
> +	BUG_ON(hwlock->bank->ops->trylock && hwlock->bank->ops->lock_timeout);
> +
> +	if (!hwlock->bank->ops->trylock)
> +		return -EINVAL;
>  
>  	/*
>  	 * This spin_lock{_irq, _irqsave} serves three purposes:
> @@ -182,6 +186,9 @@ int __hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int to,
>  
>  	expire = msecs_to_jiffies(to) + jiffies;
>  
> +	if (hwlock->bank->ops->lock_timeout)
> +		return hwlock->bank->ops->lock_timeout(hwlock, expire);
> +
>  	for (;;) {
>  		/* Try to take the hwspinlock */
>  		ret = __hwspin_trylock(hwlock, mode, flags);
> @@ -245,7 +252,10 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
>  	 */
>  	mb();
>  
> -	hwlock->bank->ops->unlock(hwlock);
> +	if (hwlock->bank->ops->lock_timeout)
> +		return hwlock->bank->ops->unlock(hwlock);
> +	else
> +		hwlock->bank->ops->unlock(hwlock);
>  
>  	/* Undo the spin_trylock{_irq, _irqsave} called while locking */
>  	if (mode == HWLOCK_IRQSTATE)
> @@ -328,8 +338,8 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
>  	struct hwspinlock *hwlock;
>  	int ret = 0, i;
>  
> -	if (!bank || !ops || !dev || !num_locks || !ops->trylock ||
> -							!ops->unlock) {
> +	if (!bank || !ops || !dev || !num_locks ||
> +	    (!ops->trylock && !ops->lock_timeout) || !ops->unlock) {
>  		pr_err("invalid parameters\n");
>  		return -EINVAL;
>  	}
> diff --git a/drivers/hwspinlock/hwspinlock_internal.h b/drivers/hwspinlock/hwspinlock_internal.h
> index d26f78b..b0ba642 100644
> --- a/drivers/hwspinlock/hwspinlock_internal.h
> +++ b/drivers/hwspinlock/hwspinlock_internal.h
> @@ -26,6 +26,8 @@ struct hwspinlock_device;
>  /**
>   * struct hwspinlock_ops - platform-specific hwspinlock handlers
>   *
> + * @lock_timeout: take the lock, waiting with a timeout. returns 0 on
> + * 		  success and an appropriate error code on failure. may sleep.
>   * @trylock: make a single attempt to take the lock. returns 0 on
>   *	     failure and true on success. may _not_ sleep.
>   * @unlock:  release the lock. always succeed. may _not_ sleep.
> @@ -35,6 +37,7 @@ struct hwspinlock_device;
>   */
>  struct hwspinlock_ops {
>  	int (*trylock)(struct hwspinlock *lock);
> +	int (*lock_timeout)(struct hwspinlock *lock, unsigned int to);
>  	void (*unlock)(struct hwspinlock *lock);
>  	void (*relax)(struct hwspinlock *lock);
>  };
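
For illustration, a platform-specific driver plugging into the proposed
lock_timeout hook could look roughly like the sketch below. This is only a
sketch under assumptions: no Tegra driver exists yet, so the tegra_hwsem
naming, the register layout and the completion-based wait are invented for
illustration, and the code assumes the 'to' argument is a timeout in msecs
(as documented for the public API), whereas the core change above actually
passes a jiffies expiry value - one of the details that would still need
settling.

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/jiffies.h>

#include "hwspinlock_internal.h"

/* hypothetical per-lock state; all fields are illustrative only */
struct tegra_hwsem {
	void __iomem *req_reg;		/* write to request the lock */
	void __iomem *rel_reg;		/* write to release the lock */
	struct completion granted;	/* completed from the grant interrupt */
};

/* may sleep: with lock_timeout the core takes no spinlock before calling */
static int tegra_hwsem_lock_timeout(struct hwspinlock *lock, unsigned int to)
{
	struct tegra_hwsem *sem = lock->priv;

	/* ask the hardware arbiter for the lock */
	writel(1, sem->req_reg);

	/* the grant interrupt handler would call complete(&sem->granted) */
	if (!wait_for_completion_timeout(&sem->granted, msecs_to_jiffies(to))) {
		/* NOTE: a real driver would also cancel the request here */
		return -ETIMEDOUT;
	}

	return 0;
}

static void tegra_hwsem_unlock(struct hwspinlock *lock)
{
	struct tegra_hwsem *sem = lock->priv;

	writel(1, sem->rel_reg);
}

static const struct hwspinlock_ops tegra_hwsem_ops = {
	.lock_timeout	= tegra_hwsem_lock_timeout,
	.unlock		= tegra_hwsem_unlock,
};

A client would then keep using the existing call and, per the Documentation
change above, would be allowed to sleep while waiting:

	ret = hwspin_lock_timeout(hwlock, 100);	/* timeout in msecs */
	...
	hwspin_unlock(hwlock);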



* Re: [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks
  2011-11-15  6:24 ` [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks Varun Wadekar
@ 2011-11-17  5:13   ` Varun Wadekar
  2011-11-17  6:07     ` Ohad Ben-Cohen
  2011-11-21 10:21   ` Ohad Ben-Cohen
  1 sibling, 1 reply; 5+ messages in thread
From: Varun Wadekar @ 2011-11-17  5:13 UTC (permalink / raw)
  To: Varun Wadekar; +Cc: ohad@wizery.com, linux-tegra@vger.kernel.org, LKML

On Tuesday 15 November 2011 11:54 AM, Varun Wadekar wrote:
> +LKML ml
>
> This is an idea that I wanted some review comments on (to check if I am
> on the right path), before the TEGRA driver gets written.

Any feedback on this patch?

> On Tuesday 15 November 2011 10:49 AM, Varun Wadekar wrote:
>> [full patch quoted - snipped]



* Re: [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks
  2011-11-17  5:13   ` Varun Wadekar
@ 2011-11-17  6:07     ` Ohad Ben-Cohen
  2011-11-17  8:18       ` Varun Wadekar
  0 siblings, 1 reply; 5+ messages in thread
From: Ohad Ben-Cohen @ 2011-11-17  6:07 UTC (permalink / raw)
  To: Varun Wadekar; +Cc: linux-tegra@vger.kernel.org, LKML

On Thu, Nov 17, 2011 at 7:13 AM, Varun Wadekar <vwadekar@nvidia.com> wrote:
> On Tuesday 15 November 2011 11:54 AM, Varun Wadekar wrote:
>> +LKML ml
>>
>> This is an idea that I wanted some review comments on (to check if I am
>> on the right path), before the TEGRA driver gets written.
>
> Any feedback on this patch?

Sure.

I'm a bit tied up with a schedule I have to meet this week, so I'll
provide my feedback early next week at the latest.

Sorry for the delay,
Ohad.


* Re: [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks
  2011-11-17  6:07     ` Ohad Ben-Cohen
@ 2011-11-17  8:18       ` Varun Wadekar
  0 siblings, 0 replies; 5+ messages in thread
From: Varun Wadekar @ 2011-11-17  8:18 UTC (permalink / raw)
  To: Ohad Ben-Cohen; +Cc: linux-tegra@vger.kernel.org, LKML


> Sure.
>
> I'm a bit tied up with some schedule I have to meet this week, so I'll
> provide my feedback early next week at the latest.
>

Thanks.


* Re: [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks
  2011-11-15  6:24 ` [PATCH] hwspinlock: core: support for TEGRA hardware spinlocks Varun Wadekar
  2011-11-17  5:13   ` Varun Wadekar
@ 2011-11-21 10:21   ` Ohad Ben-Cohen
  1 sibling, 0 replies; 5+ messages in thread
From: Ohad Ben-Cohen @ 2011-11-21 10:21 UTC (permalink / raw)
  To: Varun Wadekar; +Cc: linux-tegra@vger.kernel.org, LKML, linux-arm

Hi Varun,

On Tue, Nov 15, 2011 at 8:24 AM, Varun Wadekar <vwadekar@nvidia.com> wrote:
>> +The Tegra line of processors has a CPU and a COP, similar to the OMAP processors.
>> +The Tegra SoC has a hardware block which arbitrates access to different hardware
>> +modules on the SoC between the CPU and the COP. The hardware spinlock block
>> +also has the capability to interrupt the caller once it has been granted access
>> +to the requested hardware module. To support SoCs like Tegra, which do a lot more
>> +than just share data structures between the CPU and the COP, a new callback,
>> +int (*lock_timeout)(struct hwspinlock *lock, unsigned int to), has been added
>> +to struct hwspinlock_ops. Drivers which support functionality similar to the
>> +Tegra SoCs will use lock_timeout to grant access to the hardware spinlock
>> +block on the SoCs. Such SoC drivers will not support all the APIs exposed
>> +by the hwspinlock framework. More information below.
...
>>     - lock a previously-assigned hwspinlock with a timeout limit (specified in
>>       msecs). If the hwspinlock is already taken, the function will busy loop
>>       waiting for it to be released, but give up when the timeout elapses.
>> -     Upon a successful return from this function, preemption is disabled so
>> -     the caller must not sleep, and is advised to release the hwspinlock as
>> +     If the underlying hardware driver supports lock_timeout in its ops structure,
>> +     then upon a successful return from this function the caller is allowed
>> +     to sleep, since we do not disable preemption or interrupts in this scenario.
>> +     If not, upon a successful return from this function, preemption is disabled
>> +     so the caller must not sleep, and is advised to release the hwspinlock as
>>       soon as possible, in order to minimize remote cores polling on the
>>       hardware interconnect.
>>       Returns 0 when successful and an appropriate error code otherwise (most
...
>> @@ -182,6 +186,9 @@ int __hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int to,
>>
>>       expire = msecs_to_jiffies(to) + jiffies;
>>
>> +     if (hwlock->bank->ops->lock_timeout)
>> +             return hwlock->bank->ops->lock_timeout(hwlock, expire);
>> +
>>       for (;;) {
>>               /* Try to take the hwspinlock */
>>               ret = __hwspin_trylock(hwlock, mode, flags);

I'm afraid this makes it impossible to write generic drivers, as the
locking requirements of __hwspin_lock_timeout() now depend on the
underlying hardware: on OMAP it's a spinlock, and on Tegra it becomes
a semaphore. We should instead provide driver authors with a well-defined
API that is consistent across all systems.

In addition, this proposal hides the entire ->lock_timeout()
implementation in the platform-specific driver, but we really only
want to hide the platform-specific differences, and keep the generic
parts in the core, so other platforms wouldn't have to duplicate them.

I think we should:
1. define a new 'struct hwsemaphore' object (or hwmutex, to get shorter
API names...).
2. allow users to request either a hwspinlock or a hwsemaphore. It's
up to the drivers to request the synchronization primitive they need.
If it's not supported by the hardware, the request will fail.
3. add an API to manipulate these new hwsemaphore objects (lock_timeout,
unlock).
4. abstract only the bare minimum hw differences in the
platform-specific drivers, and keep the generic parts in the core.
This way drivers will always know whether a certain hardware
synchronization primitive allows them to sleep or not. It's also much
less error-prone; a casual reader of the code will immediately tell the
difference between hwspin_lock and hwsemaphore_lock.
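
As a purely illustrative sketch of points 1-3 above (every name here, such
as struct hwsemaphore and hwsem_lock_timeout(), is hypothetical; nothing
like this exists in the tree yet):

/* opaque handle, analogous to struct hwspinlock */
struct hwsemaphore;

/* request/free a specific semaphore, mirroring the hwspinlock API */
struct hwsemaphore *hwsem_request_specific(unsigned int id);
int hwsem_free(struct hwsemaphore *hwsem);

/*
 * Take the semaphore, sleeping until it is granted or until 'timeout'
 * msecs elapse. Returns 0 on success and -ETIMEDOUT on timeout.
 * May sleep, so it must not be called from atomic context.
 */
int hwsem_lock_timeout(struct hwsemaphore *hwsem, unsigned int timeout);

/* release the semaphore; may sleep */
void hwsem_unlock(struct hwsemaphore *hwsem);

/* only the hardware-specific part is left to the platform driver */
struct hwsemaphore_ops {
	int (*lock_timeout)(struct hwsemaphore *hwsem, unsigned int timeout);
	void (*unlock)(struct hwsemaphore *hwsem);
};

The generic parts (timeout bookkeeping, sanity checks, request/free
management) would then stay in the core, with only the two handlers
implemented per SoC, as per point 4.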

Eventually we will probably also have to rename the hwspinlock
subsystem to something like hwlock, but that's just a minor detail at
this point.

Thanks,
Ohad.

