public inbox for linux-acpi@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH] ACPI: EC: clean up tmp variable before reuse
@ 2008-10-31 18:42 Alexey Starikovskiy
  2008-11-03  8:02 ` Zhao Yakui
                   ` (7 more replies)
  0 siblings, 8 replies; 37+ messages in thread
From: Alexey Starikovskiy @ 2008-10-31 18:42 UTC (permalink / raw)
  To: LenBrown; +Cc: Linux-acpi

Fix breakage introduced by the following patch:
27663c5855b10af9ec67bc7dfba001426ba21222 is first bad commit
commit 27663c5855b10af9ec67bc7dfba001426ba21222
Author: Matthew Wilcox <willy@linux.intel.com>
Date:   Fri Oct 10 02:22:59 2008 -0400

acpi_evaluate_integer() does not clear the passed variable if
there is an error during evaluation.
So if we ignore the error, we must supply an initialized variable.

References: http://bugzilla.kernel.org/show_bug.cgi?id=11917
	    http://bugzilla.kernel.org/show_bug.cgi?id=11896

Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Tested-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
---

 drivers/acpi/ec.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)


diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index ef42316..523ac5b 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -736,7 +736,7 @@ static acpi_status
 ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
 {
 	acpi_status status;
-	unsigned long long tmp;
+	unsigned long long tmp = 0;
 
 	struct acpi_ec *ec = context;
 	status = acpi_walk_resources(handle, METHOD_NAME__CRS,
@@ -751,6 +751,7 @@ ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
 		return status;
 	ec->gpe = tmp;
 	/* Use the global lock for all EC transactions? */
+	tmp = 0;
 	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
 	ec->global_lock = tmp;
 	ec->handle = handle;


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH] ACPI: EC: clean up tmp variable before reuse
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
@ 2008-11-03  8:02 ` Zhao Yakui
  2008-11-03  8:24 ` [PATCH]: ACPI: Initialize EC global lock based on the return value of _GLK Zhao Yakui
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 37+ messages in thread
From: Zhao Yakui @ 2008-11-03  8:02 UTC (permalink / raw)
  To: Alexey Starikovskiy; +Cc: LenBrown, Linux-acpi@vger.kernel.org

On Sat, 2008-11-01 at 02:42 +0800, Alexey Starikovskiy wrote:
> Fix breakage introduced by following patch
> 27663c5855b10af9ec67bc7dfba001426ba21222 is first bad commit
> commit 27663c5855b10af9ec67bc7dfba001426ba21222
> Author: Matthew Wilcox <willy@linux.intel.com>
> Date:   Fri Oct 10 02:22:59 2008 -0400
> 
> acpi_evaluate_integer() does not clear passed variable if
> there is an error at evaluation.
> So if we ignore error, we must supply initialized variable.
> 
> References: http://bugzilla.kernel.org/show_bug.cgi?id=11917
> 	    http://bugzilla.kernel.org/show_bug.cgi?id=11896
The patch from Alexey can fix the problem in bug 11917.
But the better solution is that the EC global lock should be initialized
based on the return status of _GLK.
    If AE_OK is returned by acpi_evaluate_integer, the EC global
lock will be assigned based on the return value of the _GLK object. If it
is not zero, it indicates that the global lock will be used when the EC is
accessed. Otherwise it means that the global lock won't be used when the
EC is accessed.

How about the following patch?

Subject: ACPI: Initialize EC global lock based on the return value of
_GLK
From: Zhao Yakui <yakui.zhao@intel.com>

Initialize the EC global lock based on the return value of the _GLK object.
Only when AE_OK is returned by acpi_evaluate_integer will the EC global
lock be assigned based on the return value of the _GLK object.
If the return value is not zero, it means that the global lock will be used
when the EC is accessed. Otherwise it means that the global lock won't be
used when the EC is accessed.

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
cc: Alexey Starikovskiy <astarikovskiy@suse.de>
---
 drivers/acpi/ec.c |   15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

Index: linux-2.6/drivers/acpi/ec.c
===================================================================
--- linux-2.6.orig/drivers/acpi/ec.c
+++ linux-2.6/drivers/acpi/ec.c
@@ -751,8 +751,19 @@ ec_parse_device(acpi_handle handle, u32 
 		return status;
 	ec->gpe = tmp;
 	/* Use the global lock for all EC transactions? */
-	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
-	ec->global_lock = tmp;
+	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
+	/*
+	 * Only when AE_OK is returned by acpi_evaluate_integer
+	 * will ec->global_lock be assigned based on the value
+	 * returned by _GLK.
+	 * If the return value is not zero, the global lock will be
+	 * used when the EC is accessed.
+	 * Otherwise the global lock won't be used when the EC is accessed.
+	 */
+	if (ACPI_SUCCESS(status))
+		ec->global_lock = !!tmp;
+	else
+		ec->global_lock = FALSE;
 	ec->handle = handle;
 	return AE_CTRL_TERMINATE;
 }





^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH]: ACPI: Initialize EC global lock based on the return value of _GLK
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
  2008-11-03  8:02 ` Zhao Yakui
@ 2008-11-03  8:24 ` Zhao Yakui
  2008-11-04  7:41 ` [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status Zhao Yakui
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 37+ messages in thread
From: Zhao Yakui @ 2008-11-03  8:24 UTC (permalink / raw)
  To: LenBrown; +Cc: Linux-acpi@vger.kernel.org, Alexey Starikovskiy, alan-jenkins

Subject: ACPI: Initialize EC global lock based on the return value of
_GLK
From: Zhao Yakui <yakui.zhao@intel.com>

Initialize the EC global lock based on the return value of the _GLK
object. Only when AE_OK is returned by acpi_evaluate_integer will the EC
global lock be assigned based on the return value of the _GLK object.
Otherwise there is no _GLK object and the global lock won't be required
when the EC is accessed.
If the return value of the _GLK object is not zero, it means that the
global lock will be required when the EC is accessed.
If the return value of the _GLK object is zero, it means that the global
lock won't be required when the EC is accessed.

http://bugzilla.kernel.org/show_bug.cgi?id=11917

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
cc: Alexey Starikovskiy <astarikovskiy@suse.de>
cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
---
 drivers/acpi/ec.c |   17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

Index: linux-2.6/drivers/acpi/ec.c
===================================================================
--- linux-2.6.orig/drivers/acpi/ec.c
+++ linux-2.6/drivers/acpi/ec.c
@@ -751,8 +751,21 @@ ec_parse_device(acpi_handle handle, u32 
 		return status;
 	ec->gpe = tmp;
 	/* Use the global lock for all EC transactions? */
-	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
-	ec->global_lock = tmp;
+	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
+	/*
+	 * Only when AE_OK is returned by acpi_evaluate_integer
+	 * will ec->global_lock be assigned based on the value
+	 * returned by _GLK. Otherwise there is no _GLK object
+	 * and the global lock won't be required when the EC is accessed.
+	 * If the return value of the _GLK object is not zero,
+	 * the global lock will be required when the EC is accessed.
+	 * If the return value of the _GLK object is zero,
+	 * the global lock won't be required when the EC is accessed.
+	 */
+	if (ACPI_SUCCESS(status))
+		ec->global_lock = !!tmp;
+	else
+		ec->global_lock = 0;
 	ec->handle = handle;
 	return AE_CTRL_TERMINATE;
 }



^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
  2008-11-03  8:02 ` Zhao Yakui
  2008-11-03  8:24 ` [PATCH]: ACPI: Initialize EC global lock based on the return value of _GLK Zhao Yakui
@ 2008-11-04  7:41 ` Zhao Yakui
  2008-11-04  8:05   ` Alexey Starikovskiy
  2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 37+ messages in thread
From: Zhao Yakui @ 2008-11-04  7:41 UTC (permalink / raw)
  To: LenBrown; +Cc: Linux-acpi@vger.kernel.org, Alexey Starikovskiy, alan-jenkins

Subject: ACPI: Cleanup: Initialize EC global lock based on the return status
From: Zhao Yakui <yakui.zhao@intel.com>

Initialize the EC global lock based on the return value of the _GLK object.
Only when AE_OK is returned by acpi_evaluate_integer will the EC global lock
be assigned based on the return value of the _GLK object. Otherwise it means
that there is no _GLK object and the global lock won't be required when the
EC is accessed.
If the return value of the _GLK object is not zero, it means that the global
lock will be required when the EC is accessed.
If the return value of the _GLK object is zero, it means that the global
lock won't be required when the EC is accessed.

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
cc: Alexey Starikovskiy <astarikovskiy@suse.de>
cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>

---
 drivers/acpi/ec.c |   24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

Index: linux-2.6/drivers/acpi/ec.c
===================================================================
--- linux-2.6.orig/drivers/acpi/ec.c
+++ linux-2.6/drivers/acpi/ec.c
@@ -752,8 +752,28 @@ ec_parse_device(acpi_handle handle, u32 
 	ec->gpe = tmp;
 	/* Use the global lock for all EC transactions? */
 	tmp = 0;
-	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
-	ec->global_lock = tmp;
+	/*
+	 * Only when AE_OK is returned by acpi_evaluate_integer
+	 * will ec->global_lock be assigned based on the value
+	 * returned by _GLK. Otherwise there is no _GLK object
+	 * and the global lock won't be required when the EC is accessed.
+	 * If the return value of the _GLK object is not zero,
+	 * the global lock will be required when the EC is accessed.
+	 * If the return value of the _GLK object is zero,
+	 * the global lock won't be required when the EC is accessed.
+	 */
+	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
+	if (ACPI_SUCCESS(status)) {
+		/*
+		 * If the return value is not zero, it means that global lock
+		 * is required when EC is accessed
+		 */
+		if (tmp)
+			ec->global_lock = 1;
+		else
+			ec->global_lock = 0;
+	} else
+		ec->global_lock = 0;
 	ec->handle = handle;
 	return AE_CTRL_TERMINATE;
 }



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-04  7:41 ` [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status Zhao Yakui
@ 2008-11-04  8:05   ` Alexey Starikovskiy
  2008-11-04  8:58     ` Rafael J. Wysocki
  2008-11-04  9:37     ` Zhao Yakui
  0 siblings, 2 replies; 37+ messages in thread
From: Alexey Starikovskiy @ 2008-11-04  8:05 UTC (permalink / raw)
  To: Zhao Yakui; +Cc: LenBrown, Linux-acpi@vger.kernel.org, alan-jenkins

NAK

Zhao Yakui wrote:
> Subject: ACPI: Cleanup :Initialize EC global lock based on the return status
> From: Zhao Yakui <yakui.zhao@intel.com>
> 
> Initialize the EC global lock based on the returned value of _GLK object.
> Only when AE_OK is returned by the acpi_evaluate_integer, the EC global lock
> will be assigned based on the return value of _GLK object.Otherwise it means
> that there is no _GLK object and the global lock won't be required when EC
> is accessed.
> If the return value of _GLK object is not zero, it means that GLobal lock
> will be required when EC is accessed.
> If the return value of _GLK object is zero, it means that GLobal lock
> won't be required when EC is accessed.
> 
> Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
> cc: Alexey Starikovskiy <astarikovskiy@suse.de>
> cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
> 
> ---
>  drivers/acpi/ec.c |   24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> Index: linux-2.6/drivers/acpi/ec.c
> ===================================================================
> --- linux-2.6.orig/drivers/acpi/ec.c
> +++ linux-2.6/drivers/acpi/ec.c
> @@ -752,8 +752,28 @@ ec_parse_device(acpi_handle handle, u32 
>  	ec->gpe = tmp;
>  	/* Use the global lock for all EC transactions? */
>  	tmp = 0;
> -	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
> -	ec->global_lock = tmp;
> +	/*
> +	 * Only when AE_OK is returned by acpi_evaluate_interger,
> +	 * the ec->global_lock will be assigned based on the returned
> +	 * value by _GLK. Otherwise it means that there is no _GLK object
> +	 * and global lock won't be required when EC is accessed.
> +	 * If the return value of _GLK object is not zero, it means that
> +	 * global lock will be required when EC is accessed.
> +	 * If the return value of _GLK object is zero, it means that
> +	 * global lock won't be required when EC is accessed.
> +	 */
> +	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
> +	if (ACPI_SUCCESS(status)) {
> +		/*
> +		 * If the return value is not zero, it means that global lock
> +		 * is required when EC is accessed
> +		 */
> +		if (tmp)
> +			ec->global_lock = 1;
> +		else
> +			ec->global_lock = 0;
> +	} else
> +		ec->global_lock = 0;
>  	ec->handle = handle;
>  	return AE_CTRL_TERMINATE;
>  }
> 
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-04  8:05   ` Alexey Starikovskiy
@ 2008-11-04  8:58     ` Rafael J. Wysocki
  2008-11-04  9:21       ` Alexey Starikovskiy
  2008-11-04  9:37     ` Zhao Yakui
  1 sibling, 1 reply; 37+ messages in thread
From: Rafael J. Wysocki @ 2008-11-04  8:58 UTC (permalink / raw)
  To: Alexey Starikovskiy
  Cc: Zhao Yakui, LenBrown, Linux-acpi@vger.kernel.org, alan-jenkins

On Tuesday, 4 of November 2008, Alexey Starikovskiy wrote:
> NAK

Can you elaborate a bit, please?

> Zhao Yakui wrote:
> > Subject: ACPI: Cleanup :Initialize EC global lock based on the return status
> > From: Zhao Yakui <yakui.zhao@intel.com>
> > 
> > Initialize the EC global lock based on the returned value of _GLK object.
> > Only when AE_OK is returned by the acpi_evaluate_integer, the EC global lock
> > will be assigned based on the return value of _GLK object.Otherwise it means
> > that there is no _GLK object and the global lock won't be required when EC
> > is accessed.
> > If the return value of _GLK object is not zero, it means that GLobal lock
> > will be required when EC is accessed.
> > If the return value of _GLK object is zero, it means that GLobal lock
> > won't be required when EC is accessed.
> > 
> > Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
> > cc: Alexey Starikovskiy <astarikovskiy@suse.de>
> > cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
> > 
> > ---
> >  drivers/acpi/ec.c |   24 ++++++++++++++++++++++--
> >  1 file changed, 22 insertions(+), 2 deletions(-)
> > 
> > Index: linux-2.6/drivers/acpi/ec.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/ec.c
> > +++ linux-2.6/drivers/acpi/ec.c
> > @@ -752,8 +752,28 @@ ec_parse_device(acpi_handle handle, u32 
> >  	ec->gpe = tmp;
> >  	/* Use the global lock for all EC transactions? */
> >  	tmp = 0;
> > -	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
> > -	ec->global_lock = tmp;
> > +	/*
> > +	 * Only when AE_OK is returned by acpi_evaluate_interger,
> > +	 * the ec->global_lock will be assigned based on the returned
> > +	 * value by _GLK. Otherwise it means that there is no _GLK object
> > +	 * and global lock won't be required when EC is accessed.
> > +	 * If the return value of _GLK object is not zero, it means that
> > +	 * global lock will be required when EC is accessed.
> > +	 * If the return value of _GLK object is zero, it means that
> > +	 * global lock won't be required when EC is accessed.
> > +	 */
> > +	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
> > +	if (ACPI_SUCCESS(status)) {
> > +		/*
> > +		 * If the return value is not zero, it means that global lock
> > +		 * is required when EC is accessed
> > +		 */
> > +		if (tmp)
> > +			ec->global_lock = 1;
> > +		else
> > +			ec->global_lock = 0;
> > +	} else
> > +		ec->global_lock = 0;
> >  	ec->handle = handle;
> >  	return AE_CTRL_TERMINATE;
> >  }
> > 
> > 
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-04  8:58     ` Rafael J. Wysocki
@ 2008-11-04  9:21       ` Alexey Starikovskiy
  0 siblings, 0 replies; 37+ messages in thread
From: Alexey Starikovskiy @ 2008-11-04  9:21 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Zhao Yakui, LenBrown, Linux-acpi@vger.kernel.org, alan-jenkins

Rafael J. Wysocki wrote:
> On Tuesday, 4 of November 2008, Alexey Starikovskiy wrote:
>> NAK
> 
> Can you elaborate, a bit, please?
This patch does not fix anything; it just adds several lines of code.
It appeared as a concurrent patch to bug 11917 several days later, which I
don't completely understand.
If Yakui has nothing else to do, this is not _my_ problem.
ec->global_lock is zero before entering ec_parse_device (zero-allocated as
part of struct acpi_ec), thus there is no need to set it explicitly once
again.
We don't care whether the call to _GLK succeeds or not; this is reflected
in the current design, so there is no need to insert the check either.

Overall, "don't fix it if it is not broken".

Regards,
Alex.
> 
>> Zhao Yakui wrote:
>>> Subject: ACPI: Cleanup :Initialize EC global lock based on the return status
>>> From: Zhao Yakui <yakui.zhao@intel.com>
>>>
>>> Initialize the EC global lock based on the returned value of _GLK object.
>>> Only when AE_OK is returned by the acpi_evaluate_integer, the EC global lock
>>> will be assigned based on the return value of _GLK object.Otherwise it means
>>> that there is no _GLK object and the global lock won't be required when EC
>>> is accessed.
>>> If the return value of _GLK object is not zero, it means that GLobal lock
>>> will be required when EC is accessed.
>>> If the return value of _GLK object is zero, it means that GLobal lock
>>> won't be required when EC is accessed.
>>>
>>> Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
>>> cc: Alexey Starikovskiy <astarikovskiy@suse.de>
>>> cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
>>>
>>> ---
>>>  drivers/acpi/ec.c |   24 ++++++++++++++++++++++--
>>>  1 file changed, 22 insertions(+), 2 deletions(-)
>>>
>>> Index: linux-2.6/drivers/acpi/ec.c
>>> ===================================================================
>>> --- linux-2.6.orig/drivers/acpi/ec.c
>>> +++ linux-2.6/drivers/acpi/ec.c
>>> @@ -752,8 +752,28 @@ ec_parse_device(acpi_handle handle, u32 
>>>  	ec->gpe = tmp;
>>>  	/* Use the global lock for all EC transactions? */
>>>  	tmp = 0;
>>> -	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
>>> -	ec->global_lock = tmp;
>>> +	/*
>>> +	 * Only when AE_OK is returned by acpi_evaluate_interger,
>>> +	 * the ec->global_lock will be assigned based on the returned
>>> +	 * value by _GLK. Otherwise it means that there is no _GLK object
>>> +	 * and global lock won't be required when EC is accessed.
>>> +	 * If the return value of _GLK object is not zero, it means that
>>> +	 * global lock will be required when EC is accessed.
>>> +	 * If the return value of _GLK object is zero, it means that
>>> +	 * global lock won't be required when EC is accessed.
>>> +	 */
>>> +	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
>>> +	if (ACPI_SUCCESS(status)) {
>>> +		/*
>>> +		 * If the return value is not zero, it means that global lock
>>> +		 * is required when EC is accessed
>>> +		 */
>>> +		if (tmp)
>>> +			ec->global_lock = 1;
>>> +		else
>>> +			ec->global_lock = 0;
>>> +	} else
>>> +		ec->global_lock = 0;
>>>  	ec->handle = handle;
>>>  	return AE_CTRL_TERMINATE;
>>>  }
>>>
>>>


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-04  8:05   ` Alexey Starikovskiy
  2008-11-04  8:58     ` Rafael J. Wysocki
@ 2008-11-04  9:37     ` Zhao Yakui
  2008-11-04  9:38       ` Alexey Starikovskiy
  1 sibling, 1 reply; 37+ messages in thread
From: Zhao Yakui @ 2008-11-04  9:37 UTC (permalink / raw)
  To: Alexey Starikovskiy
  Cc: LenBrown, Linux-acpi@vger.kernel.org, alan-jenkins@tuffmail.co.uk

On Tue, 2008-11-04 at 16:05 +0800, Alexey Starikovskiy wrote:
> NAK
Will you please describe the detailed reason?

In bug 11917 the regression is related to the following commit:
    >commit 27663c5855b10af9ec67bc7dfba001426ba21222
    >Author: Matthew Wilcox <willy@linux.intel.com>
    >Date:   Fri Oct 10 02:22:59 2008 -0400
    >ACPI: Change acpi_evaluate_integer to support 64-bit on 32-bit
kernels

    But IMO the main reason is that the EC driver misuses the Linux-ACPI
utility interface (acpi_evaluate_integer).
    It would be better to determine whether the return value of an ACPI
object is valid according to the return status. In that case the code
will still work even if the Linux-ACPI utility interface is changed
again.
     
    
> Zhao Yakui wrote:
> > Subject: ACPI: Cleanup :Initialize EC global lock based on the return status
> > From: Zhao Yakui <yakui.zhao@intel.com>
> > 
> > Initialize the EC global lock based on the returned value of _GLK object.
> > Only when AE_OK is returned by the acpi_evaluate_integer, the EC global lock
> > will be assigned based on the return value of _GLK object.Otherwise it means
> > that there is no _GLK object and the global lock won't be required when EC
> > is accessed.
> > If the return value of _GLK object is not zero, it means that GLobal lock
> > will be required when EC is accessed.
> > If the return value of _GLK object is zero, it means that GLobal lock
> > won't be required when EC is accessed.
> > 
> > Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
> > cc: Alexey Starikovskiy <astarikovskiy@suse.de>
> > cc: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
> > 
> > ---
> >  drivers/acpi/ec.c |   24 ++++++++++++++++++++++--
> >  1 file changed, 22 insertions(+), 2 deletions(-)
> > 
> > Index: linux-2.6/drivers/acpi/ec.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/ec.c
> > +++ linux-2.6/drivers/acpi/ec.c
> > @@ -752,8 +752,28 @@ ec_parse_device(acpi_handle handle, u32 
> >  	ec->gpe = tmp;
> >  	/* Use the global lock for all EC transactions? */
> >  	tmp = 0;
> > -	acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
> > -	ec->global_lock = tmp;
> > +	/*
> > +	 * Only when AE_OK is returned by acpi_evaluate_interger,
> > +	 * the ec->global_lock will be assigned based on the returned
> > +	 * value by _GLK. Otherwise it means that there is no _GLK object
> > +	 * and global lock won't be required when EC is accessed.
> > +	 * If the return value of _GLK object is not zero, it means that
> > +	 * global lock will be required when EC is accessed.
> > +	 * If the return value of _GLK object is zero, it means that
> > +	 * global lock won't be required when EC is accessed.
> > +	 */
> > +	status = acpi_evaluate_integer(handle, "_GLK", NULL, &tmp);
> > +	if (ACPI_SUCCESS(status)) {
> > +		/*
> > +		 * If the return value is not zero, it means that global lock
> > +		 * is required when EC is accessed
> > +		 */
> > +		if (tmp)
> > +			ec->global_lock = 1;
> > +		else
> > +			ec->global_lock = 0;
> > +	} else
> > +		ec->global_lock = 0;
> >  	ec->handle = handle;
> >  	return AE_CTRL_TERMINATE;
> >  }
> > 
> > 
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-04  9:37     ` Zhao Yakui
@ 2008-11-04  9:38       ` Alexey Starikovskiy
  2008-11-05  1:05         ` Zhao Yakui
  0 siblings, 1 reply; 37+ messages in thread
From: Alexey Starikovskiy @ 2008-11-04  9:38 UTC (permalink / raw)
  To: Zhao Yakui
  Cc: LenBrown, Linux-acpi@vger.kernel.org, alan-jenkins@tuffmail.co.uk

Zhao Yakui wrote:
> On Tue, 2008-11-04 at 16:05 +0800, Alexey Starikovskiy wrote:
>> NAK
> Will you please describe the detailed reason?
> 
> In the bug 11917 the regression is related with the following commit:
>     >commit 27663c5855b10af9ec67bc7dfba001426ba21222
>     >Author: Matthew Wilcox <willy@linux.intel.com>
>     >Date:   Fri Oct 10 02:22:59 2008 -0400
>     >ACPI: Change acpi_evaluate_integer to support 64-bit on 32-bit
> kernels
> 
>     But IMO the main reason is that EC driver misuses the Linux-ACPI
> utility interface.(acpi_evaluate_integer).  
Did you _read_ the interface specification?
It explicitly states that if any function call does not succeed it will not
change the data passed to it.
So it is again only your not-so-humble opinion.
>     It will be better to determine whether the return value of ACPI
> object is effective according to the return status. In such case the
> code still can work well even after the Linux-ACPI utility interface is
> changed again.
Code works fine until someone tries to optimize it...

Regards,
Alex.


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-04  9:38       ` Alexey Starikovskiy
@ 2008-11-05  1:05         ` Zhao Yakui
  2008-11-05  7:24           ` Alexey Starikovskiy
  0 siblings, 1 reply; 37+ messages in thread
From: Zhao Yakui @ 2008-11-05  1:05 UTC (permalink / raw)
  To: Alexey Starikovskiy
  Cc: LenBrown, Linux-acpi@vger.kernel.org, alan-jenkins@tuffmail.co.uk

On Tue, 2008-11-04 at 17:38 +0800, Alexey Starikovskiy wrote:
> Zhao Yakui wrote:
> > On Tue, 2008-11-04 at 16:05 +0800, Alexey Starikovskiy wrote:
> >> NAK
> > Will you please describe the detailed reason?
> > 
> > In the bug 11917 the regression is related with the following commit:
> >     >commit 27663c5855b10af9ec67bc7dfba001426ba21222
> >     >Author: Matthew Wilcox <willy@linux.intel.com>
> >     >Date:   Fri Oct 10 02:22:59 2008 -0400
> >     >ACPI: Change acpi_evaluate_integer to support 64-bit on 32-bit
> > kernels
> > 
> >     But IMO the main reason is that EC driver misuses the Linux-ACPI
> > utility interface.(acpi_evaluate_integer).  
> Did you _read_ the interface specification?
> It explicitly states that if any function call does not succeed it will not
> change the data passed to it.
> So it is again only your not-so-humble opinion.
What do you mean by "not-so-humble"?      
> >     It will be better to determine whether the return value of ACPI
> > object is effective according to the return status. In such case the
> > code still can work well even after the Linux-ACPI utility interface is
> > changed again.
> Code works fine until someone tries to optimize it...
     I agree that your patch can work well. But it depends on the
internal implementation of the Linux-ACPI utility interface. If the
Linux-ACPI utility interface is changed, it may be broken. If so, why not
determine whether the return value is valid based on the return status
of the Linux-ACPI utility?
      
> 
> Regards,
> Alex.
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status
  2008-11-05  1:05         ` Zhao Yakui
@ 2008-11-05  7:24           ` Alexey Starikovskiy
  0 siblings, 0 replies; 37+ messages in thread
From: Alexey Starikovskiy @ 2008-11-05  7:24 UTC (permalink / raw)
  To: Zhao Yakui
  Cc: Alexey Starikovskiy, LenBrown, Linux-acpi@vger.kernel.org,
	alan-jenkins@tuffmail.co.uk

Zhao Yakui wrote:
> On Tue, 2008-11-04 at 17:38 +0800, Alexey Starikovskiy wrote:
>   
>> Zhao Yakui wrote:
>>     
>>> On Tue, 2008-11-04 at 16:05 +0800, Alexey Starikovskiy wrote:
>>>       
>>>> NAK
>>>>         
>>> Will you please describe the detailed reason?
>>>
>>> In the bug 11917 the regression is related with the following commit:
>>>     >commit 27663c5855b10af9ec67bc7dfba001426ba21222
>>>     >Author: Matthew Wilcox <willy@linux.intel.com>
>>>     >Date:   Fri Oct 10 02:22:59 2008 -0400
>>>     >ACPI: Change acpi_evaluate_integer to support 64-bit on 32-bit
>>> kernels
>>>
>>>     But IMO the main reason is that EC driver misuses the Linux-ACPI
>>> utility interface.(acpi_evaluate_integer).  
>>>       
>> Did you _read_ the interface specification?
>> It explicitly states that if any function call does not succeed it will not
>> change the data passed to it.
>> So it is again only your not-so-humble opinion.
>>     
> What do you mean "not-so-humble"?
Did you ever notice the difference between IMO and IMHO?
>       
>   
>>>     It will be better to determine whether the return value of ACPI
>>> object is effective according to the return status. In such case the
>>> code still can work well even after the Linux-ACPI utility interface is
>>> changed again.
>>>       
>> Code works fine until someone tries to optimize it...
>>     
>      I agree that your patch can work well. But it depends on the
> internal realization of Linux-ACPI utility interface. If Linux-ACPI
> utility interface is changed, maybe it will be broken. If so, why not to
> determine whether the return value is effective based on the return
> status of Linux-ACPI utility?
>   
Once again, my code is based not on the internal implementation, but on
the ACPI CA spec.
ACPI code is already littered with unneeded checks, transitions, etc.;
I see no reason to spread that to the whole of Linux.
>       
>   
>> Regards,
>> Alex.
>>
>>     

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
                   ` (2 preceding siblings ...)
  2008-11-04  7:41 ` [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status Zhao Yakui
@ 2008-12-17  8:55 ` Zhao Yakui
  2009-01-09  6:35   ` Len Brown
                     ` (3 more replies)
  2008-12-30  4:01 ` [PATCH] : ACPI : Add the MWAIT C-state mask to avoid overflow Zhao Yakui
                   ` (3 subsequent siblings)
  7 siblings, 4 replies; 37+ messages in thread
From: Zhao Yakui @ 2008-12-17  8:55 UTC (permalink / raw)
  To: LenBrown; +Cc: Linux-acpi@vger.kernel.org, trenn

From: Zhao Yakui <yakui.zhao@intel.com>

On some boxes both RSDT and XSDT tables exist. But unfortunately the
following errors sometimes occur when the XSDT table is used:
   a. 32/64-bit address mismatch
   b. 32/64-bit FACS address mismatch

   For such cases the boot option "acpi=rsdt" is provided so that the
RSDT is tried instead of the XSDT table when the system doesn't work well.

http://bugzilla.kernel.org/show_bug.cgi?id=8246

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
cc: Thomas Renninger <trenn@suse.de>
---
 Documentation/kernel-parameters.txt |    2 ++
 arch/ia64/kernel/acpi.c             |    1 +
 arch/x86/kernel/acpi/boot.c         |    6 +++++-
 drivers/acpi/tables/tbutils.c       |    3 ++-
 include/acpi/acpixf.h               |    1 +
 5 files changed, 11 insertions(+), 2 deletions(-)

Index: linux-2.6/arch/x86/kernel/acpi/boot.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/acpi/boot.c
+++ linux-2.6/arch/x86/kernel/acpi/boot.c
@@ -47,7 +47,7 @@
 #endif
 
 static int __initdata acpi_force = 0;
-
+u32 acpi_rsdt_forced;
 #ifdef	CONFIG_ACPI
 int acpi_disabled = 0;
 #else
@@ -1783,6 +1783,10 @@ static int __init parse_acpi(char *arg)
 			disable_acpi();
 		acpi_ht = 1;
 	}
+	/* "acpi=rsdt": use RSDT instead of XSDT */
+	else if (strcmp(arg, "rsdt") == 0) {
+		acpi_rsdt_forced = 1;
+	}
 	/* "acpi=noirq" disables ACPI interrupt routing */
 	else if (strcmp(arg, "noirq") == 0) {
 		acpi_noirq_set();
Index: linux-2.6/include/acpi/acpixf.h
===================================================================
--- linux-2.6.orig/include/acpi/acpixf.h
+++ linux-2.6/include/acpi/acpixf.h
@@ -48,6 +48,7 @@
 #include "actypes.h"
 #include "actbl.h"
 
+extern u32 acpi_rsdt_forced;
 /*
  * Global interfaces
  */
Index: linux-2.6/drivers/acpi/tables/tbutils.c
===================================================================
--- linux-2.6.orig/drivers/acpi/tables/tbutils.c
+++ linux-2.6/drivers/acpi/tables/tbutils.c
@@ -420,7 +420,8 @@ acpi_tb_parse_root_table(acpi_physical_a
 
 	/* Differentiate between RSDT and XSDT root tables */
 
-	if (rsdp->revision > 1 && rsdp->xsdt_physical_address) {
+	if (rsdp->revision > 1 && rsdp->xsdt_physical_address
+			&& !acpi_rsdt_forced) {
 		/*
 		 * Root table is an XSDT (64-bit physical addresses). We must use the
 		 * XSDT if the revision is > 1 and the XSDT pointer is present, as per
Index: linux-2.6/arch/ia64/kernel/acpi.c
===================================================================
--- linux-2.6.orig/arch/ia64/kernel/acpi.c
+++ linux-2.6/arch/ia64/kernel/acpi.c
@@ -65,6 +65,7 @@ EXPORT_SYMBOL(pm_idle);
 void (*pm_power_off) (void);
 EXPORT_SYMBOL(pm_power_off);
 
+u32 acpi_rsdt_forced;
 unsigned int acpi_cpei_override;
 unsigned int acpi_cpei_phys_cpuid;
 
Index: linux-2.6/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.orig/Documentation/kernel-parameters.txt
+++ linux-2.6/Documentation/kernel-parameters.txt
@@ -139,6 +139,8 @@ and is between 256 and 4096 characters. 
 			ht -- run only enough ACPI to enable Hyper Threading
 			strict -- Be less tolerant of platforms that are not
 				strictly ACPI specification compliant.
+			rsdt -- RSDT is used instead of XSDT table when both
 +				exist.
 
 			See also Documentation/power/pm.txt, pci=noacpi
 



^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH] : ACPI : Add the MWAIT C-state mask to avoid overflow
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
                   ` (3 preceding siblings ...)
  2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
@ 2008-12-30  4:01 ` Zhao Yakui
  2009-01-04  4:04 ` Zhao Yakui
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 37+ messages in thread
From: Zhao Yakui @ 2008-12-30  4:01 UTC (permalink / raw)
  To: LenBrown; +Cc: linux-acpi, venkatesh.pallipadi

Subject: ACPI: Add the MWAIT C-state mask to avoid overflow
From: Zhao Yakui <yakui.zhao@intel.com>

Add the MWAIT C-state mask to avoid overflow.

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Venki Pallipadi <venkatesh.pallipadi@intel.com>
---
 arch/x86/kernel/acpi/cstate.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Index: linux-2.6/arch/x86/kernel/acpi/cstate.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/acpi/cstate.c
+++ linux-2.6/arch/x86/kernel/acpi/cstate.c
@@ -56,6 +56,7 @@ static struct cstate_entry *cpu_cstate_e
 static short mwait_supported[ACPI_PROCESSOR_MAX_POWER];
 
 #define MWAIT_SUBSTATE_MASK	(0xf)
+#define MWAIT_CSTATE_MASK	(0xf)
 #define MWAIT_SUBSTATE_SIZE	(4)
 
 #define CPUID_MWAIT_LEAF (5)
@@ -98,7 +99,8 @@ int acpi_processor_ffh_cstate_probe(unsi
 	cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx);
 
 	/* Check whether this particular cx_type (in CST) is supported or not */
-	cstate_type = (cx->address >> MWAIT_SUBSTATE_SIZE) + 1;
+	cstate_type = ((cx->address >> MWAIT_SUBSTATE_SIZE) &
+			MWAIT_CSTATE_MASK) + 1;
 	edx_part = edx >> (cstate_type * MWAIT_SUBSTATE_SIZE);
 	num_cstate_subtype = edx_part & MWAIT_SUBSTATE_MASK;
 



^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH] : ACPI : Add the MWAIT C-state mask to avoid overflow
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
                   ` (4 preceding siblings ...)
  2008-12-30  4:01 ` [PATCH] : ACPI : Add the MWAIT C-state mask to avoid overflow Zhao Yakui
@ 2009-01-04  4:04 ` Zhao Yakui
  2009-01-09  6:28   ` Len Brown
  2009-01-12  7:07 ` [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer Zhao Yakui
  2009-01-13  3:50 ` [RESEND] " Zhao Yakui
  7 siblings, 1 reply; 37+ messages in thread
From: Zhao Yakui @ 2009-01-04  4:04 UTC (permalink / raw)
  To: LenBrown; +Cc: linux-acpi, venkatesh.pallipadi

Subject: ACPI: Add the MWAIT C-state mask to avoid overflow
From: Zhao Yakui <yakui.zhao@intel.com>

The Cx register address obtained from the _CST object is used as the MWAIT
hint if the register type is FFixedHW. It is also used to check whether the
Cx type is supported or not.

On some boxes the following Cx state package is obtained from _CST object:
    >{
                ResourceTemplate ()
                {
                    Register (FFixedHW,
                        0x01,               // Bit Width
                        0x02,               // Bit Offset
                        0x0000000000889759, // Address
                        0x03,               // Access Size
                        )
                },

                0x03,
                0xF5,
                0x015E }
   In such a case we should use bits [7:4] of the Cx address to check whether
the Cx type is supported or not.

Add the MWAIT C-state mask to avoid overflow.

Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Venki Pallipadi <venkatesh.pallipadi@intel.com>
---
 arch/x86/kernel/acpi/cstate.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Index: linux-2.6/arch/x86/kernel/acpi/cstate.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/acpi/cstate.c
+++ linux-2.6/arch/x86/kernel/acpi/cstate.c
@@ -56,6 +56,7 @@ static struct cstate_entry *cpu_cstate_e
 static short mwait_supported[ACPI_PROCESSOR_MAX_POWER];
 
 #define MWAIT_SUBSTATE_MASK	(0xf)
+#define MWAIT_CSTATE_MASK	(0xf)
 #define MWAIT_SUBSTATE_SIZE	(4)
 
 #define CPUID_MWAIT_LEAF (5)
@@ -98,7 +99,8 @@ int acpi_processor_ffh_cstate_probe(unsi
 	cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx);
 
 	/* Check whether this particular cx_type (in CST) is supported or not */
-	cstate_type = (cx->address >> MWAIT_SUBSTATE_SIZE) + 1;
+	cstate_type = ((cx->address >> MWAIT_SUBSTATE_SIZE) &
+			MWAIT_CSTATE_MASK) + 1;
 	edx_part = edx >> (cstate_type * MWAIT_SUBSTATE_SIZE);
 	num_cstate_subtype = edx_part & MWAIT_SUBSTATE_MASK;
 



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Add the MWAIT C-state mask to avoid overflow
  2009-01-04  4:04 ` Zhao Yakui
@ 2009-01-09  6:28   ` Len Brown
  0 siblings, 0 replies; 37+ messages in thread
From: Len Brown @ 2009-01-09  6:28 UTC (permalink / raw)
  To: Zhao Yakui; +Cc: linux-acpi, venkatesh.pallipadi

applied.


--
Len Brown, Intel Open Source Technology Center

On Sun, 4 Jan 2009, Zhao Yakui wrote:

> Subject: ACPI: Add the MWAIT C-state mask to avoid overflow
> From: Zhao Yakui <yakui.zhao@intel.com>
> 
> The Cx Register address obtained from the _CST object is used as the mait
> hints if the register type is FFixedHW. And it is used to check whether
> the Cx type is supported or not.
> 
> On some boxes the following Cx state package is obtained from _CST object:
>     >{
>                 ResourceTemplate ()
>                 {
>                     Register (FFixedHW,
>                         0x01,               // Bit Width
>                         0x02,               // Bit Offset
>                         0x0000000000889759, // Address
>                         0x03,               // Access Size
>                         )
>                 },
> 
>                 0x03,
>                 0xF5,
>                 0x015E }
>    In such case we should use the bit[7:4] of Cx address to check whether
> the Cx type is supported or not.
> 
> Add the MWAIT C-state mask to avoid overflow 
> 
> Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
> Acked-by:Venki Pallipadi <venkatesh.pallipadi@intel.com>
> ---
>  arch/x86/kernel/acpi/cstate.c |    4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> Index: linux-2.6/arch/x86/kernel/acpi/cstate.c
> ===================================================================
> --- linux-2.6.orig/arch/x86/kernel/acpi/cstate.c
> +++ linux-2.6/arch/x86/kernel/acpi/cstate.c
> @@ -56,6 +56,7 @@ static struct cstate_entry *cpu_cstate_e
>  static short mwait_supported[ACPI_PROCESSOR_MAX_POWER];
>  
>  #define MWAIT_SUBSTATE_MASK	(0xf)
> +#define MWAIT_CSTATE_MASK	(0xf)
>  #define MWAIT_SUBSTATE_SIZE	(4)
>  
>  #define CPUID_MWAIT_LEAF (5)
> @@ -98,7 +99,8 @@ int acpi_processor_ffh_cstate_probe(unsi
>  	cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx);
>  
>  	/* Check whether this particular cx_type (in CST) is supported or not */
> -	cstate_type = (cx->address >> MWAIT_SUBSTATE_SIZE) + 1;
> +	cstate_type = ((cx->address >> MWAIT_SUBSTATE_SIZE) &
> +			MWAIT_CSTATE_MASK) + 1;
>  	edx_part = edx >> (cstate_type * MWAIT_SUBSTATE_SIZE);
>  	num_cstate_subtype = edx_part & MWAIT_SUBSTATE_MASK;
>  
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-acpi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
@ 2009-01-09  6:35   ` Len Brown
  2009-01-09 10:54     ` Thomas Renninger
  2009-01-09 10:58   ` Blacklist known broken machines to use the rsdt and enabled Cstates on R40e Thomas Renninger
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 37+ messages in thread
From: Len Brown @ 2009-01-09  6:35 UTC (permalink / raw)
  To: Zhao Yakui; +Cc: Linux-acpi@vger.kernel.org, trenn

applied

--
Len Brown, Intel Open Source Technology Center

On Wed, 17 Dec 2008, Zhao Yakui wrote:

> From: Zhao Yakui <yakui.zhao@intel.com>
> 
> On some boxes there exist both RSDT and XSDT table. But unfortunately
> sometimes there exists the following error when XSDT table is used:
>    a. 32/64X address mismatch
>    b. The 32/64X FACS address mismatch
> 
>    In such case the boot option of "acpi=rsdt" is provided so that
> RSDT is tried instead of XSDT table when the system can't work well.
> 
> http://bugzilla.kernel.org/show_bug.cgi?id=8246
> 
> Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
> cc:Thomas Renninger <trenn@suse.de>
> ---
>  Documentation/kernel-parameters.txt |    2 ++
>  arch/ia64/kernel/acpi.c             |    1 +
>  arch/x86/kernel/acpi/boot.c         |    6 +++++-
>  drivers/acpi/tables/tbutils.c       |    3 ++-
>  include/acpi/acpixf.h               |    1 +
>  5 files changed, 11 insertions(+), 2 deletions(-)
> 
> Index: linux-2.6/arch/x86/kernel/acpi/boot.c
> ===================================================================
> --- linux-2.6.orig/arch/x86/kernel/acpi/boot.c
> +++ linux-2.6/arch/x86/kernel/acpi/boot.c
> @@ -47,7 +47,7 @@
>  #endif
>  
>  static int __initdata acpi_force = 0;
> -
> +u32 acpi_rsdt_forced;
>  #ifdef	CONFIG_ACPI
>  int acpi_disabled = 0;
>  #else
> @@ -1783,6 +1783,10 @@ static int __init parse_acpi(char *arg)
>  			disable_acpi();
>  		acpi_ht = 1;
>  	}
> +	/* acpi=rsdt use RSDT instead of XSDT */
> +	else if (strcmp(arg, "rsdt") == 0) {
> +		acpi_rsdt_forced = 1;
> +	}
>  	/* "acpi=noirq" disables ACPI interrupt routing */
>  	else if (strcmp(arg, "noirq") == 0) {
>  		acpi_noirq_set();
> Index: linux-2.6/include/acpi/acpixf.h
> ===================================================================
> --- linux-2.6.orig/include/acpi/acpixf.h
> +++ linux-2.6/include/acpi/acpixf.h
> @@ -48,6 +48,7 @@
>  #include "actypes.h"
>  #include "actbl.h"
>  
> +extern u32 acpi_rsdt_forced;
>  /*
>   * Global interfaces
>   */
> Index: linux-2.6/drivers/acpi/tables/tbutils.c
> ===================================================================
> --- linux-2.6.orig/drivers/acpi/tables/tbutils.c
> +++ linux-2.6/drivers/acpi/tables/tbutils.c
> @@ -420,7 +420,8 @@ acpi_tb_parse_root_table(acpi_physical_a
>  
>  	/* Differentiate between RSDT and XSDT root tables */
>  
> -	if (rsdp->revision > 1 && rsdp->xsdt_physical_address) {
> +	if (rsdp->revision > 1 && rsdp->xsdt_physical_address
> +			&& !acpi_rsdt_forced) {
>  		/*
>  		 * Root table is an XSDT (64-bit physical addresses). We must use the
>  		 * XSDT if the revision is > 1 and the XSDT pointer is present, as per
> Index: linux-2.6/arch/ia64/kernel/acpi.c
> ===================================================================
> --- linux-2.6.orig/arch/ia64/kernel/acpi.c
> +++ linux-2.6/arch/ia64/kernel/acpi.c
> @@ -65,6 +65,7 @@ EXPORT_SYMBOL(pm_idle);
>  void (*pm_power_off) (void);
>  EXPORT_SYMBOL(pm_power_off);
>  
> +u32 acpi_rsdt_forced;
>  unsigned int acpi_cpei_override;
>  unsigned int acpi_cpei_phys_cpuid;
>  
> Index: linux-2.6/Documentation/kernel-parameters.txt
> ===================================================================
> --- linux-2.6.orig/Documentation/kernel-parameters.txt
> +++ linux-2.6/Documentation/kernel-parameters.txt
> @@ -139,6 +139,8 @@ and is between 256 and 4096 characters. 
>  			ht -- run only enough ACPI to enable Hyper Threading
>  			strict -- Be less tolerant of platforms that are not
>  				strictly ACPI specification compliant.
> +			rsdt -- RSDT is used instead of XSDT table when both
> +				exits.
>  
>  			See also Documentation/power/pm.txt, pci=noacpi
>  
> 
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-09  6:35   ` Len Brown
@ 2009-01-09 10:54     ` Thomas Renninger
  2009-01-09 10:59       ` Len Brown
  0 siblings, 1 reply; 37+ messages in thread
From: Thomas Renninger @ 2009-01-09 10:54 UTC (permalink / raw)
  To: Len Brown; +Cc: Zhao Yakui, Linux-acpi@vger.kernel.org, me, ibm-acpi-devel

On Friday 09 January 2009 07:35:33 Len Brown wrote:
> applied
>
> --
> Len Brown, Intel Open Source Technology Center
>
> On Wed, 17 Dec 2008, Zhao Yakui wrote:
> > From: Zhao Yakui <yakui.zhao@intel.com>
> >
> > On some boxes there exist both RSDT and XSDT table. But unfortunately
> > sometimes there exists the following error when XSDT table is used:
> >    a. 32/64X address mismatch
> >    b. The 32/64X FACS address mismatch
> >
> >    In such case the boot option of "acpi=rsdt" is provided so that
> > RSDT is tried instead of XSDT table when the system can't work well.

Great, now we still need a DMI blacklist of machines which are known to be
broken. Then we are back at the patchset I posted about a year ago -> scnr.

I am going to post the two missing patches to make the R40e and R50e work.

Thanks,

      Thomas
> >
> > http://bugzilla.kernel.org/show_bug.cgi?id=8246
> >
> > Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
> > cc:Thomas Renninger <trenn@suse.de>
> > ---
> >  Documentation/kernel-parameters.txt |    2 ++
> >  arch/ia64/kernel/acpi.c             |    1 +
> >  arch/x86/kernel/acpi/boot.c         |    6 +++++-
> >  drivers/acpi/tables/tbutils.c       |    3 ++-
> >  include/acpi/acpixf.h               |    1 +
> >  5 files changed, 11 insertions(+), 2 deletions(-)
> >
> > Index: linux-2.6/arch/x86/kernel/acpi/boot.c
> > ===================================================================
> > --- linux-2.6.orig/arch/x86/kernel/acpi/boot.c
> > +++ linux-2.6/arch/x86/kernel/acpi/boot.c
> > @@ -47,7 +47,7 @@
> >  #endif
> >
> >  static int __initdata acpi_force = 0;
> > -
> > +u32 acpi_rsdt_forced;
> >  #ifdef	CONFIG_ACPI
> >  int acpi_disabled = 0;
> >  #else
> > @@ -1783,6 +1783,10 @@ static int __init parse_acpi(char *arg)
> >  			disable_acpi();
> >  		acpi_ht = 1;
> >  	}
> > +	/* acpi=rsdt use RSDT instead of XSDT */
> > +	else if (strcmp(arg, "rsdt") == 0) {
> > +		acpi_rsdt_forced = 1;
> > +	}
> >  	/* "acpi=noirq" disables ACPI interrupt routing */
> >  	else if (strcmp(arg, "noirq") == 0) {
> >  		acpi_noirq_set();
> > Index: linux-2.6/include/acpi/acpixf.h
> > ===================================================================
> > --- linux-2.6.orig/include/acpi/acpixf.h
> > +++ linux-2.6/include/acpi/acpixf.h
> > @@ -48,6 +48,7 @@
> >  #include "actypes.h"
> >  #include "actbl.h"
> >
> > +extern u32 acpi_rsdt_forced;
> >  /*
> >   * Global interfaces
> >   */
> > Index: linux-2.6/drivers/acpi/tables/tbutils.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/tables/tbutils.c
> > +++ linux-2.6/drivers/acpi/tables/tbutils.c
> > @@ -420,7 +420,8 @@ acpi_tb_parse_root_table(acpi_physical_a
> >
> >  	/* Differentiate between RSDT and XSDT root tables */
> >
> > -	if (rsdp->revision > 1 && rsdp->xsdt_physical_address) {
> > +	if (rsdp->revision > 1 && rsdp->xsdt_physical_address
> > +			&& !acpi_rsdt_forced) {
> >  		/*
> >  		 * Root table is an XSDT (64-bit physical addresses). We must use the
> >  		 * XSDT if the revision is > 1 and the XSDT pointer is present, as per
> > Index: linux-2.6/arch/ia64/kernel/acpi.c
> > ===================================================================
> > --- linux-2.6.orig/arch/ia64/kernel/acpi.c
> > +++ linux-2.6/arch/ia64/kernel/acpi.c
> > @@ -65,6 +65,7 @@ EXPORT_SYMBOL(pm_idle);
> >  void (*pm_power_off) (void);
> >  EXPORT_SYMBOL(pm_power_off);
> >
> > +u32 acpi_rsdt_forced;
> >  unsigned int acpi_cpei_override;
> >  unsigned int acpi_cpei_phys_cpuid;
> >
> > Index: linux-2.6/Documentation/kernel-parameters.txt
> > ===================================================================
> > --- linux-2.6.orig/Documentation/kernel-parameters.txt
> > +++ linux-2.6/Documentation/kernel-parameters.txt
> > @@ -139,6 +139,8 @@ and is between 256 and 4096 characters.
> >  			ht -- run only enough ACPI to enable Hyper Threading
> >  			strict -- Be less tolerant of platforms that are not
> >  				strictly ACPI specification compliant.
> > +			rsdt -- RSDT is used instead of XSDT table when both
> > +				exits.
> >
> >  			See also Documentation/power/pm.txt, pci=noacpi
> >
> >
> >



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Blacklist known broken machines to use the rsdt and enabled Cstates on R40e
  2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
  2009-01-09  6:35   ` Len Brown
@ 2009-01-09 10:58   ` Thomas Renninger
  2009-01-09 10:58   ` [PATCH 1/2] Blacklist known broken machines (ThinkPad R40e and R50e) to use rsdt instead xsdt Thomas Renninger
  2009-01-09 10:58   ` [PATCH 2/2] R40e using rsdt (previous patch) makes all Cstates work -> remove blacklisting Thomas Renninger
  3 siblings, 0 replies; 37+ messages in thread
From: Thomas Renninger @ 2009-01-09 10:58 UTC (permalink / raw)
  To: lenb, yakui.zhao, Linux-acpi, me, ibm-acpi-devel

Compile tested.
The R40e was verified some time ago, with an earlier patchset, to have working
C-states when the RSDT is used.
The R50e was verified to boot much faster when using the RSDT.
Lenovo confirmed that this is due to BIOS bugs (wrong 64-bit addresses in a
FADT pointed to by the XSDT) which will not be fixed, because the machines
are too old.

    Thomas



^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH 1/2] Blacklist known broken machines (ThinkPad R40e and R50e) to use rsdt instead xsdt
  2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
  2009-01-09  6:35   ` Len Brown
  2009-01-09 10:58   ` Blacklist known broken machines to use the rsdt and enabled Cstates on R40e Thomas Renninger
@ 2009-01-09 10:58   ` Thomas Renninger
  2009-01-09 10:58   ` [PATCH 2/2] R40e using rsdt (previous patch) makes all Cstates work -> remove blacklisting Thomas Renninger
  3 siblings, 0 replies; 37+ messages in thread
From: Thomas Renninger @ 2009-01-09 10:58 UTC (permalink / raw)
  To: lenb, yakui.zhao, Linux-acpi, me, ibm-acpi-devel; +Cc: Thomas Renninger

Signed-off-by: Thomas Renninger <trenn@suse.de>
---
 drivers/acpi/blacklist.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
index 09c6980..b64bdeb 100644
--- a/drivers/acpi/blacklist.c
+++ b/drivers/acpi/blacklist.c
@@ -183,6 +183,13 @@ static int __init dmi_disable_osi_vista(const struct dmi_system_id *d)
 	return 0;
 }
 
+static int __init dmi_use_rsdt(const struct dmi_system_id *d)
+{
+	printk(KERN_NOTICE PREFIX "DMI detected: %s, using rsdt\n", d->ident);
+	acpi_rsdt_forced = 1;
+	return 0;
+}
+
 static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
 	{
 	.callback = dmi_disable_osi_vista,
@@ -235,6 +242,20 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
 		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X61"),
 		},
 	},
+	{
+		.callback = dmi_use_rsdt,
+		.ident = "ThinkPad R40e family", /* R40e, broken C-states */
+		.matches = {
+		     DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+		     DMI_MATCH(DMI_BIOS_VERSION, "1SET")},
+	},
+	{
+		.callback = dmi_use_rsdt,
+		.ident = "ThinkPad R50e family", /* R50e, slow booting */
+		.matches = {
+		     DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
+		     DMI_MATCH(DMI_BIOS_VERSION, "1WET")},
+	},
 	{}
 };
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 2/2] R40e using rsdt (previous patch) makes all Cstates work -> remove blacklisting
  2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
                     ` (2 preceding siblings ...)
  2009-01-09 10:58   ` [PATCH 1/2] Blacklist known broken machines (ThinkPad R40e and R50e) to use rsdt instead xsdt Thomas Renninger
@ 2009-01-09 10:58   ` Thomas Renninger
  3 siblings, 0 replies; 37+ messages in thread
From: Thomas Renninger @ 2009-01-09 10:58 UTC (permalink / raw)
  To: lenb, yakui.zhao, Linux-acpi, me, ibm-acpi-devel; +Cc: Thomas Renninger

Signed-off-by: Thomas Renninger <trenn@suse.de>
---
 drivers/acpi/processor_idle.c |   51 -----------------------------------------
 1 files changed, 0 insertions(+), 51 deletions(-)

diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index 66a9d81..93bd366 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -127,57 +127,6 @@ static int set_max_cstate(const struct dmi_system_id *id)
 /* Actually this shouldn't be __cpuinitdata, would be better to fix the
    callers to only run once -AK */
 static struct dmi_system_id __cpuinitdata processor_power_dmi_table[] = {
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET70WW")}, (void *)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET60WW")}, (void *)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET43WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET45WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET47WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET50WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET52WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET55WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET56WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET59WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET60WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET61WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET62WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET64WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET65WW") }, (void*)1},
-	{ set_max_cstate, "IBM ThinkPad R40e", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"1SET68WW") }, (void*)1},
-	{ set_max_cstate, "Medion 41700", {
-	  DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
-	  DMI_MATCH(DMI_BIOS_VERSION,"R01-A1J")}, (void *)1},
 	{ set_max_cstate, "Clevo 5600D", {
 	  DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
 	  DMI_MATCH(DMI_BIOS_VERSION,"SHE845M0.86C.0013.D.0302131307")},
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-09 10:54     ` Thomas Renninger
@ 2009-01-09 10:59       ` Len Brown
  2009-01-09 12:16         ` Thomas Renninger
  0 siblings, 1 reply; 37+ messages in thread
From: Len Brown @ 2009-01-09 10:59 UTC (permalink / raw)
  To: Thomas Renninger
  Cc: Zhao Yakui, Linux-acpi@vger.kernel.org, me, ibm-acpi-devel



> > >    In such case the boot option of "acpi=rsdt" is provided so that
> > > RSDT is tried instead of XSDT table when the system can't work well.
> 
> Great, now we still need a dmi blacklist with machines which are known
> broken. Then we are at the patchset I posted about a year ago -> scnr.
> 
> I am going to post the missing two patches to make R40e and R50e work.

please do not.

this option is just for manual debugging.
as i wrote when i nacked your original series,
the object should be to figure out which path to take w/o using dmi.

thanks,
-Len


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-09 10:59       ` Len Brown
@ 2009-01-09 12:16         ` Thomas Renninger
  2009-01-09 12:34           ` Matthew Garrett
  0 siblings, 1 reply; 37+ messages in thread
From: Thomas Renninger @ 2009-01-09 12:16 UTC (permalink / raw)
  To: Len Brown; +Cc: Zhao Yakui, Linux-acpi@vger.kernel.org, me, ibm-acpi-devel

On Friday 09 January 2009 11:59:28 Len Brown wrote:
> > > >    In such case the boot option of "acpi=rsdt" is provided so that
> > > > RSDT is tried instead of XSDT table when the system can't work well.
> >
> > Great, now we still need a dmi blacklist with machines which are known
> > broken. Then we are at the patchset I posted about a year ago -> scnr.
> >
> > I am going to post the missing two patches to make R40e and R50e work.
>
> please do not.
>
> this option is just for manual debugging.
> as i wrote when i nacked your original series,
> the object should be to figure out which path to take w/o using dmi.

It's not possible (maybe someone has an idea; there hasn't been one yet).
Either you always use it or you don't; you cannot check for much, because it
is too early.
If it's always used, I expect it's not the RSDT that should be taken, but
the 32-bit addresses of the RSDT/XSDT.

IMO this cannot be done in general, because chances are high that machines
which do not support Windows will break. Chances are high that a machine
which does not support Windows uses 64-bit addresses and leaves the 32-bit
ones uninitialized, a spec-valid configuration which would then break.

Also: until the objective (figure out which path to take w/o using DMI) is
achieved, these machines should work properly, right?
This task does not seem to be easy and has already gone through several
kernel iterations without success.
So why can't these simple patches just be added and the machines fixed, until
eventually at some point this is solved in general?

     Thomas

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-09 12:16         ` Thomas Renninger
@ 2009-01-09 12:34           ` Matthew Garrett
  2009-01-12 14:13             ` Thomas Renninger
  0 siblings, 1 reply; 37+ messages in thread
From: Matthew Garrett @ 2009-01-09 12:34 UTC (permalink / raw)
  To: Thomas Renninger
  Cc: Len Brown, Zhao Yakui, Linux-acpi@vger.kernel.org, me,
	ibm-acpi-devel

On Fri, Jan 09, 2009 at 01:16:15PM +0100, Thomas Renninger wrote:

> IMO this cannot generally be done, because chances are high that machines
> which do not support Windows likely will break.
> Chances are high that a machine which does not support Windows uses 64 bit
> addresses and leaves the 32 bit ones uninitialized, a spec valid configuration 
> which will then break.

Bear in mind that the values in the 32-bit entries are *io port* 
addresses, not physical memory addresses. There's only 16 bits of io 
ports, so the probability of the 64-bit values being programmed 
correctly and the 32-bit ones containing a valid but not-working set is 
tiny. If you know of any machines that behave this way, I'd be 
impressed - and it'll be far easier to dmi whitelist them than the other 
way around.

-- 
Matthew Garrett | mjg59@srcf.ucam.org

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
                   ` (5 preceding siblings ...)
  2009-01-04  4:04 ` Zhao Yakui
@ 2009-01-12  7:07 ` Zhao Yakui
  2009-01-12  7:58   ` Rafael J. Wysocki
  2009-01-12 22:09   ` Pallipadi, Venkatesh
  2009-01-13  3:50 ` [RESEND] " Zhao Yakui
  7 siblings, 2 replies; 37+ messages in thread
From: Zhao Yakui @ 2009-01-12  7:07 UTC (permalink / raw)
  To: LenBrown; +Cc: linux-acpi, venkatesh.pallipadi

Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
From: Zhao Yakui <yakui.zhao@intel.com>

    On most boxes the ACPI PM timer is a 24-bit counter that runs on a
3.579545 MHz clock. In that case the maximum C-state sleep time it can record
for C2/C3 duration is less than 4687 ms.
    But on some boxes the actual C-state sleep time exceeds 4687 ms. The
counter then overflows and the C-state duration time can't be counted
accurately.
    Use the clocksource to get the C-state time instead of the ACPI PM timer.

Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
Signed-off-by: Alexs Shi <alex.shi@intel.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
 drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
 1 file changed, 42 insertions(+), 26 deletions(-)

Index: linux-2.6/drivers/acpi/processor_idle.c
===================================================================
--- linux-2.6.orig/drivers/acpi/processor_idle.c
+++ linux-2.6/drivers/acpi/processor_idle.c
@@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
 	struct acpi_processor_cx *cx = NULL;
 	struct acpi_processor_cx *next_state = NULL;
 	int sleep_ticks = 0;
-	u32 t1, t2 = 0;
-
+	ktime_t kt1, kt2;
+	u64 sleep_interval;
 	/*
 	 * Interrupts must be disabled during bus mastering calculations and
 	 * for C2/C3 transitions.
@@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
 		break;
 
 	case ACPI_STATE_C2:
-		/* Get start time (ticks) */
-		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+		kt1 = ktime_get();
 		/* Tell the scheduler that we are going deep-idle: */
 		sched_clock_idle_sleep_event();
 		/* Invoke C2 */
 		acpi_state_timer_broadcast(pr, cx, 1);
 		acpi_cstate_enter(cx);
 		/* Get end time (ticks) */
-		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
-
+		kt2 = ktime_get();
 #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
 		/* TSC halts in C2, so notify users */
 		if (tsc_halts_in_c(ACPI_STATE_C2))
 			mark_tsc_unstable("possible TSC halt in C2");
 #endif
+		/*
+		 * Use the function of ktime_sub to get the time interval and
+		 * then convert it to microsecond
+		 */
+		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
 		/* Compute time (ticks) that we were actually asleep */
-		sleep_ticks = ticks_elapsed(t1, t2);
+		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 
 		/* Tell the scheduler how much we idled: */
 		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
@@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
 		}
 
 		/* Get start time (ticks) */
-		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+		kt1 = ktime_get();
 		/* Invoke C3 */
 		/* Tell the scheduler that we are going deep-idle: */
 		sched_clock_idle_sleep_event();
 		acpi_cstate_enter(cx);
 		/* Get end time (ticks) */
-		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+		kt2 = ktime_get();
 		if (pr->flags.bm_check && pr->flags.bm_control) {
 			/* Enable bus master arbitration */
 			atomic_dec(&c3_cpu_count);
@@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
 		if (tsc_halts_in_c(ACPI_STATE_C3))
 			mark_tsc_unstable("TSC halts in C3");
 #endif
+		/*
+		 * Use the function of ktime_sub to get the time interval
+		 * and convert it to microsecond.
+		 */
+		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
 		/* Compute time (ticks) that we were actually asleep */
-		sleep_ticks = ticks_elapsed(t1, t2);
+		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 		/* Tell the scheduler how much we idled: */
 		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
@@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
 static int acpi_idle_enter_c1(struct cpuidle_device *dev,
 			      struct cpuidle_state *state)
 {
-	u32 t1, t2;
+	ktime_t kt1, kt2;
+	u64 sleep_interval;
 	struct acpi_processor *pr;
 	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
 
@@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
 	if (pr->flags.bm_check)
 		acpi_idle_update_bm_rld(pr, cx);
 
-	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt1 = ktime_get();
 	acpi_idle_do_entry(cx);
-	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt2 = ktime_get();
 
 	local_irq_enable();
 	cx->usage++;
-
-	return ticks_elapsed_in_us(t1, t2);
+	/*
+	 * Use the ktime_sub to get the time interval and then convert
+	 * it to microseconds
+	 */
+	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
+	return sleep_interval;
 }
 
 /**
@@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
 {
 	struct acpi_processor *pr;
 	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
-	u32 t1, t2;
+	ktime_t kt1, kt2;
+	u64 sleep_interval;
 	int sleep_ticks = 0;
 
 	pr = __get_cpu_var(processors);
@@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
 	if (cx->type == ACPI_STATE_C3)
 		ACPI_FLUSH_CPU_CACHE();
 
-	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt1 = ktime_get();
 	/* Tell the scheduler that we are going deep-idle: */
 	sched_clock_idle_sleep_event();
 	acpi_idle_do_entry(cx);
-	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt2 = ktime_get();
 
 #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
 	/* TSC could halt in idle, so notify users */
 	if (tsc_halts_in_c(cx->type))
 		mark_tsc_unstable("TSC halts in idle");;
 #endif
-	sleep_ticks = ticks_elapsed(t1, t2);
+	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
+	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 
 	/* Tell the scheduler how much we idled: */
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
@@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
 
 	acpi_state_timer_broadcast(pr, cx, 0);
 	cx->time += sleep_ticks;
-	return ticks_elapsed_in_us(t1, t2);
+	return sleep_interval;
 }
 
 static int c3_cpu_count;
@@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
 {
 	struct acpi_processor *pr;
 	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
-	u32 t1, t2;
+	ktime_t  kt1, kt2;
+	u64 sleep_interval;
 	int sleep_ticks = 0;
 
 	pr = __get_cpu_var(processors);
@@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
 	} else if (!pr->flags.bm_check) {
 		ACPI_FLUSH_CPU_CACHE();
 	}
-
-	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt1 = ktime_get();
 	acpi_idle_do_entry(cx);
-	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt2 = ktime_get();
 
 	/* Re-enable bus master arbitration */
 	if (pr->flags.bm_check && pr->flags.bm_control) {
@@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
 	if (tsc_halts_in_c(ACPI_STATE_C3))
 		mark_tsc_unstable("TSC halts in idle");
 #endif
-	sleep_ticks = ticks_elapsed(t1, t2);
+	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
+	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 	/* Tell the scheduler how much we idled: */
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
@@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
 
 	acpi_state_timer_broadcast(pr, cx, 0);
 	cx->time += sleep_ticks;
-	return ticks_elapsed_in_us(t1, t2);
+	return sleep_interval;
 }
 
 struct cpuidle_driver acpi_idle_driver = {



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12  7:07 ` [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer Zhao Yakui
@ 2009-01-12  7:58   ` Rafael J. Wysocki
  2009-01-12  9:31     ` Zhao Yakui
  2009-01-12  9:39     ` Zhao Yakui
  2009-01-12 22:09   ` Pallipadi, Venkatesh
  1 sibling, 2 replies; 37+ messages in thread
From: Rafael J. Wysocki @ 2009-01-12  7:58 UTC (permalink / raw)
  To: Zhao Yakui; +Cc: LenBrown, linux-acpi, venkatesh.pallipadi

On Monday 12 January 2009, Zhao Yakui wrote:
> Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> From: Zhao Yakui <yakui.zhao@intel.com>
> 
>     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> clock. In such case the max C-state sleep time should be less than 4687ms when
> it is used to record C2/C3 duration time. 
>     But on some boxes the max C-state sleep time is more than 4687ms. In such
> case the overflow happens and the C-state duration time can't be counted
> accurately.
>     Use clocksource to get the C-state time instead of ACPI PM timer

I haven't looked at the patch in detail, but what I would do would be to use
the ACPI PM timer by default and _if_ the results from that are not reasonable,
_then_ fall back to the clocksource.  Is this what you have implemented?

Rafael

 
> Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> Signed-off-by: Alexs Shi <alex.shi@intel.com>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> ---
>  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
>  1 file changed, 42 insertions(+), 26 deletions(-)
> 
> Index: linux-2.6/drivers/acpi/processor_idle.c
> ===================================================================
> --- linux-2.6.orig/drivers/acpi/processor_idle.c
> +++ linux-2.6/drivers/acpi/processor_idle.c
> @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
>  	struct acpi_processor_cx *cx = NULL;
>  	struct acpi_processor_cx *next_state = NULL;
>  	int sleep_ticks = 0;
> -	u32 t1, t2 = 0;
> -
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	/*
>  	 * Interrupts must be disabled during bus mastering calculations and
>  	 * for C2/C3 transitions.
> @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
>  		break;
>  
>  	case ACPI_STATE_C2:
> -		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get();
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		/* Invoke C2 */
>  		acpi_state_timer_broadcast(pr, cx, 1);
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> -
> +		kt2 = ktime_get();
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  		/* TSC halts in C2, so notify users */
>  		if (tsc_halts_in_c(ACPI_STATE_C2))
>  			mark_tsc_unstable("possible TSC halt in C2");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval and
> +		 * then convert it to microsecond
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
>  		}
>  
>  		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get();
>  		/* Invoke C3 */
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt2 = ktime_get();
>  		if (pr->flags.bm_check && pr->flags.bm_control) {
>  			/* Enable bus master arbitration */
>  			atomic_dec(&c3_cpu_count);
> @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
>  		if (tsc_halts_in_c(ACPI_STATE_C3))
>  			mark_tsc_unstable("TSC halts in C3");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval
> +		 * and convert it to microsecond.
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
>  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
>  			      struct cpuidle_state *state)
>  {
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
>  
> @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
>  	if (pr->flags.bm_check)
>  		acpi_idle_update_bm_rld(pr, cx);
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get();
>  
>  	local_irq_enable();
>  	cx->usage++;
> -
> -	return ticks_elapsed_in_us(t1, t2);
> +	/*
> +	 * Use the ktime_sub to get the time interval and then convert
> +	 * it to microseconds
> +	 */
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	return sleep_interval;
>  }
>  
>  /**
> @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
>  	if (cx->type == ACPI_STATE_C3)
>  		ACPI_FLUSH_CPU_CACHE();
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get();
>  	/* Tell the scheduler that we are going deep-idle: */
>  	sched_clock_idle_sleep_event();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get();
>  
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  	/* TSC could halt in idle, so notify users */
>  	if (tsc_halts_in_c(cx->type))
>  		mark_tsc_unstable("TSC halts in idle");;
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  static int c3_cpu_count;
> @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t  kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
>  	} else if (!pr->flags.bm_check) {
>  		ACPI_FLUSH_CPU_CACHE();
>  	}
> -
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get();
>  
>  	/* Re-enable bus master arbitration */
>  	if (pr->flags.bm_check && pr->flags.bm_control) {
> @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
>  	if (tsc_halts_in_c(ACPI_STATE_C3))
>  		mark_tsc_unstable("TSC halts in idle");
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  struct cpuidle_driver acpi_idle_driver = {
> 
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12  7:58   ` Rafael J. Wysocki
@ 2009-01-12  9:31     ` Zhao Yakui
  2009-01-12 12:27       ` Rafael J. Wysocki
  2009-01-12  9:39     ` Zhao Yakui
  1 sibling, 1 reply; 37+ messages in thread
From: Zhao Yakui @ 2009-01-12  9:31 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: LenBrown, linux-acpi@vger.kernel.org, Pallipadi, Venkatesh

On Mon, 2009-01-12 at 15:58 +0800, Rafael J. Wysocki wrote:
> On Monday 12 January 2009, Zhao Yakui wrote:
> > Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> > From: Zhao Yakui <yakui.zhao@intel.com>
> > 
> >     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> > clock. In such case the max C-state sleep time should be less than 4687ms when
> > it is used to record C2/C3 duration time. 
> >     But on some boxes the max C-state sleep time is more than 4687ms. In such
> > case the overflow happens and the C-state duration time can't be counted
> > accurately.
> >     Use clocksource to get the C-state time instead of ACPI PM timer
> 
> I haven't looked at the patch in detail, but what I would do would be to use
> the ACPI PM timer by default and _if_ the results from that are not reasonable,
> _then_ fall back to the clocksource.  Is this what you have implemented?

What you suggest would also work, but it is not easy to detect whether the
result from the ACPI PM timer is unreasonable. To do so, another
clocksource (for example HPET) would also have to be used in the cpuidle
driver and compared with the ACPI PM timer. (Sometimes the ACPI PM timer is
itself the current clocksource, in which case the comparison would be
redundant.)

So the current clocksource will be used to count the C-state time instead of
the ACPI PM timer. If the current clocksource is changed by the user, the OS
will select a reliable clocksource as the current one.

Best regards.
   Yakui.
> 
> Rafael
> 
>  
> > Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> > Signed-off-by: Alexs Shi <alex.shi@intel.com>
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > ---
> >  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
> >  1 file changed, 42 insertions(+), 26 deletions(-)
> > 
> > Index: linux-2.6/drivers/acpi/processor_idle.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/processor_idle.c
> > +++ linux-2.6/drivers/acpi/processor_idle.c
> > @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
> >  	struct acpi_processor_cx *cx = NULL;
> >  	struct acpi_processor_cx *next_state = NULL;
> >  	int sleep_ticks = 0;
> > -	u32 t1, t2 = 0;
> > -
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	/*
> >  	 * Interrupts must be disabled during bus mastering calculations and
> >  	 * for C2/C3 transitions.
> > @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
> >  		break;
> >  
> >  	case ACPI_STATE_C2:
> > -		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		/* Invoke C2 */
> >  		acpi_state_timer_broadcast(pr, cx, 1);
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > -
> > +		kt2 = ktime_get();
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  		/* TSC halts in C2, so notify users */
> >  		if (tsc_halts_in_c(ACPI_STATE_C2))
> >  			mark_tsc_unstable("possible TSC halt in C2");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval and
> > +		 * then convert it to microsecond
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
> >  		}
> >  
> >  		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Invoke C3 */
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt2 = ktime_get();
> >  		if (pr->flags.bm_check && pr->flags.bm_control) {
> >  			/* Enable bus master arbitration */
> >  			atomic_dec(&c3_cpu_count);
> > @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
> >  		if (tsc_halts_in_c(ACPI_STATE_C3))
> >  			mark_tsc_unstable("TSC halts in C3");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval
> > +		 * and convert it to microsecond.
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
> >  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
> >  			      struct cpuidle_state *state)
> >  {
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> >  
> > @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
> >  	if (pr->flags.bm_check)
> >  		acpi_idle_update_bm_rld(pr, cx);
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	local_irq_enable();
> >  	cx->usage++;
> > -
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	/*
> > +	 * Use the ktime_sub to get the time interval and then convert
> > +	 * it to microseconds
> > +	 */
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	return sleep_interval;
> >  }
> >  
> >  /**
> > @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
> >  	if (cx->type == ACPI_STATE_C3)
> >  		ACPI_FLUSH_CPU_CACHE();
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	/* Tell the scheduler that we are going deep-idle: */
> >  	sched_clock_idle_sleep_event();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  	/* TSC could halt in idle, so notify users */
> >  	if (tsc_halts_in_c(cx->type))
> >  		mark_tsc_unstable("TSC halts in idle");;
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  static int c3_cpu_count;
> > @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t  kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
> >  	} else if (!pr->flags.bm_check) {
> >  		ACPI_FLUSH_CPU_CACHE();
> >  	}
> > -
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	/* Re-enable bus master arbitration */
> >  	if (pr->flags.bm_check && pr->flags.bm_control) {
> > @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  	if (tsc_halts_in_c(ACPI_STATE_C3))
> >  		mark_tsc_unstable("TSC halts in idle");
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  struct cpuidle_driver acpi_idle_driver = {
> > 
> > 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-acpi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12  7:58   ` Rafael J. Wysocki
  2009-01-12  9:31     ` Zhao Yakui
@ 2009-01-12  9:39     ` Zhao Yakui
  1 sibling, 0 replies; 37+ messages in thread
From: Zhao Yakui @ 2009-01-12  9:39 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: LenBrown, linux-acpi@vger.kernel.org, Pallipadi, Venkatesh

On Mon, 2009-01-12 at 15:58 +0800, Rafael J. Wysocki wrote:
> On Monday 12 January 2009, Zhao Yakui wrote:
> > Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> > From: Zhao Yakui <yakui.zhao@intel.com>
> > 
> >     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> > clock. In such case the max C-state sleep time should be less than 4687ms when
> > it is used to record C2/C3 duration time. 
> >     But on some boxes the max C-state sleep time is more than 4687ms. In such
> > case the overflow happens and the C-state duration time can't be counted
> > accurately.
> >     Use clocksource to get the C-state time instead of ACPI PM timer
> 
> I haven't looked at the patch in detail, but what I would do would be to use
> the ACPI PM timer by default and _if_ the results from that are not reasonable,
> _then_ fall back to the clocksource.  Is this what you have implemented?

What you suggest would also work, but it is not easy to detect whether the
result from the ACPI PM timer is unreasonable. To do so, another
clocksource (for example HPET) would also have to be used in the cpuidle
driver and compared with the ACPI PM timer. (Sometimes the ACPI PM timer is
itself the current clocksource, in which case the comparison would be
redundant.)

So the current clocksource will be used to count the C-state time instead of
the ACPI PM timer. If the current clocksource is not changed by the user,
the OS will select the most reliable clocksource as the current one.
> Rafael
> 
>  
> > Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> > Signed-off-by: Alexs Shi <alex.shi@intel.com>
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > ---
> >  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
> >  1 file changed, 42 insertions(+), 26 deletions(-)
> > 
> > Index: linux-2.6/drivers/acpi/processor_idle.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/processor_idle.c
> > +++ linux-2.6/drivers/acpi/processor_idle.c
> > @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
> >  	struct acpi_processor_cx *cx = NULL;
> >  	struct acpi_processor_cx *next_state = NULL;
> >  	int sleep_ticks = 0;
> > -	u32 t1, t2 = 0;
> > -
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	/*
> >  	 * Interrupts must be disabled during bus mastering calculations and
> >  	 * for C2/C3 transitions.
> > @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
> >  		break;
> >  
> >  	case ACPI_STATE_C2:
> > -		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		/* Invoke C2 */
> >  		acpi_state_timer_broadcast(pr, cx, 1);
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > -
> > +		kt2 = ktime_get();
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  		/* TSC halts in C2, so notify users */
> >  		if (tsc_halts_in_c(ACPI_STATE_C2))
> >  			mark_tsc_unstable("possible TSC halt in C2");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval and
> > +		 * then convert it to microsecond
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
> >  		}
> >  
> >  		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Invoke C3 */
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt2 = ktime_get();
> >  		if (pr->flags.bm_check && pr->flags.bm_control) {
> >  			/* Enable bus master arbitration */
> >  			atomic_dec(&c3_cpu_count);
> > @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
> >  		if (tsc_halts_in_c(ACPI_STATE_C3))
> >  			mark_tsc_unstable("TSC halts in C3");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval
> > +		 * and convert it to microsecond.
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
> >  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
> >  			      struct cpuidle_state *state)
> >  {
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> >  
> > @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
> >  	if (pr->flags.bm_check)
> >  		acpi_idle_update_bm_rld(pr, cx);
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	local_irq_enable();
> >  	cx->usage++;
> > -
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	/*
> > +	 * Use the ktime_sub to get the time interval and then convert
> > +	 * it to microseconds
> > +	 */
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	return sleep_interval;
> >  }
> >  
> >  /**
> > @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
> >  	if (cx->type == ACPI_STATE_C3)
> >  		ACPI_FLUSH_CPU_CACHE();
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	/* Tell the scheduler that we are going deep-idle: */
> >  	sched_clock_idle_sleep_event();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  	/* TSC could halt in idle, so notify users */
> >  	if (tsc_halts_in_c(cx->type))
> >  		mark_tsc_unstable("TSC halts in idle");;
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  static int c3_cpu_count;
> > @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t  kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
> >  	} else if (!pr->flags.bm_check) {
> >  		ACPI_FLUSH_CPU_CACHE();
> >  	}
> > -
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	/* Re-enable bus master arbitration */
> >  	if (pr->flags.bm_check && pr->flags.bm_control) {
> > @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  	if (tsc_halts_in_c(ACPI_STATE_C3))
> >  		mark_tsc_unstable("TSC halts in idle");
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  struct cpuidle_driver acpi_idle_driver = {
> > 
> > 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-acpi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12  9:31     ` Zhao Yakui
@ 2009-01-12 12:27       ` Rafael J. Wysocki
  0 siblings, 0 replies; 37+ messages in thread
From: Rafael J. Wysocki @ 2009-01-12 12:27 UTC (permalink / raw)
  To: Zhao Yakui; +Cc: LenBrown, linux-acpi@vger.kernel.org, Pallipadi, Venkatesh

On Monday 12 January 2009, Zhao Yakui wrote:
> On Mon, 2009-01-12 at 15:58 +0800, Rafael J. Wysocki wrote:
> > On Monday 12 January 2009, Zhao Yakui wrote:
> > > Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> > > From: Zhao Yakui <yakui.zhao@intel.com>
> > > 
> > >     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> > > clock. In such case the max C-state sleep time should be less than 4687ms when
> > > it is used to record C2/C3 duration time. 
> > >     But on some boxes the max C-state sleep time is more than 4687ms. In such
> > > case the overflow happens and the C-state duration time can't be counted
> > > accurately.
> > >     Use clocksource to get the C-state time instead of ACPI PM timer
> > 
> > I haven't looked at the patch in detail, but what I would do would be to use
> > the ACPI PM timer by default and _if_ the results from that are not reasonable,
> > _then_ fall back to the clocksource.  Is this what you have implemented?
> 
> What you said is also OK. But it is not easy to detect whether the
> result by using ACPI PM timer is not reasonable.  If doing so, another
> clocksource(for example: hpet) should also be used in cpuidle driver and
> compared with ACPI PM timer. (Sometimes the ACPI PM timer will be used
> as the current clocksource. In such case it will be duplicated).
> 
> So the current clocksource will be used to count the C-state time
> instead of ACPI PM timer. If the current clocksource is changed by user,
> OS will select the reliable clocksource as the current one.

I understand the idea.  My point was to stick to the current behavior where
it is known to work and only change the behavior when it doesn't work.  Still,
if that is difficult to implement, the idea to always use the clocksource is
probably better than the current situation.

Of course, if it happens to introduce regressions you'll have to consider the
other approach anyway.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-09 12:34           ` Matthew Garrett
@ 2009-01-12 14:13             ` Thomas Renninger
  2009-01-12 14:16               ` Matthew Garrett
  0 siblings, 1 reply; 37+ messages in thread
From: Thomas Renninger @ 2009-01-12 14:13 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: Len Brown, Zhao Yakui, Linux-acpi@vger.kernel.org, me,
	ibm-acpi-devel

On Friday 09 January 2009 13:34:16 Matthew Garrett wrote:
> On Fri, Jan 09, 2009 at 01:16:15PM +0100, Thomas Renninger wrote:
> > IMO this cannot generally be done, because chances are high that machines
> > which do not support Windows likely will break.
> > Chances are high that a machine which does not support Windows uses 64
> > bit addresses and leaves the 32 bit ones uninitialized, a spec valid
> > configuration which will then break.
>
> Bear in mind that the values in the 32-bit entries are *io port*
> addresses, not physical memory addresses. There's only 16 bits of io
> ports, so the probability of the 64-bit values being programmed
> correctly and the 32-bit ones containing a valid but not-working set is
> tiny.
This is not about programming, but about a vendor simply filling in FADT
values. Why should someone fill in the 32-bit values if the spec says they
are ignored when you provide 64-bit ones (because Windows is ..., but if you
do not support Windows and just follow the spec...)?

> If you know of any machines that behave this way, I'd be
> impressed - and it'll be far easier to dmi whitelist them than the other
> way around.
What do you mean by "easy"?

99.998% of all machines work fine with the current, spec-conforming
implementation.
There are the two ThinkPads, for which the vendor admitted that the BIOS is
broken, which work better when blacklisted to use the RSDT.

You want to change a default which is field-tested and known to work on
all BIOSes (besides the two five-year-old broken ThinkPads)?

   Thomas

PS: Leave the ThinkPads broken..., I am tired of arguing about this ...

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-12 14:13             ` Thomas Renninger
@ 2009-01-12 14:16               ` Matthew Garrett
  2009-01-12 22:17                 ` Thomas Renninger
  0 siblings, 1 reply; 37+ messages in thread
From: Matthew Garrett @ 2009-01-12 14:16 UTC (permalink / raw)
  To: Thomas Renninger
  Cc: Len Brown, Zhao Yakui, Linux-acpi@vger.kernel.org, me,
	ibm-acpi-devel

On Mon, Jan 12, 2009 at 03:13:04PM +0100, Thomas Renninger wrote:
> On Friday 09 January 2009 13:34:16 Matthew Garrett wrote:
> > If you know of any machines that behave this way, I'd be
> > impressed - and it'll be far easier to dmi whitelist them than the other
> > way around.
> What do you mean with "easy"?
> 
> 99.998% of all machines work fine with the current, spec conform 
> implementation.
> There are the two ThinkPads for which the vendor admitted that the BIOS is 
> broken which work better with the rsdt blacklisted.

Two Thinkpads, an entire range of HP workstations, how many others? We 
have no idea which bugs are being triggered by this. Refusing to fix it 
because we'll potentially break some machines that may not even exist is 
not a sensible plan.

-- 
Matthew Garrett | mjg59@srcf.ucam.org

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12  7:07 ` [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer Zhao Yakui
  2009-01-12  7:58   ` Rafael J. Wysocki
@ 2009-01-12 22:09   ` Pallipadi, Venkatesh
  2009-01-13  1:26     ` Zhao Yakui
  2009-01-13  1:42     ` Zhao Yakui
  1 sibling, 2 replies; 37+ messages in thread
From: Pallipadi, Venkatesh @ 2009-01-12 22:09 UTC (permalink / raw)
  To: Zhao, Yakui; +Cc: LenBrown, linux-acpi@vger.kernel.org, Pallipadi, Venkatesh



Patch looks ok in general. However, I am wondering whether we really need
ktime_get() here or whether we can use getnstimeofday() directly. Using
getnstimeofday() should be cheaper, especially if we are doing frequent
calls from multiple CPUs on every idle enter/exit. And I don't think we
need to worry about the monotonic clock from ktime here...

Thanks,
Venki

On Sun, Jan 11, 2009 at 11:07:50PM -0800, Zhao, Yakui wrote:
> Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> From: Zhao Yakui <yakui.zhao@intel.com>
> 
>     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> clock. In such case the max C-state sleep time should be less than 4687ms when
> it is used to record C2/C3 duration time. 
>     But on some boxes the max C-state sleep time is more than 4687ms. In such
> case the overflow happens and the C-state duration time can't be counted
> accurately.
>     Use clocksource to get the C-state time instead of ACPI PM timer
> 
> Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> Signed-off-by: Alexs Shi <alex.shi@intel.com>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> ---
>  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
>  1 file changed, 42 insertions(+), 26 deletions(-)
> 
> Index: linux-2.6/drivers/acpi/processor_idle.c
> ===================================================================
> --- linux-2.6.orig/drivers/acpi/processor_idle.c
> +++ linux-2.6/drivers/acpi/processor_idle.c
> @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
>  	struct acpi_processor_cx *cx = NULL;
>  	struct acpi_processor_cx *next_state = NULL;
>  	int sleep_ticks = 0;
> -	u32 t1, t2 = 0;
> -
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	/*
>  	 * Interrupts must be disabled during bus mastering calculations and
>  	 * for C2/C3 transitions.
> @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
>  		break;
>  
>  	case ACPI_STATE_C2:
> -		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get();
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		/* Invoke C2 */
>  		acpi_state_timer_broadcast(pr, cx, 1);
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> -
> +		kt2 = ktime_get();
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  		/* TSC halts in C2, so notify users */
>  		if (tsc_halts_in_c(ACPI_STATE_C2))
>  			mark_tsc_unstable("possible TSC halt in C2");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval and
> +		 * then convert it to microsecond
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
>  		}
>  
>  		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get();
>  		/* Invoke C3 */
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt2 = ktime_get();
>  		if (pr->flags.bm_check && pr->flags.bm_control) {
>  			/* Enable bus master arbitration */
>  			atomic_dec(&c3_cpu_count);
> @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
>  		if (tsc_halts_in_c(ACPI_STATE_C3))
>  			mark_tsc_unstable("TSC halts in C3");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval
> +		 * and convert it to microsecond.
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
>  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
>  			      struct cpuidle_state *state)
>  {
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
>  
> @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
>  	if (pr->flags.bm_check)
>  		acpi_idle_update_bm_rld(pr, cx);
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get();
>  
>  	local_irq_enable();
>  	cx->usage++;
> -
> -	return ticks_elapsed_in_us(t1, t2);
> +	/*
> +	 * Use the ktime_sub to get the time interval and then convert
> +	 * it to microseconds
> +	 */
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	return sleep_interval;
>  }
>  
>  /**
> @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
>  	if (cx->type == ACPI_STATE_C3)
>  		ACPI_FLUSH_CPU_CACHE();
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get();
>  	/* Tell the scheduler that we are going deep-idle: */
>  	sched_clock_idle_sleep_event();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get();
>  
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  	/* TSC could halt in idle, so notify users */
>  	if (tsc_halts_in_c(cx->type))
>  		mark_tsc_unstable("TSC halts in idle");;
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  static int c3_cpu_count;
> @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t  kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
>  	} else if (!pr->flags.bm_check) {
>  		ACPI_FLUSH_CPU_CACHE();
>  	}
> -
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get();
>  
>  	/* Re-enable bus master arbitration */
>  	if (pr->flags.bm_check && pr->flags.bm_control) {
> @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
>  	if (tsc_halts_in_c(ACPI_STATE_C3))
>  		mark_tsc_unstable("TSC halts in idle");
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  struct cpuidle_driver acpi_idle_driver = {
> 
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-12 14:16               ` Matthew Garrett
@ 2009-01-12 22:17                 ` Thomas Renninger
  2009-01-12 23:38                   ` Matthew Garrett
  0 siblings, 1 reply; 37+ messages in thread
From: Thomas Renninger @ 2009-01-12 22:17 UTC (permalink / raw)
  To: Matthew Garrett
  Cc: Len Brown, Zhao Yakui, Linux-acpi@vger.kernel.org, me,
	ibm-acpi-devel

On Monday 12 January 2009 03:16:55 pm Matthew Garrett wrote:
> On Mon, Jan 12, 2009 at 03:13:04PM +0100, Thomas Renninger wrote:
> > On Friday 09 January 2009 13:34:16 Matthew Garrett wrote:
> > > If you know of any machines that behave this way, I'd be
> > > impressed - and it'll be far easier to dmi whitelist them than the
> > > other way around.
> >
> > What do you mean with "easy"?
> >
> > 99.998% of all machines work fine with the current, spec conform
> > implementation.
> > There are the two ThinkPads for which the vendor admitted that the BIOS
> > is broken which work better with the rsdt blacklisted.
>
> Two Thinkpads, an entire range of HP workstations, how many others?
You wanted to provide more info about the HPs. Are they even sold? Have you 
informed HP that their BIOSes are broken?
It's easy and safe (also with respect to not breaking Windows) for them to fix
them. If we just wait some more months, I am confident they are going to do
it; tell me privately if you need contacts.

> We have no idea which bugs are being triggered by this. Refusing to fix it 
> because we'll potentially break some machines that may not even exist is 
> not a sensible plan.
The plan is to stick to the spec. Is that so wrong?
There are netbooks which only support Linux, and I hope there will be a lot
more machines supporting only Linux as the OS in the future.
Such Windows-compatibility, spec-violating fixes will break them.
It's the broken BIOSes, the machines on which the vendor never even tried
booting Linux, that need the boot param and blacklist additions...

Maybe we get away with this change this time; maybe we badly hurt someone who
did everything right: stuck to the spec, tested and verified that Linux works
fine.

       Thomas

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt"
  2009-01-12 22:17                 ` Thomas Renninger
@ 2009-01-12 23:38                   ` Matthew Garrett
  0 siblings, 0 replies; 37+ messages in thread
From: Matthew Garrett @ 2009-01-12 23:38 UTC (permalink / raw)
  To: Thomas Renninger
  Cc: Len Brown, Zhao Yakui, Linux-acpi@vger.kernel.org, me,
	ibm-acpi-devel

On Mon, Jan 12, 2009 at 11:17:07PM +0100, Thomas Renninger wrote:
> On Monday 12 January 2009 03:16:55 pm Matthew Garrett wrote:
> > On Mon, Jan 12, 2009 at 03:13:04PM +0100, Thomas Renninger wrote:
> > > 99.998% of all machines work fine with the current, spec conform
> > > implementation.
> > > There are the two ThinkPads for which the vendor admitted that the BIOS
> > > is broken which work better with the rsdt blacklisted.
> >
> > Two Thinkpads, an entire range of HP workstations, how many others?
> You wanted to provide more info about the HPs. Are they even sold? Have you 
> informed HP that their BIOSes are broken?
> It's easy and safe (also in respect of not breaking Windows) for them to fix 
> them. If we do not wait some more months, I am confident they are going to do 
> it, tell me privately if you need contacts.

Yes, this has been discussed with HP. Some of the machines are still 
being sold - some are out of support and so will not receive updated 
BIOSes. http://www.google.com/search?q=32%2F64X%20address%20mismatch 
shows that it's not just limited to Thinkpads and HPs.

> > We have no idea which bugs are being triggered by this. Refusing to fix it 
> > because we'll potentially break some machines that may not even exist is 
> > not a sensible plan.
> The plan is to stay to the spec. Is that so wrong?

When sticking to the spec results in hardware that fails to work, then 
yes, sticking to the spec is the wrong thing to do. We violate all sorts 
of specs in all sorts of places simply because hardware and software are 
written by humans who make mistakes. When producing an operating system, 
the aim is for it to work on the hardware our users want supported.

> There are netbooks which only support Linux and I hope there will be a lot 
> more machines only supporting Linux as OS in the future.
> Such Windows compatibility, Spec violating fixes will break them.
> It's the broken BIOSes, those machines the vendor never ever tried booting 
> Linux which need the boot param and blacklist adding...

Find me one x86 netbook vendor who hasn't validated Windows on their 
system.

> Maybe we come away with this change this time, maybe we hurt someone badly who
> did everything right: stick to the spec, tested and verified that Linux works 
> fine.

Or, more likely, we make this change and people find that their hardware 
starts working.

-- 
Matthew Garrett | mjg59@srcf.ucam.org

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12 22:09   ` Pallipadi, Venkatesh
@ 2009-01-13  1:26     ` Zhao Yakui
  2009-01-13  1:42     ` Zhao Yakui
  1 sibling, 0 replies; 37+ messages in thread
From: Zhao Yakui @ 2009-01-13  1:26 UTC (permalink / raw)
  To: Pallipadi, Venkatesh; +Cc: LenBrown, linux-acpi@vger.kernel.org

On Tue, 2009-01-13 at 06:09 +0800, Pallipadi, Venkatesh wrote:
> 
> Patch looks ok in general. However, I am wondering whether we really need
> ktime_get() here or whether we can use getnstimeofday() directly. Using
It will also be OK to use getnstimeofday().
In fact, the main work of ktime_get() is to call getnstimeofday();
in addition, ktime_get() returns monotonic time.

> getnstimeofday() should be cheaper, especially if we are doing frequent
> calls from multiple CPUs on every idle enter/exit. And I don't think we
> need to worry about monotonic clock from ktime here...

> 
> Thanks,
> Venki
> 
> On Sun, Jan 11, 2009 at 11:07:50PM -0800, Zhao, Yakui wrote:
> > Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> > From: Zhao Yakui <yakui.zhao@intel.com>
> > 
> >     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> > clock. In such case the max C-state sleep time should be less than 4687ms when
> > it is used to record C2/C3 duration time. 
> >     But on some boxes the max C-state sleep time is more than 4687ms. In such
> > case the overflow happens and the C-state duration time can't be counted
> > accurately.
> >     Use clocksource to get the C-state time instead of ACPI PM timer
> > 
> > Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> > Signed-off-by: Alexs Shi <alex.shi@intel.com>
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > ---
> >  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
> >  1 file changed, 42 insertions(+), 26 deletions(-)
> > 
> > Index: linux-2.6/drivers/acpi/processor_idle.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/processor_idle.c
> > +++ linux-2.6/drivers/acpi/processor_idle.c
> > @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
> >  	struct acpi_processor_cx *cx = NULL;
> >  	struct acpi_processor_cx *next_state = NULL;
> >  	int sleep_ticks = 0;
> > -	u32 t1, t2 = 0;
> > -
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	/*
> >  	 * Interrupts must be disabled during bus mastering calculations and
> >  	 * for C2/C3 transitions.
> > @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
> >  		break;
> >  
> >  	case ACPI_STATE_C2:
> > -		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		/* Invoke C2 */
> >  		acpi_state_timer_broadcast(pr, cx, 1);
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > -
> > +		kt2 = ktime_get();
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  		/* TSC halts in C2, so notify users */
> >  		if (tsc_halts_in_c(ACPI_STATE_C2))
> >  			mark_tsc_unstable("possible TSC halt in C2");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval and
> > +		 * then convert it to microsecond
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
> >  		}
> >  
> >  		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Invoke C3 */
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt2 = ktime_get();
> >  		if (pr->flags.bm_check && pr->flags.bm_control) {
> >  			/* Enable bus master arbitration */
> >  			atomic_dec(&c3_cpu_count);
> > @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
> >  		if (tsc_halts_in_c(ACPI_STATE_C3))
> >  			mark_tsc_unstable("TSC halts in C3");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval
> > +		 * and convert it to microsecond.
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
> >  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
> >  			      struct cpuidle_state *state)
> >  {
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> >  
> > @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
> >  	if (pr->flags.bm_check)
> >  		acpi_idle_update_bm_rld(pr, cx);
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	local_irq_enable();
> >  	cx->usage++;
> > -
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	/*
> > +	 * Use the ktime_sub to get the time interval and then convert
> > +	 * it to microseconds
> > +	 */
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	return sleep_interval;
> >  }
> >  
> >  /**
> > @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
> >  	if (cx->type == ACPI_STATE_C3)
> >  		ACPI_FLUSH_CPU_CACHE();
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	/* Tell the scheduler that we are going deep-idle: */
> >  	sched_clock_idle_sleep_event();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  	/* TSC could halt in idle, so notify users */
> >  	if (tsc_halts_in_c(cx->type))
> >  		mark_tsc_unstable("TSC halts in idle");;
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  static int c3_cpu_count;
> > @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t  kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
> >  	} else if (!pr->flags.bm_check) {
> >  		ACPI_FLUSH_CPU_CACHE();
> >  	}
> > -
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	/* Re-enable bus master arbitration */
> >  	if (pr->flags.bm_check && pr->flags.bm_control) {
> > @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  	if (tsc_halts_in_c(ACPI_STATE_C3))
> >  		mark_tsc_unstable("TSC halts in idle");
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  struct cpuidle_driver acpi_idle_driver = {
> > 
> > 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-12 22:09   ` Pallipadi, Venkatesh
  2009-01-13  1:26     ` Zhao Yakui
@ 2009-01-13  1:42     ` Zhao Yakui
  1 sibling, 0 replies; 37+ messages in thread
From: Zhao Yakui @ 2009-01-13  1:42 UTC (permalink / raw)
  To: Pallipadi, Venkatesh; +Cc: LenBrown, linux-acpi@vger.kernel.org

On Tue, 2009-01-13 at 06:09 +0800, Pallipadi, Venkatesh wrote:
> 
> Patch looks ok in general. However, I am wondering whether we really need
> ktime_get() here, or whether we can use getnstimeofday() directly. Using
> getnstimeofday() should be cheaper, especially if we are doing frequent
> calls from multiple CPUs on every idle enter/exit. And I don't think we
> need to worry about the monotonic clock from ktime here...
You are right; it is unnecessary to use the monotonic clock here.
How about using ktime_get_real()? It calls getnstimeofday() internally
and returns the time in ktime format, which makes computing the time
interval convenient.

> 
> Thanks,
> Venki
> 
> On Sun, Jan 11, 2009 at 11:07:50PM -0800, Zhao, Yakui wrote:
> > Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> > From: Zhao Yakui <yakui.zhao@intel.com>
> > 
> >     On most boxes the ACPI PM timer is a 24-bit counter that runs on a
> > 3.579545 MHz clock, so the maximum C-state sleep time it can record for
> > C2/C3 residency is less than 4687 ms.
> >     But on some boxes the C-state sleep time exceeds 4687 ms. In that
> > case the counter overflows and the C-state duration time can't be
> > counted accurately.
> >     Use a clocksource to get the C-state time instead of the ACPI PM timer.
> > 
> > Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> > Signed-off-by: Alexs Shi <alex.shi@intel.com>
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > ---
> >  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
> >  1 file changed, 42 insertions(+), 26 deletions(-)
> > 
> > Index: linux-2.6/drivers/acpi/processor_idle.c
> > ===================================================================
> > --- linux-2.6.orig/drivers/acpi/processor_idle.c
> > +++ linux-2.6/drivers/acpi/processor_idle.c
> > @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
> >  	struct acpi_processor_cx *cx = NULL;
> >  	struct acpi_processor_cx *next_state = NULL;
> >  	int sleep_ticks = 0;
> > -	u32 t1, t2 = 0;
> > -
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	/*
> >  	 * Interrupts must be disabled during bus mastering calculations and
> >  	 * for C2/C3 transitions.
> > @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
> >  		break;
> >  
> >  	case ACPI_STATE_C2:
> > -		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		/* Invoke C2 */
> >  		acpi_state_timer_broadcast(pr, cx, 1);
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > -
> > +		kt2 = ktime_get();
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  		/* TSC halts in C2, so notify users */
> >  		if (tsc_halts_in_c(ACPI_STATE_C2))
> >  			mark_tsc_unstable("possible TSC halt in C2");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval and
> > +		 * then convert it to microsecond
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
> >  		}
> >  
> >  		/* Get start time (ticks) */
> > -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt1 = ktime_get();
> >  		/* Invoke C3 */
> >  		/* Tell the scheduler that we are going deep-idle: */
> >  		sched_clock_idle_sleep_event();
> >  		acpi_cstate_enter(cx);
> >  		/* Get end time (ticks) */
> > -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +		kt2 = ktime_get();
> >  		if (pr->flags.bm_check && pr->flags.bm_control) {
> >  			/* Enable bus master arbitration */
> >  			atomic_dec(&c3_cpu_count);
> > @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
> >  		if (tsc_halts_in_c(ACPI_STATE_C3))
> >  			mark_tsc_unstable("TSC halts in C3");
> >  #endif
> > +		/*
> > +		 * Use the function of ktime_sub to get the time interval
> > +		 * and convert it to microsecond.
> > +		 */
> > +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> >  		/* Compute time (ticks) that we were actually asleep */
> > -		sleep_ticks = ticks_elapsed(t1, t2);
> > +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  		/* Tell the scheduler how much we idled: */
> >  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
> >  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
> >  			      struct cpuidle_state *state)
> >  {
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> >  
> > @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
> >  	if (pr->flags.bm_check)
> >  		acpi_idle_update_bm_rld(pr, cx);
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	local_irq_enable();
> >  	cx->usage++;
> > -
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	/*
> > +	 * Use the ktime_sub to get the time interval and then convert
> > +	 * it to microseconds
> > +	 */
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	return sleep_interval;
> >  }
> >  
> >  /**
> > @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
> >  	if (cx->type == ACPI_STATE_C3)
> >  		ACPI_FLUSH_CPU_CACHE();
> >  
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	/* Tell the scheduler that we are going deep-idle: */
> >  	sched_clock_idle_sleep_event();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
> >  	/* TSC could halt in idle, so notify users */
> >  	if (tsc_halts_in_c(cx->type))
> >  		mark_tsc_unstable("TSC halts in idle");;
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> > @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  static int c3_cpu_count;
> > @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  {
> >  	struct acpi_processor *pr;
> >  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> > -	u32 t1, t2;
> > +	ktime_t  kt1, kt2;
> > +	u64 sleep_interval;
> >  	int sleep_ticks = 0;
> >  
> >  	pr = __get_cpu_var(processors);
> > @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
> >  	} else if (!pr->flags.bm_check) {
> >  		ACPI_FLUSH_CPU_CACHE();
> >  	}
> > -
> > -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt1 = ktime_get();
> >  	acpi_idle_do_entry(cx);
> > -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> > +	kt2 = ktime_get();
> >  
> >  	/* Re-enable bus master arbitration */
> >  	if (pr->flags.bm_check && pr->flags.bm_control) {
> > @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
> >  	if (tsc_halts_in_c(ACPI_STATE_C3))
> >  		mark_tsc_unstable("TSC halts in idle");
> >  #endif
> > -	sleep_ticks = ticks_elapsed(t1, t2);
> > +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> > +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
> >  	/* Tell the scheduler how much we idled: */
> >  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> >  
> > @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
> >  
> >  	acpi_state_timer_broadcast(pr, cx, 0);
> >  	cx->time += sleep_ticks;
> > -	return ticks_elapsed_in_us(t1, t2);
> > +	return sleep_interval;
> >  }
> >  
> >  struct cpuidle_driver acpi_idle_driver = {
> > 
> > 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [RESEND] [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
                   ` (6 preceding siblings ...)
  2009-01-12  7:07 ` [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer Zhao Yakui
@ 2009-01-13  3:50 ` Zhao Yakui
  2009-01-20  2:52   ` Len Brown
  7 siblings, 1 reply; 37+ messages in thread
From: Zhao Yakui @ 2009-01-13  3:50 UTC (permalink / raw)
  To: LenBrown; +Cc: linux-acpi, venkatesh.pallipadi

Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
From: Zhao Yakui <yakui.zhao@intel.com>

    On most boxes the ACPI PM timer is a 24-bit counter that runs on a
3.579545 MHz clock, so the maximum C-state sleep time it can record for
C2/C3 residency is less than 4687 ms.
    But on some boxes the C-state sleep time exceeds 4687 ms. In that case
the counter overflows and the C-state duration time can't be counted
accurately.
    Use a clocksource to get the C-state time instead of the ACPI PM timer.

Based on Venki's suggestion, ktime_get_real() is used, as the monotonic
clock is unnecessary here.

Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
Signed-off-by: Alexs Shi <alex.shi@intel.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
---
 drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
 1 file changed, 42 insertions(+), 26 deletions(-)

Index: linux-2.6/drivers/acpi/processor_idle.c
===================================================================
--- linux-2.6.orig/drivers/acpi/processor_idle.c
+++ linux-2.6/drivers/acpi/processor_idle.c
@@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
 	struct acpi_processor_cx *cx = NULL;
 	struct acpi_processor_cx *next_state = NULL;
 	int sleep_ticks = 0;
-	u32 t1, t2 = 0;
-
+	ktime_t kt1, kt2;
+	u64 sleep_interval;
 	/*
 	 * Interrupts must be disabled during bus mastering calculations and
 	 * for C2/C3 transitions.
@@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
 		break;
 
 	case ACPI_STATE_C2:
-		/* Get start time (ticks) */
-		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+		kt1 = ktime_get_real();
 		/* Tell the scheduler that we are going deep-idle: */
 		sched_clock_idle_sleep_event();
 		/* Invoke C2 */
 		acpi_state_timer_broadcast(pr, cx, 1);
 		acpi_cstate_enter(cx);
 		/* Get end time (ticks) */
-		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
-
+		kt2 = ktime_get_real();
 #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
 		/* TSC halts in C2, so notify users */
 		if (tsc_halts_in_c(ACPI_STATE_C2))
 			mark_tsc_unstable("possible TSC halt in C2");
 #endif
+		/*
+		 * Use the function of ktime_sub to get the time interval and
+		 * then convert it to microsecond
+		 */
+		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
 		/* Compute time (ticks) that we were actually asleep */
-		sleep_ticks = ticks_elapsed(t1, t2);
+		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 
 		/* Tell the scheduler how much we idled: */
 		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
@@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
 		}
 
 		/* Get start time (ticks) */
-		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+		kt1 = ktime_get_real();
 		/* Invoke C3 */
 		/* Tell the scheduler that we are going deep-idle: */
 		sched_clock_idle_sleep_event();
 		acpi_cstate_enter(cx);
 		/* Get end time (ticks) */
-		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+		kt2 = ktime_get_real();
 		if (pr->flags.bm_check && pr->flags.bm_control) {
 			/* Enable bus master arbitration */
 			atomic_dec(&c3_cpu_count);
@@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
 		if (tsc_halts_in_c(ACPI_STATE_C3))
 			mark_tsc_unstable("TSC halts in C3");
 #endif
+		/*
+		 * Use the function of ktime_sub to get the time interval
+		 * and convert it to microsecond.
+		 */
+		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
 		/* Compute time (ticks) that we were actually asleep */
-		sleep_ticks = ticks_elapsed(t1, t2);
+		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 		/* Tell the scheduler how much we idled: */
 		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
@@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
 static int acpi_idle_enter_c1(struct cpuidle_device *dev,
 			      struct cpuidle_state *state)
 {
-	u32 t1, t2;
+	ktime_t kt1, kt2;
+	u64 sleep_interval;
 	struct acpi_processor *pr;
 	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
 
@@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
 	if (pr->flags.bm_check)
 		acpi_idle_update_bm_rld(pr, cx);
 
-	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt1 = ktime_get_real();
 	acpi_idle_do_entry(cx);
-	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt2 = ktime_get_real();
 
 	local_irq_enable();
 	cx->usage++;
-
-	return ticks_elapsed_in_us(t1, t2);
+	/*
+	 * Use the ktime_sub to get the time interval and then convert
+	 * it to microseconds
+	 */
+	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
+	return sleep_interval;
 }
 
 /**
@@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
 {
 	struct acpi_processor *pr;
 	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
-	u32 t1, t2;
+	ktime_t kt1, kt2;
+	u64 sleep_interval;
 	int sleep_ticks = 0;
 
 	pr = __get_cpu_var(processors);
@@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
 	if (cx->type == ACPI_STATE_C3)
 		ACPI_FLUSH_CPU_CACHE();
 
-	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt1 = ktime_get_real();
 	/* Tell the scheduler that we are going deep-idle: */
 	sched_clock_idle_sleep_event();
 	acpi_idle_do_entry(cx);
-	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt2 = ktime_get_real();
 
 #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
 	/* TSC could halt in idle, so notify users */
 	if (tsc_halts_in_c(cx->type))
 		mark_tsc_unstable("TSC halts in idle");;
 #endif
-	sleep_ticks = ticks_elapsed(t1, t2);
+	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
+	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 
 	/* Tell the scheduler how much we idled: */
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
@@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
 
 	acpi_state_timer_broadcast(pr, cx, 0);
 	cx->time += sleep_ticks;
-	return ticks_elapsed_in_us(t1, t2);
+	return sleep_interval;
 }
 
 static int c3_cpu_count;
@@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
 {
 	struct acpi_processor *pr;
 	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
-	u32 t1, t2;
+	ktime_t  kt1, kt2;
+	u64 sleep_interval;
 	int sleep_ticks = 0;
 
 	pr = __get_cpu_var(processors);
@@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
 	} else if (!pr->flags.bm_check) {
 		ACPI_FLUSH_CPU_CACHE();
 	}
-
-	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt1 = ktime_get_real();
 	acpi_idle_do_entry(cx);
-	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
+	kt2 = ktime_get_real();
 
 	/* Re-enable bus master arbitration */
 	if (pr->flags.bm_check && pr->flags.bm_control) {
@@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
 	if (tsc_halts_in_c(ACPI_STATE_C3))
 		mark_tsc_unstable("TSC halts in idle");
 #endif
-	sleep_ticks = ticks_elapsed(t1, t2);
+	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
+	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
 	/* Tell the scheduler how much we idled: */
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
@@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
 
 	acpi_state_timer_broadcast(pr, cx, 0);
 	cx->time += sleep_ticks;
-	return ticks_elapsed_in_us(t1, t2);
+	return sleep_interval;
 }
 
 struct cpuidle_driver acpi_idle_driver = {



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RESEND] [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer
  2009-01-13  3:50 ` [RESEND] " Zhao Yakui
@ 2009-01-20  2:52   ` Len Brown
  0 siblings, 0 replies; 37+ messages in thread
From: Len Brown @ 2009-01-20  2:52 UTC (permalink / raw)
  To: Zhao Yakui
  Cc: linux-acpi, Venkatesh Pallipadi, Thomas Gleixner,
	Linux Kernel Mailing List


Adding Thomas Gleixner and LKML to this thread.

Thomas,
Basically we're now faced with the possibility
of cores being left idle for longer than the 24-bit
PM timer can count (about 4.7 seconds), so we can't
hard-code use of that timer for
C-state residency accounting.

Is ktime_get_real() the right interface to use?

thanks,
-Len

On Tue, 13 Jan 2009, Zhao Yakui wrote:

> Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> From: Zhao Yakui <yakui.zhao@intel.com>
> 
>     On most boxes the ACPI PM timer is a 24-bit counter that runs on a
> 3.579545 MHz clock, so the maximum C-state sleep time it can record for
> C2/C3 residency is less than 4687 ms.
>     But on some boxes the C-state sleep time exceeds 4687 ms. In that case
> the counter overflows and the C-state duration time can't be counted
> accurately.
>     Use a clocksource to get the C-state time instead of the ACPI PM timer.
> 
> Based on Venki's suggestion, ktime_get_real() is used, as the monotonic
> clock is unnecessary here.
> 
> Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> Signed-off-by: Alexs Shi <alex.shi@intel.com>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
> ---
>  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
>  1 file changed, 42 insertions(+), 26 deletions(-)
> 
> Index: linux-2.6/drivers/acpi/processor_idle.c
> ===================================================================
> --- linux-2.6.orig/drivers/acpi/processor_idle.c
> +++ linux-2.6/drivers/acpi/processor_idle.c
> @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
>  	struct acpi_processor_cx *cx = NULL;
>  	struct acpi_processor_cx *next_state = NULL;
>  	int sleep_ticks = 0;
> -	u32 t1, t2 = 0;
> -
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	/*
>  	 * Interrupts must be disabled during bus mastering calculations and
>  	 * for C2/C3 transitions.
> @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
>  		break;
>  
>  	case ACPI_STATE_C2:
> -		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get_real();
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		/* Invoke C2 */
>  		acpi_state_timer_broadcast(pr, cx, 1);
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> -
> +		kt2 = ktime_get_real();
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  		/* TSC halts in C2, so notify users */
>  		if (tsc_halts_in_c(ACPI_STATE_C2))
>  			mark_tsc_unstable("possible TSC halt in C2");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval and
> +		 * then convert it to microsecond
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
>  		}
>  
>  		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get_real();
>  		/* Invoke C3 */
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt2 = ktime_get_real();
>  		if (pr->flags.bm_check && pr->flags.bm_control) {
>  			/* Enable bus master arbitration */
>  			atomic_dec(&c3_cpu_count);
> @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
>  		if (tsc_halts_in_c(ACPI_STATE_C3))
>  			mark_tsc_unstable("TSC halts in C3");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval
> +		 * and convert it to microsecond.
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
>  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
>  			      struct cpuidle_state *state)
>  {
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
>  
> @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
>  	if (pr->flags.bm_check)
>  		acpi_idle_update_bm_rld(pr, cx);
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get_real();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get_real();
>  
>  	local_irq_enable();
>  	cx->usage++;
> -
> -	return ticks_elapsed_in_us(t1, t2);
> +	/*
> +	 * Use the ktime_sub to get the time interval and then convert
> +	 * it to microseconds
> +	 */
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	return sleep_interval;
>  }
>  
>  /**
> @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
>  	if (cx->type == ACPI_STATE_C3)
>  		ACPI_FLUSH_CPU_CACHE();
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get_real();
>  	/* Tell the scheduler that we are going deep-idle: */
>  	sched_clock_idle_sleep_event();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get_real();
>  
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  	/* TSC could halt in idle, so notify users */
>  	if (tsc_halts_in_c(cx->type))
>  		mark_tsc_unstable("TSC halts in idle");;
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  static int c3_cpu_count;
> @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t  kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
>  	} else if (!pr->flags.bm_check) {
>  		ACPI_FLUSH_CPU_CACHE();
>  	}
> -
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get_real();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get_real();
>  
>  	/* Re-enable bus master arbitration */
>  	if (pr->flags.bm_check && pr->flags.bm_control) {
> @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
>  	if (tsc_halts_in_c(ACPI_STATE_C3))
>  		mark_tsc_unstable("TSC halts in idle");
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  struct cpuidle_driver acpi_idle_driver = {
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-acpi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> Subject: ACPI: Use clocksource to get the C-state time instead of ACPI PM timer
> From: Zhao Yakui <yakui.zhao@intel.com>
> 
>     On most boxes the ACPI PM timer is 24-bit counter that runs on 3.579545MHz
> clock. In such case the max C-state sleep time should be less than 4687ms when
> it is used to record C2/C3 duration time. 
>     But on some boxes the max C-state sleep time is more than 4687ms. In such
> case the overflow happens and the C-state duration time can't be counted
> accurately.
>     Use clocksource to get the C-state time instead of ACPI PM timer
> 
> Based on the Venki's suggestion the function of ktime_get_real is used as it 
> is unnecessary to use the monotonic time.
> 
> Signed-off-by: Yakui Zhao <yakui.zhao@intel.com>
> Signed-off-by: Alexs Shi <alex.shi@intel.com>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
> ---
>  drivers/acpi/processor_idle.c |   68 +++++++++++++++++++++++++-----------------
>  1 file changed, 42 insertions(+), 26 deletions(-)
> 
> Index: linux-2.6/drivers/acpi/processor_idle.c
> ===================================================================
> --- linux-2.6.orig/drivers/acpi/processor_idle.c
> +++ linux-2.6/drivers/acpi/processor_idle.c
> @@ -397,8 +397,8 @@ static void acpi_processor_idle(void)
>  	struct acpi_processor_cx *cx = NULL;
>  	struct acpi_processor_cx *next_state = NULL;
>  	int sleep_ticks = 0;
> -	u32 t1, t2 = 0;
> -
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	/*
>  	 * Interrupts must be disabled during bus mastering calculations and
>  	 * for C2/C3 transitions.
> @@ -543,23 +543,26 @@ static void acpi_processor_idle(void)
>  		break;
>  
>  	case ACPI_STATE_C2:
> -		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get_real();
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		/* Invoke C2 */
>  		acpi_state_timer_broadcast(pr, cx, 1);
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> -
> +		kt2 = ktime_get_real();
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  		/* TSC halts in C2, so notify users */
>  		if (tsc_halts_in_c(ACPI_STATE_C2))
>  			mark_tsc_unstable("possible TSC halt in C2");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval and
> +		 * then convert it to microsecond
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -605,13 +608,13 @@ static void acpi_processor_idle(void)
>  		}
>  
>  		/* Get start time (ticks) */
> -		t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt1 = ktime_get_real();
>  		/* Invoke C3 */
>  		/* Tell the scheduler that we are going deep-idle: */
>  		sched_clock_idle_sleep_event();
>  		acpi_cstate_enter(cx);
>  		/* Get end time (ticks) */
> -		t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +		kt2 = ktime_get_real();
>  		if (pr->flags.bm_check && pr->flags.bm_control) {
>  			/* Enable bus master arbitration */
>  			atomic_dec(&c3_cpu_count);
> @@ -623,8 +626,13 @@ static void acpi_processor_idle(void)
>  		if (tsc_halts_in_c(ACPI_STATE_C3))
>  			mark_tsc_unstable("TSC halts in C3");
>  #endif
> +		/*
> +		 * Use the function of ktime_sub to get the time interval
> +		 * and convert it to microsecond.
> +		 */
> +		sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
>  		/* Compute time (ticks) that we were actually asleep */
> -		sleep_ticks = ticks_elapsed(t1, t2);
> +		sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  		/* Tell the scheduler how much we idled: */
>  		sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1455,7 +1463,8 @@ static inline void acpi_idle_do_entry(st
>  static int acpi_idle_enter_c1(struct cpuidle_device *dev,
>  			      struct cpuidle_state *state)
>  {
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
>  
> @@ -1476,14 +1485,18 @@ static int acpi_idle_enter_c1(struct cpu
>  	if (pr->flags.bm_check)
>  		acpi_idle_update_bm_rld(pr, cx);
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get_real();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get_real();
>  
>  	local_irq_enable();
>  	cx->usage++;
> -
> -	return ticks_elapsed_in_us(t1, t2);
> +	/*
> +	 * Use the ktime_sub to get the time interval and then convert
> +	 * it to microseconds
> +	 */
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	return sleep_interval;
>  }
>  
>  /**
> @@ -1496,7 +1509,8 @@ static int acpi_idle_enter_simple(struct
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1533,18 +1547,19 @@ static int acpi_idle_enter_simple(struct
>  	if (cx->type == ACPI_STATE_C3)
>  		ACPI_FLUSH_CPU_CACHE();
>  
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get_real();
>  	/* Tell the scheduler that we are going deep-idle: */
>  	sched_clock_idle_sleep_event();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get_real();
>  
>  #if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86)
>  	/* TSC could halt in idle, so notify users */
>  	if (tsc_halts_in_c(cx->type))
>  		mark_tsc_unstable("TSC halts in idle");;
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
> @@ -1556,7 +1571,7 @@ static int acpi_idle_enter_simple(struct
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  static int c3_cpu_count;
> @@ -1574,7 +1589,8 @@ static int acpi_idle_enter_bm(struct cpu
>  {
>  	struct acpi_processor *pr;
>  	struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
> -	u32 t1, t2;
> +	ktime_t kt1, kt2;
> +	u64 sleep_interval;
>  	int sleep_ticks = 0;
>  
>  	pr = __get_cpu_var(processors);
> @@ -1643,10 +1659,9 @@ static int acpi_idle_enter_bm(struct cpu
>  	} else if (!pr->flags.bm_check) {
>  		ACPI_FLUSH_CPU_CACHE();
>  	}
> -
> -	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt1 = ktime_get_real();
>  	acpi_idle_do_entry(cx);
> -	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
> +	kt2 = ktime_get_real();
>  
>  	/* Re-enable bus master arbitration */
>  	if (pr->flags.bm_check && pr->flags.bm_control) {
> @@ -1661,7 +1676,8 @@ static int acpi_idle_enter_bm(struct cpu
>  	if (tsc_halts_in_c(ACPI_STATE_C3))
>  		mark_tsc_unstable("TSC halts in idle");
>  #endif
> -	sleep_ticks = ticks_elapsed(t1, t2);
> +	sleep_interval = ktime_to_us(ktime_sub(kt2, kt1));
> +	sleep_ticks = US_TO_PM_TIMER_TICKS(sleep_interval);
>  	/* Tell the scheduler how much we idled: */
>  	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
>  
> @@ -1672,7 +1688,7 @@ static int acpi_idle_enter_bm(struct cpu
>  
>  	acpi_state_timer_broadcast(pr, cx, 0);
>  	cx->time += sleep_ticks;
> -	return ticks_elapsed_in_us(t1, t2);
> +	return sleep_interval;
>  }
>  
>  struct cpuidle_driver acpi_idle_driver = {
> 
> 


end of thread, other threads:[~2009-01-20  2:52 UTC | newest]

Thread overview: 37+ messages
2008-10-31 18:42 [PATCH] ACPI: EC: clean up tmp variable before reuse Alexey Starikovskiy
2008-11-03  8:02 ` Zhao Yakui
2008-11-03  8:24 ` [PATCH]: ACPI: Initialize EC global lock based on the return value of _GLK Zhao Yakui
2008-11-04  7:41 ` [PATCH]: ACPI Cleanup :Initialize EC global lock based on the return status Zhao Yakui
2008-11-04  8:05   ` Alexey Starikovskiy
2008-11-04  8:58     ` Rafael J. Wysocki
2008-11-04  9:21       ` Alexey Starikovskiy
2008-11-04  9:37     ` Zhao Yakui
2008-11-04  9:38       ` Alexey Starikovskiy
2008-11-05  1:05         ` Zhao Yakui
2008-11-05  7:24           ` Alexey Starikovskiy
2008-12-17  8:55 ` [PATCH] : ACPI : Use RSDT instead of XSDT by adding boot option of "acpi=rsdt" Zhao Yakui
2009-01-09  6:35   ` Len Brown
2009-01-09 10:54     ` Thomas Renninger
2009-01-09 10:59       ` Len Brown
2009-01-09 12:16         ` Thomas Renninger
2009-01-09 12:34           ` Matthew Garrett
2009-01-12 14:13             ` Thomas Renninger
2009-01-12 14:16               ` Matthew Garrett
2009-01-12 22:17                 ` Thomas Renninger
2009-01-12 23:38                   ` Matthew Garrett
2009-01-09 10:58   ` Blacklist known broken machines to use the rsdt and enabled Cstates on R40e Thomas Renninger
2009-01-09 10:58   ` [PATCH 1/2] Blacklist known broken machines (ThinkPad R40e and R50e) to use rsdt instead xsdt Thomas Renninger
2009-01-09 10:58   ` [PATCH 2/2] R40e using rsdt (previous patch) makes all Cstates work -> remove blacklisting Thomas Renninger
2008-12-30  4:01 ` [PATCH] : ACPI : Add the MWAIT C-state mask to avoid overflow Zhao Yakui
2009-01-04  4:04 ` Zhao Yakui
2009-01-09  6:28   ` Len Brown
2009-01-12  7:07 ` [PATCH] : ACPI : Use clocksource to get the C-state time instead of ACPI PM timer Zhao Yakui
2009-01-12  7:58   ` Rafael J. Wysocki
2009-01-12  9:31     ` Zhao Yakui
2009-01-12 12:27       ` Rafael J. Wysocki
2009-01-12  9:39     ` Zhao Yakui
2009-01-12 22:09   ` Pallipadi, Venkatesh
2009-01-13  1:26     ` Zhao Yakui
2009-01-13  1:42     ` Zhao Yakui
2009-01-13  3:50 ` [RESEND] " Zhao Yakui
2009-01-20  2:52   ` Len Brown
