public inbox for kexec@lists.infradead.org
* [PATCH] percpu: add comment to per_cpu_ptr_to_phys
@ 2011-11-23  8:45 Dave Young
  2011-11-23 16:22 ` Tejun Heo
  0 siblings, 1 reply; 2+ messages in thread
From: Dave Young @ 2011-11-23  8:45 UTC (permalink / raw)
  To: tj, xiyou.wangcong, kexec, tim, linux-kernel

Add a comment to the current per_cpu_ptr_to_phys() implementation
explaining why its logic is more complicated than strictly necessary.

Signed-off-by: Dave Young <dyoung@redhat.com>
---
 mm/percpu.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- linux-2.6.orig/mm/percpu.c	2011-11-22 10:18:46.000000000 +0800
+++ linux-2.6/mm/percpu.c	2011-11-23 16:27:01.667562973 +0800
@@ -988,6 +988,19 @@ phys_addr_t per_cpu_ptr_to_phys(void *ad
 	unsigned int cpu;
 
 	/*
+	 * The percpu allocator sets up the first chunk specially:
+	 * it is either embedded in the linear address space or
+	 * vmalloc-mapped, and from the second chunk onwards the
+	 * backing allocator (currently either vm or km) provides
+	 * the translation.
+	 *
+	 * The addr could be translated without checking whether it
+	 * falls into the first chunk, but the current code better
+	 * reflects how the percpu allocator actually works, and the
+	 * verification can catch bugs both in the allocator itself
+	 * and in per_cpu_ptr_to_phys() callers.  So keep it.
+	 */
+
+	/*
 	 * The following test on first_start/end isn't strictly
 	 * necessary but will speed up lookups of addresses which
 	 * aren't in the first chunk.

