public inbox for linux-ia64@vger.kernel.org
From: Jes Sorensen <jes@trained-monkey.org>
To: linux-ia64@vger.kernel.org
Subject: smp_flush_tlb_mm
Date: Wed, 26 Nov 2003 15:00:42 +0000	[thread overview]
Message-ID: <marc-linux-ia64-106985904913387@msgid-missing> (raw)

Hi

Looking at some profiles on a 512p box, I noticed that we are seeing a
few more smp_call_function calls than we would really like ;-)

To get around it I have implemented an on_each_cpu_masked() and use it
in flush_tlb_mm to reduce the call rate a bit. flush_tlb_range is a
little trickier since it relies on platform_global_purge_tlb() rather
than smp_call_function, so I am hoping we might be able to add a
platform_purge_tlb_masked() as well?

A preliminary patch for flush_tlb_mm and on_each_cpu_masked is attached.

Comments?

Cheers,
Jes

diff -urN -X /usr/people/jes/exclude-linux orig/linux-2.6.0-test10/arch/ia64/kernel/smp.c linux-2.6.0-test10/arch/ia64/kernel/smp.c
--- orig/linux-2.6.0-test10/arch/ia64/kernel/smp.c	Sun Nov 23 17:33:24 2003
+++ linux-2.6.0-test10/arch/ia64/kernel/smp.c	Wed Nov 26 05:57:32 2003
@@ -205,6 +205,55 @@
 	platform_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0);
 }
 
+
+/*
+ * Call a function on each online CPU in the given cpumask
+ */
+static inline int on_each_cpu_masked(void (*func) (void *info), void *info,
+				     int retry, int wait, cpumask_t cpumask)
+{
+	cpumask_t tmp;
+	struct call_data_struct data;
+	int ret = 0;
+	int cpus = 0;
+	int i;
+
+	cpus_and(tmp, cpumask, cpu_online_map);
+
+	data.func = func;
+	data.info = info;
+	atomic_set(&data.started, 0);
+	data.wait = wait;
+	if (wait)
+		atomic_set(&data.finished, 0);
+
+	get_cpu();
+	spin_lock_bh(&call_lock);
+
+	call_data = &data;
+	mb();	/* ensure store to call_data precedes setting of IPI_CALL_FUNC */
+	for (i = 0; i < NR_CPUS; i++) {
+		if (cpu_isset(i, tmp)) {
+			cpus++;
+			send_IPI_single(i, IPI_CALL_FUNC);
+		}
+	}
+
+	/* Wait for response */
+	while (atomic_read(&data.started) != cpus)
+		barrier();
+
+	if (wait)
+		while (atomic_read(&data.finished) != cpus)
+			barrier();
+	call_data = NULL;
+
+	spin_unlock_bh(&call_lock);
+	put_cpu();
+	return ret;
+}
+
+
 void
 smp_flush_tlb_all (void)
 {
@@ -228,7 +277,12 @@
 	 * anyhow, and once a CPU is interrupted, the cost of local_flush_tlb_all() is
 	 * rather trivial.
 	 */
+#if 0
 	on_each_cpu((void (*)(void *))local_finish_flush_tlb_mm, mm, 1, 1);
+#else
+	on_each_cpu_masked((void (*)(void *))local_finish_flush_tlb_mm, mm,
+			   1, 1, mm->cpu_vm_mask);
+#endif
 }
 
 /*


Thread overview: 2+ messages
2003-11-26 15:00 Jes Sorensen [this message]
2003-11-26 19:19 ` smp_flush_tlb_mm Jack Steiner
