* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm [not found] ` <1337072138-8323-7-git-send-email-alex.shi@intel.com> @ 2012-05-15 9:15 ` Nick Piggin 2012-05-15 9:17 ` Nick Piggin ` (2 more replies) 0 siblings, 3 replies; 42+ messages in thread From: Nick Piggin @ 2012-05-15 9:15 UTC (permalink / raw) To: Alex Shi Cc: tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, a.p.zijlstra, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch So this should go to linux-arch... On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: > Not every flush_tlb_mm execution moment is really need to evacuate all > TLB entries, like in munmap, just few 'invlpg' is better for whole > process performance, since it leaves most of TLB entries for later > accessing. > > This patch is changing flush_tlb_mm(mm) to flush_tlb_mm(mm, start, end) > in cases. What happened with Peter's comment about using flush_tlb_range for this? The flush_tlb_mm() API should just stay unchanged AFAIKS. Then you need to work out the best way to give range info to the tlb/mmu gather API. Possibly passing in the range for that guy is OK, which x86 can then implement as a flush range. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:15 ` [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm Nick Piggin @ 2012-05-15 9:17 ` Nick Piggin 2012-05-15 12:58 ` Luming Yu 2012-05-15 14:07 ` Alex Shi 2012-05-15 9:18 ` Peter Zijlstra 2012-05-15 13:24 ` Alex Shi 2 siblings, 2 replies; 42+ messages in thread From: Nick Piggin @ 2012-05-15 9:17 UTC (permalink / raw) To: Alex Shi Cc: tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, a.p.zijlstra, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 15 May 2012 19:15, Nick Piggin <npiggin@gmail.com> wrote: > So this should go to linux-arch... > > On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >> Not every flush_tlb_mm execution moment is really need to evacuate all >> TLB entries, like in munmap, just few 'invlpg' is better for whole >> process performance, since it leaves most of TLB entries for later >> accessing. Did you have microbenchmarks for this like your mprotect numbers, by the way? Test munmap numbers and see how that looks. Also, does it show up on any macro-benchmarks like specjbb? ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:17 ` Nick Piggin @ 2012-05-15 12:58 ` Luming Yu 2012-05-15 13:06 ` Peter Zijlstra 2012-05-15 13:08 ` Luming Yu 1 sibling, 2 replies; 42+ messages in thread From: Luming Yu @ 2012-05-15 12:58 UTC (permalink / raw) To: Nick Piggin Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, a.p.zijlstra, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On Tue, May 15, 2012 at 5:17 PM, Nick Piggin <npiggin@gmail.com> wrote: > On 15 May 2012 19:15, Nick Piggin <npiggin@gmail.com> wrote: >> So this should go to linux-arch... >> >> On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >>> Not every flush_tlb_mm execution moment is really need to evacuate all >>> TLB entries, like in munmap, just few 'invlpg' is better for whole >>> process performance, since it leaves most of TLB entries for later >>> accessing. >> >> Did you have microbenchmarks for this like your mprotect numbers, >> by the way? Test munmap numbers and see how that looks. Also, Might be off topic, but I just spent a few minutes testing the difference between writing CR3 vs. invlpg on a pretty old but still reliable P4 desktop, with my simple hardware latency and bandwidth test tool I posted for RFC several weeks ago on LKML. Both __native_flush_tlb() and __native_flush_tlb_single(...) introduced roughly 1 ns latency to tsc sampling executed in stop_machine_context on two logical CPUs. Just to fuel the discussion. :-) Cheers, /l ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 12:58 ` Luming Yu @ 2012-05-15 13:06 ` Peter Zijlstra 2012-05-15 13:27 ` Luming Yu ` (2 more replies) 2012-05-15 13:08 ` Luming Yu 1 sibling, 3 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-15 13:06 UTC (permalink / raw) To: Luming Yu Cc: Nick Piggin, Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On Tue, 2012-05-15 at 20:58 +0800, Luming Yu wrote: > > > Both __native_flush_tlb() and __native_flush_tlb_single(...) > introduced roughly 1 ns latency to tsc sampling executed in > stop_machine_context in two logical CPUs But you have to weigh that against the cost of re-population, and that's the difficult bit, since we have no clue how many tlb entries are in use by the current cr3. It might be possible for intel to give us this information, I've asked for something similar for cachelines. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 13:06 ` Peter Zijlstra @ 2012-05-15 13:27 ` Luming Yu 2012-05-15 13:28 ` Alex Shi 2012-05-15 13:33 ` Alex Shi 2012-05-15 13:39 ` Steven Rostedt 2 siblings, 1 reply; 42+ messages in thread From: Luming Yu @ 2012-05-15 13:27 UTC (permalink / raw) To: Peter Zijlstra Cc: Nick Piggin, Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On Tue, May 15, 2012 at 9:06 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote: > On Tue, 2012-05-15 at 20:58 +0800, Luming Yu wrote: >> >> >> Both __native_flush_tlb() and __native_flush_tlb_single(...) >> introduced roughly 1 ns latency to tsc sampling executed in To fix my typo: I actually observed 1 us with the current tool; I will check whether I can push the accuracy to the nanosecond level. >> stop_machine_context in two logical CPUs > > But you have to weight that against the cost of re-population, and Right, it's hard to measure, but I will try to get this done in a simple test tool that helps people measure this kind of thing in a few minutes. > that's the difficult bit, since we have no clue how many tlb entries are > in use by the current cr3. > > It might be possible for intel to give us this information, I've asked > for something similar for cachelines. This is the official document: http://www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf Let me know if it can answer your question. > ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 13:27 ` Luming Yu @ 2012-05-15 13:28 ` Alex Shi 2012-05-15 13:28 ` Alex Shi 0 siblings, 1 reply; 42+ messages in thread From: Alex Shi @ 2012-05-15 13:28 UTC (permalink / raw) To: Luming Yu Cc: Peter Zijlstra, Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On 05/15/2012 09:27 PM, Luming Yu wrote: > On Tue, May 15, 2012 at 9:06 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote: >> On Tue, 2012-05-15 at 20:58 +0800, Luming Yu wrote: >>> >>> >>> Both __native_flush_tlb() and __native_flush_tlb_single(...) >>> introduced roughly 1 ns latency to tsc sampling executed in > > Fix typo, I just observed 1us with current tool, I would check if I > can push the accuracy to nanoseconds level. > >>> stop_machine_context in two logical CPUs >> >> But you have to weight that against the cost of re-population, and > > Right, it's hard to detect, but I will try if I can get measurement > done in a simple test tool to help people measure > this kind of stuff in few minutes. > >> that's the difficult bit, since we have no clue how many tlb entries are >> in use by the current cr3. >> >> It might be possible for intel to give us this information, I've asked >> for something similar for cachelines. > > This is the official document > http://www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf > Please, that is such a huge document! And it does not have this info either. > Let me know if it can answer your question. > >> ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 13:06 ` Peter Zijlstra 2012-05-15 13:27 ` Luming Yu @ 2012-05-15 13:33 ` Alex Shi 2012-05-15 13:39 ` Steven Rostedt 2 siblings, 0 replies; 42+ messages in thread From: Alex Shi @ 2012-05-15 13:33 UTC (permalink / raw) To: Peter Zijlstra Cc: Luming Yu, Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On 05/15/2012 09:06 PM, Peter Zijlstra wrote: > On Tue, 2012-05-15 at 20:58 +0800, Luming Yu wrote: >> >> >> Both __native_flush_tlb() and __native_flush_tlb_single(...) >> introduced roughly 1 ns latency to tsc sampling executed in >> stop_machine_context in two logical CPUs > > But you have to weight that against the cost of re-population, and > that's the difficult bit, since we have no clue how many tlb entries are > in use by the current cr3. > > It might be possible for intel to give us this information, I've asked > for something similar for cachelines. > I don't know if such info exists in the CPU. Maybe the US engineers know more. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 13:06 ` Peter Zijlstra 2012-05-15 13:27 ` Luming Yu 2012-05-15 13:33 ` Alex Shi @ 2012-05-15 13:39 ` Steven Rostedt 2012-05-15 14:04 ` Borislav Petkov 2 siblings, 1 reply; 42+ messages in thread From: Steven Rostedt @ 2012-05-15 13:39 UTC (permalink / raw) To: Peter Zijlstra Cc: Luming Yu, Nick Piggin, Alex Shi, tglx, mingo, hpa, arnd, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On Tue, 2012-05-15 at 15:06 +0200, Peter Zijlstra wrote: > On Tue, 2012-05-15 at 20:58 +0800, Luming Yu wrote: > > > > > > Both __native_flush_tlb() and __native_flush_tlb_single(...) > > introduced roughly 1 ns latency to tsc sampling executed in > > stop_machine_context in two logical CPUs > > But you have to weight that against the cost of re-population, and > that's the difficult bit, since we have no clue how many tlb entries are > in use by the current cr3. > > It might be possible for intel to give us this information, I've asked > for something similar for cachelines. What information? The # of tlb entries in use? -- Steve ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 13:39 ` Steven Rostedt @ 2012-05-15 14:04 ` Borislav Petkov 0 siblings, 0 replies; 42+ messages in thread From: Borislav Petkov @ 2012-05-15 14:04 UTC (permalink / raw) To: Steven Rostedt Cc: Peter Zijlstra, Luming Yu, Nick Piggin, Alex Shi, tglx, mingo, hpa, arnd, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On Tue, May 15, 2012 at 09:39:05AM -0400, Steven Rostedt wrote: > > But you have to weight that against the cost of re-population, and > > that's the difficult bit, since we have no clue how many tlb entries are > > in use by the current cr3. > > > > It might be possible for intel to give us this information, I've asked > > for something similar for cachelines. > > What information? The # of tlb entries in use? ... by the current %cr3, yes. And also, before we delve into details, we still don't have a representative benchmark where this shows any improvement. -- Regards/Gruss, Boris. Advanced Micro Devices GmbH Einsteinring 24, 85609 Dornach GM: Alberto Bozzo Reg: Dornach, Landkreis Muenchen HRB Nr. 43632 WEEE Registernr: 129 19551 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 12:58 ` Luming Yu 2012-05-15 13:06 ` Peter Zijlstra @ 2012-05-15 13:08 ` Luming Yu 2012-05-15 13:08 ` Luming Yu 1 sibling, 1 reply; 42+ messages in thread From: Luming Yu @ 2012-05-15 13:08 UTC (permalink / raw) To: Nick Piggin Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, a.p.zijlstra, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch, jcm On Tue, May 15, 2012 at 8:58 PM, Luming Yu <luming.yu@gmail.com> wrote: > On Tue, May 15, 2012 at 5:17 PM, Nick Piggin <npiggin@gmail.com> wrote: >> On 15 May 2012 19:15, Nick Piggin <npiggin@gmail.com> wrote: >>> So this should go to linux-arch... >>> >>> On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >>>> Not every flush_tlb_mm execution moment is really need to evacuate all >>>> TLB entries, like in munmap, just few 'invlpg' is better for whole >>>> process performance, since it leaves most of TLB entries for later >>>> accessing. >> >> Did you have microbenchmarks for this like your mprotect numbers, >> by the way? Test munmap numbers and see how that looks. Also, > > Might be off topic, but I just spent few minutes to test out the difference > between write CR3 vs. invlpg on a pretty old but still reliable P4 desktop > with my simple hardware latency and bandwidth test tool I posted for > RFC several weeks ago on LKML. > > Both __native_flush_tlb() and __native_flush_tlb_single(...) > introduced roughly 1 ns latency to tsc sampling executed in sorry, typo, 1us.. but I should capture nanosecond data. :-( > stop_machine_context in two logical CPUs > > Just to fuel the discussion. :-) > > Cheers, > /l ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:17 ` Nick Piggin 2012-05-15 12:58 ` Luming Yu @ 2012-05-15 14:07 ` Alex Shi 1 sibling, 0 replies; 42+ messages in thread From: Alex Shi @ 2012-05-15 14:07 UTC (permalink / raw) To: Nick Piggin, yongjie.ren Cc: tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, a.p.zijlstra, kamezawa.hiroyu, viro, linux-kernel, linux-arch On 05/15/2012 05:17 PM, Nick Piggin wrote: > On 15 May 2012 19:15, Nick Piggin <npiggin@gmail.com> wrote: >> So this should go to linux-arch... >> >> On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >>> Not every flush_tlb_mm execution moment is really need to evacuate all >>> TLB entries, like in munmap, just few 'invlpg' is better for whole >>> process performance, since it leaves most of TLB entries for later >>> accessing. > > Did you have microbenchmarks for this like your mprotect numbers, > by the way? Test munmap numbers and see how that looks. Also, > does it show up on any macro-benchmarks like specjbb? Yongjie has tested the patchset and got some positive data on virtual machines. Yongjie, would you like to share your data? ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:15 ` [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm Nick Piggin 2012-05-15 9:17 ` Nick Piggin @ 2012-05-15 9:18 ` Peter Zijlstra 2012-05-15 9:52 ` Nick Piggin 2012-05-15 14:04 ` Alex Shi 2012-05-15 13:24 ` Alex Shi 2 siblings, 2 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-15 9:18 UTC (permalink / raw) To: Nick Piggin Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Tue, 2012-05-15 at 19:15 +1000, Nick Piggin wrote: > So this should go to linux-arch... > > On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: > > Not every flush_tlb_mm execution moment is really need to evacuate all > > TLB entries, like in munmap, just few 'invlpg' is better for whole > > process performance, since it leaves most of TLB entries for later > > accessing. > > > > This patch is changing flush_tlb_mm(mm) to flush_tlb_mm(mm, start, end) > > in cases. > > What happened with Peter's comment about using flush_tlb_range for this? > > flush_tlb_mm() API should just stay unchanged AFAIKS. > > Then you need to work out the best way to give range info to the tlb/mmu gather > API. Possibly passing in the rage for that guy is OK, which x86 can > then implement > as flush range. Right, most archs that have tlb_flush_range() do range tracking in mmu_gather. Our TLB ops fully support that, there's absolutely no need to go change the interface for those. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:18 ` Peter Zijlstra @ 2012-05-15 9:52 ` Nick Piggin 2012-05-15 10:00 ` Peter Zijlstra 2012-05-15 14:04 ` Alex Shi 1 sibling, 1 reply; 42+ messages in thread From: Nick Piggin @ 2012-05-15 9:52 UTC (permalink / raw) To: Peter Zijlstra Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 15 May 2012 19:18, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote: > On Tue, 2012-05-15 at 19:15 +1000, Nick Piggin wrote: >> So this should go to linux-arch... >> >> On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >> > Not every flush_tlb_mm execution moment is really need to evacuate all >> > TLB entries, like in munmap, just few 'invlpg' is better for whole >> > process performance, since it leaves most of TLB entries for later >> > accessing. >> > >> > This patch is changing flush_tlb_mm(mm) to flush_tlb_mm(mm, start, end) >> > in cases. >> >> What happened with Peter's comment about using flush_tlb_range for this? >> >> flush_tlb_mm() API should just stay unchanged AFAIKS. >> >> Then you need to work out the best way to give range info to the tlb/mmu gather >> API. Possibly passing in the rage for that guy is OK, which x86 can >> then implement >> as flush range. > > Right, most archs that have tlb_flush_range() do range tracking in > mmu_gather. Our TLB ops fully support that, there's absolutely no need > to go change the interface for thos. Could it be warranted to change tlb_flush_mmu to a range API, to avoid the per-entry tracking which those architectures do? The callers have the range easily available, so ignoring it could be a no-op for the generic helpers. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:52 ` Nick Piggin @ 2012-05-15 10:00 ` Peter Zijlstra 2012-05-15 10:06 ` Nick Piggin 0 siblings, 1 reply; 42+ messages in thread From: Peter Zijlstra @ 2012-05-15 10:00 UTC (permalink / raw) To: Nick Piggin Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Tue, 2012-05-15 at 19:52 +1000, Nick Piggin wrote: > > It could be warranted to change tlb_flush_mmu to a range API to > avoid doing the per-entry tracking which those architectures do? The per-entry tracking could result in a much smaller range; there's no point in flushing TLBs for unpopulated pages. Anyway, I don't even think we'd need to change the API for that; you could track the entire range through tlb_start_vma() if you wanted (although nobody does that IIRC). ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 10:00 ` Peter Zijlstra @ 2012-05-15 10:06 ` Nick Piggin 2012-05-15 10:13 ` Peter Zijlstra 0 siblings, 1 reply; 42+ messages in thread From: Nick Piggin @ 2012-05-15 10:06 UTC (permalink / raw) To: Peter Zijlstra Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 15 May 2012 20:00, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote: > On Tue, 2012-05-15 at 19:52 +1000, Nick Piggin wrote: >> >> It could be warranted to change tlb_flush_mmu to a range API to >> avoid doing the per-entry tracking which those architectures do? > > The per-entry could result in a much smaller range, there's no point in > flushing tlbs for unpopulated pages. Well, the common case for small ranges would hopefully be quite dense, I think. It might not be worth the extra work (although maybe it would be). > > Anyway, I don't think even think we'd need to change the API for that, > you could track the entire range through tlb_start_vma() if you wanted > (although nobody does that IIRC). I'm not sure you can do that very well, because the tlb might have to be flushed part way through a vma when we fill up the gather, so you don't want to flush the full range each time. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 10:06 ` Nick Piggin @ 2012-05-15 10:13 ` Peter Zijlstra 0 siblings, 0 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-15 10:13 UTC (permalink / raw) To: Nick Piggin Cc: Alex Shi, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Tue, 2012-05-15 at 20:06 +1000, Nick Piggin wrote: > On 15 May 2012 20:00, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote: > > On Tue, 2012-05-15 at 19:52 +1000, Nick Piggin wrote: > >> > >> It could be warranted to change tlb_flush_mmu to a range API to > >> avoid doing the per-entry tracking which those architectures do? > > > > The per-entry could result in a much smaller range, there's no point in > > flushing tlbs for unpopulated pages. > > Well common case for small ranges hopefully would be quite dense > I think. It could be not worth the extra work (although maybe it would > be). > > > > > Anyway, I don't think even think we'd need to change the API for that, > > you could track the entire range through tlb_start_vma() if you wanted > > (although nobody does that IIRC). > > I'm not sure if you can do that very well, because the tlb might have to > be flushed part way through a vma when we fill up the gather, so you > don't want to flush the full range each time. Fair enough. But that's still an entirely unrelated optimization and should go with proper benchmarking and preferably across all archs that have flush_tlb_range() :-) ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:18 ` Peter Zijlstra 2012-05-15 9:52 ` Nick Piggin @ 2012-05-15 14:04 ` Alex Shi 1 sibling, 0 replies; 42+ messages in thread From: Alex Shi @ 2012-05-15 14:04 UTC (permalink / raw) To: Peter Zijlstra Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/15/2012 05:18 PM, Peter Zijlstra wrote: > On Tue, 2012-05-15 at 19:15 +1000, Nick Piggin wrote: >> So this should go to linux-arch... >> >> On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >>> Not every flush_tlb_mm execution moment is really need to evacuate all >>> TLB entries, like in munmap, just few 'invlpg' is better for whole >>> process performance, since it leaves most of TLB entries for later >>> accessing. >>> >>> This patch is changing flush_tlb_mm(mm) to flush_tlb_mm(mm, start, end) >>> in cases. >> >> What happened with Peter's comment about using flush_tlb_range for this? >> >> flush_tlb_mm() API should just stay unchanged AFAIKS. >> >> Then you need to work out the best way to give range info to the tlb/mmu gather >> API. Possibly passing in the rage for that guy is OK, which x86 can >> then implement >> as flush range. > > Right, most archs that have tlb_flush_range() do range tracking in > mmu_gather. Our TLB ops fully support that, there's absolutely no need > to go change the interface for thos. OK, this should be what you wanted: -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) +#define tlb_flush(tlb, start, end) __flush_tlb_range((tlb)->mm, start, end) If there is no objection, I will modify the patch accordingly. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 9:15 ` [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm Nick Piggin 2012-05-15 9:17 ` Nick Piggin 2012-05-15 9:18 ` Peter Zijlstra @ 2012-05-15 13:24 ` Alex Shi 2012-05-15 14:36 ` Peter Zijlstra 2 siblings, 1 reply; 42+ messages in thread From: Alex Shi @ 2012-05-15 13:24 UTC (permalink / raw) To: Nick Piggin Cc: tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, a.p.zijlstra, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/15/2012 05:15 PM, Nick Piggin wrote: > So this should go to linux-arch... > > On 15 May 2012 18:55, Alex Shi <alex.shi@intel.com> wrote: >> Not every flush_tlb_mm execution moment is really need to evacuate all >> TLB entries, like in munmap, just few 'invlpg' is better for whole >> process performance, since it leaves most of TLB entries for later >> accessing. >> >> This patch is changing flush_tlb_mm(mm) to flush_tlb_mm(mm, start, end) >> in cases. > > What happened with Peter's comment about using flush_tlb_range for this? > > flush_tlb_mm() API should just stay unchanged AFAIKS. > > Then you need to work out the best way to give range info to the tlb/mmu gather > API. Possibly passing in the rage for that guy is OK, which x86 can > then implement > as flush range. Sorry, I didn't understand the comments Peter made days ago; I should have asked for more details at the time. So, Peter, the correct change should look like the following, am I right? -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) +#define tlb_flush(tlb, start, end) __flush_tlb_range((tlb)->mm, start, end) ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 13:24 ` Alex Shi @ 2012-05-15 14:36 ` Peter Zijlstra 2012-05-15 14:36 ` Peter Zijlstra 2012-05-15 14:57 ` Peter Zijlstra 0 siblings, 2 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-15 14:36 UTC (permalink / raw) To: Alex Shi Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Tue, 2012-05-15 at 21:24 +0800, Alex Shi wrote: > Sorry. I don't understand what's the comments Peter's made days ago. I should ask for more details originally. > > So, Peter, the correct change should like following, am I right? > > -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) > +#define tlb_flush(tlb, start, end) __flush_tlb_range((tlb)->mm, start, end) No.. the correct change is to do range tracking like the other archs that support flush_tlb_range() do. You do not modify the tlb interface. Again, see: http://marc.info/?l=linux-arch&m=129952026504268&w=2 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 14:36 ` Peter Zijlstra 2012-05-15 14:36 ` Peter Zijlstra @ 2012-05-15 14:57 ` Peter Zijlstra 2012-05-15 15:01 ` Alex Shi 2012-05-16 6:46 ` Alex Shi 1 sibling, 2 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-15 14:57 UTC (permalink / raw) To: Alex Shi Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Tue, 2012-05-15 at 16:36 +0200, Peter Zijlstra wrote: > On Tue, 2012-05-15 at 21:24 +0800, Alex Shi wrote: > > > Sorry. I don't understand what's the comments Peter's made days ago. I should ask for more details originally. > > > > So, Peter, the correct change should like following, am I right? > > > > -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) > > +#define tlb_flush(tlb, start, end) __flush_tlb_range((tlb)->mm, start, end) > > No.. the correct change is to do range tracking like the other archs > that support flush_tlb_range() do. > > You do not modify the tlb interface. > > Again, see: http://marc.info/?l=linux-arch&m=129952026504268&w=2 Just to be _very_ clear, you do not modify: mm/memory.c | 9 ++-- kernel/fork.c | 2 +- fs/proc/task_mmu.c | 2 +- As it stands your patch breaks compilation on a whole bunch of architectures. If you touch the TLB interface, you get to touch _ALL_ architectures. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 14:57 ` Peter Zijlstra @ 2012-05-15 15:01 ` Alex Shi 2012-05-16 6:46 ` Alex Shi 1 sibling, 0 replies; 42+ messages in thread From: Alex Shi @ 2012-05-15 15:01 UTC (permalink / raw) To: Peter Zijlstra Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/15/2012 10:57 PM, Peter Zijlstra wrote: > On Tue, 2012-05-15 at 16:36 +0200, Peter Zijlstra wrote: >> On Tue, 2012-05-15 at 21:24 +0800, Alex Shi wrote: >> >>> Sorry. I don't understand what's the comments Peter's made days ago. I should ask for more details originally. >>> >>> So, Peter, the correct change should like following, am I right? >>> >>> -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) >>> +#define tlb_flush(tlb, start, end) __flush_tlb_range((tlb)->mm, start, end) >> >> No.. the correct change is to do range tracking like the other archs >> that support flush_tlb_range() do. >> >> You do not modify the tlb interface. >> >> Again, see: http://marc.info/?l=linux-arch&m=129952026504268&w=2 this code is for multiple architecture, but x86 still need implement 'flush tlb range' with 'invlpg'. > > Just to be _very_ clear, you do not modify: > > mm/memory.c | 9 ++-- > kernel/fork.c | 2 +- > fs/proc/task_mmu.c | 2 +- > Thanks a lot. I see. > As it stands your patch breaks compilation on a whole bunch of > architectures. > > If you touch the TLB interface, you get to touch _ALL_ architectures. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-15 14:57 ` Peter Zijlstra 2012-05-15 15:01 ` Alex Shi @ 2012-05-16 6:46 ` Alex Shi 2012-05-16 8:00 ` Peter Zijlstra 1 sibling, 1 reply; 42+ messages in thread From: Alex Shi @ 2012-05-16 6:46 UTC (permalink / raw) To: Peter Zijlstra Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/15/2012 10:57 PM, Peter Zijlstra wrote: > On Tue, 2012-05-15 at 16:36 +0200, Peter Zijlstra wrote: >> On Tue, 2012-05-15 at 21:24 +0800, Alex Shi wrote: >> >>> Sorry. I don't understand what's the comments Peter's made days ago. I should ask for more details originally. >>> >>> So, Peter, the correct change should like following, am I right? >>> >>> -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) >>> +#define tlb_flush(tlb, start, end) __flush_tlb_range((tlb)->mm, start, end) >> >> No.. the correct change is to do range tracking like the other archs >> that support flush_tlb_range() do. >> >> You do not modify the tlb interface. >> >> Again, see: http://marc.info/?l=linux-arch&m=129952026504268&w=2 > > Just to be _very_ clear, you do not modify: > > mm/memory.c | 9 ++-- > kernel/fork.c | 2 +- > fs/proc/task_mmu.c | 2 +- > > As it stands your patch breaks compilation on a whole bunch of > architectures. > > If you touch the TLB interface, you get to touch _ALL_ architectures. Thanks for Nick and Peter's comments. I rewrite the patch according to your opinions. Is this met your expectation? 
---- From a01864af75d8c86668f4fa73d6ca18ebe5835b18 Mon Sep 17 00:00:00 2001 From: Alex Shi <alex.shi@intel.com> Date: Mon, 14 May 2012 09:17:03 +0800 Subject: [PATCH 6/7] x86/tlb: optimizing tlb_finish_mmu on x86 Not every tlb_flush execution moment is really need to evacuate all TLB entries, like in munmap, just few 'invlpg' is better for whole process performance, since it leaves most of TLB entries for later accessing. Thanks for Peter Zijlstra reminder, tlb interfaces in mm/memory.c are for all architectures. So, I keep current interfaces, just reimplement x86 specific 'tlb_flush' only. Some of ideas also are picked up from Peter's old patch, thanks! This patch also rewrite flush_tlb_range for 2 purposes: 1, split it out to get flush_blt_mm_range function. 2, clean up to reduce line breaking, thanks for Borislav's input. Signed-off-by: Alex Shi <alex.shi@intel.com> --- arch/x86/include/asm/tlb.h | 9 +++- arch/x86/include/asm/tlbflush.h | 2 + arch/x86/mm/tlb.c | 120 +++++++++++++++++++++------------------ include/asm-generic/tlb.h | 2 + mm/memory.c | 6 ++ 5 files changed, 82 insertions(+), 57 deletions(-) diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index 829215f..4fef207 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -4,7 +4,14 @@ #define tlb_start_vma(tlb, vma) do { } while (0) #define tlb_end_vma(tlb, vma) do { } while (0) #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0) -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) + +#define tlb_flush(tlb) \ +{ \ + if (tlb->fullmm == 0) \ + flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL); \ + else \ + flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL); \ +} #include <asm-generic/tlb.h> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index c39c94e..0107f3c 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -128,6 +128,8 @@ extern void flush_tlb_mm(struct mm_struct *); extern 
void flush_tlb_page(struct vm_area_struct *, unsigned long); extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); +extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, + unsigned long end, unsigned long vmflag); #define flush_tlb() flush_tlb_current_task() diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 5bf4e85..52f6a5a 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -298,22 +298,6 @@ void flush_tlb_current_task(void) preempt_enable(); } -void flush_tlb_mm(struct mm_struct *mm) -{ - preempt_disable(); - - if (current->active_mm == mm) { - if (current->mm) - local_flush_tlb(); - else - leave_mm(smp_processor_id()); - } - if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) - flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL); - - preempt_enable(); -} - #ifdef CONFIG_TRANSPARENT_HUGEPAGE static inline int has_large_page(struct mm_struct *mm, unsigned long start, unsigned long end) @@ -343,61 +327,85 @@ static inline int has_large_page(struct mm_struct *mm, return 0; } #endif -void flush_tlb_range(struct vm_area_struct *vma, - unsigned long start, unsigned long end) + +void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, + unsigned long end, unsigned long vmflag) { - struct mm_struct *mm; + unsigned long addr; + unsigned act_entries, tlb_entries = 0; - if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB - || tlb_flushall_shift == (u16)TLB_FLUSH_ALL) { -flush_all: - flush_tlb_mm(vma->vm_mm); - return; + preempt_disable(); + if (current->active_mm != mm) + goto flush_all; + + if (!current->mm) { + leave_mm(smp_processor_id()); + goto flush_all; } - preempt_disable(); - mm = vma->vm_mm; - if (current->active_mm == mm) { - if (current->mm) { - unsigned long addr, vmflag = vma->vm_flags; - unsigned act_entries, tlb_entries = 0; + if (end == TLB_FLUSH_ALL || + tlb_flushall_shift == (u16)TLB_FLUSH_ALL) { + local_flush_tlb(); + goto flush_all; + } - if 
(vmflag & VM_EXEC) - tlb_entries = tlb_lli_4k[ENTRIES]; - else - tlb_entries = tlb_lld_4k[ENTRIES]; + if (vmflag & VM_EXEC) + tlb_entries = tlb_lli_4k[ENTRIES]; + else + tlb_entries = tlb_lld_4k[ENTRIES]; + act_entries = mm->total_vm > tlb_entries ? tlb_entries : mm->total_vm; - act_entries = tlb_entries > mm->total_vm ? - mm->total_vm : tlb_entries; + if ((end - start) >> PAGE_SHIFT > act_entries >> tlb_flushall_shift) + local_flush_tlb(); + else { + if (has_large_page(mm, start, end)) { + local_flush_tlb(); + goto flush_all; + } + for (addr = start; addr <= end; addr += PAGE_SIZE) + __flush_tlb_single(addr); - if ((end - start) >> PAGE_SHIFT > - act_entries >> tlb_flushall_shift) - local_flush_tlb(); - else { - if (has_large_page(mm, start, end)) { - preempt_enable(); - goto flush_all; - } - for (addr = start; addr <= end; - addr += PAGE_SIZE) - __flush_tlb_single(addr); + if (cpumask_any_but(mm_cpumask(mm), + smp_processor_id()) < nr_cpu_ids) + flush_tlb_others(mm_cpumask(mm), mm, start, end); + preempt_enable(); + return; + } - if (cpumask_any_but(mm_cpumask(mm), - smp_processor_id()) < nr_cpu_ids) - flush_tlb_others(mm_cpumask(mm), mm, - start, end); - preempt_enable(); - return; - } - } else { +flush_all: + if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) + flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL); + preempt_enable(); +} + +void flush_tlb_mm(struct mm_struct *mm) +{ + preempt_disable(); + + if (current->active_mm == mm) { + if (current->mm) + local_flush_tlb(); + else leave_mm(smp_processor_id()); - } } if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL); + preempt_enable(); } +void flush_tlb_range(struct vm_area_struct *vma, + unsigned long start, unsigned long end) +{ + struct mm_struct *mm = vma->vm_mm; + unsigned long vmflag = vma->vm_flags; + + if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB) + flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL); 
+ else + flush_tlb_mm_range(mm, start, end, vmflag); +} + void flush_tlb_page(struct vm_area_struct *vma, unsigned long start) { diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 75e888b..ed6642a 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -86,6 +86,8 @@ struct mmu_gather { #ifdef CONFIG_HAVE_RCU_TABLE_FREE struct mmu_table_batch *batch; #endif + unsigned long start; + unsigned long end; unsigned int need_flush : 1, /* Did free PTEs */ fast_mode : 1; /* No batching */ diff --git a/mm/memory.c b/mm/memory.c index 6105f47..b176172 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) tlb->mm = mm; tlb->fullmm = fullmm; + tlb->start = -1UL; + tlb->end = 0; tlb->need_flush = 0; tlb->fast_mode = (num_possible_cpus() == 1); tlb->local.next = NULL; @@ -248,6 +250,8 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e { struct mmu_gather_batch *batch, *next; + tlb->start = start; + tlb->end = end; tlb_flush_mmu(tlb); /* keep the page table cache within bounds */ @@ -1204,6 +1208,8 @@ again: */ if (force_flush) { force_flush = 0; + tlb->start = addr; + tlb->end = end; tlb_flush_mmu(tlb); if (addr != end) goto again; -- 1.7.5.4 ^ permalink raw reply related [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 6:46 ` Alex Shi @ 2012-05-16 8:00 ` Peter Zijlstra 2012-05-16 8:04 ` Peter Zijlstra ` (2 more replies) 0 siblings, 3 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-16 8:00 UTC (permalink / raw) To: Alex Shi Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Wed, 2012-05-16 at 14:46 +0800, Alex Shi wrote: > diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h > index 75e888b..ed6642a 100644 > --- a/include/asm-generic/tlb.h > +++ b/include/asm-generic/tlb.h > @@ -86,6 +86,8 @@ struct mmu_gather { > #ifdef CONFIG_HAVE_RCU_TABLE_FREE > struct mmu_table_batch *batch; > #endif > + unsigned long start; > + unsigned long end; > unsigned int need_flush : 1, /* Did free PTEs */ > fast_mode : 1; /* No batching */ > > diff --git a/mm/memory.c b/mm/memory.c > index 6105f47..b176172 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) > tlb->mm = mm; > > tlb->fullmm = fullmm; > + tlb->start = -1UL; > + tlb->end = 0; > tlb->need_flush = 0; > tlb->fast_mode = (num_possible_cpus() == 1); > tlb->local.next = NULL; > @@ -248,6 +250,8 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e > { > struct mmu_gather_batch *batch, *next; > > + tlb->start = start; > + tlb->end = end; > tlb_flush_mmu(tlb); > > /* keep the page table cache within bounds */ > @@ -1204,6 +1208,8 @@ again: > */ > if (force_flush) { > force_flush = 0; > + tlb->start = addr; > + tlb->end = end; > tlb_flush_mmu(tlb); > if (addr != end) > goto again; ARGH.. no. What bit about you don't need to modify the generic code don't you get? 
Both ARM and IA64 (and possible others) already do range tracking, you don't need to modify mm/memory.c _AT_ALL_. Also, if you modify include/asm-generic/tlb.h to include the ranges it would be very nice to make that optional, most archs using it won't use this. Now IF you're going to change the tlb interface like this, you're going to get to do it for all architectures, along with a sane benchmark to show its beneficial to track ranges like this. But as it stands, people are still questioning the validity of your mprotect micro-bench, so no, you don't get to change the tlb interface. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 8:00 ` Peter Zijlstra @ 2012-05-16 8:04 ` Peter Zijlstra 2012-05-16 8:53 ` Alex Shi 2012-05-16 13:34 ` Alex Shi 2012-05-16 13:44 ` Alex Shi 2 siblings, 1 reply; 42+ messages in thread From: Peter Zijlstra @ 2012-05-16 8:04 UTC (permalink / raw) To: Alex Shi Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Wed, 2012-05-16 at 10:00 +0200, Peter Zijlstra wrote: > On Wed, 2012-05-16 at 14:46 +0800, Alex Shi wrote: > > diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h > > index 75e888b..ed6642a 100644 > > --- a/include/asm-generic/tlb.h > > +++ b/include/asm-generic/tlb.h > > @@ -86,6 +86,8 @@ struct mmu_gather { > > #ifdef CONFIG_HAVE_RCU_TABLE_FREE > > struct mmu_table_batch *batch; > > #endif > > + unsigned long start; > > + unsigned long end; > > unsigned int need_flush : 1, /* Did free PTEs */ > > fast_mode : 1; /* No batching */ > > > > diff --git a/mm/memory.c b/mm/memory.c > > index 6105f47..b176172 100644 > > --- a/mm/memory.c > > +++ b/mm/memory.c > > @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) > > tlb->mm = mm; > > > > tlb->fullmm = fullmm; > > + tlb->start = -1UL; > > + tlb->end = 0; > > tlb->need_flush = 0; > > tlb->fast_mode = (num_possible_cpus() == 1); > > tlb->local.next = NULL; Also, you just broke compilation on a bunch of archs.. again. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 8:04 ` Peter Zijlstra @ 2012-05-16 8:53 ` Alex Shi 2012-05-16 8:58 ` Peter Zijlstra 0 siblings, 1 reply; 42+ messages in thread From: Alex Shi @ 2012-05-16 8:53 UTC (permalink / raw) To: Peter Zijlstra Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/16/2012 04:04 PM, Peter Zijlstra wrote: > On Wed, 2012-05-16 at 10:00 +0200, Peter Zijlstra wrote: >> On Wed, 2012-05-16 at 14:46 +0800, Alex Shi wrote: >>> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h >>> index 75e888b..ed6642a 100644 >>> --- a/include/asm-generic/tlb.h >>> +++ b/include/asm-generic/tlb.h >>> @@ -86,6 +86,8 @@ struct mmu_gather { >>> #ifdef CONFIG_HAVE_RCU_TABLE_FREE >>> struct mmu_table_batch *batch; >>> #endif >>> + unsigned long start; >>> + unsigned long end; >>> unsigned int need_flush : 1, /* Did free PTEs */ >>> fast_mode : 1; /* No batching */ >>> >>> diff --git a/mm/memory.c b/mm/memory.c >>> index 6105f47..b176172 100644 >>> --- a/mm/memory.c >>> +++ b/mm/memory.c >>> @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) >>> tlb->mm = mm; >>> >>> tlb->fullmm = fullmm; >>> + tlb->start = -1UL; >>> + tlb->end = 0; >>> tlb->need_flush = 0; >>> tlb->fast_mode = (num_possible_cpus() == 1); >>> tlb->local.next = NULL; > > Also, you just broke compilation on a bunch of archs.. again. Sorry. Do you mean not every archs use 'include/asm-generic/tlb.h', so the assignment of tlb->start in tlb_gather_mmu make trouble? ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 8:53 ` Alex Shi @ 2012-05-16 8:58 ` Peter Zijlstra 2012-05-16 8:58 ` Peter Zijlstra 2012-05-16 10:58 ` Alex Shi 0 siblings, 2 replies; 42+ messages in thread From: Peter Zijlstra @ 2012-05-16 8:58 UTC (permalink / raw) To: Alex Shi Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Wed, 2012-05-16 at 16:53 +0800, Alex Shi wrote: > > Sorry. Do you mean not every archs use 'include/asm-generic/tlb.h', so > the assignment of tlb->start in tlb_gather_mmu make trouble? > Yes exactly. I know you work for Intel, but surely its not forbidden by contract to look outside of arch/x86/ ? I know!, look at arch/ia64/ that's still Intel. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 8:58 ` Peter Zijlstra 2012-05-16 8:58 ` Peter Zijlstra @ 2012-05-16 10:58 ` Alex Shi 2012-05-16 11:04 ` Peter Zijlstra 1 sibling, 1 reply; 42+ messages in thread From: Alex Shi @ 2012-05-16 10:58 UTC (permalink / raw) To: Peter Zijlstra Cc: Alex Shi, Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Wed, May 16, 2012 at 4:58 PM, Peter Zijlstra <peterz@infradead.org> wrote: > On Wed, 2012-05-16 at 16:53 +0800, Alex Shi wrote: >> >> Sorry. Do you mean not every archs use 'include/asm-generic/tlb.h', so >> the assignment of tlb->start in tlb_gather_mmu make trouble? >> > Yes exactly. I know you work for Intel, but surely its not forbidden by > contract to look outside of arch/x86/ ? I know!, look at arch/ia64/ > that's still Intel. :) It is my fault. 'make cscope' just help me focus insight into x86. But frankly speaking, it looks mess that every arch implement similar fields of mmu_gather. If someone can unify the common fields of mmu_gather, and left private field for specific arch. it will be great and much helpful. > -- > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > Please read the FAQ at http://www.tux.org/lkml/ ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 10:58 ` Alex Shi @ 2012-05-16 11:04 ` Peter Zijlstra 2012-05-16 12:57 ` Alex Shi 0 siblings, 1 reply; 42+ messages in thread From: Peter Zijlstra @ 2012-05-16 11:04 UTC (permalink / raw) To: Alex Shi Cc: Alex Shi, Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On Wed, 2012-05-16 at 18:58 +0800, Alex Shi wrote: > If someone can unify the common fields of mmu_gather, and > left private field for specific arch. it will be great and much > helpful. I've send you a link to a patch-set that does exactly that twice now. http://marc.info/?l=linux-mm&m=129952019004146&w=2 There, 3rd time, now go read all 15 patches ;-) Getting that merged is still on the todo list, I just got preempted by other stuff :/ ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 11:04 ` Peter Zijlstra @ 2012-05-16 12:57 ` Alex Shi 0 siblings, 0 replies; 42+ messages in thread From: Alex Shi @ 2012-05-16 12:57 UTC (permalink / raw) To: Peter Zijlstra Cc: Alex Shi, Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/16/2012 07:04 PM, Peter Zijlstra wrote: > On Wed, 2012-05-16 at 18:58 +0800, Alex Shi wrote: >> If someone can unify the common fields of mmu_gather, and >> left private field for specific arch. it will be great and much >> helpful. > > I've send you a link to a patch-set that does exactly that twice now. > > http://marc.info/?l=linux-mm&m=129952019004146&w=2 Thanks for resend. Actually, I had read many times above patch, but it's still hard to catch some lines, maybe due to it can not apply on current mm/memory.c that is due to your bit newer code 9547d01b on 2011-05-24. or maybe I am too stupid. :) ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm 2012-05-16 8:00 ` Peter Zijlstra 2012-05-16 8:04 ` Peter Zijlstra @ 2012-05-16 13:34 ` Alex Shi 2012-05-16 21:09 ` Peter Zijlstra 2012-05-16 13:44 ` Alex Shi 2 siblings, 1 reply; 42+ messages in thread From: Alex Shi @ 2012-05-16 13:34 UTC (permalink / raw) To: Peter Zijlstra Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu, borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg, hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe, jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren, linux-arch On 05/16/2012 04:00 PM, Peter Zijlstra wrote: > On Wed, 2012-05-16 at 14:46 +0800, Alex Shi wrote: >> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h >> index 75e888b..ed6642a 100644 >> --- a/include/asm-generic/tlb.h >> +++ b/include/asm-generic/tlb.h >> @@ -86,6 +86,8 @@ struct mmu_gather { >> #ifdef CONFIG_HAVE_RCU_TABLE_FREE >> struct mmu_table_batch *batch; >> #endif >> + unsigned long start; >> + unsigned long end; >> unsigned int need_flush : 1, /* Did free PTEs */ >> fast_mode : 1; /* No batching */ >> >> diff --git a/mm/memory.c b/mm/memory.c >> index 6105f47..b176172 100644 >> --- a/mm/memory.c >> +++ b/mm/memory.c >> @@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) >> tlb->mm = mm; >> >> tlb->fullmm = fullmm; >> + tlb->start = -1UL; >> + tlb->end = 0; >> tlb->need_flush = 0; >> tlb->fast_mode = (num_possible_cpus() == 1); >> tlb->local.next = NULL; >> @@ -248,6 +250,8 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e >> { >> struct mmu_gather_batch *batch, *next; >> >> + tlb->start = start; >> + tlb->end = end; >> tlb_flush_mmu(tlb); >> >> /* keep the page table cache within bounds */ >> @@ -1204,6 +1208,8 @@ again: >> */ >> if (force_flush) { >> force_flush = 0; >> + tlb->start = addr; >> + tlb->end = end; >> 
tlb_flush_mmu(tlb); >> if (addr != end) >> goto again; > > > ARGH.. no. What bit about you don't need to modify the generic code > don't you get? > > Both ARM and IA64 (and possible others) already do range tracking, you > don't need to modify mm/memory.c _AT_ALL_. Thanks for time and time remdiner. (shame for me) In my code checking, the other archs can use self mmu_gather struct since they code are excluded by HAVE_GENERIC_MMU_GATHER. In another word if the code protected by HAVE_GENERIC_MMU_GATHER, it is safe for others That is why tlb_flush_mmu/tlb_finish_mmu enabled both in mm/memory.c and other archs. So, if the minimum change of tlb->start/end can be protected by HAVE_GENERIC_MMU_GATHER, it is safe and harmless, am I right? If so, the following patch should work on any condition. --- From ca29d791c3524887c1776136e9274d10d2114624 Mon Sep 17 00:00:00 2001 From: Alex Shi <alex.shi@intel.com> Date: Mon, 14 May 2012 09:17:03 +0800 Subject: [PATCH 6/7] x86/tlb: optimizing tlb_finish_mmu on x86 Not every tlb_flush execution moment is really need to evacuate all TLB entries, like in munmap, just few 'invlpg' is better for whole process performance, since it leaves most of TLB entries for later accessing. Since all of tlb interfaces in mm/memory.c is reused by all architecture CPU, except few of them which protected under HAVE_GENERIC_MMU_GATHER, I keeps global interfaces, just re-implement x86 specific 'tlb_flush' only. and put the minimum change under HAVE_GENERIC_MMU_GATHER too. This patch also rewrite flush_tlb_range for 2 purposes: 1, split it out to get flush_blt_mm_range function. 2, clean up to reduce line breaking, thanks for Borislav's input. Thanks for Peter Zijlstra time and time reminder for multiple architecture code safe! 
Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 arch/x86/include/asm/tlb.h      |    9 +++-
 arch/x86/include/asm/tlbflush.h |    2 +
 arch/x86/mm/tlb.c               |  120 +++++++++++++++++++++------------------
 include/asm-generic/tlb.h       |    2 +
 mm/memory.c                     |    9 +++
 5 files changed, 85 insertions(+), 57 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 829215f..4fef207 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -4,7 +4,14 @@
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
+
+#define tlb_flush(tlb)							\
+{									\
+	if (tlb->fullmm == 0)						\
+		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL);	\
+	else								\
+		flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL);	\
+}
 
 #include <asm-generic/tlb.h>
 
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index c39c94e..0107f3c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -128,6 +128,8 @@ extern void flush_tlb_mm(struct mm_struct *);
 extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
 extern void flush_tlb_range(struct vm_area_struct *vma,
 			   unsigned long start, unsigned long end);
+extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+				unsigned long end, unsigned long vmflag);
 
 #define flush_tlb()	flush_tlb_current_task()
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5bf4e85..52f6a5a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -298,22 +298,6 @@ void flush_tlb_current_task(void)
 	preempt_enable();
 }
 
-void flush_tlb_mm(struct mm_struct *mm)
-{
-	preempt_disable();
-
-	if (current->active_mm == mm) {
-		if (current->mm)
-			local_flush_tlb();
-		else
-			leave_mm(smp_processor_id());
-	}
-	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
-
-	preempt_enable();
-}
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int has_large_page(struct mm_struct *mm,
 				 unsigned long start, unsigned long end)
@@ -343,61 +327,85 @@ static inline int has_large_page(struct mm_struct *mm,
 	return 0;
 }
 #endif
-void flush_tlb_range(struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
+
+void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+				unsigned long end, unsigned long vmflag)
 {
-	struct mm_struct *mm;
+	unsigned long addr;
+	unsigned act_entries, tlb_entries = 0;
 
-	if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB
-			|| tlb_flushall_shift == (u16)TLB_FLUSH_ALL) {
-flush_all:
-		flush_tlb_mm(vma->vm_mm);
-		return;
+	preempt_disable();
+	if (current->active_mm != mm)
+		goto flush_all;
+
+	if (!current->mm) {
+		leave_mm(smp_processor_id());
+		goto flush_all;
 	}
 
-	preempt_disable();
-	mm = vma->vm_mm;
-	if (current->active_mm == mm) {
-		if (current->mm) {
-			unsigned long addr, vmflag = vma->vm_flags;
-			unsigned act_entries, tlb_entries = 0;
+	if (end == TLB_FLUSH_ALL ||
+		tlb_flushall_shift == (u16)TLB_FLUSH_ALL) {
+		local_flush_tlb();
+		goto flush_all;
+	}
 
-			if (vmflag & VM_EXEC)
-				tlb_entries = tlb_lli_4k[ENTRIES];
-			else
-				tlb_entries = tlb_lld_4k[ENTRIES];
+	if (vmflag & VM_EXEC)
+		tlb_entries = tlb_lli_4k[ENTRIES];
+	else
+		tlb_entries = tlb_lld_4k[ENTRIES];
+	act_entries = mm->total_vm > tlb_entries ? tlb_entries : mm->total_vm;
 
-			act_entries = tlb_entries > mm->total_vm ?
-					mm->total_vm : tlb_entries;
+	if ((end - start) >> PAGE_SHIFT > act_entries >> tlb_flushall_shift)
+		local_flush_tlb();
+	else {
+		if (has_large_page(mm, start, end)) {
+			local_flush_tlb();
+			goto flush_all;
+		}
+		for (addr = start; addr <= end; addr += PAGE_SIZE)
+			__flush_tlb_single(addr);
 
-			if ((end - start) >> PAGE_SHIFT >
-					act_entries >> tlb_flushall_shift)
-				local_flush_tlb();
-			else {
-				if (has_large_page(mm, start, end)) {
-					preempt_enable();
-					goto flush_all;
-				}
-				for (addr = start; addr <= end;
-						addr += PAGE_SIZE)
-					__flush_tlb_single(addr);
+		if (cpumask_any_but(mm_cpumask(mm),
+				smp_processor_id()) < nr_cpu_ids)
+			flush_tlb_others(mm_cpumask(mm), mm, start, end);
+		preempt_enable();
+		return;
+	}
 
-				if (cpumask_any_but(mm_cpumask(mm),
-						smp_processor_id()) < nr_cpu_ids)
-					flush_tlb_others(mm_cpumask(mm), mm,
-						start, end);
-				preempt_enable();
-				return;
-			}
-		} else {
+flush_all:
+	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
+		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
+	preempt_enable();
+}
+
+void flush_tlb_mm(struct mm_struct *mm)
+{
+	preempt_disable();
+
+	if (current->active_mm == mm) {
+		if (current->mm)
+			local_flush_tlb();
+		else
 			leave_mm(smp_processor_id());
-		}
 	}
 	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
 		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
+
 	preempt_enable();
 }
 
+void flush_tlb_range(struct vm_area_struct *vma,
+				   unsigned long start, unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long vmflag = vma->vm_flags;
+
+	if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB)
+		flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL);
+	else
+		flush_tlb_mm_range(mm, start, end, vmflag);
+}
+
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
 {
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 75e888b..ed6642a 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -86,6 +86,8 @@ struct mmu_gather {
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
+	unsigned long		start;
+	unsigned long		end;
 	unsigned int		need_flush : 1,	/* Did free PTEs */
 				fast_mode  : 1; /* No batching   */
 
diff --git a/mm/memory.c b/mm/memory.c
index 6105f47..a1078af 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -206,6 +206,8 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm)
 	tlb->mm = mm;
 
 	tlb->fullmm     = fullmm;
+	tlb->start	= -1UL;
+	tlb->end	= 0;
 	tlb->need_flush = 0;
 	tlb->fast_mode  = (num_possible_cpus() == 1);
 	tlb->local.next = NULL;
@@ -248,6 +250,8 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e
 {
 	struct mmu_gather_batch *batch, *next;
 
+	tlb->start = start;
+	tlb->end = end;
 	tlb_flush_mmu(tlb);
 
 	/* keep the page table cache within bounds */
@@ -1204,6 +1208,11 @@ again:
 	 */
 	if (force_flush) {
 		force_flush = 0;
+
+#ifdef HAVE_GENERIC_MMU_GATHER
+		tlb->start = addr;
+		tlb->end = end;
+#endif
 		tlb_flush_mmu(tlb);
 		if (addr != end)
 			goto again;
-- 
1.7.5.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
  2012-05-16 13:34                 ` Alex Shi
@ 2012-05-16 21:09                   ` Peter Zijlstra
  2012-05-17  0:43                     ` Alex Shi
  2012-05-17  2:14                     ` Paul Mundt
  0 siblings, 2 replies; 42+ messages in thread
From: Peter Zijlstra @ 2012-05-16 21:09 UTC (permalink / raw)
  To: Alex Shi
  Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy,
	riel, luto, avi, len.brown, dhowells, fenghua.yu,
	borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg,
	hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe,
	jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren,
	linux-arch

On Wed, 2012-05-16 at 21:34 +0800, Alex Shi wrote:
> 
> So, if the minimum change of tlb->start/end can be protected by
> HAVE_GENERIC_MMU_GATHER, it is safe and harmless, am I right?
> 
Safe, yes, but not entirely harmless. A quick look suggests you fail for
VM_HUGETLB: if your mmu_gather spans a vma with VM_HUGETLB, you'll do a
regular range flush, not a full mm flush like the other paths do.

Anyway, I did a quick refresh of my series on a recent -tip tree:

  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/mmu.git tlb-unify

With that, all you need is to "select HAVE_MMU_GATHER_RANGE" for x86 and
implement a useful flush_tlb_range().

In particular, see:

  http://git.kernel.org/?p=linux/kernel/git/peterz/mmu.git;a=commitdiff;h=05e53144177e6242fda404045f50f48114bcf185;hp=2cd7dc710652127522392f4b7ecb5fa6e954941e

I've slightly changed the code to address an open issue with the
vm_flags tracking. We now force-flush the mmu_gather whenever VM_HUGETLB
flips, because most (all?) archs that look at that flag expect pure huge
pages and not a mixture.

I seem to have misplaced my cross-compiler set, so I've only compiled
x86-64 for now.

^ permalink raw reply	[flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
  2012-05-16 21:09                   ` Peter Zijlstra
@ 2012-05-17  0:43                     ` Alex Shi
  2012-05-17  2:07                       ` Steven Rostedt
  2012-05-17  2:14                     ` Paul Mundt
  1 sibling, 1 reply; 42+ messages in thread
From: Alex Shi @ 2012-05-17  0:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy,
	riel, luto, avi, len.brown, dhowells, fenghua.yu,
	borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg,
	hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe,
	jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren,
	linux-arch

On 05/17/2012 05:09 AM, Peter Zijlstra wrote:
> On Wed, 2012-05-16 at 21:34 +0800, Alex Shi wrote:
>> 
>> So, if the minimum change of tlb->start/end can be protected by
>> HAVE_GENERIC_MMU_GATHER, it is safe and harmless, am I right?
>> 
> safe yes, but not entirely harmless. A quick look seems to suggest you
> fail for VM_HUGETLB. If your mmu_gather spans a vma with VM_HUGETLB
> you'll do a regular range flush not a full mm flush like the other paths
> do.

Thanks! Hugetlb vmas are currently caught by has_large_page() only when
THP is enabled. I will drop that THP dependency, and then hugetlb will be
handled properly as well. Since the maximum number of TLB entries is 512,
has_large_page() only needs to run once, whenever the start address is
aligned at HPAGE_SIZE.

IMHO, this patch enables generic mmu range-flush support in just ten
lines, and the low-level x86 code supports it well. I like it. :)

> 
> Anyway, I did a quick refresh of my series on a recent -tip tree:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/mmu.git tlb-unify
> 
> With that all you need is to "select HAVE_MMU_GATHER_RANGE" for x86 and
> implement a useful flush_tlb_range().
> 
> In particular, see:
> http://git.kernel.org/?p=linux/kernel/git/peterz/mmu.git;a=commitdiff;h=05e53144177e6242fda404045f50f48114bcf185;hp=2cd7dc710652127522392f4b7ecb5fa6e954941e
> 
> I've slightly changed the code to address an open issue with the
> vm_flags tracking. We now force flush the mmu_gather whenever VM_HUGETLB
> flips because most (all?) archs that look at that flag expect pure huge
> pages and not a mixture.
> 
> I seem to have misplaced my cross-compiler set, so I've only compiled
> x86-64 for now.

Oh, I also need a cross-compiler for the other archs. Thanks for the
reminder!

^ permalink raw reply	[flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
  2012-05-17  0:43                     ` Alex Shi
@ 2012-05-17  2:07                       ` Steven Rostedt
  2012-05-17  2:07                         ` Steven Rostedt
  2012-05-17  8:04                         ` Alex Shi
  0 siblings, 2 replies; 42+ messages in thread
From: Steven Rostedt @ 2012-05-17  2:07 UTC (permalink / raw)
  To: Alex Shi
  Cc: Peter Zijlstra, Nick Piggin, tglx, mingo, hpa, arnd, fweisbec,
	jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu,
	borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg,
	hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe,
	jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren,
	linux-arch

[-- Attachment #1: Type: text/plain, Size: 606 bytes --]

On Thu, 2012-05-17 at 08:43 +0800, Alex Shi wrote:
> > I seem to have misplaced my cross-compiler set, so I've only compiled
> > x86-64 for now.
> 
> Oh, I also need a cross-compiler for the other archs. Thanks for the
> reminder!

Here:

  http://kernel.org/pub/tools/crosstool/

Oh, and if you want to automate this, I attached a ktest.pl config that
does it for you. I'll be pushing this config and others into an examples
directory come the next merge window.

Ktest is located in the Linux tree under tools/testing/ktest/

You can run a bunch of cross compiles by doing:

  ktest.pl crosstests.conf

-- Steve

[-- Attachment #2: crosstests.conf --]
[-- Type: text/plain, Size: 6921 bytes --]

#
# Example config for cross compiling
#
# In this config, it is expected that the tool chains from:
#
#   http://kernel.org/pub/tools/crosstool/files/bin/x86_64/
#
# running on a x86_64 system have been downloaded and installed into:
#
#   /usr/local/
#
# such that the compiler binaries are something like:
#
#   /usr/local/gcc-4.5.2-nolibc/mips-linux/bin/mips-linux-gcc
#
# Some of the archs will use gcc-4.5.1 instead of gcc-4.5.2;
# this config uses variables to differentiate them.
#
# Comments will be described for some areas, but the options are
# all documented in the samples.conf file.

# ${PWD} is defined by ktest.pl to be the directory that the user
# was in when they executed ktest.pl. It may be better to hardcode the
# path name here. THIS_DIR is the variable used throughout the config file
# in case you want to change it.
THIS_DIR := ${PWD}

# Update the BUILD_DIR option to the location of the git repo you want to test.
BUILD_DIR = ${THIS_DIR}/linux.git

# The build will go into this directory. It will be created when you run the test.
OUTPUT_DIR = ${THIS_DIR}/cross-compile

# The build will be compiled with -j8
BUILD_OPTIONS = -j8

# The test will not stop when it hits a failure.
DIE_ON_FAILURE = 0

# If you want to have ktest.pl store the failure somewhere, uncomment this option
# and change the directory where ktest should store the failures.
#STORE_FAILURES = ${THIS_DIR}/failures

# The log file is stored in the OUTPUT_DIR, called cross.log.
# If you enable this, you need to create the OUTPUT_DIR. It won't be created for you.
#LOG_FILE = ${OUTPUT_DIR}/cross.log

# The log file will be cleared each time you run ktest.
CLEAR_LOG = 1

# As some archs do not build with the defconfig, they have been marked
# to be ignored. If you want to test them anyway, change DO_FAILED to one.
# If a test that has been marked as DO_FAILED passes, then you should change
# that test to be DO_DEFAULT.
DO_FAILED := 0
DO_DEFAULT := 1

# By setting both DO_FAILED and DO_DEFAULT to zero, you can pick a single
# arch that you want to test. (uncomment RUN and choose your arch)
#RUN := m32r

# At the bottom of the config file exists a bisect test. You can update that
# test and set DO_FAILED and DO_DEFAULT to zero, and uncomment this variable
# to run the bisect on the arch.
#RUN := bisect

# By default all tests will be running gcc 4.5.2. Some tests are using 4.5.1
# and they select that in the test.
# Note: GCC_VER is declared as an option and not a variable ('=' instead of ':=').
# This is important. A variable is used only in the config file, and if it is set
# it stays that way for the rest of the config file until it is changed again.
# Here we want GCC_VER to remain persistent throughout the tests, as it is used in
# the MAKE_CMD. By using '=' instead of ':=' we achieve our goal.
GCC_VER = 4.5.2
MAKE_CMD = PATH=/usr/local/gcc-${GCC_VER}-nolibc/${CROSS}/bin:$PATH CROSS_COMPILE=${CROSS}- make ARCH=${ARCH}

# all tests are only doing builds.
TEST_TYPE = build

# If you want to add configs on top of the defconfig, you can add those configs into
# the add-config file and uncomment this option. This is useful if you want to test
# all cross compiles with PREEMPT set, or TRACING on, etc.
#ADD_CONFIG = ${THIS_DIR}/add-config

# All tests are using defconfig
BUILD_TYPE = defconfig

# The test names will have the arch and cross compiler used. This will be shown in
# the results.
TEST_NAME = ${ARCH} ${CROSS}

# alpha
TEST_START IF ${RUN} == alpha || ${DO_DEFAULT}
CROSS = alpha-linux
ARCH = alpha

# arm
TEST_START IF ${RUN} == arm || ${DO_DEFAULT}
CROSS = arm-unknown-linux-gnueabi
ARCH = arm

# blackfin
TEST_START IF ${RUN} == bfin || ${DO_DEFAULT}
CROSS = bfin-uclinux
ARCH = blackfin
BUILD_OPTIONS = -j8 vmlinux

# cris - FAILS?
TEST_START IF ${RUN} == cris || ${RUN} == cris64 || ${DO_FAILED}
CROSS = cris-linux
ARCH = cris

# cris32 - not right arch?
TEST_START IF ${RUN} == cris || ${RUN} == cris32 || ${DO_FAILED}
CROSS = crisv32-linux
ARCH = cris

# ia64
TEST_START IF ${RUN} == ia64 || ${DO_DEFAULT}
CROSS = ia64-linux
ARCH = ia64

# frv
TEST_START IF ${RUN} == frv || ${DO_FAILED}
CROSS = frv-linux
ARCH = frv
GCC_VER = 4.5.1

# h8300 - failed make defconfig??
TEST_START IF ${RUN} == h8300 || ${DO_FAILED}
CROSS = h8300-elf
ARCH = h8300
GCC_VER = 4.5.1

# m68k fails with error?
TEST_START IF ${RUN} == m68k || ${DO_DEFAULT}
CROSS = m68k-linux
ARCH = m68k

# mips64
TEST_START IF ${RUN} == mips || ${RUN} == mips64 || ${DO_DEFAULT}
CROSS = mips64-linux
ARCH = mips

# mips32
TEST_START IF ${RUN} == mips || ${RUN} == mips32 || ${DO_DEFAULT}
CROSS = mips-linux
ARCH = mips

# m32r
TEST_START IF ${RUN} == m32r || ${DO_FAILED}
CROSS = m32r-linux
ARCH = m32r
GCC_VER = 4.5.1
BUILD_OPTIONS = -j8 vmlinux

# parisc64 failed?
TEST_START IF ${RUN} == hppa || ${RUN} == hppa64 || ${DO_FAILED}
CROSS = hppa64-linux
ARCH = parisc

# parisc
TEST_START IF ${RUN} == hppa || ${RUN} == hppa32 || ${DO_FAILED}
CROSS = hppa-linux
ARCH = parisc

# ppc
TEST_START IF ${RUN} == ppc || ${RUN} == ppc32 || ${DO_DEFAULT}
CROSS = powerpc-linux
ARCH = powerpc

# ppc64
TEST_START IF ${RUN} == ppc || ${RUN} == ppc64 || ${DO_DEFAULT}
CROSS = powerpc64-linux
ARCH = powerpc

# s390
TEST_START IF ${RUN} == s390 || ${DO_DEFAULT}
CROSS = s390x-linux
ARCH = s390

# sh
TEST_START IF ${RUN} == sh || ${DO_DEFAULT}
CROSS = sh4-linux
ARCH = sh

# sparc64
TEST_START IF ${RUN} == sparc || ${RUN} == sparc64 || ${DO_DEFAULT}
CROSS = sparc64-linux
ARCH = sparc64

# sparc
TEST_START IF ${RUN} == sparc || ${RUN} == sparc32 || ${DO_DEFAULT}
CROSS = sparc-linux
ARCH = sparc

# xtensa failed
TEST_START IF ${RUN} == xtensa || ${DO_FAILED}
CROSS = xtensa-linux
ARCH = xtensa

# UML
TEST_START IF ${RUN} == uml || ${DO_DEFAULT}
MAKE_CMD = make ARCH=um SUBARCH=x86_64
ARCH = uml
CROSS =

TEST_START IF ${RUN} == x86 || ${RUN} == i386 || ${DO_DEFAULT}
MAKE_CMD = make ARCH=i386
ARCH = i386
CROSS =

TEST_START IF ${RUN} == x86 || ${RUN} == x86_64 || ${DO_DEFAULT}
MAKE_CMD = make ARCH=x86_64
ARCH = x86_64
CROSS =

#################################
# This is a bisect if needed. You need to give it a MIN_CONFIG that
# will be the config file it uses. Basically, just copy the created defconfig
# for the arch someplace and point MIN_CONFIG to it.
TEST_START IF ${RUN} == bisect
MIN_CONFIG = ${THIS_DIR}/min-config
CROSS = s390x-linux
ARCH = s390
TEST_TYPE = bisect
BISECT_TYPE = build
BISECT_GOOD = v3.1
BISECT_BAD = v3.2
CHECKOUT = v3.2

#################################
# These defaults are needed to keep ktest.pl from complaining. They are
# ignored because the test does not go past the build. No install or
# booting of the target images.
DEFAULTS
MACHINE = crosstest
SSH_USER = root
BUILD_TARGET = cross
TARGET_IMAGE = image
POWER_CYCLE = cycle
CONSOLE = console
LOCALVERSION = version
GRUB_MENU = grub

REBOOT_ON_ERROR = 0
POWEROFF_ON_ERROR = 0
POWEROFF_ON_SUCCESS = 0
REBOOT_ON_SUCCESS = 0

^ permalink raw reply	[flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
  2012-05-17  2:07                       ` Steven Rostedt
  2012-05-17  2:07                         ` Steven Rostedt
@ 2012-05-17  8:04                         ` Alex Shi
  1 sibling, 0 replies; 42+ messages in thread
From: Alex Shi @ 2012-05-17  8:04 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Nick Piggin, tglx, mingo, hpa, arnd, fweisbec,
	jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu,
	borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg,
	hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe,
	jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren,
	linux-arch

On 05/17/2012 10:07 AM, Steven Rostedt wrote:
> On Thu, 2012-05-17 at 08:43 +0800, Alex Shi wrote:
> 
>>> I seem to have misplaced my cross-compiler set, so I've only compiled
>>> x86-64 for now.
>>
>> Oh, I also need a cross-compiler for the other archs. Thanks for the
>> reminder!
> 
> Here:
> 
>   http://kernel.org/pub/tools/crosstool/
> 
> Oh, and if you want to automate this, I attached a ktest.pl config that
> does it for you. I'll be pushing this config and others into an examples
> directory come the next merge window.
> 
> Ktest is located in the Linux tree under tools/testing/ktest/
> 
> You can run a bunch of cross compiles by doing:
> 
>   ktest.pl crosstests.conf

It works fine. :)

But does ktest only do one kind of config testing? How does it do
randconfig testing?

> 
> -- Steve

^ permalink raw reply	[flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
  2012-05-16 21:09                   ` Peter Zijlstra
  2012-05-17  0:43                     ` Alex Shi
@ 2012-05-17  2:14                     ` Paul Mundt
  1 sibling, 0 replies; 42+ messages in thread
From: Paul Mundt @ 2012-05-17  2:14 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Alex Shi, Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec,
	jeremy, riel, luto, avi, len.brown, dhowells, fenghua.yu,
	borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg,
	hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe,
	jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren,
	linux-arch

On Wed, May 16, 2012 at 11:09:29PM +0200, Peter Zijlstra wrote:
> On Wed, 2012-05-16 at 21:34 +0800, Alex Shi wrote:
> > 
> > So, if the minimum change of tlb->start/end can be protected by
> > HAVE_GENERIC_MMU_GATHER, it is safe and harmless, am I right?
> > 
> safe yes, but not entirely harmless. A quick look seems to suggest you
> fail for VM_HUGETLB. If your mmu_gather spans a vma with VM_HUGETLB
> you'll do a regular range flush not a full mm flush like the other paths
> do.
> 
> Anyway, I did a quick refresh of my series on a recent -tip tree:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/peterz/mmu.git tlb-unify
> 
> With that all you need is to "select HAVE_MMU_GATHER_RANGE" for x86 and
> implement a useful flush_tlb_range().
> 
> In particular, see:
> http://git.kernel.org/?p=linux/kernel/git/peterz/mmu.git;a=commitdiff;h=05e53144177e6242fda404045f50f48114bcf185;hp=2cd7dc710652127522392f4b7ecb5fa6e954941e
> 
> I've slightly changed the code to address an open issue with the
> vm_flags tracking. We now force flush the mmu_gather whenever VM_HUGETLB
> flips because most (all?) archs that look at that flag expect pure huge
> pages and not a mixture.
> 
> I've only compiled x86-64 for now.

It was on my list to test when you sent out the series initially, but it
seems to have slipped my mind until I saw this thread.

Here's a patch on top of your tlb-unify branch that gets sh working
(tested on all of 2-level, 3-level, and nommu).

I opted to shove the asm/cacheflush.h include into tlb.h directly, since
it calls flush_cache_range() openly now, and the rest of the
architectures are just getting at it through various whimsical means. sh
was getting it through pagemap.h -> highmem.h, while ARM presently can't
seem to make up its mind and includes pagemap.h for nommu only, as well
as cacheflush.h explicitly.

With the reworked interface, we don't seem to actually need to stub out
the interface for the nommu case anymore anyway; all of the users are
insular to mm/memory.c, which we don't build for nommu.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

---

diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
index 8c00785..bedc2ed 100644
--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -13,6 +13,8 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
 extern void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd);
 extern pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address);
 extern void pmd_free(struct mm_struct *mm, pmd_t *pmd);
+
+#define __pmd_free_tlb(tlb, pmdp, addr)		pmd_free((tlb)->mm, pmdp)
 #endif
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
diff --git a/arch/sh/include/asm/tlb.h b/arch/sh/include/asm/tlb.h
index 45e5925..71af915 100644
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -6,18 +6,7 @@
 #endif
 
 #ifndef __ASSEMBLY__
-#include <linux/pagemap.h>
-
 #ifdef CONFIG_MMU
-#include <linux/swap.h>
-
-#define __tlb_remove_tlb_entry(tlb, ptep, addr)	do { } while (0)
-
-#define __pte_free_tlb(tlb, ptep, addr)	pte_free((tlb)->mm, ptep)
-#define __pmd_free_tlb(tlb, pmdp, addr)	pmd_free((tlb)->mm, pmdp)
-#define __pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#include <asm-generic/tlb.h>
 
 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SUPERH64)
 extern void tlb_wire_entry(struct vm_area_struct *, unsigned long, pte_t);
@@ -35,8 +24,6 @@ static inline void tlb_unwire_entry(void)
 }
 #endif
 
-#else /* CONFIG_MMU */
-
 #define __tlb_remove_tlb_entry(tlb, pte, address)	do { } while (0)
 
 #include <asm-generic/tlb.h>
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 90a725c..571e2cf 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -18,6 +18,7 @@
 #include <linux/swap.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
 
 static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page);

^ permalink raw reply related	[flat|nested] 42+ messages in thread
* Re: [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm
  2012-05-16  8:00             ` Peter Zijlstra
  2012-05-16  8:04               ` Peter Zijlstra
  2012-05-16 13:34               ` Alex Shi
@ 2012-05-16 13:44               ` Alex Shi
  2012-05-16 13:44                 ` Alex Shi
  2 siblings, 1 reply; 42+ messages in thread
From: Alex Shi @ 2012-05-16 13:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Nick Piggin, tglx, mingo, hpa, arnd, rostedt, fweisbec, jeremy,
	riel, luto, avi, len.brown, dhowells, fenghua.yu,
	borislav.petkov, yinghai, ak, cpw, steiner, akpm, penberg,
	hughd, rientjes, kosaki.motohiro, n-horiguchi, tj, oleg, axboe,
	jmorris, kamezawa.hiroyu, viro, linux-kernel, yongjie.ren,
	linux-arch

> Now IF you're going to change the tlb interface like this, you're going
> to get to do it for all architectures, along with a sane benchmark to
> show its beneficial to track ranges like this.
> 
> But as it stands, people are still questioning the validity of your
> mprotect micro-bench, so no, you don't get to change the tlb interface.

Yes, sure. You are definitely right!

^ permalink raw reply	[flat|nested] 42+ messages in thread
end of thread, other threads:[~2012-05-17 8:06 UTC | newest]
Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <1337072138-8323-1-git-send-email-alex.shi@intel.com>
[not found] ` <1337072138-8323-7-git-send-email-alex.shi@intel.com>
2012-05-15 9:15 ` [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm Nick Piggin
2012-05-15 9:17 ` Nick Piggin
2012-05-15 12:58 ` Luming Yu
2012-05-15 13:06 ` Peter Zijlstra
2012-05-15 13:27 ` Luming Yu
2012-05-15 13:28 ` Alex Shi
2012-05-15 13:28 ` Alex Shi
2012-05-15 13:33 ` Alex Shi
2012-05-15 13:39 ` Steven Rostedt
2012-05-15 14:04 ` Borislav Petkov
2012-05-15 13:08 ` Luming Yu
2012-05-15 13:08 ` Luming Yu
2012-05-15 14:07 ` Alex Shi
2012-05-15 9:18 ` Peter Zijlstra
2012-05-15 9:52 ` Nick Piggin
2012-05-15 10:00 ` Peter Zijlstra
2012-05-15 10:06 ` Nick Piggin
2012-05-15 10:13 ` Peter Zijlstra
2012-05-15 14:04 ` Alex Shi
2012-05-15 13:24 ` Alex Shi
2012-05-15 14:36 ` Peter Zijlstra
2012-05-15 14:36 ` Peter Zijlstra
2012-05-15 14:57 ` Peter Zijlstra
2012-05-15 15:01 ` Alex Shi
2012-05-16 6:46 ` Alex Shi
2012-05-16 8:00 ` Peter Zijlstra
2012-05-16 8:04 ` Peter Zijlstra
2012-05-16 8:53 ` Alex Shi
2012-05-16 8:58 ` Peter Zijlstra
2012-05-16 8:58 ` Peter Zijlstra
2012-05-16 10:58 ` Alex Shi
2012-05-16 11:04 ` Peter Zijlstra
2012-05-16 12:57 ` Alex Shi
2012-05-16 13:34 ` Alex Shi
2012-05-16 21:09 ` Peter Zijlstra
2012-05-17 0:43 ` Alex Shi
2012-05-17 2:07 ` Steven Rostedt
2012-05-17 2:07 ` Steven Rostedt
2012-05-17 8:04 ` Alex Shi
2012-05-17 2:14 ` Paul Mundt
2012-05-16 13:44 ` Alex Shi
2012-05-16 13:44 ` Alex Shi
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox