From: Ray Bryant <raybry@sgi.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: "David S. Miller" <davem@redhat.com>,
	linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: page fault fastpath: Increasing SMP scalability by introducing pte locks?
Date: Mon, 16 Aug 2004 06:59:53 +0000	[thread overview]
Message-ID: <41205BAA.9030800@sgi.com> (raw)
In-Reply-To: <Pine.LNX.4.58.0408151924250.4480@schroedinger.engr.sgi.com>

Christoph,

Something else to worry about here is mm->rss.  Previously, this was updated 
only with the page_table_lock held, so concurrent increments were not a 
problem.  rss may need to be converted to an atomic_t if you use pte_locks.
It may be that an approximate value for rss is good enough, but I'm not sure 
how to bound the error that could be introduced by a couple of hundred 
processors handling page faults in parallel and updating rss without locking 
it or making it an atomic_t.

Christoph Lameter wrote:
> On Sun, 15 Aug 2004, David S. Miller wrote:
> 
> 
>>On Sun, 15 Aug 2004 17:11:53 -0700 (PDT)
>>Christoph Lameter <clameter@sgi.com> wrote:
>>
>>
>>>pgd/pmd walking should be possible always even without the vma semaphore
>>>since the CPU can potentially walk the chain at any time.
>>
>>munmap() can destroy pmd and pte tables.  somehow we have
>>to protect against that, and currently that is having the
>>VMA semaphore held for reading, see free_pgtables().
> 
> 
> It looks to me like the code takes care to provide the correct
> sequencing so that the integrity of pgd,pmd and pte links is
> guaranteed from the viewpoint of the MMU in the CPUs. munmap is there to
> protect against one kernel thread messing with the addresses of these entities
> that might be stored in another thread's registers.
> 
> Therefore it is safe to walk the chain only holding the semaphore read
> lock?
> 
> If the mmap lock already guarantees the integrity of the pgd,pmd,pte
> system, then pte locking would be okay as long as integrity of the
> pgd,pmd and pte's is always guaranteed. Then also adding a lock bit would
> work.
> 
> So then there are two ways of modifying the pgd,pmd and pte's.
> 
> A) Processor obtains vma semaphore write lock and does large scale
> modifications to pgd,pmd,pte.
> 
> B) Processor obtains vma semaphore read lock but is still free to do
> modifications on individual pte's while holding that vma lock. There is no
> need to acquire the page_table_lock. These changes must be atomic.
> 
> The role of the page_table_lock is restricted *only* to the "struct
> page" stuff? It says in the comments regarding handle_mm_fault that the
> lock is taken for synchronization with kswapd in regards to the pte
> entries. Seems that this use of the page_table_lock is wrong. A or B
> should have been used.
> 
> We could simply remove the page_table_lock from handle_mm_fault and
> provide the synchronization with kswapd with pte locks right? Both
> processes are essentially doing modifications on pte's while holding the
> vma read lock and I would be changing the way of synchronization between
> these two processes.
> 
> F.e. something along these lines removing the page_table_lock from
> handle_mm_fault and friends. Surprisingly this will also avoid many
> rereads of the pte's since the pte's are really locked. This is just for
> illustrative purpose and unfinished...
> 
> Index: linux-2.6.8.1/mm/memory.c
> ===================================================================
> --- linux-2.6.8.1.orig/mm/memory.c	2004-08-15 06:03:04.000000000 -0700
> +++ linux-2.6.8.1/mm/memory.c	2004-08-15 20:26:29.000000000 -0700
> @@ -1035,8 +1035,7 @@
>   * change only once the write actually happens. This avoids a few races,
>   * and potentially makes it more efficient.
>   *
> - * We hold the mm semaphore and the page_table_lock on entry and exit
> - * with the page_table_lock released.
> + * We hold the mm semaphore.
>   */
>  static int do_wp_page(struct mm_struct *mm, struct vm_area_struct * vma,
>  	unsigned long address, pte_t *page_table, pmd_t *pmd, pte_t pte)
> @@ -1051,10 +1050,10 @@
>  		 * at least the kernel stops what it's doing before it corrupts
>  		 * data, but for the moment just pretend this is OOM.
>  		 */
> +		ptep_unlock(page_table);
>  		pte_unmap(page_table);
>  		printk(KERN_ERR "do_wp_page: bogus page at address %08lx\n",
>  				address);
> -		spin_unlock(&mm->page_table_lock);
>  		return VM_FAULT_OOM;
>  	}
>  	old_page = pfn_to_page(pfn);
> @@ -1069,7 +1068,7 @@
>  			ptep_set_access_flags(vma, address, page_table, entry, 1);
>  			update_mmu_cache(vma, address, entry);
>  			pte_unmap(page_table);
> -			spin_unlock(&mm->page_table_lock);
> +			/* pte lock unlocked by ptep_set_access_flags */
>  			return VM_FAULT_MINOR;
>  		}
>  	}
> @@ -1080,7 +1079,7 @@
>  	 */
>  	if (!PageReserved(old_page))
>  		page_cache_get(old_page);
> -	spin_unlock(&mm->page_table_lock);
> +	ptep_unlock(page_table);
> 
>  	if (unlikely(anon_vma_prepare(vma)))
>  		goto no_new_page;
> @@ -1090,26 +1089,21 @@
>  	copy_cow_page(old_page,new_page,address);
> 
>  	/*
> -	 * Re-check the pte - we dropped the lock
> +	 * There is no need to recheck. The pte was locked
>  	 */
> -	spin_lock(&mm->page_table_lock);
> -	page_table = pte_offset_map(pmd, address);
> -	if (likely(pte_same(*page_table, pte))) {
> -		if (PageReserved(old_page))
> -			++mm->rss;
> -		else
> -			page_remove_rmap(old_page);
> -		break_cow(vma, new_page, address, page_table);
> -		lru_cache_add_active(new_page);
> -		page_add_anon_rmap(new_page, vma, address);
> +	if (PageReserved(old_page))
> +		++mm->rss;
> +	else
> +		page_remove_rmap(old_page);
> +	break_cow(vma, new_page, address, page_table);
> +	lru_cache_add_active(new_page);
> +	page_add_anon_rmap(new_page, vma, address);
> 
> -		/* Free the old page.. */
> -		new_page = old_page;
> -	}
> +	/* Free the old page.. */
> +	new_page = old_page;
>  	pte_unmap(page_table);
>  	page_cache_release(new_page);
>  	page_cache_release(old_page);
> -	spin_unlock(&mm->page_table_lock);
>  	return VM_FAULT_MINOR;
> 
>  no_new_page:
> @@ -1314,8 +1308,8 @@
>  }
> 
>  /*
> - * We hold the mm semaphore and the page_table_lock on entry and
> - * should release the pagetable lock on exit..
>  * We hold the mm semaphore and a pte lock on entry and
> + * should release the pte lock on exit..
>   */
>  static int do_swap_page(struct mm_struct * mm,
>  	struct vm_area_struct * vma, unsigned long address,
> @@ -1327,27 +1321,10 @@
>  	int ret = VM_FAULT_MINOR;
> 
>  	pte_unmap(page_table);
> -	spin_unlock(&mm->page_table_lock);
>  	page = lookup_swap_cache(entry);
>  	if (!page) {
>   		swapin_readahead(entry, address, vma);
>   		page = read_swap_cache_async(entry, vma, address);
> -		if (!page) {
> -			/*
> -			 * Back out if somebody else faulted in this pte while
> -			 * we released the page table lock.
> -			 */
> -			spin_lock(&mm->page_table_lock);
> -			page_table = pte_offset_map(pmd, address);
> -			if (likely(pte_same(*page_table, orig_pte)))
> -				ret = VM_FAULT_OOM;
> -			else
> -				ret = VM_FAULT_MINOR;
> -			pte_unmap(page_table);
> -			spin_unlock(&mm->page_table_lock);
> -			goto out;
> -		}
> -
>  		/* Had to read the page from swap area: Major fault */
>  		ret = VM_FAULT_MAJOR;
>  		inc_page_state(pgmajfault);
> @@ -1356,21 +1333,6 @@
>  	mark_page_accessed(page);
>  	lock_page(page);
> 
> -	/*
> -	 * Back out if somebody else faulted in this pte while we
> -	 * released the page table lock.
> -	 */
> -	spin_lock(&mm->page_table_lock);
> -	page_table = pte_offset_map(pmd, address);
> -	if (unlikely(!pte_same(*page_table, orig_pte))) {
> -		pte_unmap(page_table);
> -		spin_unlock(&mm->page_table_lock);
> -		unlock_page(page);
> -		page_cache_release(page);
> -		ret = VM_FAULT_MINOR;
> -		goto out;
> -	}
> -
>  	/* The page isn't present yet, go ahead with the fault. */
> 
>  	swap_free(entry);
> @@ -1398,8 +1360,8 @@
> 
>  	/* No need to invalidate - it was non-present before */
>  	update_mmu_cache(vma, address, pte);
> +	ptep_unlock(page_table);
>  	pte_unmap(page_table);
> -	spin_unlock(&mm->page_table_lock);
>  out:
>  	return ret;
>  }
> @@ -1424,7 +1386,6 @@
>  	if (write_access) {
>  		/* Allocate our own private page. */
>  		pte_unmap(page_table);
> -		spin_unlock(&mm->page_table_lock);
> 
>  		if (unlikely(anon_vma_prepare(vma)))
>  			goto no_mem;
> @@ -1433,13 +1394,12 @@
>  			goto no_mem;
>  		clear_user_highpage(page, addr);
> 
> -		spin_lock(&mm->page_table_lock);
>  		page_table = pte_offset_map(pmd, addr);
> 
>  		if (!pte_none(*page_table)) {
> +			ptep_unlock(page_table);
>  			pte_unmap(page_table);
>  			page_cache_release(page);
> -			spin_unlock(&mm->page_table_lock);
>  			goto out;
>  		}
>  		mm->rss++;
> @@ -1456,7 +1416,6 @@
> 
>  	/* No need to invalidate - it was non-present before */
>  	update_mmu_cache(vma, addr, entry);
> -	spin_unlock(&mm->page_table_lock);
>  out:
>  	return VM_FAULT_MINOR;
>  no_mem:
> @@ -1472,8 +1431,8 @@
>   * As this is called only for pages that do not currently exist, we
>   * do not need to flush old virtual caches or the TLB.
>   *
> - * This is called with the MM semaphore held and the page table
> - * spinlock held. Exit with the spinlock released.
> + * This is called with the MM semaphore held and pte lock
> + * held. Exit with the pte lock released.
>   */
>  static int
>  do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
> @@ -1489,9 +1448,9 @@
>  	if (!vma->vm_ops || !vma->vm_ops->nopage)
>  		return do_anonymous_page(mm, vma, page_table,
>  					pmd, write_access, address);
> +	ptep_unlock(page_table);
>  	pte_unmap(page_table);
> -	spin_unlock(&mm->page_table_lock);
> -
> +
>  	if (vma->vm_file) {
>  		mapping = vma->vm_file->f_mapping;
>  		sequence = atomic_read(&mapping->truncate_count);
> @@ -1523,7 +1482,7 @@
>  		anon = 1;
>  	}
> 
> -	spin_lock(&mm->page_table_lock);
> +	while (ptep_lock(page_table)) ;
>  	/*
>  	 * For a file-backed vma, someone could have truncated or otherwise
>  	 * invalidated this page.  If unmap_mapping_range got called,
> @@ -1532,7 +1491,7 @@
>  	if (mapping &&
>  	      (unlikely(sequence != atomic_read(&mapping->truncate_count)))) {
>  		sequence = atomic_read(&mapping->truncate_count);
> -		spin_unlock(&mm->page_table_lock);
> +		ptep_unlock(page_table);
>  		page_cache_release(new_page);
>  		goto retry;
>  	}
> @@ -1565,15 +1524,15 @@
>  		pte_unmap(page_table);
>  	} else {
>  		/* One of our sibling threads was faster, back out. */
> +		ptep_unlock(page_table);
>  		pte_unmap(page_table);
>  		page_cache_release(new_page);
> -		spin_unlock(&mm->page_table_lock);
>  		goto out;
>  	}
> 
>  	/* no need to invalidate: a not-present page shouldn't be cached */
>  	update_mmu_cache(vma, address, entry);
> -	spin_unlock(&mm->page_table_lock);
> +	ptep_unlock(page_table);
>  out:
>  	return ret;
>  oom:
> @@ -1606,8 +1565,8 @@
> 
>  	pgoff = pte_to_pgoff(*pte);
> 
> +	ptep_unlock(pte);
>  	pte_unmap(pte);
> -	spin_unlock(&mm->page_table_lock);
> 
>  	err = vma->vm_ops->populate(vma, address & PAGE_MASK, PAGE_SIZE, vma->vm_page_prot, pgoff, 0);
>  	if (err == -ENOMEM)
> @@ -1644,13 +1603,11 @@
>  {
>  	pte_t entry;
> 
> -	entry = *pte;
> +	entry = *pte;	/* get the unlocked value so that we do not write the lock bit back */
> +
> +	if (ptep_lock(pte)) return VM_FAULT_MINOR;
> +
>  	if (!pte_present(entry)) {
> -		/*
> -		 * If it truly wasn't present, we know that kswapd
> -		 * and the PTE updates will not touch it later. So
> -		 * drop the lock.
> -		 */
>  		if (pte_none(entry))
>  			return do_no_page(mm, vma, address, write_access, pte, pmd);
>  		if (pte_file(entry))
> @@ -1668,7 +1625,6 @@
>  	ptep_set_access_flags(vma, address, pte, entry, write_access);
>  	update_mmu_cache(vma, address, entry);
>  	pte_unmap(pte);
> -	spin_unlock(&mm->page_table_lock);
>  	return VM_FAULT_MINOR;
>  }
> 
> @@ -1688,12 +1644,6 @@
> 
>  	if (is_vm_hugetlb_page(vma))
>  		return VM_FAULT_SIGBUS;	/* mapping truncation does this. */
> -
> -	/*
> -	 * We need the page table lock to synchronize with kswapd
> -	 * and the SMP-safe atomic PTE updates.
> -	 */
> -	spin_lock(&mm->page_table_lock);
>  	pmd = pmd_alloc(mm, pgd, address);
> 
>  	if (pmd) {
> @@ -1701,7 +1651,6 @@
>  		if (pte)
>  			return handle_pte_fault(mm, vma, address, write_access, pte, pmd);
>  	}
> -	spin_unlock(&mm->page_table_lock);
>  	return VM_FAULT_OOM;
>  }
> 
> Index: linux-2.6.8.1/mm/rmap.c
> ===================================================================
> --- linux-2.6.8.1.orig/mm/rmap.c	2004-08-14 03:56:22.000000000 -0700
> +++ linux-2.6.8.1/mm/rmap.c	2004-08-15 19:59:32.000000000 -0700
> @@ -494,8 +494,14 @@
> 
>  	/* Nuke the page table entry. */
>  	flush_cache_page(vma, address);
> +#ifdef __HAVE_ARCH_PTE_LOCK
> +	/* If we would simply zero the pte then handle_mm_fault might
> +	 * race against this code and reinstate an anonymous mapping
> +	 */
> +	pteval = ptep_clear_and_lock_flush(vma, address, pte);
> +#else
>  	pteval = ptep_clear_flush(vma, address, pte);
> -
> +#endif
>  	/* Move the dirty bit to the physical page now the pte is gone. */
>  	if (pte_dirty(pteval))
>  		set_page_dirty(page);
> @@ -508,9 +514,13 @@
>  		 */
>  		BUG_ON(!PageSwapCache(page));
>  		swap_duplicate(entry);
> +		/* This is going to clear the lock that may have been set on the pte */
>  		set_pte(pte, swp_entry_to_pte(entry));
>  		BUG_ON(pte_file(*pte));
>  	}
> +#ifdef __HAVE_ARCH_PTE_LOCK
> +	else ptep_unlock(pte);
> +#endif
> 
>  	mm->rss--;
>  	BUG_ON(!page->mapcount);
> -
> To unsubscribe from this list: send the line "unsubscribe linux-ia64" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

-- 
Best Regards,
Ray
-----------------------------------------------
                   Ray Bryant
512-453-9679 (work)         512-507-7807 (cell)
raybry@sgi.com             raybry@austin.rr.com
The box said: "Requires Windows 98 or better",
            so I installed Linux.
-----------------------------------------------


2004-08-28  0:40                                 ` page fault scalability patch final : i386 tested, x86_64 support added David S. Miller
2004-08-28  1:05                                 ` Andi Kleen
2004-08-28  1:05                                   ` Andi Kleen
2004-08-28  1:11                                   ` page fault scalability patch final : i386 tested, x86_64 David S. Miller
2004-08-28  1:11                                     ` page fault scalability patch final : i386 tested, x86_64 support added David S. Miller
2004-08-28  1:17                                     ` Andi Kleen
2004-08-28  1:17                                       ` Andi Kleen
2004-08-28  1:02                               ` Andi Kleen
2004-08-28  1:02                                 ` Andi Kleen
2004-08-28  1:39                                 ` page fault scalability patch final : i386 tested, x86_64 Andrew Morton
2004-08-28  1:39                                   ` page fault scalability patch final : i386 tested, x86_64 support added Andrew Morton
2004-08-28  2:08                                   ` page fault scalability patch final : i386 tested, x86_64 Paul Mackerras
2004-08-28  2:08                                     ` page fault scalability patch final : i386 tested, x86_64 support added Paul Mackerras
2004-08-28  3:32                                     ` page fault scalability patch final : i386 tested, x86_64 support Christoph Lameter
2004-08-28  3:32                                       ` page fault scalability patch final : i386 tested, x86_64 support added Christoph Lameter
2004-08-28  3:42                                       ` page fault scalability patch final : i386 tested, x86_64 Andrew Morton
2004-08-28  3:42                                         ` page fault scalability patch final : i386 tested, x86_64 support added Andrew Morton
2004-08-28  4:24                                         ` page fault scalability patch final : i386 tested, x86_64 support Christoph Lameter
2004-08-28  4:24                                           ` page fault scalability patch final : i386 tested, x86_64 support added Christoph Lameter
2004-08-28  5:39                                           ` page fault scalability patch final : i386 tested, x86_64 Andrew Morton
2004-08-28  5:39                                             ` page fault scalability patch final : i386 tested, x86_64 support added Andrew Morton
2004-08-28  5:58                                             ` page fault scalability patch final : i386 tested, x86_64 support Christoph Lameter
2004-08-28  5:58                                               ` page fault scalability patch final : i386 tested, x86_64 support added Christoph Lameter
2004-08-28  6:03                                               ` William Lee Irwin III
2004-08-28  6:03                                                 ` William Lee Irwin III
2004-08-28  6:06                                               ` page fault scalability patch final : i386 tested, x86_64 Andrew Morton
2004-08-28  6:06                                                 ` page fault scalability patch final : i386 tested, x86_64 support added Andrew Morton
2004-08-30 17:02                                                 ` Herbert Poetzl
2004-08-30 17:02                                                   ` Herbert Poetzl
2004-08-30 17:05                                                   ` Andi Kleen
2004-08-30 17:05                                                     ` Andi Kleen
2004-08-28 13:19                                             ` page fault scalability patch final : i386 tested, x86_64 Andi Kleen
2004-08-28 13:19                                               ` page fault scalability patch final : i386 tested, x86_64 support added Andi Kleen
2004-08-28 15:48                                             ` Matt Mackall
2004-08-28 15:48                                               ` Matt Mackall
2004-09-01  4:13                                             ` page fault scalability patch final : i386 tested, x86_64 Benjamin Herrenschmidt
2004-09-01  4:13                                               ` page fault scalability patch final : i386 tested, x86_64 support added Benjamin Herrenschmidt
2004-09-02 21:26                                               ` Andi Kleen
2004-09-02 21:26                                                 ` Andi Kleen
2004-09-02 21:55                                                 ` page fault scalability patch final : i386 tested, x86_64 David S. Miller
2004-09-02 21:55                                                   ` page fault scalability patch final : i386 tested, x86_64 support added David S. Miller
2004-09-01 18:03                                             ` Matthew Wilcox
2004-09-01 18:19                                               ` Andrew Morton
2004-09-01 19:06                                                 ` William Lee Irwin III
2004-08-28 21:41                                 ` Daniel Phillips
2004-08-28 21:41                                   ` Daniel Phillips
2004-09-01  4:24                       ` page fault scalability patch final : i386 tested, x86_64 Benjamin Herrenschmidt
2004-09-01  4:24                         ` page fault scalability patch final : i386 tested, x86_64 support added Benjamin Herrenschmidt
2004-09-01  5:22                         ` page fault scalability patch final : i386 tested, x86_64 David S. Miller
2004-09-01  5:22                           ` page fault scalability patch final : i386 tested, x86_64 support added David S. Miller
2004-09-01 16:43                         ` page fault scalability patch final : i386 tested, x86_64 support Christoph Lameter
2004-09-01 16:43                           ` page fault scalability patch final : i386 tested, x86_64 support added Christoph Lameter
2004-09-01 23:09                           ` page fault scalability patch final : i386 tested, x86_64 Benjamin Herrenschmidt
2004-09-01 23:09                             ` page fault scalability patch final : i386 tested, x86_64 support added Benjamin Herrenschmidt
     [not found]                             ` <Pine.LNX.4.58.0409012140440.23186@schroedinger.engr.sgi.com>
     [not found]                               ` <20040901215741.3538bbf4.davem@davemloft.net>
2004-09-02  5:18                                 ` William Lee Irwin III
2004-09-02  5:18                                   ` William Lee Irwin III
2004-09-09 15:38                                   ` page fault scalability patch: V7 (+fallback for atomic page table Christoph Lameter
2004-09-09 15:38                                     ` page fault scalability patch: V7 (+fallback for atomic page table ops) Christoph Lameter
2004-09-02 16:24                                 ` page fault scalability patch final : i386 tested, x86_64 support Christoph Lameter
2004-09-02 16:24                                   ` page fault scalability patch final : i386 tested, x86_64 support added Christoph Lameter
2004-09-02 20:10                                   ` page fault scalability patch final : i386 tested, x86_64 David S. Miller
2004-09-02 20:10                                     ` page fault scalability patch final : i386 tested, x86_64 support added David S. Miller
2004-09-02 21:02                                     ` page fault scalability patch final : i386 tested, x86_64 support Christoph Lameter
2004-09-02 21:02                                       ` page fault scalability patch final : i386 tested, x86_64 support added Christoph Lameter
2004-09-02 21:07                                       ` page fault scalability patch final : i386 tested, x86_64 David S. Miller
2004-09-02 21:07                                         ` page fault scalability patch final : i386 tested, x86_64 support added David S. Miller
2004-09-18 23:23                                         ` page fault scalability patch V8: [0/7] Description Christoph Lameter
2004-09-18 23:23                                           ` Christoph Lameter
     [not found]                                         ` <B6E8046E1E28D34EB815A11AC8CA312902CD3243@mtv-atc-605e--n.corp.sgi.com>
2004-09-18 23:24                                           ` page fault scalability patch V8: [1/7] make mm->rss atomic Christoph Lameter
2004-09-18 23:24                                             ` Christoph Lameter
2004-09-18 23:26                                           ` page fault scalability patch V8: [2/7] avoid page_table_lock in Christoph Lameter
2004-09-18 23:26                                             ` page fault scalability patch V8: [2/7] avoid page_table_lock in handle_mm_fault Christoph Lameter
2004-09-19  9:04                                             ` Christoph Hellwig
2004-09-19  9:04                                               ` Christoph Hellwig
2004-09-18 23:27                                           ` page fault scalability patch V8: [3/7] atomic pte operations for Christoph Lameter
2004-09-18 23:27                                             ` page fault scalability patch V8: [3/7] atomic pte operations for ia64 Christoph Lameter
2004-09-18 23:28                                           ` page fault scalability patch V8: [4/7] universally available cmpxchg Christoph Lameter
2004-09-18 23:28                                             ` page fault scalability patch V8: [4/7] universally available cmpxchg on i386 Christoph Lameter
     [not found]                                             ` <200409191430.37444.vda@port.imtp.ilyichevsk.odessa.ua>
2004-09-19 12:11                                               ` Andi Kleen
2004-09-19 12:11                                                 ` Andi Kleen
2004-09-20 15:45                                               ` page fault scalability patch V8: [4/7] universally available Christoph Lameter
2004-09-20 15:45                                                 ` page fault scalability patch V8: [4/7] universally available cmpxchg on i386 Christoph Lameter
     [not found]                                                 ` <200409202043.00580.vda@port.imtp.ilyichevsk.odessa.ua>
2004-09-20 20:49                                                   ` page fault scalability patch V8: [4/7] universally available Christoph Lameter
2004-09-20 20:49                                                     ` page fault scalability patch V8: [4/7] universally available cmpxchg on i386 Christoph Lameter
2004-09-20 20:57                                                     ` Andi Kleen
2004-09-20 20:57                                                       ` Andi Kleen
     [not found]                                                       ` <200409211841.25507.vda@port.imtp.ilyichevsk.odessa.ua>
2004-09-21 15:45                                                         ` Andi Kleen
2004-09-21 15:45                                                           ` Andi Kleen
     [not found]                                                           ` <200409212306.38800.vda@port.imtp.ilyichevsk.odessa.ua>
2004-09-21 20:14                                                             ` Andi Kleen
2004-09-21 20:14                                                               ` Andi Kleen
2004-09-23  7:17                                                           ` page fault scalability patch V8: [4/7] universally available Andy Lutomirski
2004-09-23  7:17                                                             ` page fault scalability patch V8: [4/7] universally available cmpxchg on i386 Andy Lutomirski
2004-09-23  9:03                                                             ` Andi Kleen
2004-09-23  9:03                                                               ` Andi Kleen
2004-09-27 19:06                                                               ` page fault scalability patch V9: [0/7] overview Christoph Lameter
2004-10-15 19:02                                                                 ` page fault scalability patch V10: " Christoph Lameter
2004-10-15 19:02                                                                   ` Christoph Lameter
2004-10-15 19:03                                                                   ` page fault scalability patch V10: [1/7] make rss atomic Christoph Lameter
2004-10-15 19:03                                                                     ` Christoph Lameter
2004-10-15 19:04                                                                   ` page fault scalability patch V10: [2/7] defer/omit taking page_table_lock Christoph Lameter
2004-10-15 19:04                                                                     ` Christoph Lameter
2004-10-15 20:00                                                                     ` Marcelo Tosatti
2004-10-15 20:00                                                                       ` Marcelo Tosatti
2004-10-18 15:59                                                                       ` page fault scalability patch V10: [2/7] defer/omit taking Christoph Lameter
2004-10-18 15:59                                                                         ` page fault scalability patch V10: [2/7] defer/omit taking page_table_lock Christoph Lameter
2004-10-19  5:25                                                                       ` [revised] page fault scalability patch V10: [2/7] defer/omit taking Christoph Lameter
2004-10-19  5:25                                                                         ` [revised] page fault scalability patch V10: [2/7] defer/omit taking page_table_lock Christoph Lameter
2004-10-15 19:05                                                                   ` page fault scalability patch V10: [3/7] IA64 atomic pte operations Christoph Lameter
2004-10-15 19:05                                                                     ` Christoph Lameter
2004-10-15 19:06                                                                   ` page fault scalability patch V10: [4/7] cmpxchg for 386 and 486 Christoph Lameter
2004-10-15 19:06                                                                     ` Christoph Lameter
2004-10-15 19:06                                                                   ` page fault scalability patch V10: [5/7] i386 atomic pte operations Christoph Lameter
2004-10-15 19:06                                                                     ` Christoph Lameter
2004-10-15 19:07                                                                   ` page fault scalability patch V10: [6/7] x86_64 " Christoph Lameter
2004-10-15 19:07                                                                     ` Christoph Lameter
2004-10-15 19:08                                                                   ` page fault scalability patch V10: [7/7] s/390 " Christoph Lameter
2004-10-15 19:08                                                                     ` Christoph Lameter
     [not found]                                                               ` <B6E8046E1E28D34EB815A11AC8CA312902CD3282@mtv-atc-605e--n.corp.sgi.com>
2004-09-27 19:07                                                                 ` page fault scalability patch V9: [1/7] make mm->rss atomic Christoph Lameter
2004-09-27 19:08                                                                 ` page fault scalability patch V9: [2/7] defer/remove page_table_lock Christoph Lameter
2004-09-27 19:10                                                                 ` page fault scalability patch V9: [3/7] atomic pte operatios for ia64 Christoph Lameter
2004-09-27 19:10                                                                 ` page fault scalability patch V9: [4/7] generally available cmpxchg on i386 Christoph Lameter
2004-09-27 19:11                                                                 ` page fault scalability patch V9: [5/7] atomic pte operations for i386 Christoph Lameter
2004-09-27 19:12                                                                 ` page fault scalability patch V9: [6/7] atomic pte operations for x86_64 Christoph Lameter
2004-09-27 19:13                                                                 ` page fault scalability patch V9: [7/7] atomic pte operatiosn for s390 Christoph Lameter
2004-09-18 23:29                                           ` page fault scalability patch V8: [5/7] atomic pte operations for Christoph Lameter
2004-09-18 23:29                                             ` page fault scalability patch V8: [5/7] atomic pte operations for i386 Christoph Lameter
2004-09-18 23:30                                           ` page fault scalability patch V8: [6/7] atomic pte operations for Christoph Lameter
2004-09-18 23:30                                             ` page fault scalability patch V8: [6/7] atomic pte operations for x86_64 Christoph Lameter
2004-09-18 23:31                                           ` page fault scalability patch V8: [7/7] atomic pte operations for Christoph Lameter
2004-09-18 23:31                                             ` page fault scalability patch V8: [7/7] atomic pte operations for s390 Christoph Lameter
     [not found]                                             ` <200409191435.09445.vda@port.imtp.ilyichevsk.odessa.ua>
2004-09-20 15:44                                               ` page fault scalability patch V8: [7/7] atomic pte operations Christoph Lameter
2004-09-20 15:44                                                 ` page fault scalability patch V8: [7/7] atomic pte operations for s390 Christoph Lameter
2004-08-15 22:38 ` page fault fastpath: Increasing SMP scalability by introducing Benjamin Herrenschmidt
2004-08-15 22:38   ` page fault fastpath: Increasing SMP scalability by introducing pte locks? Benjamin Herrenschmidt
2004-08-16 17:28   ` page fault fastpath: Increasing SMP scalability by introducing Christoph Lameter
2004-08-16 17:28     ` page fault fastpath: Increasing SMP scalability by introducing pte locks? Christoph Lameter
2004-08-17  8:01     ` page fault fastpath: Increasing SMP scalability by introducing Benjamin Herrenschmidt
2004-08-17  8:01       ` page fault fastpath: Increasing SMP scalability by introducing pte locks? Benjamin Herrenschmidt
     [not found] <2ttIr-2e4-17@gated-at.bofh.it>
     [not found] ` <2tzE4-6sw-25@gated-at.bofh.it>
     [not found]   ` <2tCiw-8pK-1@gated-at.bofh.it>
2004-08-15 23:53     ` page fault fastpath: Increasing SMP scalability by introducing Andi Kleen
2004-08-15 23:55       ` Christoph Lameter

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Download the message as an mbox file, import it into your mail
  client, and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=41205BAA.9030800@sgi.com \
    --to=raybry@sgi.com \
    --cc=clameter@sgi.com \
    --cc=davem@redhat.com \
    --cc=linux-ia64@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, reply via the mailto: link for this message.

Be sure your reply has a Subject: header at the top and a blank
line before the message body.