From: "Michał Winiarski" <michal.winiarski@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Subject: [PATCH 5/8] drm/i915/gtt: Purge temp bitmaps
Date: Mon, 12 Dec 2016 12:46:42 +0100
Message-ID: <1481543205-1813-1-git-send-email-michal.winiarski@intel.com>
In-Reply-To: <1481543057-333-1-git-send-email-michal.winiarski@intel.com>
Temp bitmaps were used to revert the page tracking structures to their
original state when an error occurred in the middle of the allocation
process. Now that the unwind paths operate on the allocated range
instead, the temp bitmaps no longer serve any useful purpose, and we
can remove them without any functional changes.
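To make the range-based unwind pattern concrete, here is a minimal, self-contained sketch. All names are hypothetical stand-ins, not the driver's, and the real clear paths additionally handle entries that were partially in use before the call:

```c
#include <assert.h>
#include <stdbool.h>

#define NENTRIES 8

static bool table[NENTRIES];	/* true = entry allocated */

/* Simulated allocator that fails at a chosen index. */
static int alloc_entry(unsigned int idx, unsigned int fail_at)
{
	if (idx == fail_at)
		return -1;	/* -ENOMEM stand-in */
	table[idx] = true;
	return 0;
}

static void clear_range(unsigned int start, unsigned int count)
{
	unsigned int i;

	for (i = start; i < start + count; i++)
		table[i] = false;
}

/*
 * Allocate entries for [start, start + count). On error, revert by
 * clearing the range [start_save, start) processed so far - no
 * temporary "newly allocated" bitmap is needed to know what to undo.
 */
static int alloc_range(unsigned int start, unsigned int count,
		       unsigned int fail_at)
{
	const unsigned int start_save = start;

	for (; start < start_save + count; start++) {
		if (table[start])
			continue;	/* skip pre-existing entries */
		if (alloc_entry(start, fail_at)) {
			clear_range(start_save, start - start_save);
			return -1;
		}
	}
	return 0;
}
```

This mirrors the shape of the gen8_ppgtt_alloc_* functions after this patch: `start_save` records the original start, and the error path clears `start - start_save` entries from it.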
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
---
drivers/gpu/drm/i915/i915_gem_gtt.c | 181 ++++++++----------------------------
1 file changed, 37 insertions(+), 144 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c6f0708..7013967 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1088,8 +1088,6 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
* @pd: Page directory for this address range.
* @start: Starting virtual address to begin allocations.
* @length: Size of the allocations.
- * @new_pts: Bitmap set by function with new allocations. Likely used by the
- * caller to free on error.
*
* Allocate the required number of page tables. Extremely similar to
* gen8_ppgtt_alloc_page_directories(). The main difference is here we are limited by
@@ -1103,8 +1101,7 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
static int gen8_ppgtt_alloc_pagetabs(struct i915_address_space *vm,
struct i915_page_directory *pd,
uint64_t start,
- uint64_t length,
- unsigned long *new_pts)
+ uint64_t length)
{
struct drm_i915_private *dev_priv = vm->i915;
struct i915_page_table *pt;
@@ -1112,12 +1109,8 @@ static int gen8_ppgtt_alloc_pagetabs(struct i915_address_space *vm,
const uint64_t start_save = start;
gen8_for_each_pde(pt, pd, start, length, pde) {
- /* Don't reallocate page tables */
- if (test_bit(pde, pd->used_pdes)) {
- /* Scratch is never allocated this way */
- WARN_ON(pt == vm->scratch_pt);
+ if (test_bit(pde, pd->used_pdes))
continue;
- }
pt = alloc_pt(dev_priv);
if (IS_ERR(pt)) {
@@ -1128,7 +1121,7 @@ static int gen8_ppgtt_alloc_pagetabs(struct i915_address_space *vm,
gen8_initialize_pt(vm, pt);
pd->page_table[pde] = pt;
- __set_bit(pde, new_pts);
+ __set_bit(pde, pd->used_pdes);
trace_i915_page_table_entry_alloc(vm, pde, start, GEN8_PDE_SHIFT);
}
@@ -1141,14 +1134,11 @@ static int gen8_ppgtt_alloc_pagetabs(struct i915_address_space *vm,
* @pdp: Page directory pointer for this address range.
* @start: Starting virtual address to begin allocations.
* @length: Size of the allocations.
- * @new_pds: Bitmap set by function with new allocations. Likely used by the
- * caller to free on error.
*
* Allocate the required number of page directories starting at the pde index of
* @start, and ending at the pde index @start + @length. This function will skip
* over already allocated page directories within the range, and only allocate
- * new ones, setting the appropriate pointer within the pdp as well as the
- * correct position in the bitmap @new_pds.
+ * new ones, setting the appropriate pointer within the pdp.
*
 * The function will only allocate the pages within the range for a given page
* directory pointer. In other words, if @start + @length straddles a virtually
@@ -1162,17 +1152,13 @@ static int
gen8_ppgtt_alloc_page_directories(struct i915_address_space *vm,
struct i915_page_directory_pointer *pdp,
uint64_t start,
- uint64_t length,
- unsigned long *new_pds)
+ uint64_t length)
{
struct drm_i915_private *dev_priv = vm->i915;
struct i915_page_directory *pd;
uint32_t pdpe;
- uint32_t pdpes = I915_PDPES_PER_PDP(dev_priv);
const uint64_t start_save = start;
- WARN_ON(!bitmap_empty(new_pds, pdpes));
-
gen8_for_each_pdpe(pd, pdp, start, length, pdpe) {
if (test_bit(pdpe, pdp->used_pdpes))
continue;
@@ -1186,7 +1172,7 @@ gen8_ppgtt_alloc_page_directories(struct i915_address_space *vm,
gen8_initialize_pd(vm, pd);
pdp->page_directory[pdpe] = pd;
- __set_bit(pdpe, new_pds);
+ __set_bit(pdpe, pdp->used_pdpes);
trace_i915_page_directory_entry_alloc(vm, pdpe, start, GEN8_PDPE_SHIFT);
}
@@ -1199,8 +1185,6 @@ gen8_ppgtt_alloc_page_directories(struct i915_address_space *vm,
* @pml4: Page map level 4 for this address range.
* @start: Starting virtual address to begin allocations.
* @length: Size of the allocations.
- * @new_pdps: Bitmap set by function with new allocations. Likely used by the
- * caller to free on error.
*
* Allocate the required number of page directory pointers. Extremely similar to
* gen8_ppgtt_alloc_page_directories() and gen8_ppgtt_alloc_pagetabs().
@@ -1213,73 +1197,34 @@ static int
gen8_ppgtt_alloc_page_dirpointers(struct i915_address_space *vm,
struct i915_pml4 *pml4,
uint64_t start,
- uint64_t length,
- unsigned long *new_pdps)
+ uint64_t length)
{
struct drm_i915_private *dev_priv = vm->i915;
struct i915_page_directory_pointer *pdp;
uint32_t pml4e;
const uint64_t start_save = start;
- WARN_ON(!bitmap_empty(new_pdps, GEN8_PML4ES_PER_PML4));
-
gen8_for_each_pml4e(pdp, pml4, start, length, pml4e) {
- if (!test_bit(pml4e, pml4->used_pml4es)) {
- pdp = alloc_pdp(dev_priv);
- if (IS_ERR(pdp)) {
- gen8_ppgtt_clear_pml4(vm, pml4, start_save,
- start - start_save);
- return PTR_ERR(pdp);
- }
+ if (test_bit(pml4e, pml4->used_pml4es))
+ continue;
- gen8_initialize_pdp(vm, pdp);
- pml4->pdps[pml4e] = pdp;
- __set_bit(pml4e, new_pdps);
- trace_i915_page_directory_pointer_entry_alloc(vm,
- pml4e,
- start,
- GEN8_PML4E_SHIFT);
+ pdp = alloc_pdp(dev_priv);
+ if (IS_ERR(pdp)) {
+ gen8_ppgtt_clear_pml4(vm, pml4, start_save,
+ start - start_save);
+ return PTR_ERR(pdp);
}
- }
-
- return 0;
-}
-
-static void
-free_gen8_temp_bitmaps(unsigned long *new_pds, unsigned long *new_pts)
-{
- kfree(new_pts);
- kfree(new_pds);
-}
-
-/* Fills in the page directory bitmap, and the array of page tables bitmap. Both
- * of these are based on the number of PDPEs in the system.
- */
-static
-int __must_check alloc_gen8_temp_bitmaps(unsigned long **new_pds,
- unsigned long **new_pts,
- uint32_t pdpes)
-{
- unsigned long *pds;
- unsigned long *pts;
-
- pds = kcalloc(BITS_TO_LONGS(pdpes), sizeof(unsigned long), GFP_TEMPORARY);
- if (!pds)
- return -ENOMEM;
- pts = kcalloc(pdpes, BITS_TO_LONGS(I915_PDES) * sizeof(unsigned long),
- GFP_TEMPORARY);
- if (!pts)
- goto err_out;
-
- *new_pds = pds;
- *new_pts = pts;
+ gen8_initialize_pdp(vm, pdp);
+ pml4->pdps[pml4e] = pdp;
+ __set_bit(pml4e, pml4->used_pml4es);
+ trace_i915_page_directory_pointer_entry_alloc(vm,
+ pml4e,
+ start,
+ GEN8_PML4E_SHIFT);
+ }
return 0;
-
-err_out:
- free_gen8_temp_bitmaps(pds, pts);
- return -ENOMEM;
}
static int gen8_alloc_va_range_3lvl(struct i915_address_space *vm,
@@ -1288,12 +1233,10 @@ static int gen8_alloc_va_range_3lvl(struct i915_address_space *vm,
uint64_t length)
{
struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
- unsigned long *new_page_dirs, *new_page_tables;
struct i915_page_directory *pd;
const uint64_t start_save = start;
const uint64_t length_save = length;
uint32_t pdpe;
- uint32_t pdpes = I915_PDPES_PER_PDP(dev_priv);
int ret;
/* Wrap is never okay since we can only represent 48b, and we don't
@@ -1305,22 +1248,14 @@ static int gen8_alloc_va_range_3lvl(struct i915_address_space *vm,
if (WARN_ON(start + length > vm->total))
return -ENODEV;
- ret = alloc_gen8_temp_bitmaps(&new_page_dirs, &new_page_tables, pdpes);
- if (ret)
- return ret;
-
/* Do the allocations first so we can easily bail out */
- ret = gen8_ppgtt_alloc_page_directories(vm, pdp, start, length,
- new_page_dirs);
- if (ret) {
- free_gen8_temp_bitmaps(new_page_dirs, new_page_tables);
+ ret = gen8_ppgtt_alloc_page_directories(vm, pdp, start, length);
+ if (ret)
return ret;
- }
/* For every page directory referenced, allocate page tables */
gen8_for_each_pdpe(pd, pdp, start, length, pdpe) {
- ret = gen8_ppgtt_alloc_pagetabs(vm, pd, start, length,
- new_page_tables + pdpe * BITS_TO_LONGS(I915_PDES));
+ ret = gen8_ppgtt_alloc_pagetabs(vm, pd, start, length);
if (ret) {
gen8_ppgtt_clear_pdp(vm, pdp, start_save,
start - start_save);
@@ -1355,9 +1290,6 @@ static int gen8_alloc_va_range_3lvl(struct i915_address_space *vm,
gen8_pte_index(pd_start),
gen8_pte_count(pd_start, pd_len));
- /* Our pde is now pointing to the pagetable, pt */
- __set_bit(pde, pd->used_pdes);
-
/* Map the PDE to the page table */
page_directory[pde] = gen8_pde_encode(px_dma(pt),
I915_CACHE_LLC);
@@ -1371,11 +1303,9 @@ static int gen8_alloc_va_range_3lvl(struct i915_address_space *vm,
}
kunmap_px(ppgtt, page_directory);
- __set_bit(pdpe, pdp->used_pdpes);
gen8_setup_page_directory(ppgtt, pdp, pd, pdpe);
}
- free_gen8_temp_bitmaps(new_page_dirs, new_page_tables);
mark_tlbs_dirty(ppgtt);
return 0;
}
@@ -1385,26 +1315,20 @@ static int gen8_alloc_va_range_4lvl(struct i915_address_space *vm,
uint64_t start,
uint64_t length)
{
- DECLARE_BITMAP(new_pdps, GEN8_PML4ES_PER_PML4);
struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
struct i915_page_directory_pointer *pdp;
uint64_t pml4e;
- int ret = 0;
+ int ret;
const uint64_t start_save = start;
- /* Do the pml4 allocations first, so we don't need to track the newly
- * allocated tables below the pdp */
- bitmap_zero(new_pdps, GEN8_PML4ES_PER_PML4);
-
/* The pagedirectory and pagetable allocations are done in the shared 3
* and 4 level code. Just allocate the pdps.
*/
- ret = gen8_ppgtt_alloc_page_dirpointers(vm, pml4, start, length,
- new_pdps);
+ ret = gen8_ppgtt_alloc_page_dirpointers(vm, pml4, start, length);
if (ret)
return ret;
- WARN(bitmap_weight(new_pdps, GEN8_PML4ES_PER_PML4) > 2,
+ WARN(length > (1ULL << GEN8_PML4E_SHIFT),
"The allocation has spanned more than 512GB. "
"It is highly likely this is incorrect.");
@@ -1421,9 +1345,6 @@ static int gen8_alloc_va_range_4lvl(struct i915_address_space *vm,
gen8_setup_page_directory_pointer(ppgtt, pml4, pdp, pml4e);
}
- bitmap_or(pml4->used_pml4es, new_pdps, pml4->used_pml4es,
- GEN8_PML4ES_PER_PML4);
-
return 0;
}
@@ -1522,27 +1443,18 @@ static void gen8_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
static int gen8_preallocate_top_level_pdps(struct i915_hw_ppgtt *ppgtt)
{
- unsigned long *new_page_dirs, *new_page_tables;
- uint32_t pdpes = I915_PDPES_PER_PDP(to_i915(ppgtt->base.dev));
+ const uint64_t start = 0;
+ const uint64_t length = 1ULL << 32;
int ret;
- /* We allocate temp bitmap for page tables for no gain
- * but as this is for init only, lets keep the things simple
- */
- ret = alloc_gen8_temp_bitmaps(&new_page_dirs, &new_page_tables, pdpes);
- if (ret)
- return ret;
-
/* Allocate for all pdps regardless of how the ppgtt
* was defined.
*/
ret = gen8_ppgtt_alloc_page_directories(&ppgtt->base, &ppgtt->pdp,
- 0, 1ULL << 32,
- new_page_dirs);
+ start, length);
if (!ret)
- *ppgtt->pdp.used_pdpes = *new_page_dirs;
-
- free_gen8_temp_bitmaps(new_page_dirs, new_page_tables);
+ bitmap_set(ppgtt->pdp.used_pdpes, gen8_pdpe_index(start),
+ i915_pte_count(start, length, GEN8_PDPE_SHIFT));
return ret;
}
@@ -1915,7 +1827,6 @@ static void gen6_ppgtt_unwind_pd(struct i915_address_space *vm,
static int gen6_alloc_va_range(struct i915_address_space *vm,
uint64_t start, uint64_t length)
{
- DECLARE_BITMAP(new_page_tables, I915_PDES);
struct drm_i915_private *dev_priv = vm->i915;
struct i915_ggtt *ggtt = &dev_priv->ggtt;
struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1930,8 +1841,6 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
start_save = start;
length_save = length;
- bitmap_zero(new_page_tables, I915_PDES);
-
/* The allocation is done in two stages so that we can bail out with
* minimal amount of pain. The first stage finds new page tables that
* need allocation. The second stage marks use ptes within the page
@@ -1954,37 +1863,21 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
mark_tlbs_dirty(ppgtt);
return ret;
}
-
- gen6_initialize_pt(vm, pt);
-
- ppgtt->pd.page_table[pde] = pt;
- __set_bit(pde, new_page_tables);
trace_i915_page_table_entry_alloc(vm, pde, start, GEN6_PDE_SHIFT);
- }
- start = start_save;
- length = length_save;
-
- gen6_for_each_pde(pt, &ppgtt->pd, start, length, pde) {
- DECLARE_BITMAP(tmp_bitmap, GEN6_PTES);
-
- bitmap_zero(tmp_bitmap, GEN6_PTES);
- bitmap_set(tmp_bitmap, gen6_pte_index(start),
+ gen6_initialize_pt(vm, pt);
+ bitmap_set(pt->used_ptes, gen6_pte_index(start),
gen6_pte_count(start, length));
- if (__test_and_clear_bit(pde, new_page_tables))
- gen6_write_pde(&ppgtt->pd, pde, pt);
+ ppgtt->pd.page_table[pde] = pt;
+ gen6_write_pde(&ppgtt->pd, pde, pt);
trace_i915_page_table_entry_map(vm, pde, pt,
gen6_pte_index(start),
gen6_pte_count(start, length),
GEN6_PTES);
- bitmap_or(pt->used_ptes, tmp_bitmap, pt->used_ptes,
- GEN6_PTES);
}
- WARN_ON(!bitmap_empty(new_page_tables, I915_PDES));
-
/* Make sure write is complete before other code can use this page
* table. Also require for WC mapped PTEs */
readl(ggtt->gsm);
--
2.7.4
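The bitmap_set() replacement in gen8_preallocate_top_level_pdps() relies on converting a virtual-address range into an entry count via a shift. A standalone sketch of that arithmetic, with illustrative constants standing in for the driver's (each PDPE covering 1 GiB, four PDPEs per legacy PDP):

```c
#include <stdint.h>

#define PDPE_SHIFT	30	/* each entry maps 1 << 30 bytes (1 GiB) */
#define PDPES_PER_PDP	4	/* legacy 32b ppgtt: 4 PDPEs */

/*
 * Number of entries needed to cover [start, start + length) when each
 * entry spans (1 << shift) bytes: the index of the entry past the end,
 * rounded up, minus the index of the first entry.
 */
static uint32_t entry_count(uint64_t start, uint64_t length, int shift)
{
	const uint64_t mask = (1ULL << shift) - 1;
	const uint64_t end = start + length;

	return (uint32_t)(((end + mask) >> shift) - (start >> shift));
}
```

With these constants, a 4 GiB range starting at 0 covers exactly four PDPEs, which is what the preallocation path marks in used_pdpes.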
Thread overview: 11+ messages
2016-12-12 11:44 [PATCH 0/8] GTT bitmaps purge Michał Winiarski
2016-12-12 11:44 ` [PATCH 1/8] drm/i915/gtt: Don't pass ppgtt to ppgtt_cleanup_4lvl Michał Winiarski
2016-12-12 11:56 ` Chris Wilson
2016-12-12 11:44 ` [PATCH 2/8] drm/i915/gtt: Rename orig_start/orig_length Michał Winiarski
2016-12-12 11:44 ` [PATCH 3/8] drm/i915/gtt: Extract unwind to separate function for gen6_alloc_va_range Michał Winiarski
2016-12-12 11:44 ` [PATCH 4/8] drm/i915/gtt: Don't use temp bitmaps to unwind gen8_alloc_va_range Michał Winiarski
2016-12-12 11:46 ` Michał Winiarski [this message]
2016-12-12 11:46 ` [PATCH 6/8] drm/i915: Prepare i915_page_table_entry_map tracepoint for bitmap purge Michał Winiarski
2016-12-12 11:48 ` [PATCH 7/8] drm/i915/gtt: Purge page tracking bitmaps Michał Winiarski
2016-12-12 11:48 ` [PATCH 8/8] drm/i915: Clear range when unbinding closed vma Michał Winiarski
2016-12-12 12:00 ` Chris Wilson