From: Michel Thierry
Subject: Re: [PATCH v5 03/32] drm/i915: Create page table allocators
Date: Tue, 24 Feb 2015 15:18:47 +0000
Message-ID: <54EC9657.4030401@intel.com>
References: <1418922621-25818-1-git-send-email-michel.thierry@intel.com> <1424706272-3016-1-git-send-email-michel.thierry@intel.com> <1424706272-3016-4-git-send-email-michel.thierry@intel.com> <87h9ubfm6f.fsf@gaia.fi.intel.com>
In-Reply-To: <87h9ubfm6f.fsf@gaia.fi.intel.com>
To: Mika Kuoppala
Cc: intel-gfx@lists.freedesktop.org

On 2/24/2015 1:56 PM, Mika Kuoppala wrote:
> Michel Thierry writes:
>
>> From: Ben Widawsky
>>
>> As we move toward dynamic page table allocation, it becomes much easier
>> to manage our data structures if we do things less coarsely, by
>> breaking up all of our actions into individual tasks. This makes the
>> code easier to write, read, and verify.
>>
>> Aside from the dissection of the allocation functions, the patch
>> statically allocates the page table structures without a page directory.
>> This remains the same for all platforms.
>>
>> The patch itself should not have much functional difference. The primary
>> noticeable difference is the fact that page tables are no longer
>> allocated, but rather statically declared as part of the page directory.
>> This has non-zero overhead, but things gain non-trivial complexity as a
>> result.
>>
> I don't quite understand the last sentence here. We gain overhead and
> complexity.
>
> s/non-trivial/trivial?

I'll rephrase this.

>> This patch exists for a few reasons:
>> 1. Splitting out the functions allows easily combining GEN6 and GEN8
>> code. Page tables have no difference based on GEN8. As we'll see in a
>> future patch when we add the DMA mappings to the allocations, it
>> requires only one small change to make work, and error handling should
>> just fall into place.
>>
>> 2. Unless we always want to allocate all page tables under a given PDE,
>> we'll have to eventually break this up into an array of pointers (or
>> pointer to pointer).
>>
>> 3. Having the discrete functions is easier to review, and understand.
>> All allocations and frees now take place in just a couple of locations.
>> Reviewing, and catching leaks should be easy.
>>
>> 4. Less important: the GFP flags are confined to one location, which
>> makes playing around with such things trivial.
>>
>> v2: Updated commit message to explain why this patch exists
>>
>> v3: For lrc, s/pdp.page_directory[i].daddr/pdp.page_directory[i]->daddr/
>>
>> v4: Renamed free_pt/pd_single functions to unmap_and_free_pt/pd (Daniel)
>>
>> v5: Added additional safety checks in gen8 clear/free/unmap.
>>
>> v6: Use WARN_ON and return -EINVAL in alloc_pt_range (Mika).
>>
>> Cc: Mika Kuoppala
>> Signed-off-by: Ben Widawsky
>> Signed-off-by: Michel Thierry (v3+)
>> ---
>>  drivers/gpu/drm/i915/i915_gem_gtt.c | 252 ++++++++++++++++++++++++------------
>>  drivers/gpu/drm/i915/i915_gem_gtt.h |   4 +-
>>  drivers/gpu/drm/i915/intel_lrc.c    |  16 +--
>>  3 files changed, 178 insertions(+), 94 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
>> index eb0714c..65c77e5 100644
>> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
>> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
>> @@ -279,6 +279,98 @@ static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr,
>>      return pte;
>>  }
>>
>> +static void unmap_and_free_pt(struct i915_page_table_entry *pt)
>> +{
>> +    if (WARN_ON(!pt->page))
>> +        return;
>> +    __free_page(pt->page);
>> +    kfree(pt);
>> +}
>> +
>> +static struct i915_page_table_entry *alloc_pt_single(void)
>> +{
>> +    struct i915_page_table_entry *pt;
>> +
>> +    pt = kzalloc(sizeof(*pt), GFP_KERNEL);
>> +    if (!pt)
>> +        return ERR_PTR(-ENOMEM);
>> +
>> +    pt->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>> +    if (!pt->page) {
>> +        kfree(pt);
>> +        return ERR_PTR(-ENOMEM);
>> +    }
>> +
>> +    return pt;
>> +}
>> +
>> +/**
>> + * alloc_pt_range() - Allocate multiple page tables
>> + * @pd: The page directory which will have at least @count entries
>> + * available to point to the allocated page tables.
>> + * @pde: First page directory entry for which we are allocating.
>> + * @count: Number of pages to allocate.
>> + *
>> + * Allocates multiple page table pages and sets the appropriate entries in the
>> + * page table structure within the page directory. Function cleans up after
>> + * itself on any failures.
>> + *
>> + * Return: 0 if allocation succeeded.
>> + */
>> +static int alloc_pt_range(struct i915_page_directory_entry *pd, uint16_t pde, size_t count)
>> +{
>> +    int i, ret;
>> +
>> +    /* 512 is the max page tables per page_directory on any platform. */
>> +    if (WARN_ON(pde + count > GEN6_PPGTT_PD_ENTRIES))
>> +        return -EINVAL;
>> +
>> +    for (i = pde; i < pde + count; i++) {
>> +        struct i915_page_table_entry *pt = alloc_pt_single();
>> +
>> +        if (IS_ERR(pt)) {
>> +            ret = PTR_ERR(pt);
>> +            goto err_out;
>> +        }
>> +        WARN(pd->page_tables[i],
>> +             "Leaking page directory entry %d (%pa)\n",
>> +             i, pd->page_tables[i]);
>> +        pd->page_tables[i] = pt;
>> +    }
>> +
>> +    return 0;
>> +
>> +err_out:
>> +    while (i--)
>> +        unmap_and_free_pt(pd->page_tables[i]);
> This is suspicious as it is non symmetrical of how we allocate. If the
> plan is to free everything below pde, that should be mentioned in the
> comment above.
>
> On this patch we call this with pde == 0, but I suspect later in the
> series there will be other usecases for this.

Actually it's the other way around: later on, this will be used only by
aliasing ppgtt; the others will iterate through a macro and call
alloc_pt_single ("page table allocation rework" patch).

Anyway, it makes it clearer to have something like:

err_out:
-    while (i--)
+    while (i-- > pde)
         unmap_and_free_pt(pd->page_tables[i]);
     return ret;

>> +    return ret;
>> +}
>> +
>> +static void unmap_and_free_pd(struct i915_page_directory_entry *pd)
>> +{
>> +    if (pd->page) {
>> +        __free_page(pd->page);
>> +        kfree(pd);
>> +    }
>> +}
>> +
>> +static struct i915_page_directory_entry *alloc_pd_single(void)
>> +{
>> +    struct i915_page_directory_entry *pd;
>> +
>> +    pd = kzalloc(sizeof(*pd), GFP_KERNEL);
>> +    if (!pd)
>> +        return ERR_PTR(-ENOMEM);
>> +
>> +    pd->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>> +    if (!pd->page) {
>> +        kfree(pd);
>> +        return ERR_PTR(-ENOMEM);
>> +    }
>> +
>> +    return pd;
>> +}
>> +
>>  /* Broadwell Page Directory Pointer Descriptors */
>>  static int gen8_write_pdp(struct intel_engine_cs *ring, unsigned entry,
>>                uint64_t val)
>> @@ -311,7 +403,7 @@ static int gen8_mm_switch(struct i915_hw_ppgtt *ppgtt,
>>      int used_pd = ppgtt->num_pd_entries / GEN8_PDES_PER_PAGE;
>>
>>      for (i = used_pd - 1; i >= 0; i--) {
>> -        dma_addr_t addr = ppgtt->pdp.page_directory[i].daddr;
>> +        dma_addr_t addr = ppgtt->pdp.page_directory[i]->daddr;
>>          ret = gen8_write_pdp(ring, i, addr);
>>          if (ret)
>>              return ret;
>> @@ -338,8 +430,24 @@ static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
>>                        I915_CACHE_LLC, use_scratch);
>>
>>      while (num_entries) {
>> -        struct i915_page_directory_entry *pd = &ppgtt->pdp.page_directory[pdpe];
>> -        struct page *page_table = pd->page_tables[pde].page;
>> +        struct i915_page_directory_entry *pd;
>> +        struct i915_page_table_entry *pt;
>> +        struct page *page_table;
>> +
>> +        if (WARN_ON(!ppgtt->pdp.page_directory[pdpe]))
>> +            continue;
>> +
>> +        pd = ppgtt->pdp.page_directory[pdpe];
>> +
>> +        if (WARN_ON(!pd->page_tables[pde]))
>> +            continue;
>> +
>> +        pt = pd->page_tables[pde];
>> +
>> +        if (WARN_ON(!pt->page))
>> +            continue;
>> +
>> +        page_table = pt->page;
>>
>>          last_pte = pte + num_entries;
>>          if (last_pte > GEN8_PTES_PER_PAGE)
>> @@ -384,8 +492,9 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm,
>>              break;
>>
>>          if (pt_vaddr == NULL) {
>> -            struct i915_page_directory_entry *pd = &ppgtt->pdp.page_directory[pdpe];
>> -            struct page *page_table = pd->page_tables[pde].page;
>> +            struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[pdpe];
>> +            struct i915_page_table_entry *pt = pd->page_tables[pde];
>> +            struct page *page_table = pt->page;
>>
>>              pt_vaddr = kmap_atomic(page_table);
>>          }
>> @@ -416,19 +525,16 @@ static void gen8_free_page_tables(struct i915_page_directory_entry *pd)
>>  {
>>      int i;
>>
>> -    if (pd->page_tables == NULL)
>> +    if (!pd->page)
>>          return;
>>
>> -    for (i = 0; i < GEN8_PDES_PER_PAGE; i++)
>> -        if (pd->page_tables[i].page)
>> -            __free_page(pd->page_tables[i].page);
>> -}
>> +    for (i = 0; i < GEN8_PDES_PER_PAGE; i++) {
>> +        if (WARN_ON(!pd->page_tables[i]))
>> +            continue;
>>
>> -static void gen8_free_page_directory(struct i915_page_directory_entry *pd)
>> -{
>> -    gen8_free_page_tables(pd);
>> -    kfree(pd->page_tables);
>> -    __free_page(pd->page);
>> +        unmap_and_free_pt(pd->page_tables[i]);
>> +        pd->page_tables[i] = NULL;
>> +    }
>>  }
>>
>>  static void gen8_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
>> @@ -436,7 +542,11 @@ static void gen8_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
>>      int i;
>>
>>      for (i = 0; i < ppgtt->num_pd_pages; i++) {
>> -        gen8_free_page_directory(&ppgtt->pdp.page_directory[i]);
>> +        if (WARN_ON(!ppgtt->pdp.page_directory[i]))
>> +            continue;
>> +
>> +        gen8_free_page_tables(ppgtt->pdp.page_directory[i]);
>> +        unmap_and_free_pd(ppgtt->pdp.page_directory[i]);
>>      }
>>  }
>>
>> @@ -448,14 +558,23 @@ static void gen8_ppgtt_unmap_pages(struct i915_hw_ppgtt *ppgtt)
>>      for (i = 0; i < ppgtt->num_pd_pages; i++) {
>>          /* TODO: In the future we'll support sparse mappings, so this
>>           * will have to change. */
>> -        if (!ppgtt->pdp.page_directory[i].daddr)
>> +        if (!ppgtt->pdp.page_directory[i]->daddr)
>>              continue;
>>
>> -        pci_unmap_page(hwdev, ppgtt->pdp.page_directory[i].daddr, PAGE_SIZE,
>> +        pci_unmap_page(hwdev, ppgtt->pdp.page_directory[i]->daddr, PAGE_SIZE,
>>                 PCI_DMA_BIDIRECTIONAL);
>>
>>          for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
>> -            dma_addr_t addr = ppgtt->pdp.page_directory[i].page_tables[j].daddr;
>> +            struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
>> +            struct i915_page_table_entry *pt;
>> +            dma_addr_t addr;
>> +
>> +            if (WARN_ON(!pd->page_tables[j]))
>> +                continue;
>> +
>> +            pt = pd->page_tables[j];
>> +            addr = pt->daddr;
>> +
>>              if (addr)
>>                  pci_unmap_page(hwdev, addr, PAGE_SIZE,
>>                         PCI_DMA_BIDIRECTIONAL);
>> @@ -474,25 +593,20 @@ static void gen8_ppgtt_cleanup(struct i915_address_space *vm)
>>
>>  static int gen8_ppgtt_allocate_page_tables(struct i915_hw_ppgtt *ppgtt)
>>  {
>> -    int i, j;
>> +    int i, ret;
>>
>>      for (i = 0; i < ppgtt->num_pd_pages; i++) {
>> -        struct i915_page_directory_entry *pd = &ppgtt->pdp.page_directory[i];
>> -        for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
>> -            struct i915_page_table_entry *pt = &pd->page_tables[j];
>> -
>> -            pt->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>> -            if (!pt->page)
>> -                goto unwind_out;
>> -
>> -        }
>> +        ret = alloc_pt_range(ppgtt->pdp.page_directory[i],
>> +                     0, GEN8_PDES_PER_PAGE);
>> +        if (ret)
>> +            goto unwind_out;
>>      }
>>
>>      return 0;
>>
>>  unwind_out:
>>      while (i--)
>> -        gen8_free_page_tables(&ppgtt->pdp.page_directory[i]);
>> +        gen8_free_page_tables(ppgtt->pdp.page_directory[i]);
>>
>>      return -ENOMEM;
>>  }
>> @@ -503,19 +617,9 @@ static int gen8_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt,
>>      int i;
>>
>>      for (i = 0; i < max_pdp; i++) {
>> -        struct i915_page_table_entry *pt;
>> -
>> -        pt = kcalloc(GEN8_PDES_PER_PAGE, sizeof(*pt), GFP_KERNEL);
>> -        if (!pt)
>> +        ppgtt->pdp.page_directory[i] = alloc_pd_single();
>> +        if (IS_ERR(ppgtt->pdp.page_directory[i]))
>>              goto unwind_out;
>> -
>> -        ppgtt->pdp.page_directory[i].page = alloc_page(GFP_KERNEL);
>> -        if (!ppgtt->pdp.page_directory[i].page) {
>> -            kfree(pt);
>> -            goto unwind_out;
>> -        }
>> -
>> -        ppgtt->pdp.page_directory[i].page_tables = pt;
>>      }
>>
>>      ppgtt->num_pd_pages = max_pdp;
>> @@ -524,10 +628,8 @@ static int gen8_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt,
>>      return 0;
>>
>>  unwind_out:
>> -    while (i--) {
>> -        kfree(ppgtt->pdp.page_directory[i].page_tables);
>> -        __free_page(ppgtt->pdp.page_directory[i].page);
>> -    }
>> +    while (i--)
>> +        unmap_and_free_pd(ppgtt->pdp.page_directory[i]);
>>
>>      return -ENOMEM;
>>  }
>> @@ -561,14 +663,14 @@ static int gen8_ppgtt_setup_page_directories(struct i915_hw_ppgtt *ppgtt,
>>      int ret;
>>
>>      pd_addr = pci_map_page(ppgtt->base.dev->pdev,
>> -                   ppgtt->pdp.page_directory[pd].page, 0,
>> +                   ppgtt->pdp.page_directory[pd]->page, 0,
>>                     PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
>>
>>      ret = pci_dma_mapping_error(ppgtt->base.dev->pdev, pd_addr);
>>      if (ret)
>>          return ret;
>>
>> -    ppgtt->pdp.page_directory[pd].daddr = pd_addr;
>> +    ppgtt->pdp.page_directory[pd]->daddr = pd_addr;
>>
>>      return 0;
>>  }
>> @@ -578,8 +680,8 @@ static int gen8_ppgtt_setup_page_tables(struct i915_hw_ppgtt *ppgtt,
>>                        const int pt)
>>  {
>>      dma_addr_t pt_addr;
>> -    struct i915_page_directory_entry *pdir = &ppgtt->pdp.page_directory[pd];
>> -    struct i915_page_table_entry *ptab = &pdir->page_tables[pt];
>> +    struct i915_page_directory_entry *pdir = ppgtt->pdp.page_directory[pd];
>> +    struct i915_page_table_entry *ptab = pdir->page_tables[pt];
>>      struct page *p = ptab->page;
>>      int ret;
>>
>> @@ -642,10 +744,12 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt, uint64_t size)
>>       * will never need to touch the PDEs again.
>>       */
>>      for (i = 0; i < max_pdp; i++) {
>> +        struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
>>          gen8_ppgtt_pde_t *pd_vaddr;
>> -        pd_vaddr = kmap_atomic(ppgtt->pdp.page_directory[i].page);
>> +        pd_vaddr = kmap_atomic(ppgtt->pdp.page_directory[i]->page);
>>          for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
>> -            dma_addr_t addr = ppgtt->pdp.page_directory[i].page_tables[j].daddr;
>> +            struct i915_page_table_entry *pt = pd->page_tables[j];
>> +            dma_addr_t addr = pt->daddr;
>>              pd_vaddr[j] = gen8_pde_encode(ppgtt->base.dev, addr,
>>                                I915_CACHE_LLC);
>>          }
>> @@ -696,7 +800,7 @@ static void gen6_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
>>      for (pde = 0; pde < ppgtt->num_pd_entries; pde++) {
>>          u32 expected;
>>          gen6_gtt_pte_t *pt_vaddr;
>> -        dma_addr_t pt_addr = ppgtt->pd.page_tables[pde].daddr;
>> +        dma_addr_t pt_addr = ppgtt->pd.page_tables[pde]->daddr;
>>          pd_entry = readl(pd_addr + pde);
>>          expected = (GEN6_PDE_ADDR_ENCODE(pt_addr) | GEN6_PDE_VALID);
>>
>> @@ -707,7 +811,7 @@ static void gen6_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
>>                     expected);
>>          seq_printf(m, "\tPDE: %x\n", pd_entry);
>>
>> -        pt_vaddr = kmap_atomic(ppgtt->pd.page_tables[pde].page);
>> +        pt_vaddr = kmap_atomic(ppgtt->pd.page_tables[pde]->page);
>>          for (pte = 0; pte < I915_PPGTT_PT_ENTRIES; pte += 4) {
>>              unsigned long va =
>>                  (pde * PAGE_SIZE * I915_PPGTT_PT_ENTRIES) +
>> @@ -746,7 +850,7 @@ static void gen6_write_pdes(struct i915_hw_ppgtt *ppgtt)
>>      for (i = 0; i < ppgtt->num_pd_entries; i++) {
>>          dma_addr_t pt_addr;
>>
>> -        pt_addr = ppgtt->pd.page_tables[i].daddr;
>> +        pt_addr = ppgtt->pd.page_tables[i]->daddr;
>>          pd_entry = GEN6_PDE_ADDR_ENCODE(pt_addr);
>>          pd_entry |= GEN6_PDE_VALID;
>>
>> @@ -922,7 +1026,7 @@ static void gen6_ppgtt_clear_range(struct i915_address_space *vm,
>>          if (last_pte > I915_PPGTT_PT_ENTRIES)
>>              last_pte = I915_PPGTT_PT_ENTRIES;
>>
>> -        pt_vaddr = kmap_atomic(ppgtt->pd.page_tables[act_pt].page);
>> +        pt_vaddr = kmap_atomic(ppgtt->pd.page_tables[act_pt]->page);
>>
>>          for (i = first_pte; i < last_pte; i++)
>>              pt_vaddr[i] = scratch_pte;
>> @@ -951,7 +1055,7 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
>>      pt_vaddr = NULL;
>>      for_each_sg_page(pages->sgl, &sg_iter, pages->nents, 0) {
>>          if (pt_vaddr == NULL)
>> -            pt_vaddr = kmap_atomic(ppgtt->pd.page_tables[act_pt].page);
>> +            pt_vaddr = kmap_atomic(ppgtt->pd.page_tables[act_pt]->page);
>>
>>          pt_vaddr[act_pte] =
>>              vm->pte_encode(sg_page_iter_dma_address(&sg_iter),
>> @@ -974,7 +1078,7 @@ static void gen6_ppgtt_unmap_pages(struct i915_hw_ppgtt *ppgtt)
>>
>>      for (i = 0; i < ppgtt->num_pd_entries; i++)
>>          pci_unmap_page(ppgtt->base.dev->pdev,
>> -                   ppgtt->pd.page_tables[i].daddr,
>> +                   ppgtt->pd.page_tables[i]->daddr,
>>                 4096, PCI_DMA_BIDIRECTIONAL);
>>  }
>>
>> @@ -983,8 +1087,9 @@ static void gen6_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
>>      int i;
>>
>>      for (i = 0; i < ppgtt->num_pd_entries; i++)
>> -        __free_page(ppgtt->pd.page_tables[i].page);
>> -    kfree(ppgtt->pd.page_tables);
>> +        unmap_and_free_pt(ppgtt->pd.page_tables[i]);
>> +
>> +    unmap_and_free_pd(&ppgtt->pd);
>>  }
>>
>>  static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
>> @@ -1039,27 +1144,6 @@ alloc:
>>      return 0;
>>  }
>>
>> -static int gen6_ppgtt_allocate_page_tables(struct i915_hw_ppgtt *ppgtt)
>> -{
>> -    struct i915_page_table_entry *pt;
>> -    int i;
>> -
>> -    pt = kcalloc(ppgtt->num_pd_entries, sizeof(*pt), GFP_KERNEL);
>> -    if (!pt)
>> -        return -ENOMEM;
>> -
>> -    for (i = 0; i < ppgtt->num_pd_entries; i++) {
>> -        pt[i].page = alloc_page(GFP_KERNEL);
>> -        if (!pt->page) {
>> -            gen6_ppgtt_free(ppgtt);
>> -            return -ENOMEM;
>> -        }
>> -    }
>> -
>> -    ppgtt->pd.page_tables = pt;
>> -    return 0;
>> -}
>> -
>>  static int gen6_ppgtt_alloc(struct i915_hw_ppgtt *ppgtt)
>>  {
>>      int ret;
>> @@ -1068,7 +1152,7 @@ static int gen6_ppgtt_alloc(struct i915_hw_ppgtt *ppgtt)
>>      if (ret)
>>          return ret;
>>
>> -    ret = gen6_ppgtt_allocate_page_tables(ppgtt);
>> +    ret = alloc_pt_range(&ppgtt->pd, 0, ppgtt->num_pd_entries);
>>      if (ret) {
>>          drm_mm_remove_node(&ppgtt->node);
>>          return ret;
>> @@ -1086,7 +1170,7 @@ static int gen6_ppgtt_setup_page_tables(struct i915_hw_ppgtt *ppgtt)
>>          struct page *page;
>>          dma_addr_t pt_addr;
>>
>> -        page = ppgtt->pd.page_tables[i].page;
>> +        page = ppgtt->pd.page_tables[i]->page;
>>          pt_addr = pci_map_page(dev->pdev, page, 0, 4096,
>>                         PCI_DMA_BIDIRECTIONAL);
>>
>> @@ -1095,7 +1179,7 @@ static int gen6_ppgtt_setup_page_tables(struct i915_hw_ppgtt *ppgtt)
>>              return -EIO;
>>          }
>>
>> -        ppgtt->pd.page_tables[i].daddr = pt_addr;
>> +        ppgtt->pd.page_tables[i]->daddr = pt_addr;
>>      }
>>
>>      return 0;
>> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
>> index 6efeb18..e8cad72 100644
>> --- a/drivers/gpu/drm/i915/i915_gem_gtt.h
>> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
>> @@ -199,12 +199,12 @@ struct i915_page_directory_entry {
>>          dma_addr_t daddr;
>>      };
>>
>> -    struct i915_page_table_entry *page_tables;
>> +    struct i915_page_table_entry *page_tables[GEN6_PPGTT_PD_ENTRIES]; /* PDEs */
> Would you consider changing the plural here in 'tables' so that we would
> lose the discrepancy against the page_directory below?

Ok, I'll rename them, but it's cleaner to make this change in the patch
that added it ("drm/i915: page table abstractions").
-Michel

> -Mika
>
>>  };
>>
>>  struct i915_page_directory_pointer_entry {
>>      /* struct page *page; */
>> -    struct i915_page_directory_entry page_directory[GEN8_LEGACY_PDPES];
>> +    struct i915_page_directory_entry *page_directory[GEN8_LEGACY_PDPES];
>>  };
>>
>>  struct i915_address_space {
>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>> index 9e71992..bc9c7c3 100644
>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>> @@ -1735,14 +1735,14 @@ populate_lr_context(struct intel_context *ctx, struct drm_i915_gem_object *ctx_o
>>      reg_state[CTX_PDP1_LDW] = GEN8_RING_PDP_LDW(ring, 1);
>>      reg_state[CTX_PDP0_UDW] = GEN8_RING_PDP_UDW(ring, 0);
>>      reg_state[CTX_PDP0_LDW] = GEN8_RING_PDP_LDW(ring, 0);
>> -    reg_state[CTX_PDP3_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[3].daddr);
>> -    reg_state[CTX_PDP3_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[3].daddr);
>> -    reg_state[CTX_PDP2_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[2].daddr);
>> -    reg_state[CTX_PDP2_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[2].daddr);
>> -    reg_state[CTX_PDP1_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[1].daddr);
>> -    reg_state[CTX_PDP1_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[1].daddr);
>> -    reg_state[CTX_PDP0_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[0].daddr);
>> -    reg_state[CTX_PDP0_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[0].daddr);
>> +    reg_state[CTX_PDP3_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[3]->daddr);
>> +    reg_state[CTX_PDP3_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[3]->daddr);
>> +    reg_state[CTX_PDP2_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[2]->daddr);
>> +    reg_state[CTX_PDP2_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[2]->daddr);
>> +    reg_state[CTX_PDP1_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[1]->daddr);
>> +    reg_state[CTX_PDP1_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[1]->daddr);
>> +    reg_state[CTX_PDP0_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[0]->daddr);
>> +    reg_state[CTX_PDP0_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[0]->daddr);
>>      if (ring->id == RCS) {
>>          reg_state[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
>>          reg_state[CTX_R_PWR_CLK_STATE] = 0x20c8;
>> --
>> 2.1.1
>>
>> _______________________________________________
>> Intel-gfx mailing list
>> Intel-gfx@lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
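[Editor's note] The `while (i-- > pde)` unwind suggested in the review can be illustrated with a small userspace sketch. This is not the kernel code: `alloc_range`, `pd_tables`, `PD_ENTRIES`, and the injected failure index are illustrative stand-ins (plain `malloc` in place of `alloc_pt_single`), showing why the rollback must stop at `pde` rather than at 0 so that entries allocated by earlier callers survive a failure.

```c
#include <stdlib.h>

#define PD_ENTRIES 512	/* stand-in for GEN6_PPGTT_PD_ENTRIES */

/* Hypothetical page-directory: each "page table" is just a malloc'd
 * marker here, NULL meaning "not allocated". */
static void *pd_tables[PD_ENTRIES];

/* Allocate tables for entries [pde, pde + count). `fail_at` injects an
 * allocation failure at that index (use an out-of-range value for no
 * failure). On failure, roll back ONLY the entries this call allocated,
 * i.e. iterate down to pde, not to 0 -- the `while (i-- > pde)` fix. */
static int alloc_range(unsigned int pde, unsigned int count,
		       unsigned int fail_at)
{
	unsigned int i;

	if (pde + count > PD_ENTRIES)
		return -1;	/* -EINVAL in the kernel version */

	for (i = pde; i < pde + count; i++) {
		void *pt = (i == fail_at) ? NULL : malloc(1);

		if (!pt) {
			/* unwind: free only what *this* call allocated,
			 * leaving entries below pde untouched */
			while (i-- > pde) {
				free(pd_tables[i]);
				pd_tables[i] = NULL;
			}
			return -1;	/* -ENOMEM in the kernel version */
		}
		pd_tables[i] = pt;
	}
	return 0;
}
```

For example, after a successful `alloc_range(0, 4, 9999)`, a failing `alloc_range(4, 8, 10)` frees only entries 4-9 and leaves entries 0-3 intact; a `while (i--)` unwind would instead have freed the caller's earlier allocations too.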