* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers [not found] ` <20201015164909.GC401619@phenom.ffwll.local> @ 2020-10-15 17:52 ` Thomas Zimmermann 2020-10-16 9:41 ` Christian König 0 siblings, 1 reply; 4+ messages in thread From: Thomas Zimmermann @ 2020-10-15 17:52 UTC (permalink / raw) To: Daniel Vetter Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang, kraxel, yuq825, sam, emil.velikov, linux-samsung-soc, jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price, linux-rockchip, luben.tuikov, bskeggs, linux+etnaviv, spice-devel, alyssa.rosenzweig, etnaviv, linaro-mm-sig, hdegoede, xen-devel, virtualization, sean, apaneers, linux-arm-kernel, amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang, kgene, alexander.deucher, linux-media, Christian König Hi On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote: > On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote: > > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in > > > kernel address space. The mapping's address is returned as struct > > > dma_buf_map. Each function is a simplified version of TTM's existing > > > kmap code. Both functions respect the memory's location ani/or > > > writecombine flags. > > > > > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), > > > two helpers that convert a GEM object into the TTM BO and forward the > > > call to TTM's vmap/vunmap. These helpers can be dropped into the rsp > > > GEM object callbacks. > > > > > > v4: > > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers > > > (Daniel, Christian) > > > > Bunch of minor comments below, but over all look very solid to me. > > Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the > cleanest. And then we can maybe push the combinatorial monster into > vmwgfx, which I think is the only user after this series. Or perhaps a > dedicated set of helpers to map an invidual page (again using the > dma_buf_map stuff). From a quick look, I'd say it should be possible to have the same interface for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map). All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be killed off entirely. Best regards Thomas > > I'll let Christian with the details, but at a high level this is > definitely > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> > > Thanks a lot for doing all this. > -Daniel > > > > > > > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> > > > --- > > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ > > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ > > > include/drm/drm_gem_ttm_helper.h | 6 +++ > > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ > > > include/linux/dma-buf-map.h | 20 ++++++++ > > > 5 files changed, 164 insertions(+) > > > > > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c > > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 > > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c > > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c > > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, > > > unsigned int indent, } > > > EXPORT_SYMBOL(drm_gem_ttm_print_info); > > > +/** > > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object > > > + * @gem: GEM object. > > > + * @map: [out] returns the dma-buf mapping. > > > + * > > > + * Maps a GEM object with ttm_bo_vmap(). 
This function can be used as > > > + * &drm_gem_object_funcs.vmap callback. > > > + * > > > + * Returns: > > > + * 0 on success, or a negative errno code otherwise. > > > + */ > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map) > > > +{ > > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > > + > > > + return ttm_bo_vmap(bo, map); > > > + > > > +} > > > +EXPORT_SYMBOL(drm_gem_ttm_vmap); > > > + > > > +/** > > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object > > > + * @gem: GEM object. > > > + * @map: dma-buf mapping. > > > + * > > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used > > > as > > > + * &drm_gem_object_funcs.vmap callback. > > > + */ > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map) > > > +{ > > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > > + > > > + ttm_bo_vunmap(bo, map); > > > +} > > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap); > > > + > > > /** > > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object > > > * @gem: GEM object. > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c > > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d > > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c > > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c > > > @@ -32,6 +32,7 @@ > > > #include <drm/ttm/ttm_bo_driver.h> > > > #include <drm/ttm/ttm_placement.h> > > > #include <drm/drm_vma_manager.h> > > > +#include <linux/dma-buf-map.h> > > > #include <linux/io.h> > > > #include <linux/highmem.h> > > > #include <linux/wait.h> > > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) > > > } > > > EXPORT_SYMBOL(ttm_bo_kunmap); > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > > +{ > > > + struct ttm_resource *mem = &bo->mem; > > > + int ret; > > > + > > > + ret = ttm_mem_io_reserve(bo->bdev, mem); > > > + if (ret) > > > + return ret; > > > + > > > + if (mem->bus.is_iomem) { > > > + void __iomem *vaddr_iomem; > > > + unsigned long size = bo->num_pages << PAGE_SHIFT; > > > > Please use uint64_t here and make sure to cast bo->num_pages before > > shifting. > > > > We have an unit tests of allocating a 8GB BO and that should work on a > > 32bit machine as well :) > > > > > + > > > + if (mem->bus.addr) > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > > > + else if (mem->placement & TTM_PL_FLAG_WC) > > > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new > > mem->bus.caching enum as replacement. > > > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, > > > size); > > > + else > > > + vaddr_iomem = ioremap(mem->bus.offset, size); > > > + > > > + if (!vaddr_iomem) > > > + return -ENOMEM; > > > + > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > > > + > > > + } else { > > > + struct ttm_operation_ctx ctx = { > > > + .interruptible = false, > > > + .no_wait_gpu = false > > > + }; > > > + struct ttm_tt *ttm = bo->ttm; > > > + pgprot_t prot; > > > + void *vaddr; > > > + > > > + BUG_ON(!ttm); > > > > I think we can drop this, populate will just crash badly anyway. > > > > > + > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > > > + if (ret) > > > + return ret; > > > + > > > + /* > > > + * We need to use vmap to get the desired page > > > protection > > > + * or to make the buffer object look contiguous. > > > + */ > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); > > > > The calling convention has changed on drm-misc-next as well, but should be > > trivial to adapt. 
> > > > Regards, > > Christian. > > > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > > > + if (!vaddr) > > > + return -ENOMEM; > > > + > > > + dma_buf_map_set_vaddr(map, vaddr); > > > + } > > > + > > > + return 0; > > > +} > > > +EXPORT_SYMBOL(ttm_bo_vmap); > > > + > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > > *map) +{ > > > + if (dma_buf_map_is_null(map)) > > > + return; > > > + > > > + if (map->is_iomem) > > > + iounmap(map->vaddr_iomem); > > > + else > > > + vunmap(map->vaddr); > > > + dma_buf_map_clear(map); > > > + > > > + ttm_mem_io_free(bo->bdev, &bo->mem); > > > +} > > > +EXPORT_SYMBOL(ttm_bo_vunmap); > > > + > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > > > bool dst_use_tt) > > > { > > > diff --git a/include/drm/drm_gem_ttm_helper.h > > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 > > > 100644 --- a/include/drm/drm_gem_ttm_helper.h > > > +++ b/include/drm/drm_gem_ttm_helper.h > > > @@ -10,11 +10,17 @@ > > > #include <drm/ttm/ttm_bo_api.h> > > > #include <drm/ttm/ttm_bo_driver.h> > > > +struct dma_buf_map; > > > + > > > #define drm_gem_ttm_of_gem(gem_obj) \ > > > container_of(gem_obj, struct ttm_buffer_object, base) > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int > > > indent, const struct drm_gem_object *gem); > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map); > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map); > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem, > > > struct vm_area_struct *vma); > > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > > > index 37102e45e496..2c59a785374c 100644 > > > --- a/include/drm/ttm/ttm_bo_api.h > > > +++ b/include/drm/ttm/ttm_bo_api.h > > > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > > struct ttm_bo_device; > > > +struct dma_buf_map; > > > + > > > struct drm_mm_node; > > > struct ttm_placement; > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, > > > unsigned long start_page, */ > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > > +/** > > > + * ttm_bo_vmap > > > + * > > > + * @bo: The buffer object. > > > + * @map: pointer to a struct dma_buf_map representing the map. > > > + * > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > > > + * data in the buffer object. The parameter @map returns the virtual > > > + * address as struct dma_buf_map. Unmap the buffer with > > > ttm_bo_vunmap(). > > > + * > > > + * Returns > > > + * -ENOMEM: Out of memory. > > > + * -EINVAL: Invalid range. > > > + */ > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > > + > > > +/** > > > + * ttm_bo_vunmap > > > + * > > > + * @bo: The buffer object. > > > + * @map: Object describing the map to unmap. > > > + * > > > + * Unmaps a kernel map set up by ttm_bo_vmap(). > > > + */ > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > > *map); + > > > /** > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > > > * > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > > index fd1aba545fdf..2e8bbecb5091 100644 > > > --- a/include/linux/dma-buf-map.h > > > +++ b/include/linux/dma-buf-map.h > > > @@ -45,6 +45,12 @@ > > > * > > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > > > * > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > > > + * > > > + * .. 
code-block:: c > > > + * > > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > > + * > > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > > * dma_buf_map_is_null(). > > > * > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > > > dma_buf_map *map, void *vaddr) map->is_iomem = false; > > > } > > > +/** > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > > > an address in I/O memory > > > + * @map: The dma-buf mapping structure > > > + * @vaddr_iomem: An I/O-memory address > > > + * > > > + * Sets the address and the I/O-memory flag. > > > + */ > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > > > + void __iomem > > > *vaddr_iomem) +{ > > > + map->vaddr_iomem = vaddr_iomem; > > > + map->is_iomem = true; > > > +} > > > + > > > /** > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for > > > equality > > > * @lhs: The dma-buf mapping structure > > > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 4+ messages in thread
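As a sketch of how the iomem branch of ttm_bo_vmap() might look once the two review comments above are addressed (the 64-bit size and the removal of TTM_PL_FLAG_WC), assuming the drm-misc-next replacement is the mem->bus.caching field with a ttm_write_combined value; the exact names may differ in the final series:

	if (mem->bus.is_iomem) {
		void __iomem *vaddr_iomem;
		/* cast before shifting so an 8 GiB BO works on 32-bit machines, too */
		uint64_t size = (uint64_t)bo->num_pages << PAGE_SHIFT;

		if (mem->bus.addr)
			vaddr_iomem = (void __iomem *)mem->bus.addr;
		else if (mem->bus.caching == ttm_write_combined) /* assumed enum value */
			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
		else
			vaddr_iomem = ioremap(mem->bus.offset, size);

		if (!vaddr_iomem)
			return -ENOMEM;

		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
	}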
* Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers 2020-10-15 17:52 ` [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers Thomas Zimmermann @ 2020-10-16 9:41 ` Christian König 0 siblings, 0 replies; 4+ messages in thread From: Christian König @ 2020-10-16 9:41 UTC (permalink / raw) To: Thomas Zimmermann, Daniel Vetter Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang, kraxel, yuq825, sam, emil.velikov, linux-samsung-soc, jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price, linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede, xen-devel, virtualization, sean, apaneers, linux-arm-kernel, amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang, kgene, alexander.deucher, linux-media Am 15.10.20 um 19:52 schrieb Thomas Zimmermann: > Hi > > On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote: > >> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote: >>> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: >>>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in >>>> kernel address space. The mapping's address is returned as struct >>>> dma_buf_map. Each function is a simplified version of TTM's existing >>>> kmap code. Both functions respect the memory's location ani/or >>>> writecombine flags. >>>> >>>> On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), >>>> two helpers that convert a GEM object into the TTM BO and forward the >>>> call to TTM's vmap/vunmap. These helpers can be dropped into the rsp >>>> GEM object callbacks. >>>> >>>> v4: >>>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers >>>> (Daniel, Christian) >>> Bunch of minor comments below, but over all look very solid to me. >> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the >> cleanest. And then we can maybe push the combinatorial monster into >> vmwgfx, which I think is the only user after this series. Or perhaps a >> dedicated set of helpers to map an invidual page (again using the >> dma_buf_map stuff). > From a quick look, I'd say it should be possible to have the same interface > for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map). > All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be > killed off entirely. Yes, that would be rather nice to have. Thanks, Christian. > > Best regards > Thomas > >> I'll let Christian with the details, but at a high level this is >> definitely >> >> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> >> >> Thanks a lot for doing all this. >> -Daniel >> >>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> >>>> --- >>>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ >>>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ >>>> include/drm/drm_gem_ttm_helper.h | 6 +++ >>>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ >>>> include/linux/dma-buf-map.h | 20 ++++++++ >>>> 5 files changed, 164 insertions(+) >>>> >>>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c >>>> b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 >>>> 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c >>>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c >>>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, >>>> unsigned int indent, } >>>> EXPORT_SYMBOL(drm_gem_ttm_print_info); >>>> +/** >>>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object >>>> + * @gem: GEM object. 
>>>> + * @map: [out] returns the dma-buf mapping. >>>> + * >>>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as >>>> + * &drm_gem_object_funcs.vmap callback. >>>> + * >>>> + * Returns: >>>> + * 0 on success, or a negative errno code otherwise. >>>> + */ >>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map) >>>> +{ >>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); >>>> + >>>> + return ttm_bo_vmap(bo, map); >>>> + >>>> +} >>>> +EXPORT_SYMBOL(drm_gem_ttm_vmap); >>>> + >>>> +/** >>>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object >>>> + * @gem: GEM object. >>>> + * @map: dma-buf mapping. >>>> + * >>>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used >>>> as >>>> + * &drm_gem_object_funcs.vmap callback. >>>> + */ >>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map) >>>> +{ >>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); >>>> + >>>> + ttm_bo_vunmap(bo, map); >>>> +} >>>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap); >>>> + >>>> /** >>>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object >>>> * @gem: GEM object. >>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c >>>> b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d >>>> 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c >>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c >>>> @@ -32,6 +32,7 @@ >>>> #include <drm/ttm/ttm_bo_driver.h> >>>> #include <drm/ttm/ttm_placement.h> >>>> #include <drm/drm_vma_manager.h> >>>> +#include <linux/dma-buf-map.h> >>>> #include <linux/io.h> >>>> #include <linux/highmem.h> >>>> #include <linux/wait.h> >>>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) >>>> } >>>> EXPORT_SYMBOL(ttm_bo_kunmap); >>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) >>>> +{ >>>> + struct ttm_resource *mem = &bo->mem; >>>> + int ret; >>>> + >>>> + ret = ttm_mem_io_reserve(bo->bdev, mem); >>>> + if (ret) >>>> + return ret; >>>> + >>>> + if (mem->bus.is_iomem) { >>>> + void __iomem *vaddr_iomem; >>>> + unsigned long size = bo->num_pages << PAGE_SHIFT; >>> Please use uint64_t here and make sure to cast bo->num_pages before >>> shifting. >>> >>> We have an unit tests of allocating a 8GB BO and that should work on a >>> 32bit machine as well :) >>> >>>> + >>>> + if (mem->bus.addr) >>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); >>>> + else if (mem->placement & TTM_PL_FLAG_WC) >>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new >>> mem->bus.caching enum as replacement. >>> >>>> + vaddr_iomem = ioremap_wc(mem->bus.offset, >>>> size); >>>> + else >>>> + vaddr_iomem = ioremap(mem->bus.offset, size); >>>> + >>>> + if (!vaddr_iomem) >>>> + return -ENOMEM; >>>> + >>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); >>>> + >>>> + } else { >>>> + struct ttm_operation_ctx ctx = { >>>> + .interruptible = false, >>>> + .no_wait_gpu = false >>>> + }; >>>> + struct ttm_tt *ttm = bo->ttm; >>>> + pgprot_t prot; >>>> + void *vaddr; >>>> + >>>> + BUG_ON(!ttm); >>> I think we can drop this, populate will just crash badly anyway. >>> >>>> + >>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); >>>> + if (ret) >>>> + return ret; >>>> + >>>> + /* >>>> + * We need to use vmap to get the desired page >>>> protection >>>> + * or to make the buffer object look contiguous. >>>> + */ >>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); >>> The calling convention has changed on drm-misc-next as well, but should be >>> trivial to adapt. 
>>> >>> Regards, >>> Christian. >>> >>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); >>>> + if (!vaddr) >>>> + return -ENOMEM; >>>> + >>>> + dma_buf_map_set_vaddr(map, vaddr); >>>> + } >>>> + >>>> + return 0; >>>> +} >>>> +EXPORT_SYMBOL(ttm_bo_vmap); >>>> + >>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >>>> *map) +{ >>>> + if (dma_buf_map_is_null(map)) >>>> + return; >>>> + >>>> + if (map->is_iomem) >>>> + iounmap(map->vaddr_iomem); >>>> + else >>>> + vunmap(map->vaddr); >>>> + dma_buf_map_clear(map); >>>> + >>>> + ttm_mem_io_free(bo->bdev, &bo->mem); >>>> +} >>>> +EXPORT_SYMBOL(ttm_bo_vunmap); >>>> + >>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, >>>> bool dst_use_tt) >>>> { >>>> diff --git a/include/drm/drm_gem_ttm_helper.h >>>> b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 >>>> 100644 --- a/include/drm/drm_gem_ttm_helper.h >>>> +++ b/include/drm/drm_gem_ttm_helper.h >>>> @@ -10,11 +10,17 @@ >>>> #include <drm/ttm/ttm_bo_api.h> >>>> #include <drm/ttm/ttm_bo_driver.h> >>>> +struct dma_buf_map; >>>> + >>>> #define drm_gem_ttm_of_gem(gem_obj) \ >>>> container_of(gem_obj, struct ttm_buffer_object, base) >>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int >>>> indent, const struct drm_gem_object *gem); >>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map); >>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map); >>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem, >>>> struct vm_area_struct *vma); >>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>> index 37102e45e496..2c59a785374c 100644 >>>> --- a/include/drm/ttm/ttm_bo_api.h >>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>> @@ -48,6 +48,8 @@ struct ttm_bo_global; >>>> struct ttm_bo_device; >>>> +struct dma_buf_map; >>>> + >>>> struct drm_mm_node; >>>> struct ttm_placement; >>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, >>>> unsigned long start_page, */ >>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); >>>> +/** >>>> + * ttm_bo_vmap >>>> + * >>>> + * @bo: The buffer object. >>>> + * @map: pointer to a struct dma_buf_map representing the map. >>>> + * >>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the >>>> + * data in the buffer object. The parameter @map returns the virtual >>>> + * address as struct dma_buf_map. Unmap the buffer with >>>> ttm_bo_vunmap(). >>>> + * >>>> + * Returns >>>> + * -ENOMEM: Out of memory. >>>> + * -EINVAL: Invalid range. >>>> + */ >>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >>>> + >>>> +/** >>>> + * ttm_bo_vunmap >>>> + * >>>> + * @bo: The buffer object. >>>> + * @map: Object describing the map to unmap. >>>> + * >>>> + * Unmaps a kernel map set up by ttm_bo_vmap(). >>>> + */ >>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >>>> *map); + >>>> /** >>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. >>>> * >>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>> --- a/include/linux/dma-buf-map.h >>>> +++ b/include/linux/dma-buf-map.h >>>> @@ -45,6 +45,12 @@ >>>> * >>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>> * >>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>>> + * >>>> + * .. code-block:: c >>>> + * >>>> + * dma_buf_map_set_vaddr_iomem(&map. 
0xdeadbeaf); >>>> + * >>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>> * dma_buf_map_is_null(). >>>> * >>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>> dma_buf_map *map, void *vaddr) map->is_iomem = false; >>>> } >>>> +/** >>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>> an address in I/O memory >>>> + * @map: The dma-buf mapping structure >>>> + * @vaddr_iomem: An I/O-memory address >>>> + * >>>> + * Sets the address and the I/O-memory flag. >>>> + */ >>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>> + void __iomem >>>> *vaddr_iomem) +{ >>>> + map->vaddr_iomem = vaddr_iomem; >>>> + map->is_iomem = true; >>>> +} >>>> + >>>> /** >>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for >>>> equality >>>> * @lhs: The dma-buf mapping structure > > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 4+ messages in thread
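For reference, the unified kmap/kunmap interface discussed above might boil down to prototypes along these lines; the names and the per-page variant are purely illustrative and are not part of the posted series:

/*
 * Hypothetical follow-up: kmap/kunmap return their state in a
 * struct dma_buf_map, so struct ttm_bo_kmap_obj can go away.
 */
int ttm_bo_kmap_page(struct ttm_buffer_object *bo, pgoff_t page_offset,
		     struct dma_buf_map *map);
void ttm_bo_kunmap_page(struct ttm_buffer_object *bo, struct dma_buf_map *map);

Whether the mapping refers to ioremapped I/O memory or to a kmap'ed system page would be encoded in map->is_iomem, which is all that the kunmap side needs in order to undo it.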
* Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces [not found] ` <20201016100854.GA1042954@ravnborg.org> @ 2020-10-16 10:39 ` Thomas Zimmermann 0 siblings, 0 replies; 4+ messages in thread From: Thomas Zimmermann @ 2020-10-16 10:39 UTC (permalink / raw) To: Sam Ravnborg Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang, kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price, linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede, xen-devel, virtualization, sean, apaneers, linux-arm-kernel, amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang, kgene, alexander.deucher, linux-media, christian.koenig Hi Sam On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote: > Hi Thomas. > > On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote: > > To do framebuffer updates, one needs memcpy from system memory and a > > pointer-increment function. Add both interfaces with documentation. > > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> > > Looks good. > Reviewed-by: Sam Ravnborg <sam@ravnborg.org> Thanks. If you have the time, may I ask you to test this patchset on the bochs/sparc64 system that failed with the original code? Best regards Thomas > > > --- > > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------ > > 1 file changed, 62 insertions(+), 10 deletions(-) > > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > index 2e8bbecb5091..6ca0f304dda2 100644 > > --- a/include/linux/dma-buf-map.h > > +++ b/include/linux/dma-buf-map.h > > @@ -32,6 +32,14 @@ > > * accessing the buffer. Use the returned instance and the helper > > functions > > * to access the buffer's memory in the correct way. > > * > > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers > > are > > + * actually independent from the dma-buf infrastructure. When sharing > > buffers > > + * among devices, drivers have to know the location of the memory to > > access > > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>` > > + * solves this problem for dma-buf and its users. If other drivers or > > + * sub-systems require similar functionality, the type could be > > generalized > > + * and moved to a more prominent header file. > > + * > > * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is > > * considered bad style. Rather then accessing its fields directly, use > > one > > * of the provided helper functions, or implement your own. For example, > > @@ -51,6 +59,14 @@ > > * > > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > * > > + * Instances of struct dma_buf_map do not have to be cleaned up, but > > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > > + * always refer to system memory. > > + * > > + * .. code-block:: c > > + * > > + * dma_buf_map_clear(&map); > > + * > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > * dma_buf_map_is_null(). > > * > > @@ -73,17 +89,19 @@ > > * if (dma_buf_map_is_equal(&sys_map, &io_map)) > > * // always false > > * > > - * Instances of struct dma_buf_map do not have to be cleaned up, but > > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > > - * always refer to system memory. > > + * A set up instance of struct dma_buf_map can be used to access or > > manipulate > > + * the buffer memory. 
Depending on the location of the memory, the > > provided > > + * helpers will pick the correct operations. Data can be copied into the > > memory > > + * with dma_buf_map_memcpy_to(). The address can be manipulated with > > + * dma_buf_map_incr(). > > * > > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers > > are > > - * actually independent from the dma-buf infrastructure. When sharing > > buffers > > - * among devices, drivers have to know the location of the memory to > > access > > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>` > > - * solves this problem for dma-buf and its users. If other drivers or > > - * sub-systems require similar functionality, the type could be > > generalized > > - * and moved to a more prominent header file. > > + * .. code-block:: c > > + * > > + * const void *src = ...; // source buffer > > + * size_t len = ...; // length of src > > + * > > + * dma_buf_map_memcpy_to(&map, src, len); > > + * dma_buf_map_incr(&map, len); // go to first byte after the > > memcpy */ > > > > /** > > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct > > dma_buf_map *map) } > > } > > > > +/** > > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > > + * @dst: The dma-buf mapping structure > > + * @src: The source buffer > > + * @len: The number of byte in src > > + * > > + * Copies data into a dma-buf mapping. The source buffer is in system > > + * memory. Depending on the buffer's location, the helper picks the > > correct > > + * method of accessing the memory. > > + */ > > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const > > void *src, size_t len) +{ > > + if (dst->is_iomem) > > + memcpy_toio(dst->vaddr_iomem, src, len); > > + else > > + memcpy(dst->vaddr, src, len); > > +} > > + > > +/** > > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > > + * @map: The dma-buf mapping structure > > + * @incr: The number of bytes to increment > > + * > > + * Increments the address stored in a dma-buf mapping. Depending on the > > + * buffer's location, the correct value will be updated. > > + */ > > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > > +{ > > + if (map->is_iomem) > > + map->vaddr_iomem += incr; > > + else > > + map->vaddr += incr; > > +} > > + > > #endif /* __DMA_BUF_MAP_H__ */ > > -- > > 2.28.0 > _______________________________________________ > dri-devel mailing list > dri-devel@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 4+ messages in thread
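A usage sketch of the two new helpers, roughly what the fbdev blit in patch 10/10 ends up doing; shadow, pitch, cpp and clip are placeholders for the caller's framebuffer parameters:

	/* copy a dirty rectangle, line by line, from a system-memory
	 * shadow buffer into the vmap'ed framebuffer */
	struct dma_buf_map dst = buffer->map;	/* local copy; may be I/O or system memory */
	void *src = shadow + clip->y1 * pitch + clip->x1 * cpp;
	size_t len = (clip->x2 - clip->x1) * cpp;
	unsigned int y;

	dma_buf_map_incr(&dst, clip->y1 * pitch + clip->x1 * cpp);
	for (y = clip->y1; y < clip->y2; y++) {
		dma_buf_map_memcpy_to(&dst, src, len);
		dma_buf_map_incr(&dst, pitch);	/* advance to the next scanline */
		src += pitch;
	}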
* Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory [not found] ` <20201016105854.GB1042954@ravnborg.org> @ 2020-10-16 11:34 ` Thomas Zimmermann 0 siblings, 0 replies; 4+ messages in thread From: Thomas Zimmermann @ 2020-10-16 11:34 UTC (permalink / raw) To: Sam Ravnborg Cc: airlied, nouveau, dri-devel, chris, melissa.srw, ray.huang, kraxel, yuq825, emil.velikov, linux-samsung-soc, jy0922.shim, lima, oleksandr_andrushchenko, krzk, steven.price, linux-rockchip, luben.tuikov, alyssa.rosenzweig, linux+etnaviv, spice-devel, bskeggs, etnaviv, linaro-mm-sig, hdegoede, xen-devel, virtualization, sean, apaneers, linux-arm-kernel, amd-gfx, tomeu.vizoso, sw0312.kim, hjc, kyungmin.park, miaoqinglang, kgene, alexander.deucher, linux-media, christian.koenig Hi On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote: > Hi Thomas. > > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote: > > At least sparc64 requires I/O-specific access to framebuffers. This > > patch updates the fbdev console accordingly. > > > > For drivers with direct access to the framebuffer memory, the callback > > functions in struct fb_ops test for the type of memory and call the rsp > > fb_sys_ of fb_cfb_ functions. > > > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > > interfaces to access the buffer. > > > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > > I/O memory and avoid a HW exception. With the introduction of struct > > dma_buf_map, this is not required any longer. The patch removes the rsp > > code from both, bochs and fbdev. > > > > v4: > > * move dma_buf_map changes into separate patch (Daniel) > > * TODO list: comment on fbdev updates (Daniel) > > I have been offline for a while so have not followed all the threads on > this. So may comments below may well be addressed but I failed to see > it. > > If the point about fb_sync is already addressed/considered then: > Reviewed-by: Sam Ravnborg <sam@ravnborg.org> It has not been brought up yet. See below. > > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> > > --- > > Documentation/gpu/todo.rst | 19 ++- > > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > > include/drm/drm_mode_config.h | 12 -- > > 4 files changed, 220 insertions(+), 29 deletions(-) > > > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > > index 7e6fc3c04add..638b7f704339 100644 > > --- a/Documentation/gpu/todo.rst > > +++ b/Documentation/gpu/todo.rst > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > > ------------------------------------------------ > > > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > > -expects the framebuffer in system memory (or system-like memory). > > +atomic modesetting and GEM vmap support. Historically, generic fbdev > > emulation +expected the framebuffer in system memory or system-like > > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O > > memory can be supported +as well. 
> > > > Contact: Maintainer of the driver you plan to convert > > > > Level: Intermediate > > > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > > +------------------------------------------------------- > > + > > +A number of callback functions in drm_fbdev_fb_ops could benefit from > > +being rewritten without dependencies on the fbdev module. Some of the > > +helpers could further benefit from using struct dma_buf_map instead of > > +raw pointers. > > + > > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter > > + > > +Level: Advanced > > + > > + > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > > ----------------------------------------------------------------- > > > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c > > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 > > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > > bochs->dev->mode_config.preferred_depth = 24; > > bochs->dev->mode_config.prefer_shadow = 0; > > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > > - bochs->dev->mode_config.fbdev_use_iomem = true; > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = > > true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > Good to see this workaround gone again! > > > diff --git a/drivers/gpu/drm/drm_fb_helper.c > > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644 > > --- a/drivers/gpu/drm/drm_fb_helper.c > > +++ b/drivers/gpu/drm/drm_fb_helper.c > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct > > work_struct *work) } > > > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper > > *fb_helper, > > - struct drm_clip_rect *clip) > > + struct drm_clip_rect *clip, > > + struct dma_buf_map *dst) > > { > > struct drm_framebuffer *fb = fb_helper->fb; > > unsigned int cpp = fb->format->cpp[0]; > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > > void *src = fb_helper->fbdev->screen_buffer + offset; > > - void *dst = fb_helper->buffer->map.vaddr + offset; > > size_t len = (clip->x2 - clip->x1) * cpp; > > unsigned int y; > > > > - for (y = clip->y1; y < clip->y2; y++) { > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > > - memcpy(dst, src, len); > > - else > > - memcpy_toio((void __iomem *)dst, src, len); > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip > > rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > > + dma_buf_map_memcpy_to(dst, src, len); > > + dma_buf_map_incr(dst, fb->pitches[0]); > > src += fb->pitches[0]; > > - dst += fb->pitches[0]; > > } > > } > > > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct > > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); > > if (ret) > > return; > > - drm_fb_helper_dirty_blit_real(helper, > > &clip_copy); > > + drm_fb_helper_dirty_blit_real(helper, > > &clip_copy, &map); } > > + > > if (helper->fb->funcs->dirty) > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > > &clip_copy, 1); > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info > > *info, } > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > > So far everything looks good. 
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user > > *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *dst; > > + u8 __iomem *src; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p >= total_size) > > + return 0; > > + > > + if (count >= total_size) > > + count = total_size; > > + > > + if (count + p > total_size) > > + count = total_size - p; > > + > > + src = (u8 __iomem *)(info->screen_base + p); > screen_base is a char __iomem * - so this cast looks semi redundant. I took the basic code from fbdev. Maybe there's a reason for the case, otherwise I'll remove it. > > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + dst = kmalloc(alloc_size, GFP_KERNEL); > > + if (!dst) > > + return -ENOMEM; > > + > Same comment as below about fb_sync. > > > > + while (count) { > > + c = min(count, alloc_size); > > + > > + memcpy_fromio(dst, src, c); > > + if (copy_to_user(buf, dst, c)) { > > + err = -EFAULT; > > + break; > > + } > > + > > + src += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(dst); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char > > __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *src; > > + u8 __iomem *dst; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p > total_size) > > + return -EFBIG; > > + > > + if (count > total_size) { > > + err = -EFBIG; > > + count = total_size; > > + } > > + > > + if (count + p > total_size) { > > + /* > > + * The framebuffer is too small. We do the > > + * copy operation, but return an error code > > + * afterwards. Taken from fbdev. > > + */ > > + if (!err) > > + err = -ENOSPC; > > + count = total_size - p; > > + } > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + src = kmalloc(alloc_size, GFP_KERNEL); > > + if (!src) > > + return -ENOMEM; > > + > > + dst = (u8 __iomem *)(info->screen_base + p); > > + > > The fbdev variant call the fb_sync callback here. > noveau and gma500 implments the fb_sync callback - but no-one else. These drivers implement some form of HW acceleration. If they have a HW blit/draw/etc op queued up, they have to wait for it to complete. Otherwise, the copied memory would contain an old state. The fb_sync acts as the fence. Fbdev only uses software copying, so the fb_sync is not required. From what I heard, the HW acceleration is not useful on modern machines. I hope to convert more drivers to generic fbdev after these patches for I/O-memory support have been merged. > > > > + while (count) { > > + c = min(count, alloc_size); > > + > > + if (copy_from_user(src, buf, c)) { > > + err = -EFAULT; > > + break; > > + } > > + memcpy_toio(dst, src, c); > When we rewrite this part to use dma_buf_map_memcpy_to() then we can > merge the two variants of helper_{sys,cfb}_read()? 
> Which is part of the todo - so OK I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does an additional copy between system memory and I/O memory. Of course, the top and bottom of both functions are similar and could probably be shared. Best regards Thomas > > + > > + dst += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(src); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > /** > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > > * @info: fbdev registered by the helper > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, > > struct vm_area_struct *vma) return -ENODEV; > > } > > > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_read(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > > +} > > + > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char > > __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_write(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > > +} > > + > > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > > + const struct fb_fillrect *rect) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_fillrect(info, rect); > > + else > > + drm_fb_helper_cfb_fillrect(info, rect); > > +} > > + > > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > > + const struct fb_copyarea *area) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_copyarea(info, area); > > + else > > + drm_fb_helper_cfb_copyarea(info, area); > > +} > > + > > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > > + const struct fb_image *image) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_imageblit(info, image); > > + else > > + drm_fb_helper_cfb_imageblit(info, image); > > +} > > + > > static const struct fb_ops drm_fbdev_fb_ops = { > > .owner = THIS_MODULE, > > DRM_FB_HELPER_DEFAULT_OPS, > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > > .fb_release = drm_fbdev_fb_release, > > .fb_destroy = drm_fbdev_fb_destroy, > > .fb_mmap = drm_fbdev_fb_mmap, > > - .fb_read = drm_fb_helper_sys_read, > > - .fb_write = drm_fb_helper_sys_write, > > - .fb_fillrect = drm_fb_helper_sys_fillrect, > > - .fb_copyarea = drm_fb_helper_sys_copyarea, > > - .fb_imageblit = drm_fb_helper_sys_imageblit, > > + .fb_read = drm_fbdev_fb_read, > > + .fb_write = drm_fbdev_fb_write, > > + .fb_fillrect = drm_fbdev_fb_fillrect, > > + .fb_copyarea = 
drm_fbdev_fb_copyarea, > > + .fb_imageblit = drm_fbdev_fb_imageblit, > > }; > > > > static struct fb_deferred_io drm_fbdev_defio = { > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > > index 5ffbb4ed5b35..ab424ddd7665 100644 > > --- a/include/drm/drm_mode_config.h > > +++ b/include/drm/drm_mode_config.h > > @@ -877,18 +877,6 @@ struct drm_mode_config { > > */ > > bool prefer_shadow_fbdev; > > > > - /** > > - * @fbdev_use_iomem: > > - * > > - * Set to true if framebuffer reside in iomem. > > - * When set to true memcpy_toio() is used when copying the > > framebuffer in > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > > - * > > - * FIXME: This should be replaced with a per-mapping is_iomem > > - * flag (like ttm does), and then used everywhere in fbdev code. > > - */ > > - bool fbdev_use_iomem; > > - > > /** > > * @quirk_addfb_prefer_xbgr_30bpp: > > * > > -- > > 2.28.0 > _______________________________________________ > dri-devel mailing list > dri-devel@lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 Nürnberg, Germany (HRB 36809, AG Nürnberg) Geschäftsführer: Felix Imendörffer _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 4+ messages in thread
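As an illustration of the "shared top and bottom" idea from the last reply, the range clamping common to the sys and cfb read/write paths could be factored out roughly as below; the helper name is invented and this is not part of the series:

/* hypothetical helper: clamp a read/write range against the framebuffer size */
static size_t drm_fb_helper_clamp_fb_range(struct fb_info *info, loff_t pos,
					   size_t count)
{
	unsigned long total_size = info->screen_size;

	if (!total_size)
		total_size = info->fix.smem_len;
	if (pos >= total_size)
		return 0;
	if (count > total_size - pos)
		count = total_size - pos;
	return count;
}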