From mboxrd@z Thu Jan 1 00:00:00 1970
From: daniel@ffwll.ch
Date: Fri, 31 Jul 2020 09:22:54 +0000
Subject: Re: [PATCH 3/5] drm: Add infrastructure for vmap operations of I/O memory
Message-Id: <20200731092254.GW6419@phenom.ffwll.local>
List-Id:
References: <20200729134148.6855-1-tzimmermann@suse.de>
 <20200729134148.6855-4-tzimmermann@suse.de>
 <20200729135744.GQ6419@phenom.ffwll.local>
 <79a17df5-5654-ccf7-e3aa-5c74894b436f@suse.de>
In-Reply-To: <79a17df5-5654-ccf7-e3aa-5c74894b436f@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Cc: linux-fbdev@vger.kernel.org, b.zolnierkie@samsung.com,
 jani.nikula@intel.com, dri-devel@lists.freedesktop.org, kraxel@redhat.com,
 airlied@redhat.com, natechancellor@gmail.com, sam@ravnborg.org,
 peda@axentia.se, dan.carpenter@oracle.com

On Thu, Jul 30, 2020 at 10:14:43AM +0200, Thomas Zimmermann wrote:
> Hi
>
> On 29.07.20 at 15:57, daniel@ffwll.ch wrote:
> > On Wed, Jul 29, 2020 at 03:41:46PM +0200, Thomas Zimmermann wrote:
> >> Most platforms allow for accessing framebuffer I/O memory with regular
> >> load and store operations. Some platforms, such as sparc64, require
> >> the use of special instructions instead.
> >>
> >> This patch adds vmap_iomem to struct drm_gem_object_funcs. The new
> >> interface drm_client_buffer_vmap_iomem() gives DRM clients access to the
> >> I/O memory buffer. The semantics of struct drm_gem_object_funcs.vmap
> >> change slightly. It used to return system or I/O memory. Now it is
> >> expected to return memory addresses that can be accessed with regular
> >> load and store operations. So nothing changes for existing implementations
> >> of GEM objects. If the GEM object also implements vmap_iomem, a call
> >> to vmap shall only return system memory, even if I/O memory could be
> >> accessed with loads and stores.
> >>
> >> The existing interface drm_client_buffer_vmap() shall only return memory
> >> as given by drm_gem_vmap() (i.e., memory that is accessible via regular
> >> load and store). The new interface drm_client_buffer_vmap_iomem() shall
> >> only return I/O memory.
> >>
> >> DRM clients must map buffers by calling drm_client_buffer_vmap_iomem()
> >> or drm_client_buffer_vmap() to get the buffer in I/O or system memory,
> >> respectively. Each function returns NULL if the buffer is in the other
> >> memory area. Depending on the type of the returned memory, clients must
> >> access the framebuffer with the appropriate operations.
> >>
> >> Signed-off-by: Thomas Zimmermann
> >
> > Hm I don't think this works, since for more dynamic framebuffers (like
> > real big gpu ttm drivers) this is a dynamic thing, which can change every
> > time we do an mmap. So I think the ttm approach of having an is_iomem flag
> > is a lot better.
> >
> > The trouble with that is that you don't have correct checking of sparse
> > mappings, but oh well :-/ The one idea I've had to address that is using
> > something like this
> >
> > typedef struct {
> > 	bool is_iomem;
> > 	union {
> > 		void __iomem *vaddr_iomem;
> > 		void *vaddr;
> > 	};
> > } dma_buf_addr_t;
> >
> > And then having wrappers memcpy_from_dma_buf_addr() and
> > memcpy_to_dma_buf_addr(), which switch between memcpy and memcpy_from/toio
> > depending upon the is_iomem flag.
> >
> > But it's a lot more invasive unfortunately :-/
>
> What do you think about introducing read and write callbacks for GEM
> objects? Like this:
>
> int drm_gem_read(struct drm_gem_object *gbo, size_t off, size_t len,
> 		 void *buf);
>
> int drm_gem_write(struct drm_gem_object *gbo, size_t off, size_t len,
> 		  const void *buf);
>
> The common case would be memcpy, but GEM implementations could provide
> their own thing.
> The fbdev blit function would look like
>
> 	vaddr = drm_gem_vmap(gbo);
> 	if (IS_ERR(vaddr))
> 		return;
>
> 	for (each line) {
> 		drm_gem_write(gbo, gbo_line_offset, line_size, src);
> 		gbo_line_offset = /* next line */;
> 		src = /* next line */;
> 	}
>
> 	drm_gem_vunmap(gbo);
>
> The whole mess about I/O access would be self-contained.

Copying the irc discussion over: We've had that idea floating around years
ago, i915-gem even implemented it in the form of pwrite/pread for
userspace. But now all userspace has moved over to mmap, so read/write has
fallen out of favour. I'm also not sure whether we really need to fix more
than just fbcon on fbdev-on-drm emulation, and it feels a bit silly to add
read/write just for that.

Also the is_iomem flag on the vmap (and maybe eventually on mmap, no idea)
might be able to let us fix this for real eventually.

Cheers, Daniel

>
> Best regards
> Thomas
>
> > -Daniel
> >
> >> ---
> >>  drivers/gpu/drm/drm_client.c   | 52 ++++++++++++++++++++++++++++++++--
> >>  drivers/gpu/drm/drm_gem.c      | 19 +++++++++++++
> >>  drivers/gpu/drm/drm_internal.h |  1 +
> >>  include/drm/drm_client.h       |  8 +++++-
> >>  include/drm/drm_gem.h          | 17 +++++++++--
> >>  5 files changed, 91 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> >> index 495f47d23d87..b5bbe089a41e 100644
> >> --- a/drivers/gpu/drm/drm_client.c
> >> +++ b/drivers/gpu/drm/drm_client.c
> >> @@ -327,6 +327,46 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>
> >> +/**
> >> + * drm_client_buffer_vmap_iomem - Map DRM client buffer into address space
> >> + * @buffer: DRM client buffer
> >> + *
> >> + * This function maps a client buffer into kernel address space. If the
> >> + * buffer is already mapped, it returns the mapping's address.
> >> + *
> >> + * Client buffer mappings are not ref'counted.
> >> + * Each call to
> >> + * drm_client_buffer_vmap_iomem() should be followed by a call to
> >> + * drm_client_buffer_vunmap(); or the client buffer should be mapped
> >> + * throughout its lifetime.
> >> + *
> >> + * Returns:
> >> + *	The mapped memory's address
> >> + */
> >> +void __iomem *drm_client_buffer_vmap_iomem(struct drm_client_buffer *buffer)
> >> +{
> >> +	void __iomem *vaddr_iomem;
> >> +
> >> +	if (buffer->vaddr_iomem)
> >> +		return buffer->vaddr_iomem;
> >> +
> >> +	/*
> >> +	 * FIXME: The dependency on GEM here isn't required, we could
> >> +	 * convert the driver handle to a dma-buf instead and use the
> >> +	 * backend-agnostic dma-buf vmap support. This would
> >> +	 * require that the handle2fd prime ioctl is reworked to pull the
> >> +	 * fd_install step out of the driver backend hooks, to make that
> >> +	 * final step optional for internal users.
> >> +	 */
> >> +	vaddr_iomem = drm_gem_vmap_iomem(buffer->gem);
> >> +	if (IS_ERR(vaddr_iomem))
> >> +		return vaddr_iomem;
> >> +
> >> +	buffer->vaddr_iomem = vaddr_iomem;
> >> +
> >> +	return vaddr_iomem;
> >> +}
> >> +EXPORT_SYMBOL(drm_client_buffer_vmap_iomem);
> >> +
> >>  /**
> >>   * drm_client_buffer_vunmap - Unmap DRM client buffer
> >>   * @buffer: DRM client buffer
> >> @@ -337,8 +377,16 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>   */
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
> >>  {
> >> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> -	buffer->vaddr = NULL;
> >> +	drm_WARN_ON(buffer->client->dev, buffer->vaddr && buffer->vaddr_iomem);
> >> +
> >> +	if (buffer->vaddr) {
> >> +		drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> +		buffer->vaddr = NULL;
> >> +	}
> >> +	if (buffer->vaddr_iomem) {
> >> +		drm_gem_vunmap(buffer->gem, (void *)buffer->vaddr_iomem);
> >> +		buffer->vaddr_iomem = NULL;
> >> +	}
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
> >>
> >> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >> index a57f5379fc08..a001be8c0965 100644
> >> --- a/drivers/gpu/drm/drm_gem.c
> >> +++ b/drivers/gpu/drm/drm_gem.c
> >> @@ -1227,6 +1227,25 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
> >>  		vaddr = obj->funcs->vmap(obj);
> >>  	else if (obj->dev->driver->gem_prime_vmap)
> >>  		vaddr = obj->dev->driver->gem_prime_vmap(obj);
> >> +	else if (obj->funcs && obj->funcs->vmap_iomem)
> >> +		vaddr = NULL; /* requires mapping as I/O memory */
> >> +	else
> >> +		vaddr = ERR_PTR(-EOPNOTSUPP);
> >> +
> >> +	if (!vaddr)
> >> +		vaddr = ERR_PTR(-ENOMEM);
> >> +
> >> +	return vaddr;
> >> +}
> >> +
> >> +void __iomem *drm_gem_vmap_iomem(struct drm_gem_object *obj)
> >> +{
> >> +	void __iomem *vaddr;
> >> +
> >> +	if (obj->funcs && obj->funcs->vmap_iomem)
> >> +		vaddr = obj->funcs->vmap_iomem(obj);
> >> +	else if (obj->funcs && obj->funcs->vmap)
> >> +		vaddr = NULL; /* requires mapping as system memory */
> >>  	else
> >>  		vaddr = ERR_PTR(-EOPNOTSUPP);
> >>
> >> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> >> index 8e01caaf95cc..aa1a3d4f9223 100644
> >> --- a/drivers/gpu/drm/drm_internal.h
> >> +++ b/drivers/gpu/drm/drm_internal.h
> >> @@ -187,6 +187,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
> >>  int drm_gem_pin(struct drm_gem_object *obj);
> >>  void drm_gem_unpin(struct drm_gem_object *obj);
> >>  void *drm_gem_vmap(struct drm_gem_object *obj);
> >> +void __iomem *drm_gem_vmap_iomem(struct drm_gem_object *obj);
> >>  void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
> >>
> >>  /* drm_debugfs.c drm_debugfs_crc.c */
> >> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> >> index 7aaea665bfc2..94aa075ee4b6 100644
> >> --- a/include/drm/drm_client.h
> >> +++ b/include/drm/drm_client.h
> >> @@ -141,10 +141,15 @@ struct drm_client_buffer {
> >>  	struct drm_gem_object *gem;
> >>
> >>  	/**
> >> -	 * @vaddr: Virtual address for the buffer
> >> +	 * @vaddr: Virtual address for the buffer in system memory
> >>  	 */
> >>  	void *vaddr;
> >>
> >> +	/**
> >> +	 * @vaddr_iomem: Virtual address for the buffer in I/O memory
> >> +	 */
> >> +	void __iomem *vaddr_iomem;
> >> +
> >>  	/**
> >>  	 * @fb: DRM framebuffer
> >>  	 */
> >> @@ -156,6 +161,7 @@ drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 heig
> >>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
> >>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> >>  void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> >> +void __iomem *drm_client_buffer_vmap_iomem(struct drm_client_buffer *buffer);
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
> >>
> >>  int drm_client_modeset_create(struct drm_client_dev *client);
> >> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> >> index 337a48321705..bc735ff522a8 100644
> >> --- a/include/drm/drm_gem.h
> >> +++ b/include/drm/drm_gem.h
> >> @@ -134,17 +134,28 @@ struct drm_gem_object_funcs {
> >>  	 * @vmap:
> >>  	 *
> >>  	 * Returns a virtual address for the buffer. Used by the
> >> -	 * drm_gem_dmabuf_vmap() helper.
> >> +	 * drm_gem_dmabuf_vmap() helper. If the buffer is not
> >> +	 * located in system memory, the function returns NULL.
> >>  	 *
> >>  	 * This callback is optional.
> >>  	 */
> >>  	void *(*vmap)(struct drm_gem_object *obj);
> >>
> >> +	/**
> >> +	 * @vmap_iomem:
> >> +	 *
> >> +	 * Returns a virtual address for the buffer. If the buffer is not
> >> +	 * located in I/O memory, the function returns NULL.
> >> +	 *
> >> +	 * This callback is optional.
> >> +	 */
> >> +	void __iomem *(*vmap_iomem)(struct drm_gem_object *obj);
> >> +
> >>  	/**
> >>  	 * @vunmap:
> >>  	 *
> >> -	 * Releases the address previously returned by @vmap. Used by the
> >> -	 * drm_gem_dmabuf_vunmap() helper.
> >> +	 * Releases the address previously returned by @vmap or @vmap_iomem.
> >> +	 * Used by the drm_gem_dmabuf_vunmap() helper.
> >>  	 *
> >>  	 * This callback is optional.
> >> */ > >> --=20 > >> 2.27.0 > >> > >=20 >=20 > --=20 > Thomas Zimmermann > Graphics Driver Developer > SUSE Software Solutions Germany GmbH > Maxfeldstr. 5, 90409 N=FCrnberg, Germany > (HRB 36809, AG N=FCrnberg) > Gesch=E4ftsf=FChrer: Felix Imend=F6rffer >=20 --=20 Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch