From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: "Christian König" <christian.koenig@amd.com>,
intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH v2 13/15] drm/ttm: Add BO and offset arguments for vm_access and vm_fault ttm handlers.
Date: Tue, 18 May 2021 16:59:47 +0200 [thread overview]
Message-ID: <a8c98c8a-3313-9802-31be-81e80525a111@linux.intel.com> (raw)
In-Reply-To: <a93d3f87-1331-e264-13c7-87b29cdbc22f@amd.com>
On 5/18/21 1:59 PM, Christian König wrote:
> Can you send me the patch directly and not just on CC?
>
> Thanks,
> Christian.
Original patch sent. Please remember to CC the lists when replying, though.
The reason we need this is i915's strange mmap functionality, which
allows a bo to be mapped at multiple offsets, so vma->vm_private_data
is not a bo...
Thanks,
Thomas
>
> Am 18.05.21 um 10:59 schrieb Thomas Hellström:
>> + Christian König
>>
>> On 5/18/21 10:26 AM, Thomas Hellström wrote:
>>> From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>
>>> This allows other drivers that may not setup the vma in the same way
>>> to use the ttm bo helpers.
>>>
>>> Also clarify the documentation a bit, especially related to
>>> VM_FAULT_RETRY.
>>>
>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>
>> Lgtm. Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
>>> drivers/gpu/drm/nouveau/nouveau_ttm.c | 4 +-
>>> drivers/gpu/drm/radeon/radeon_ttm.c | 4 +-
>>> drivers/gpu/drm/ttm/ttm_bo_vm.c | 84 +++++++++++++---------
>>> drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c | 8 ++-
>>> include/drm/ttm/ttm_bo_api.h | 9 ++-
>>> 6 files changed, 75 insertions(+), 38 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index d5a9d7a88315..89dafe14f828 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -1919,7 +1919,9 @@ static vm_fault_t amdgpu_ttm_fault(struct vm_fault *vmf)
>>> if (ret)
>>> goto unlock;
>>> - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
>>> + ret = ttm_bo_vm_fault_reserved(bo, vmf,
>>> + drm_vma_node_start(&bo->base.vma_node),
>>> + vmf->vma->vm_page_prot,
>>> TTM_BO_VM_NUM_PREFAULT, 1);
>>> if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>>> return ret;
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
>>> index b81ae90b8449..555fb6d8be8b 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
>>> @@ -144,7 +144,9 @@ static vm_fault_t nouveau_ttm_fault(struct vm_fault *vmf)
>>> nouveau_bo_del_io_reserve_lru(bo);
>>> prot = vm_get_page_prot(vma->vm_flags);
>>> - ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1);
>>> + ret = ttm_bo_vm_fault_reserved(bo, vmf,
>>> + drm_vma_node_start(&bo->base.vma_node),
>>> + prot, TTM_BO_VM_NUM_PREFAULT, 1);
>>> nouveau_bo_add_io_reserve_lru(bo);
>>> if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>>> return ret;
>>> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
>>> index 3361d11769a2..ba48a2acdef0 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
>>> @@ -816,7 +816,9 @@ static vm_fault_t radeon_ttm_fault(struct vm_fault *vmf)
>>> if (ret)
>>> goto unlock_resv;
>>> - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
>>> + ret = ttm_bo_vm_fault_reserved(bo, vmf,
>>> + drm_vma_node_start(&bo->base.vma_node),
>>> + vmf->vma->vm_page_prot,
>>> TTM_BO_VM_NUM_PREFAULT, 1);
>>> if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>>> goto unlock_mclk;
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> index b31b18058965..ed00ccf1376e 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> @@ -42,7 +42,7 @@
>>> #include <linux/mem_encrypt.h>
>>> static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo,
>>> - struct vm_fault *vmf)
>>> + struct vm_fault *vmf)
>>> {
>>> vm_fault_t ret = 0;
>>> int err = 0;
>>> @@ -122,7 +122,8 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo,
>>> * Return:
>>> * 0 on success and the bo was reserved.
>>> * VM_FAULT_RETRY if blocking wait.
>>> - * VM_FAULT_NOPAGE if blocking wait and retrying was not allowed.
>>> + * VM_FAULT_NOPAGE if blocking wait and retrying was not allowed, or wait interrupted.
>>> + * VM_FAULT_SIGBUS if wait on bo->moving failed for reason other than a signal.
>>> */
>>> vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
>>> struct vm_fault *vmf)
>>> @@ -254,7 +255,9 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
>>> /**
>>> * ttm_bo_vm_fault_reserved - TTM fault helper
>>> + * @bo: The buffer object
>>> * @vmf: The struct vm_fault given as argument to the fault callback
>>> + * @mmap_base: The base of the mmap, to which the @vmf fault is relative.
>>> * @prot: The page protection to be used for this memory area.
>>> * @num_prefault: Maximum number of prefault pages. The caller may want to
>>> * specify this based on madvice settings and the size of the GPU object
>>> @@ -265,19 +268,28 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf,
>>> * memory backing the buffer object, and then returns a return code
>>> * instructing the caller to retry the page access.
>>> *
>>> + * This function ensures any pipelined wait is finished.
>>> + *
>>> + * WARNING:
>>> + * On VM_FAULT_RETRY, the bo will be unlocked by this function when
>>> + * #FAULT_FLAG_RETRY_NOWAIT is not set inside @vmf->flags. In this
>>> + * case, the caller should not unlock the @bo.
>>> + *
>>> * Return:
>>> - * VM_FAULT_NOPAGE on success or pending signal
>>> + * 0 on success.
>>> + * VM_FAULT_NOPAGE on pending signal
>>> * VM_FAULT_SIGBUS on unspecified error
>>> * VM_FAULT_OOM on out-of-memory
>>> - * VM_FAULT_RETRY if retryable wait
>>> + * VM_FAULT_RETRY if retryable wait, see WARNING above.
>>> */
>>> -vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>>> +vm_fault_t ttm_bo_vm_fault_reserved(struct ttm_buffer_object *bo,
>>> + struct vm_fault *vmf,
>>> + unsigned long mmap_base,
>>> pgprot_t prot,
>>> pgoff_t num_prefault,
>>> pgoff_t fault_page_size)
>>> {
>>> struct vm_area_struct *vma = vmf->vma;
>>> - struct ttm_buffer_object *bo = vma->vm_private_data;
>>> struct ttm_device *bdev = bo->bdev;
>>> unsigned long page_offset;
>>> unsigned long page_last;
>>> @@ -286,15 +298,11 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>>> struct page *page;
>>> int err;
>>> pgoff_t i;
>>> - vm_fault_t ret = VM_FAULT_NOPAGE;
>>> + vm_fault_t ret;
>>> unsigned long address = vmf->address;
>>> - /*
>>> - * Wait for buffer data in transit, due to a pipelined
>>> - * move.
>>> - */
>>> ret = ttm_bo_vm_fault_idle(bo, vmf);
>>> - if (unlikely(ret != 0))
>>> + if (ret)
>>> return ret;
>>> err = ttm_mem_io_reserve(bdev, &bo->mem);
>>> @@ -302,9 +310,8 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>>> return VM_FAULT_SIGBUS;
>>> page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) +
>>> - vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node);
>>> - page_last = vma_pages(vma) + vma->vm_pgoff -
>>> - drm_vma_node_start(&bo->base.vma_node);
>>> + vma->vm_pgoff - mmap_base;
>>> + page_last = vma_pages(vma) + vma->vm_pgoff - mmap_base;
>>> if (unlikely(page_offset >= bo->mem.num_pages))
>>> return VM_FAULT_SIGBUS;
>>> @@ -344,8 +351,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>>> } else if (unlikely(!page)) {
>>> break;
>>> }
>>> - page->index = drm_vma_node_start(&bo->base.vma_node) +
>>> - page_offset;
>>> + page->index = mmap_base + page_offset;
>>> pfn = page_to_pfn(page);
>>> }
>>> @@ -392,7 +398,10 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>>> return ret;
>>> prot = vma->vm_page_prot;
>>> - ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1);
>>> + ret = ttm_bo_vm_fault_reserved(bo, vmf,
>>> + drm_vma_node_start(&bo->base.vma_node),
>>> + prot, TTM_BO_VM_NUM_PREFAULT, 1);
>>> +
>>> if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>>> return ret;
>>> @@ -460,22 +469,16 @@ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
>>> return len;
>>> }
>>> -int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
>>> - void *buf, int len, int write)
>>> +int ttm_bo_vm_access_reserved(struct ttm_buffer_object *bo,
>>> + struct vm_area_struct *vma,
>>> + unsigned long offset,
>>> + void *buf, int len, int write)
>>> {
>>> - struct ttm_buffer_object *bo = vma->vm_private_data;
>>> - unsigned long offset = (addr) - vma->vm_start +
>>> - ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
>>> - << PAGE_SHIFT);
>>> int ret;
>>> if (len < 1 || (offset + len) >> PAGE_SHIFT > bo->mem.num_pages)
>>> return -EIO;
>>> - ret = ttm_bo_reserve(bo, true, false, NULL);
>>> - if (ret)
>>> - return ret;
>>> -
>>> switch (bo->mem.mem_type) {
>>> case TTM_PL_SYSTEM:
>>> if (unlikely(bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) {
>>> @@ -485,16 +488,33 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
>>> }
>>> fallthrough;
>>> case TTM_PL_TT:
>>> - ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
>>> - break;
>>> + return ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
>>> default:
>>> if (bo->bdev->funcs->access_memory)
>>> - ret = bo->bdev->funcs->access_memory(
>>> + return bo->bdev->funcs->access_memory(
>>> bo, offset, buf, len, write);
>>> else
>>> - ret = -EIO;
>>> + return -EIO;
>>> }
>>> + return ret;
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vm_access_reserved);
>>> +
>>> +int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
>>> + void *buf, int len, int write)
>>> +{
>>> + struct ttm_buffer_object *bo = vma->vm_private_data;
>>> + unsigned long offset = (addr) - vma->vm_start +
>>> + ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
>>> + << PAGE_SHIFT);
>>> + int ret;
>>> +
>>> + ret = ttm_bo_reserve(bo, true, false, NULL);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + ret = ttm_bo_vm_access_reserved(bo, vma, offset, buf, len, write);
>>> ttm_bo_unreserve(bo);
>>> return ret;
>>> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
>>> index 45c9c6a7f1d6..56ecace0cf5c 100644
>>> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
>>> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
>>> @@ -477,7 +477,9 @@ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf)
>>> else
>>> prot = vm_get_page_prot(vma->vm_flags);
>>> - ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault, 1);
>>> + ret = ttm_bo_vm_fault_reserved(bo, vmf,
>>> + drm_vma_node_start(&bo->base.vma_node),
>>> + prot, num_prefault, 1);
>>> if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>>> return ret;
>>> @@ -546,7 +548,9 @@ vm_fault_t vmw_bo_vm_huge_fault(struct vm_fault *vmf,
>>> prot = vm_get_page_prot(vma->vm_flags);
>>> }
>>> - ret = ttm_bo_vm_fault_reserved(vmf, prot, 1, fault_page_size);
>>> + ret = ttm_bo_vm_fault_reserved(bo, vmf,
>>> + drm_vma_node_start(&bo->base.vma_node),
>>> + prot, 1, fault_page_size);
>>> if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
>>> return ret;
>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>> index 639521880c29..434f91f1fdbf 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -605,7 +605,9 @@ int ttm_mem_evict_first(struct ttm_device *bdev,
>>> vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
>>> struct vm_fault *vmf);
>>> -vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>>> +vm_fault_t ttm_bo_vm_fault_reserved(struct ttm_buffer_object *bo,
>>> + struct vm_fault *vmf,
>>> + unsigned long mmap_base,
>>> pgprot_t prot,
>>> pgoff_t num_prefault,
>>> pgoff_t fault_page_size);
>>> @@ -616,6 +618,11 @@ void ttm_bo_vm_open(struct vm_area_struct *vma);
>>> void ttm_bo_vm_close(struct vm_area_struct *vma);
>>> +int ttm_bo_vm_access_reserved(struct ttm_buffer_object *bo,
>>> + struct vm_area_struct *vma,
>>> + unsigned long offset,
>>> + void *buf, int len, int write);
>>> +
>>> int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
>>> void *buf, int len, int write);
>>> bool ttm_bo_delayed_delete(struct ttm_device *bdev, bool remove_all);
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx