* [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline
@ 2024-02-22 1:38 Jaehoon Shim
2024-02-22 7:10 ` Damien Le Moal
2024-02-22 15:56 ` Keith Busch
0 siblings, 2 replies; 5+ messages in thread
From: Jaehoon Shim @ 2024-02-22 1:38 UTC (permalink / raw)
To: lsf-pc; +Cc: linux-nvme
Hi all,
My research group has recently introduced NVMeVirt, a software-defined
virtual NVMe device implemented as a Linux kernel module. Upon
loading, NVMeVirt emulates an NVMe device that is recognized by the
host as a native PCIe device.
- https://github.com/snu-csl/nvmevirt
- https://www.usenix.org/system/files/fast23-kim.pdf
Advantages of NVMeVirt are:
- Deployable in real environments (not virtual)
- PCI peer-to-peer DMA support
- Low-latency device support
- Multiple namespace support (each namespace can support different command sets)
- Multiple device support
- Various command set support (currently supporting ZNS and KV)
- Accurate performance emulation
What if we could simplify NVMeVirt and add it to the kernel mainline,
just like scsi_debug in the SCSI subsystem? This would offer an
accessible tool for developers to develop and debug NVMe driver
functionality, especially when NVMe devices implementing a specific
part of the spec are unavailable.
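For context, scsi_debug is already used this way today, and a mainlined NVMeVirt could follow the same pattern. The nvmevirt parameter names below (memmap_start, memmap_size, cpus) are as I understand them from the project's out-of-tree README; treat the whole snippet as an illustrative sketch, not a proposed mainline interface:

```shell
# scsi_debug: create a small RAM-backed SCSI disk for driver testing
modprobe scsi_debug dev_size_mb=256 num_parts=1

# NVMeVirt today (out-of-tree): first reserve a physical memory range
# at boot, e.g. with "memmap=65536M!128G" on the kernel command line,
# then load the module against that range. Parameter names follow the
# project README and may change.
insmod ./nvmev.ko memmap_start=128G memmap_size=65536M cpus=0,1
```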
Best regards,
Jaehoon
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline
2024-02-22 1:38 [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline Jaehoon Shim
@ 2024-02-22 7:10 ` Damien Le Moal
2024-02-23 5:38 ` Jaehoon Shim
2024-02-22 15:56 ` Keith Busch
1 sibling, 1 reply; 5+ messages in thread
From: Damien Le Moal @ 2024-02-22 7:10 UTC (permalink / raw)
To: Jaehoon Shim, lsf-pc; +Cc: linux-nvme
On 2/22/24 10:38, Jaehoon Shim wrote:
> Hi all,
>
> My research group has recently introduced NVMeVirt, a software-defined
> virtual NVMe device implemented as a Linux kernel module. Upon
> loading, NVMeVirt emulates an NVMe device that is recognized by the
> host as a native PCIe device.
> - https://github.com/snu-csl/nvmevirt
> - https://www.usenix.org/system/files/fast23-kim.pdf
>
> Advantages of NVMeVirt are:
> - Deployable in real environments (not virtual)
> - PCI peer-to-peer DMA support
> - Low-latency device support
> - Multiple namespace support (each namespace can support different command sets)
> - Multiple device support
> - Various command set support (currently supporting ZNS and KV)
> - Accurate performance emulation
>
> What if we could simplify NVMeVirt and add it to the kernel mainline,
> just like scsi_debug in the SCSI subsystem? This would offer an
> accessible tool for developers to develop and debug NVMe driver
> functionality, especially when NVMe devices implementing a specific
> part of the spec are unavailable.
Qemu already emulates NVMe devices fairly well.
What is the backing store for your kernel-module NVMe emulation? Memory only?
Files or a block device? If it is one of the latter, how can you do "accurate
performance emulation"? And what does "accurate performance emulation" mean
anyway? Different NVMe drives, from the same or different vendors, have
different performance characteristics. So what exactly are you emulating here?
>
> Best regards,
> Jaehoon
>
--
Damien Le Moal
Western Digital Research
* Re: [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline
2024-02-22 1:38 [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline Jaehoon Shim
2024-02-22 7:10 ` Damien Le Moal
@ 2024-02-22 15:56 ` Keith Busch
2024-02-23 5:45 ` Jaehoon Shim
1 sibling, 1 reply; 5+ messages in thread
From: Keith Busch @ 2024-02-22 15:56 UTC (permalink / raw)
To: Jaehoon Shim; +Cc: lsf-pc, linux-nvme
On Thu, Feb 22, 2024 at 10:38:03AM +0900, Jaehoon Shim wrote:
> Hi all,
>
> My research group has recently introduced NVMeVirt, a software-defined
> virtual NVMe device implemented as a Linux kernel module. Upon
> loading, NVMeVirt emulates an NVMe device that is recognized by the
> host as a native PCIe device.
> - https://github.com/snu-csl/nvmevirt
> - https://www.usenix.org/system/files/fast23-kim.pdf
>
> Advantages of NVMeVirt are:
> - Deployable in real environments (not virtual)
> - PCI peer-to-peer DMA support
It looks like your module creates a single nvme device. How are you
testing P2P with just one device?
* Re: [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline
2024-02-22 7:10 ` Damien Le Moal
@ 2024-02-23 5:38 ` Jaehoon Shim
0 siblings, 0 replies; 5+ messages in thread
From: Jaehoon Shim @ 2024-02-23 5:38 UTC (permalink / raw)
To: Damien Le Moal; +Cc: lsf-pc, linux-nvme
On Thu, 22 Feb 2024 at 16:10, Damien Le Moal <dlemoal@kernel.org> wrote:
>
> On 2/22/24 10:38, Jaehoon Shim wrote:
> > Hi all,
> >
> > My research group has recently introduced NVMeVirt, a software-defined
> > virtual NVMe device implemented as a Linux kernel module. Upon
> > loading, NVMeVirt emulates an NVMe device that is recognized by the
> > host as a native PCIe device.
> > - https://github.com/snu-csl/nvmevirt
> > - https://www.usenix.org/system/files/fast23-kim.pdf
> >
> > Advantages of NVMeVirt are:
> > - Deployable in real environments (not virtual)
> > - PCI peer-to-peer DMA support
> > - Low-latency device support
> > - Multiple namespace support (each namespace can support different command sets)
> > - Multiple device support
> > - Various command set support (currently supporting ZNS and KV)
> > - Accurate performance emulation
> >
> > What if we could simplify NVMeVirt and add it to the kernel mainline,
> > just like scsi_debug in the SCSI subsystem? This would offer an
> > accessible tool for developers to develop and debug NVMe driver
> > functionality, especially when NVMe devices implementing a specific
> > part of the spec are unavailable.
>
> Qemu already emulates NVMe devices fairly well.
>
> What is the backing store for your kernel-module NVMe emulation? Memory only?
> Files or a block device? If it is one of the latter, how can you do "accurate
> performance emulation"? And what does "accurate performance emulation" mean
> anyway? Different NVMe drives, from the same or different vendors, have
> different performance characteristics. So what exactly are you emulating here?
>
> >
> > Best regards,
> > Jaehoon
> >
>
> --
> Damien Le Moal
> Western Digital Research
>
During our evaluation, we found that it is hard to emulate
low-latency SSDs using FEMU due to virtualization overhead.
It is also hard for FEMU to communicate with other real PCIe devices
(e.g., FPGA, GPU, NIC) on the host system.
NVMeVirt's backing store is kernel memory, similar to scsi_debug;
a physical memory range needs to be reserved for NVMeVirt.
By "accurate performance emulation" we mean that NVMeVirt currently
supports a NAND flash SSD FTL performance model, just like FEMU. It has
a page-mapping-based FTL inside.
However, we support many more features of real SSD FTLs that other
emulators don't, such as a write buffer, one-shot programming, multiple
FTL instances, etc.
In our research, we have emulated Samsung 970 Pro and Intel Optane SSDs.
We believe these advantages let NVMeVirt open up more opportunities
for developers.
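To make the idea of a page-mapping FTL performance model concrete, here is a minimal hand-written sketch. It is not NVMeVirt's actual model; the class, its methods, and the latency figures are all made up for illustration:

```python
# Minimal sketch of a page-mapping FTL with a NAND latency model.
# All names and numbers are illustrative, not taken from NVMeVirt.

READ_LAT_US = 40      # NAND page read latency (made-up figure)
PROG_LAT_US = 500     # NAND page program latency (made-up figure)

class PageMapFTL:
    def __init__(self, num_pages):
        self.l2p = {}                        # logical page -> physical page
        self.free = list(range(num_pages))   # free physical pages

    def write(self, lpn):
        """Map lpn to a fresh physical page; any old mapping becomes stale."""
        ppn = self.free.pop(0)
        self.l2p[lpn] = ppn
        return PROG_LAT_US                   # emulated completion delay (us)

    def read(self, lpn):
        if lpn not in self.l2p:
            raise KeyError(f"unmapped LPN {lpn}")
        return READ_LAT_US                   # emulated completion delay (us)

ftl = PageMapFTL(num_pages=1024)
lat = ftl.write(0) + ftl.read(0)  # total emulated latency in microseconds
```

A real model would also account for write buffering, one-shot programming across planes, channel/way parallelism, and garbage collection, which is where the per-device calibration (e.g. against a 970 Pro or an Optane SSD) comes in.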
* Re: [LSF/MM/BPF TOPIC] Adding NVMeVirt to Kernel mainline
2024-02-22 15:56 ` Keith Busch
@ 2024-02-23 5:45 ` Jaehoon Shim
0 siblings, 0 replies; 5+ messages in thread
From: Jaehoon Shim @ 2024-02-23 5:45 UTC (permalink / raw)
To: Keith Busch; +Cc: lsf-pc, linux-nvme
On Fri, 23 Feb 2024 at 00:56, Keith Busch <kbusch@kernel.org> wrote:
>
> On Thu, Feb 22, 2024 at 10:38:03AM +0900, Jaehoon Shim wrote:
> > Hi all,
> >
> > My research group has recently introduced NVMeVirt, a software-defined
> > virtual NVMe device implemented as a Linux kernel module. Upon
> > loading, NVMeVirt emulates an NVMe device that is recognized by the
> > host as a native PCIe device.
> > - https://github.com/snu-csl/nvmevirt
> > - https://www.usenix.org/system/files/fast23-kim.pdf
> >
> > Advantages of NVMeVirt are:
> > - Deployable in real environments (not virtual)
> > - PCI peer-to-peer DMA support
>
> It looks like your module creates a single nvme device. How are you
> testing P2P with just one device?
We have recently developed multi-device support,
which will soon be merged into the main branch of our GitHub
repository.
One possible way of creating multiple NVMe devices with the current
NVMeVirt code is to build two nvmevirt modules with different names
and load them both.
During our research, we used GPUDirect Storage to test the P2P
capabilities between NVMeVirt and an actual NVIDIA GPU.
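A sketch of that two-module workaround; all directory names, module names, and parameter values below are illustrative:

```shell
# Illustrative only: build two copies of the module under different
# names so that each creates its own emulated NVMe controller.
cp -r nvmevirt nvmevirt2
# Rename the module target in nvmevirt2's Makefile (e.g. nvmev -> nvmev2)
# before building, so the two .ko files do not collide.
make -C nvmevirt && make -C nvmevirt2

# Load each instance against its own reserved memory range.
insmod nvmevirt/nvmev.ko   memmap_start=64G memmap_size=8G cpus=0,1
insmod nvmevirt2/nvmev2.ko memmap_start=72G memmap_size=8G cpus=2,3
```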