Linux PCI subsystem development
* [Question] Custom MMIO handler - is it possible?
@ 2024-01-31 20:42 nowicki
  2024-01-31 21:20 ` Bjorn Helgaas
  0 siblings, 1 reply; 4+ messages in thread
From: nowicki @ 2024-01-31 20:42 UTC (permalink / raw)
  To: linux-pci

Hello,

I'm trying to implement a fake PCIe device and I'm looking for guidance
(by "fake" I mean a fully software-emulated device).

So far I have implemented:
- a fake PCIe bus with custom fake pci_ops.read & pci_ops.write functions
- a fake PCIe switch
- a fake PCIe endpoint

The fake devices implement PCIe configuration registers and are visible
in user space via the lspci tool.
The registers can be edited via the setpci tool.

Now I'm looking for a way to implement BAR regions with custom memory
handlers. Is that even possible?
Basically, I'd like to capture each MemoryWrite & MemoryRead targeted at
the PCIe endpoint's BAR region and emulate NVMe registers.

I'm at a dead end right now and see only two options:
- generate a page fault on every access to the fake BAR region and
execute the fake PCIe endpoint's callbacks - similar to (or the same as)
what mmiotrace does
- periodically scan the fake BAR region for changes

Both solutions have drawbacks.
Is there another way to implement a fake BAR region?


Regards,
Mateusz

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [Question] Custom MMIO handler - is it possible?
  2024-01-31 20:42 [Question] Custom MMIO handler - is it possible? nowicki
@ 2024-01-31 21:20 ` Bjorn Helgaas
  2024-02-01 15:38   ` Mateusz Nowicki
  0 siblings, 1 reply; 4+ messages in thread
From: Bjorn Helgaas @ 2024-01-31 21:20 UTC (permalink / raw)
  To: nowicki; +Cc: linux-pci

On Wed, Jan 31, 2024 at 08:42:18PM +0000, nowicki@posteo.net wrote:
> Hello,
> 
> I'm trying to implement a fake PCIe device and I'm looking for guidance (by
> fake I mean fully software device).
> 
> So far I implemented:
> - fake PCIe bus with custom fake pci_ops.read & pci_ops.write functions
> - fake PCIe switch
> - fake PCIe endpoint
> 
> Fake devices have implemented PCIe registers and are visible in user space
> via lspci tool.
> Registers can be edited via setpci tool.
> 
> Now I'm looking for a way to implement BAR regions with custom memory
> handlers. Is it even possible?
> Basically I'd like to capture each MemoryWrite & MemoryRead targeted for
> PCIe endpoint's BAR region and emulate NVMe registers.
> 
> I'm in dead-end right now and I'm seeing only two options:
> - generate page faults on every access to fake BAR region and execute fake
> PCIe endpoint's callbacks - similar/the same as mmiotrace
> - periodically scan fake BAR region for any changes
> 
> Both solutions have drawbacks.
> Is there other way to implement fake BAR region?

Sounds kind of cool and potentially useful for building kernel test
tools.

Is the page-fault-on-access option a problem because you want better
performance?  I assume you really *want* to know about every write and
possibly even every read, so a page fault seems like the way to do
that.

Maybe QEMU would have some ideas?  I assume it implements some similar
things.

Bjorn


* Re: [Question] Custom MMIO handler - is it possible?
  2024-01-31 21:20 ` Bjorn Helgaas
@ 2024-02-01 15:38   ` Mateusz Nowicki
  2024-02-01 18:06     ` Bjorn Helgaas
  0 siblings, 1 reply; 4+ messages in thread
From: Mateusz Nowicki @ 2024-02-01 15:38 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: linux-pci

Thanks for the quick reply, Bjorn!

Actually, performance is not the biggest concern.
Mmiotrace has a documented SMP race condition:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/trace/mmiotrace.rst#n135

Also, handling page faults correctly is quite a challenge. I'm trying
to find a simpler/easier solution :)

Thanks for the QEMU tip! I'll take a look.


On Wed, Jan 31 2024 at 15:20:09 -06:00:00, Bjorn Helgaas 
<helgaas@kernel.org> wrote:
> On Wed, Jan 31, 2024 at 08:42:18PM +0000, nowicki@posteo.net wrote:
>>  Hello,
>> 
>>  I'm trying to implement a fake PCIe device and I'm looking for 
>> guidance (by
>>  fake I mean fully software device).
>> 
>>  So far I implemented:
>>  - fake PCIe bus with custom fake pci_ops.read & pci_ops.write 
>> functions
>>  - fake PCIe switch
>>  - fake PCIe endpoint
>> 
>>  Fake devices have implemented PCIe registers and are visible in 
>> user space
>>  via lspci tool.
>>  Registers can be edited via setpci tool.
>> 
>>  Now I'm looking for a way to implement BAR regions with custom 
>> memory
>>  handlers. Is it even possible?
>>  Basically I'd like to capture each MemoryWrite & MemoryRead 
>> targeted for
>>  PCIe endpoint's BAR region and emulate NVMe registers.
>> 
>>  I'm in dead-end right now and I'm seeing only two options:
>>  - generate page faults on every access to fake BAR region and 
>> execute fake
>>  PCIe endpoint's callbacks - similar/the same as mmiotrace
>>  - periodically scan fake BAR region for any changes
>> 
>>  Both solutions have drawbacks.
>>  Is there other way to implement fake BAR region?
> 
> Sounds kind of cool and potentially useful to build kernel test tools.
> 
> Is the page fault on access option a problem because you want better
> performance?  I assume you really *want* to know about every write and
> possibly even every read, so a page fault seems like the way to do
> that.
> 
> Maybe qemu would have some ideas?  I assume it implements some similar
> things.
> 
> Bjorn




* Re: [Question] Custom MMIO handler - is it possible?
  2024-02-01 15:38   ` Mateusz Nowicki
@ 2024-02-01 18:06     ` Bjorn Helgaas
  0 siblings, 0 replies; 4+ messages in thread
From: Bjorn Helgaas @ 2024-02-01 18:06 UTC (permalink / raw)
  To: Mateusz Nowicki; +Cc: linux-pci

On Thu, Feb 01, 2024 at 03:38:42PM +0000, Mateusz Nowicki wrote:
> Thanks for a quick reply Bjorn!
> 
> Actually performance is not the biggest concern.
> Mmiotrace has documented SMP race condition:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/trace/mmiotrace.rst#n135
> 
> Also playing correctly with page fault is quite a challenge. I'm trying to
> find a simpler/easier solution :)

I don't know how to make a better handler for this.  Anything we do
would probably involve VM protection to catch accesses, even if that's
just in the kernel and not visible to userspace, and would probably
have the same SMP issue mentioned above.

Bjorn

> On Wed, Jan 31 2024 at 15:20:09 -06:00:00, Bjorn Helgaas
> <helgaas@kernel.org> wrote:
> > On Wed, Jan 31, 2024 at 08:42:18PM +0000, nowicki@posteo.net wrote:
> > >  Hello,
> > > 
> > >  I'm trying to implement a fake PCIe device and I'm looking for
> > > guidance (by
> > >  fake I mean fully software device).
> > > 
> > >  So far I implemented:
> > >  - fake PCIe bus with custom fake pci_ops.read & pci_ops.write
> > > functions
> > >  - fake PCIe switch
> > >  - fake PCIe endpoint
> > > 
> > >  Fake devices have implemented PCIe registers and are visible in
> > > user space
> > >  via lspci tool.
> > >  Registers can be edited via setpci tool.
> > > 
> > >  Now I'm looking for a way to implement BAR regions with custom
> > > memory
> > >  handlers. Is it even possible?
> > >  Basically I'd like to capture each MemoryWrite & MemoryRead
> > > targeted for
> > >  PCIe endpoint's BAR region and emulate NVMe registers.
> > > 
> > >  I'm in dead-end right now and I'm seeing only two options:
> > >  - generate page faults on every access to fake BAR region and
> > > execute fake
> > >  PCIe endpoint's callbacks - similar/the same as mmiotrace
> > >  - periodically scan fake BAR region for any changes
> > > 
> > >  Both solutions have drawbacks.
> > >  Is there other way to implement fake BAR region?
> > 
> > Sounds kind of cool and potentially useful to build kernel test tools.
> > 
> > Is the page fault on access option a problem because you want better
> > performance?  I assume you really *want* to know about every write and
> > possibly even every read, so a page fault seems like the way to do
> > that.
> > 
> > Maybe qemu would have some ideas?  I assume it implements some similar
> > things.
> > 
> > Bjorn
> 
> 


end of thread, other threads:[~2024-02-01 18:06 UTC | newest]

Thread overview: 4+ messages
2024-01-31 20:42 [Question] Custom MMIO handler - is it possible? nowicki
2024-01-31 21:20 ` Bjorn Helgaas
2024-02-01 15:38   ` Mateusz Nowicki
2024-02-01 18:06     ` Bjorn Helgaas
