public inbox for linux-trace-kernel@vger.kernel.org
* Re: uprobe overhead when specifying a pid
       [not found] <66ba4183c94d28f7020c118029d45650@tecnico.ulisboa.pt>
@ 2024-11-14  9:08 ` Jiri Olsa
  2024-11-19 13:30   ` Sebastião Santos Boavida Amaro
  0 siblings, 1 reply; 2+ messages in thread
From: Jiri Olsa @ 2024-11-14  9:08 UTC (permalink / raw)
  To: Sebastião Santos Boavida Amaro
  Cc: bpf, Oleg Nesterov, Masami Hiramatsu, linux-trace-kernel

On Wed, Nov 13, 2024 at 11:33:01PM +0000, Sebastião Santos Boavida Amaro wrote:
> Hi,
> I am using:
> libbpf-cargo = "0.24.6"
> libbpf-rs = "0.24.6"
> libbpf-sys = "1.4.3"
> On kernel 6.8.0-47-generic.
> I contacted the libbpf-rs guys, and they told me this belonged here.
> I am attaching 252 uprobes to a system; these symbols are not called
> often (90-ish times over 9 minutes). However, when I specify a pid, the
> throughput drops threefold, from 12k ops/sec to 4k ops/sec. When I do
> not specify a PID and simply pass -1, the throughput remains the same
> (as it should, since ~90 calls is not enough to affect overhead, I
> would say).
> It looks as if we are switching from userspace to kernel space without
> triggering the uprobe.
> I do not know if this is a known issue; it does not look like intended
> behavior.

hi,
thanks for the report, I cc-ed some other folks and the trace list

I'm not aware of such a slowdown; with a pid filter in place there
should be less work to do

could you please provide more details?
  - do you know which uprobe interface you are using:
    uprobe over perf event or uprobe_multi? (likely uprobe_multi,
    because you said above you attach 250 probes)
  - more details on the workload: how many threads/processes are
    there, and how does the bpf program get triggered?
  - do you filter on a single pid or more?
  - could you profile the workload with perf?

thanks,
jirka


* Re: uprobe overhead when specifying a pid
  2024-11-14  9:08 ` uprobe overhead when specifying a pid Jiri Olsa
@ 2024-11-19 13:30   ` Sebastião Santos Boavida Amaro
  0 siblings, 0 replies; 2+ messages in thread
From: Sebastião Santos Boavida Amaro @ 2024-11-19 13:30 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: bpf, Oleg Nesterov, Masami Hiramatsu, linux-trace-kernel

I am using a plain SEC("uprobe") in the eBPF code. The workload is ycsb
(with 1 thread) running against a cluster of 3 Redis nodes; I filter the
uprobes on 3 pids (the Redis nodes).
When I profiled the machine with perf, I could not see glaring
differences. Should I repeat this and send the perf.data here?
Best Regards,
Sebastião

On 2024-11-14 09:08, Jiri Olsa wrote:
> On Wed, Nov 13, 2024 at 11:33:01PM +0000, Sebastião Santos Boavida Amaro wrote:
>> Hi,
>> I am using:
>> libbpf-cargo = "0.24.6"
>> libbpf-rs = "0.24.6"
>> libbpf-sys = "1.4.3"
>> On kernel 6.8.0-47-generic.
>> I contacted the libbpf-rs guys, and they told me this belonged here.
>> I am attaching 252 uprobes to a system; these symbols are not called
>> often (90-ish times over 9 minutes). However, when I specify a pid, the
>> throughput drops threefold, from 12k ops/sec to 4k ops/sec. When I do
>> not specify a PID and simply pass -1, the throughput remains the same
>> (as it should, since ~90 calls is not enough to affect overhead, I
>> would say).
>> It looks as if we are switching from userspace to kernel space without
>> triggering the uprobe.
>> I do not know if this is a known issue; it does not look like intended
>> behavior.
> 
> hi,
> thanks for the report, I cc-ed some other folks and the trace list
> 
> I'm not aware of such a slowdown; with a pid filter in place there
> should be less work to do
> 
> could you please provide more details?
>   - do you know which uprobe interface you are using:
>     uprobe over perf event or uprobe_multi? (likely uprobe_multi,
>     because you said above you attach 250 probes)
>   - more details on the workload: how many threads/processes are
>     there, and how does the bpf program get triggered?
>   - do you filter on a single pid or more?
>   - could you profile the workload with perf?
> 
> thanks,
> jirka
