From: Michal Schmidt <mschmidt@redhat.com>
To: Jacob Keller <jacob.e.keller@intel.com>
Cc: intel-wired-lan@lists.osuosl.org,
	 Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Richard Cochran <richardcochran@gmail.com>,
	 Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>,
	 Karol Kolacinski <karol.kolacinski@intel.com>,
	netdev@vger.kernel.org
Subject: Re: [PATCH net-next 2/3] ice: avoid the PTP hardware semaphore in gettimex64 path
Date: Mon, 26 Feb 2024 21:11:23 +0100	[thread overview]
Message-ID: <CADEbmW19UZ2KvmHr_JrmJ9--yy2L4zOJKAUdJFtN53tWR5nkrA@mail.gmail.com> (raw)
In-Reply-To: <e03aabf2-2a97-4395-9060-909d3651bcf7@intel.com>

On Mon, Feb 26, 2024 at 8:36 PM Jacob Keller <jacob.e.keller@intel.com> wrote:
> On 2/26/2024 7:11 AM, Michal Schmidt wrote:
> > The write is performed indirectly, by the hardware, as a result of
> > the driver writing GLTSYN_CMD_SYNC in ice_ptp_exec_tmr_cmd. I wasn't
> > sure if the ice_flush there is enough to make sure GLTSYN_TIME has
> > been updated, but it works well in my testing.
> >
>
> I believe this is good enough. I don't know the exact timing involved
> here, but the hardware synchronizes the update to all the PHYs and the
> MAC, and it is supposed to be on the order of nanoseconds.

Thanks, that's good to know.

> > My test code can be seen here:
> > https://gitlab.com/mschmidt2/linux/-/commits/ice-ptp-host-side-lock
> > It consists of:
> >  - kernel threads reading the time in a busy loop, looking at the
> >    deltas between consecutive values, and reporting new maxima;
> >  - a shell script that sets the time repeatedly;
> >  - a bpftrace probe to produce a histogram of the measured deltas.
> > Without the spinlock ptp_gltsyn_time_lock, it is easy to see tearing.
> > Deltas in the [2G, 4G) range appear in the histograms.
> > With the spinlock added, there is no tearing and the biggest delta I saw
> > was in the range [1M, 2M), that is under 2 ms.
> >
>
> Nice.
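
(For reference, the idea behind each reader thread is roughly the
following -- a simplified sketch, not the exact code from the branch
above; the real ice_ptp_read_src_clk_reg() is static, so the test hooks
in from inside the driver:)

	static int ptp_delta_reader(void *data)
	{
		struct ice_pf *pf = data;
		u64 prev, now, delta, max = 0;

		prev = ice_ptp_read_src_clk_reg(pf, NULL);
		while (!kthread_should_stop()) {
			now = ice_ptp_read_src_clk_reg(pf, NULL);
			delta = now - prev;
			prev = now;
			/* A torn read shows up as a huge jump between two
			 * consecutive samples; only new maxima are printed.
			 */
			if (delta > max) {
				max = delta;
				pr_info("new max delta: %llu ns\n", max);
			}
			cond_resched();
		}
		return 0;
	}
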
>
>
> At first, I wasn't convinced we actually need a cross-adapter spinlock
> here. I thought all the flows that took the hardware semaphore were on
> the clock owner. Only the clock owner PF will access the GLTSYN_TIME
> registers (gettimex is only exposed through the ptp device, which hooks
> into the clock owner). Similarly, only the clock owner does adjtime,
> settime, etc.

Non-owners do not expose a ptp device to userspace, but they still run
ice_ptp_periodic_work -> ice_ptp_update_cached_phctime ->
ice_ptp_read_src_clk_reg, where they read GLTSYN_TIME.
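
That read assembles the 64-bit time from the two 32-bit halves of the
register, roughly like this (simplified; the real
ice_ptp_read_src_clk_reg() also re-reads GLTSYN_TIME_L to handle
rollover), which is where a timer command from another PF can tear the
value:

	static u64 read_src_clk(struct ice_hw *hw, u8 tmr_idx)
	{
		u32 lo, hi;

		lo = rd32(hw, GLTSYN_TIME_L(tmr_idx));
		hi = rd32(hw, GLTSYN_TIME_H(tmr_idx));

		/* If another PF changes the clock between these two reads,
		 * hi and lo belong to different time values; that mismatch
		 * is the tearing behind the [2G, 4G) deltas above.
		 */
		return ((u64)hi << 32) | lo;
	}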

> However... It turns out that at least a couple of flows use the
> semaphore on non-clock owners. Specifically, E822 hardware has to
> initialize the PHY after a link restart, which includes re-doing Vernier
> calibration. Each PF handles this itself, but it does require using the
> timer synchronization commands. In this flow, the main timer is not
> modified, but we still use the semaphore and sync registers.
>
> I don't think that impacts the E810 card, but we use the same code flow
> here. The E822 series hardware doesn't use the AdminQ for the PHY
> messages, so it's faster, but I think the locking improvement would
> still be relevant for them as well.
>
> Given all this, I think it makes sense to go this route.
>
> I'd like to follow up with possibly replacing the entire HW semaphore
> with a mutex initialized here. That would remove a bunch of the PCIe
> transactions required to acquire the HW semaphore, which would be a
> further improvement.

Yes, I agree, and I have already been looking into this.
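
Roughly, the direction would be something like this (only a sketch; the
mutex name and exact placement are placeholders, not necessarily what I
will send):

	/* In struct ice_adapter, next to ptp_gltsyn_time_lock: */
	struct mutex ptp_tmr_cmd_lock;

	/* Initialized in ice_adapter_get() together with the spinlock: */
	mutex_init(&a->ptp_tmr_cmd_lock);

	/* ice_ptp_lock()/ice_ptp_unlock() would then stop touching the
	 * hardware semaphore register entirely:
	 */
	bool ice_ptp_lock(struct ice_hw *hw)
	{
		struct ice_pf *pf = container_of(hw, struct ice_pf, hw);

		mutex_lock(&pf->adapter->ptp_tmr_cmd_lock);
		return true;
	}
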
Thanks,
Michal

> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
>
> > Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_adapter.c | 2 ++
> >  drivers/net/ethernet/intel/ice/ice_adapter.h | 6 ++++++
> >  drivers/net/ethernet/intel/ice/ice_ptp.c     | 8 +-------
> >  drivers/net/ethernet/intel/ice/ice_ptp_hw.c  | 3 +++
> >  4 files changed, 12 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.c b/drivers/net/ethernet/intel/ice/ice_adapter.c
> > index deb063401238..4b9f5d29811c 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_adapter.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_adapter.c
> > @@ -5,6 +5,7 @@
> >  #include <linux/mutex.h>
> >  #include <linux/pci.h>
> >  #include <linux/slab.h>
> > +#include <linux/spinlock.h>
> >  #include <linux/xarray.h>
> >  #include "ice_adapter.h"
> >
> > @@ -38,6 +39,7 @@ struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev)
> >       if (!a)
> >               return NULL;
> >
> > +     spin_lock_init(&a->ptp_gltsyn_time_lock);
> >       refcount_set(&a->refcount, 1);
> >
> >       if (xa_is_err(xa_store(&ice_adapters, index, a, GFP_KERNEL))) {
> > diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.h b/drivers/net/ethernet/intel/ice/ice_adapter.h
> > index cb5a02eb24c1..9d11014ec02f 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_adapter.h
> > +++ b/drivers/net/ethernet/intel/ice/ice_adapter.h
> > @@ -4,15 +4,21 @@
> >  #ifndef _ICE_ADAPTER_H_
> >  #define _ICE_ADAPTER_H_
> >
> > +#include <linux/spinlock_types.h>
> >  #include <linux/refcount_types.h>
> >
> >  struct pci_dev;
> >
> >  /**
> >   * struct ice_adapter - PCI adapter resources shared across PFs
> > + * @ptp_gltsyn_time_lock: Spinlock protecting access to the GLTSYN_TIME
> > + *                        register of the PTP clock.
> >   * @refcount: Reference count. struct ice_pf objects hold the references.
> >   */
> >  struct ice_adapter {
> > +     /* For access to the GLTSYN_TIME register */
> > +     spinlock_t ptp_gltsyn_time_lock;
> > +
> >       refcount_t refcount;
> >  };
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
> > index c11eba07283c..b6c7246245c6 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_ptp.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
> > @@ -374,6 +374,7 @@ ice_ptp_read_src_clk_reg(struct ice_pf *pf, struct ptp_system_timestamp *sts)
> >       u8 tmr_idx;
> >
> >       tmr_idx = ice_get_ptp_src_clock_index(hw);
> > +     guard(spinlock_irqsave)(&pf->adapter->ptp_gltsyn_time_lock);
> >       /* Read the system timestamp pre PHC read */
> >       ptp_read_system_prets(sts);
> >
> > @@ -1925,15 +1926,8 @@ ice_ptp_gettimex64(struct ptp_clock_info *info, struct timespec64 *ts,
> >                  struct ptp_system_timestamp *sts)
> >  {
> >       struct ice_pf *pf = ptp_info_to_pf(info);
> > -     struct ice_hw *hw = &pf->hw;
> > -
> > -     if (!ice_ptp_lock(hw)) {
> > -             dev_err(ice_pf_to_dev(pf), "PTP failed to get time\n");
> > -             return -EBUSY;
> > -     }
> >
> >       ice_ptp_read_time(pf, ts, sts);
> > -     ice_ptp_unlock(hw);
> >
> >       return 0;
> >  }
> > diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
> > index 187ce9b54e1a..a47dbbfadb74 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
> > @@ -274,6 +274,9 @@ void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
> >   */
> >  static void ice_ptp_exec_tmr_cmd(struct ice_hw *hw)
> >  {
> > +     struct ice_pf *pf = container_of(hw, struct ice_pf, hw);
> > +
> > +     guard(spinlock_irqsave)(&pf->adapter->ptp_gltsyn_time_lock);
> >       wr32(hw, GLTSYN_CMD_SYNC, SYNC_EXEC_CMD);
> >       ice_flush(hw);
> >  }
>

