netdev.vger.kernel.org archive mirror
From: Vinicius Costa Gomes <vinicius.gomes@intel.com>
To: Kurt Kanzenbach <kurt@linutronix.de>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org
Subject: Re: [PATCH net-next] igc: Lift TAPRIO schedule restriction
Date: Tue, 07 Jun 2022 10:14:52 -0700	[thread overview]
Message-ID: <87wndsmm43.fsf@intel.com> (raw)
In-Reply-To: <87k09tar5e.fsf@kurt>

Hi Kurt,

Kurt Kanzenbach <kurt@linutronix.de> writes:

> Hi Vinicius,
>
> On Mon Jun 06 2022, Vinicius Costa Gomes wrote:
>> Hi Kurt,
>>
>> Kurt Kanzenbach <kurt@linutronix.de> writes:
>>
>>> Add support for Qbv schedules where one queue stays open
>>> in consecutive entries. Currently that's not supported.
>>>
>>> Example schedule:
>>>
>>> |tc qdisc replace dev ${INTERFACE} handle 100 parent root taprio num_tc 3 \
>>> |   map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
>>> |   queues 1@0 1@1 2@2 \
>>> |   base-time ${BASETIME} \
>>> |   sched-entry S 0x01 300000 \ # Stream High/Low
>>> |   sched-entry S 0x06 500000 \ # Management and Best Effort
>>> |   sched-entry S 0x04 200000 \ # Best Effort
>>> |   flags 0x02
>>>
>>> Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
>>> ---
>>>  drivers/net/ethernet/intel/igc/igc_main.c | 23 +++++++++++++++++------
>>>  1 file changed, 17 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
>>> index ae17af44fe02..4758bdbe5df3 100644
>>> --- a/drivers/net/ethernet/intel/igc/igc_main.c
>>> +++ b/drivers/net/ethernet/intel/igc/igc_main.c
>>> @@ -5813,9 +5813,10 @@ static bool validate_schedule(struct igc_adapter *adapter,
>>>  		return false;
>>>  
>>>  	for (n = 0; n < qopt->num_entries; n++) {
>>> -		const struct tc_taprio_sched_entry *e;
>>> +		const struct tc_taprio_sched_entry *e, *prev;
>>>  		int i;
>>>  
>>> +		prev = n ? &qopt->entries[n - 1] : NULL;
>>>  		e = &qopt->entries[n];
>>>  
>>>  		/* i225 only supports "global" frame preemption
>>> @@ -5828,7 +5829,12 @@ static bool validate_schedule(struct igc_adapter *adapter,
>>>  			if (e->gate_mask & BIT(i))
>>>  				queue_uses[i]++;
>>>  
>>> -			if (queue_uses[i] > 1)
>>> +			/* There are limitations: A single queue cannot be
>>> +			 * opened and closed multiple times per cycle unless the
>>> +			 * gate stays open. Check for it.
>>> +			 */
>>> +			if (queue_uses[i] > 1 &&
>>> +			    !(prev->gate_mask & BIT(i)))
>>
>> Perhaps I am missing something, I didn't try to run it, but not
>> checking whether 'prev' can be NULL could lead to crashes for some
>> schedules, no?
>
> My thinking was that `prev` can never be NULL, as `queue_uses[i] > 1` is
> checked first. This condition can only be true if there are at least two
> entries.
>

Oh, yeah! That's true, I had missed that.

>>
>> What I have in mind is a schedule in which queue 0 is mentioned
>> multiple times, for example:
>>
>>  |   sched-entry S 0x01 300000 \ # Stream High/Low
>>  |   sched-entry S 0x03 500000 \ # Management and Best Effort
>>  |   sched-entry S 0x05 200000 \ # Best Effort
>>
>
> So, this schedule works with the proposed patch. Queue 0 is opened in
> all three entries. My debug code shows:
>
> |tc-6145    [010] .......   616.190589: igc_setup_tc: Qbv configuration:
> |tc-6145    [010] .......   616.190592: igc_setup_tc: Queue 0 -- start_time=0 [ns]
> |tc-6145    [010] .......   616.190592: igc_setup_tc: Queue 0 -- end_time=1000000 [ns]
> |tc-6145    [010] .......   616.190593: igc_setup_tc: Queue 1 -- start_time=300000 [ns]
> |tc-6145    [010] .......   616.190593: igc_setup_tc: Queue 1 -- end_time=800000 [ns]
> |tc-6145    [010] .......   616.190593: igc_setup_tc: Queue 2 -- start_time=800000 [ns]
> |tc-6145    [010] .......   616.190594: igc_setup_tc: Queue 2 -- end_time=1000000 [ns]
> |tc-6145    [010] .......   616.190594: igc_setup_tc: Queue 3 -- start_time=800000 [ns]
> |tc-6145    [010] .......   616.190594: igc_setup_tc: Queue 3 -- end_time=1000000 [ns]
>
> Anyway, I'd appreciate some testing on your side too :).

Sure, I can give it a spin, but it'll have to be later in the week; I'm
kind of swamped right now.

>
> Thanks,
> Kurt

-- 
Vinicius


Thread overview: 7+ messages
2022-06-06  9:27 [PATCH net-next] igc: Lift TAPRIO schedule restriction Kurt Kanzenbach
2022-06-06 23:55 ` Vinicius Costa Gomes
2022-06-07  7:06   ` Kurt Kanzenbach
2022-06-07 17:14     ` Vinicius Costa Gomes [this message]
2022-06-08  6:00       ` Kurt Kanzenbach
2022-06-24 23:56 ` Vinicius Costa Gomes
2022-07-11  5:14 ` [Intel-wired-lan] " naamax.meir
