public inbox for linux-kernel@vger.kernel.org
From: Wen Gu <guwen@linux.alibaba.com>
To: Simon Horman <horms@kernel.org>
Cc: wenjia@linux.ibm.com, jaka@linux.ibm.com, davem@davemloft.net,
	edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
	alibuda@linux.alibaba.com, tonylu@linux.alibaba.com,
	linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH net-next 1/2] net/smc: introduce statistics for allocated ringbufs of link group
Date: Tue, 6 Aug 2024 20:23:11 +0800
Message-ID: <b655fdb9-1d3f-4547-98f3-178ae4027bb3@linux.alibaba.com>
In-Reply-To: <20240806104925.GS2636630@kernel.org>



On 2024/8/6 18:49, Simon Horman wrote:
> On Mon, Aug 05, 2024 at 05:05:50PM +0800, Wen Gu wrote:
>> Currently we have statistics on the sndbuf/RMB sizes of all connections
>> that have ever been on the link group, namely smc_stats_memsize. However,
>> these statistics only ever grow, and since the ringbufs of a link group
>> may be reused, we cannot derive the actually allocated buffers from them.
>> So introduce statistics on the actually allocated ringbufs of the link
>> group: they are incremented when a new ringbuf is added to buf_list and
>> decremented when it is deleted from buf_list.
>>
>> Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
>> ---
>>   include/uapi/linux/smc.h |  4 ++++
>>   net/smc/smc_core.c       | 52 ++++++++++++++++++++++++++++++++++++----
>>   net/smc/smc_core.h       |  2 ++
>>   3 files changed, 54 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/uapi/linux/smc.h b/include/uapi/linux/smc.h
>> index b531e3ef011a..d27b8dc50f90 100644
>> --- a/include/uapi/linux/smc.h
>> +++ b/include/uapi/linux/smc.h
>> @@ -127,6 +127,8 @@ enum {
>>   	SMC_NLA_LGR_R_NET_COOKIE,	/* u64 */
>>   	SMC_NLA_LGR_R_PAD,		/* flag */
>>   	SMC_NLA_LGR_R_BUF_TYPE,		/* u8 */
>> +	SMC_NLA_LGR_R_SNDBUF_ALLOC,	/* u64 */
>> +	SMC_NLA_LGR_R_RMB_ALLOC,	/* u64 */
>>   	__SMC_NLA_LGR_R_MAX,
>>   	SMC_NLA_LGR_R_MAX = __SMC_NLA_LGR_R_MAX - 1
>>   };
>> @@ -162,6 +164,8 @@ enum {
>>   	SMC_NLA_LGR_D_V2_COMMON,	/* nest */
>>   	SMC_NLA_LGR_D_EXT_GID,		/* u64 */
>>   	SMC_NLA_LGR_D_PEER_EXT_GID,	/* u64 */
>> +	SMC_NLA_LGR_D_SNDBUF_ALLOC,	/* u64 */
>> +	SMC_NLA_LGR_D_DMB_ALLOC,	/* u64 */
>>   	__SMC_NLA_LGR_D_MAX,
>>   	SMC_NLA_LGR_D_MAX = __SMC_NLA_LGR_D_MAX - 1
>>   };
>> diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
>> index 71fb334d8234..73c7999fc74f 100644
>> --- a/net/smc/smc_core.c
>> +++ b/net/smc/smc_core.c
>> @@ -221,6 +221,37 @@ static void smc_lgr_unregister_conn(struct smc_connection *conn)
>>   	write_unlock_bh(&lgr->conns_lock);
>>   }
>>   
>> +/* must be called under lgr->{sndbufs|rmbs} lock */
>> +static inline void smc_lgr_buf_list_add(struct smc_link_group *lgr,
>> +					bool is_rmb,
>> +					struct list_head *buf_list,
>> +					struct smc_buf_desc *buf_desc)
> 
> Please do not use the inline keyword in .c files unless there is a
> demonstrable reason to do so, e.g. performance. Rather, please allow
> the compiler to inline functions as it sees fit.
> 
> The inline keyword in .h files is, of course, fine.
> 

Yes, I forgot to remove 'inline' when I moved these two helpers
from the .h file to the .c file. I will fix this in the next version.
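
Something like this (same body as quoted below, just with the
keyword dropped; the compiler is still free to inline it):

	/* must be called under lgr->{sndbufs|rmbs} lock */
	static void smc_lgr_buf_list_add(struct smc_link_group *lgr,
					 bool is_rmb,
					 struct list_head *buf_list,
					 struct smc_buf_desc *buf_desc)
	{
		list_add(&buf_desc->list, buf_list);
		if (is_rmb) {
			/* for SMC-D, account for the smcd_cdc_msg as well */
			lgr->alloc_rmbs += buf_desc->len;
			lgr->alloc_rmbs +=
				lgr->is_smcd ? sizeof(struct smcd_cdc_msg) : 0;
		} else {
			lgr->alloc_sndbufs += buf_desc->len;
		}
	}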

Thank you!

>> +{
>> +	list_add(&buf_desc->list, buf_list);
>> +	if (is_rmb) {
>> +		lgr->alloc_rmbs += buf_desc->len;
>> +		lgr->alloc_rmbs +=
>> +			lgr->is_smcd ? sizeof(struct smcd_cdc_msg) : 0;
>> +	} else {
>> +		lgr->alloc_sndbufs += buf_desc->len;
>> +	}
>> +}
>> +
>> +/* must be called under lgr->{sndbufs|rmbs} lock */
>> +static inline void smc_lgr_buf_list_del(struct smc_link_group *lgr,
>> +					bool is_rmb,
>> +					struct smc_buf_desc *buf_desc)
> 
> Ditto.
> 
>> +{
>> +	list_del(&buf_desc->list);
>> +	if (is_rmb) {
>> +		lgr->alloc_rmbs -= buf_desc->len;
>> +		lgr->alloc_rmbs -=
>> +			lgr->is_smcd ? sizeof(struct smcd_cdc_msg) : 0;
>> +	} else {
>> +		lgr->alloc_sndbufs -= buf_desc->len;
>> +	}
>> +}
>> +
> 
> ...
> 
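As additional context for reviewers: the new counters are exported
through the existing link-group netlink dump paths (those hunks are
elided in the quote above). Illustratively, emitting a u64 attribute
with padding on the SMC-R side follows the usual netlink pattern;
the enclosing function and the error label here are assumptions,
not a quote from the patch:

	if (nla_put_u64_64bit(skb, SMC_NLA_LGR_R_SNDBUF_ALLOC,
			      lgr->alloc_sndbufs, SMC_NLA_LGR_R_PAD))
		goto errattr;
	if (nla_put_u64_64bit(skb, SMC_NLA_LGR_R_RMB_ALLOC,
			      lgr->alloc_rmbs, SMC_NLA_LGR_R_PAD))
		goto errattr;
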


Thread overview: 10+ messages
2024-08-05  9:05 [PATCH net-next 0/2] net/smc: introduce ringbufs usage statistics Wen Gu
2024-08-05  9:05 ` [PATCH net-next 1/2] net/smc: introduce statistics for allocated ringbufs of link group Wen Gu
2024-08-06 10:49   ` Simon Horman
2024-08-06 12:23     ` Wen Gu [this message]
2024-08-05  9:05 ` [PATCH net-next 2/2] net/smc: introduce statistics for ringbufs usage of net namespace Wen Gu
2024-08-06 10:49   ` Simon Horman
2024-08-06 13:07     ` Wen Gu
2024-08-06 15:26       ` Simon Horman
2024-08-06  3:52 ` [PATCH net-next 0/2] net/smc: introduce ringbufs usage statistics shaozhengchao
2024-08-06 11:54   ` Wen Gu
