From: Thomas Monjalon
Subject: Re: [PATCH 01/19] devtools: add simple script to find duplicate includes
Date: Fri, 14 Jul 2017 17:39:33 +0200
Message-ID: <12511927.qublii9NrP@xps>
In-Reply-To: <7177425.p9cP4Rfg1a@xps>
References: <20170711185546.26138-1-stephen@networkplumber.org> <20170712145925.2dfe5be1@xeon-e3> <7177425.p9cP4Rfg1a@xps>
To: Stephen Hemminger
Cc: dev@dpdk.org
List-Id: DPDK patches and discussions

13/07/2017 08:56, Thomas Monjalon:
> 12/07/2017 23:59, Stephen Hemminger:
> > On Tue, 11 Jul 2017 22:33:55 +0200
> > Thomas Monjalon wrote:
> >
> > > Thank you for this script, but... it is written in Perl!
> > > I don't think it is a good idea to add yet another language to DPDK.
> > > We already have shell and python scripts.
> > > And I am not sure a lot of (young) people are able to parse it ;)
> > >
> > > I would like to propose this shell script:
> > [...]
> >
> > plus shell is 7x slower.
> >
> > $ time bash -c "find . -name '*.c' | xargs /tmp/dupinc.sh"
> > real 0m0.765s
> > user 0m1.220s
> > sys 0m0.155s
> >
> > $ time bash -c "find . -name '*.c' | xargs ~/bin/dup_inc.pl"
> > real 0m0.131s
> > user 0m0.118s
> > sys 0m0.014s
>
> I don't think speed is really relevant here :)

I did my own benchmark (recreation time):

# time sh -c 'for file in $(git ls-files app buildtools drivers examples lib test) ; do devtools/dup_include.pl $file ; done'
4,41s user 1,32s system 101% cpu 5,667 total
# time devtools/check-duplicate-includes.sh
5,48s user 1,00s system 153% cpu 4,222 total

The shell version is reported as faster on my computer!

It is faster when filtering only the .c and .h files:

for file in $(git ls-files '*.[ch]') ; do
	dups=$(sed -rn "s,$pattern,\1,p" $file | sort | uniq -d)
	[ -z "$dups" ] || echo "$dups" | sed "s,^,$file: duplicated include: ,"
done

# time sh -c 'for file in $(git ls-files "*.[ch]") ; do devtools/dup_include.pl $file ; done'
3,65s user 1,05s system 100% cpu 4,668 total
# time devtools/check-duplicate-includes.sh
4,72s user 0,80s system 153% cpu 3,603 total

I prefer this version using only pipes, which is well parallelized:

for file in $(git ls-files '*.[ch]') ; do
	sed -rn "s,$pattern,\1,p" $file | sort | uniq -d |
		sed "s,^,$file: duplicated include: ,"
done
7,40s user 1,49s system 231% cpu 3,847 total
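For readers trying the snippets above: they rely on a $pattern variable defined elsewhere in check-duplicate-includes.sh, which is not shown in this thread. A self-contained sketch is below; the regex is my assumption standing in for the real $pattern, and the demo file path is hypothetical.

```shell
#!/bin/sh
# Minimal sketch of the duplicate-include check. The regex is an assumed
# stand-in for the $pattern defined in check-duplicate-includes.sh:
# it captures the header name from an #include <...> or #include "..." line.
pattern='^[[:space:]]*#include[[:space:]]*[<"]([^">]*)[">].*'

# Create a small demo file containing a duplicated include.
file=/tmp/dup_demo_$$.c
cat > "$file" <<'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <stdio.h>
EOF

# Extract header names, keep only names appearing more than once
# (sort + uniq -d), then prefix each duplicate with the file name.
dups=$(sed -rn "s,$pattern,\1,p" "$file" | sort | uniq -d)
[ -z "$dups" ] || echo "$dups" | sed "s,^,$file: duplicated include: ,"
```

On the demo file this reports one duplicate, stdio.h, prefixed with the file name. Note that sed -r (extended regexes) is a GNU extension; on BSD sed use -E instead.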