* [PATCH v3] tags: much faster, parallel "make tags"
@ 2015-05-08 13:26 Alexey Dobriyan
2015-05-09 5:07 ` Pádraig Brady
0 siblings, 1 reply; 8+ messages in thread
From: Alexey Dobriyan @ 2015-05-08 13:26 UTC (permalink / raw)
To: akpm, mmarek; +Cc: linux-kernel
ctags is a single-threaded program. Split the list of files to be tagged into
equal parts, one part for each CPU, and then merge the results.
Speedup on one 2-way box I have is ~143 s => ~99 s (-31%).
On another 4-way box: ~120 s => ~65 s (-46%!).
Resulting "tags" files aren't byte-for-byte identical because the ctags
program numbers anonymous struct and enum declarations with "__anonNNN"
symbols. If those lines are removed, the "tags" file becomes byte-for-byte
identical with the one generated by the current code.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
scripts/tags.sh | 36 +++++++++++++++++++++++++++++++-----
1 file changed, 31 insertions(+), 5 deletions(-)
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -152,7 +152,19 @@ dogtags()
exuberant()
{
- all_target_sources | xargs $1 -a \
+ rm -f .make-tags.*
+
+ all_target_sources >.make-tags.src
+ NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
+ NR_LINES=$(wc -l <.make-tags.src)
+ NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
+
+ split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
+
+ for i in .make-tags.src.*; do
+ N=$(echo $i | sed -e 's/.*\.//')
+ # -u: don't sort now, sort later
+ xargs <$i $1 -a -f .make-tags.$N -u \
-I __initdata,__exitdata,__initconst, \
-I __cpuinitdata,__initdata_memblock \
-I __refdata,__attribute,__maybe_unused,__always_unused \
@@ -211,7 +223,21 @@ exuberant()
--regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/' \
--regex-c='/(^\s)OFFSET\((\w*)/\2/v/' \
--regex-c='/(^\s)DEFINE\((\w*)/\2/v/' \
- --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
+ --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/' \
+ &
+ done
+ wait
+ rm -f .make-tags.src .make-tags.src.*
+
+ # write header
+ $1 -f $2 /dev/null
+ # remove headers
+ for i in .make-tags.*; do
+ sed -i -e '/^!/d' $i &
+ done
+ wait
+ sort .make-tags.* >>$2
+ rm -f .make-tags.*
all_kconfigs | xargs $1 -a \
--langdef=kconfig --language-force=kconfig \
@@ -276,7 +302,7 @@ emacs()
xtags()
{
if $1 --version 2>&1 | grep -iq exuberant; then
- exuberant $1
+ exuberant $1 $2
elif $1 --version 2>&1 | grep -iq emacs; then
emacs $1
else
@@ -322,13 +348,13 @@ case "$1" in
"tags")
rm -f tags
- xtags ctags
+ xtags ctags tags
remove_structs=y
;;
"TAGS")
rm -f TAGS
- xtags etags
+ xtags etags TAGS
remove_structs=y
;;
esac
^ permalink raw reply [flat|nested] 8+ messages in thread

* Re: [PATCH v3] tags: much faster, parallel "make tags"
2015-05-08 13:26 [PATCH v3] tags: much faster, parallel "make tags" Alexey Dobriyan
@ 2015-05-09 5:07 ` Pádraig Brady
2015-05-10 13:26 ` Alexey Dobriyan
0 siblings, 1 reply; 8+ messages in thread
From: Pádraig Brady @ 2015-05-09 5:07 UTC (permalink / raw)
To: Alexey Dobriyan, akpm, mmarek; +Cc: linux-kernel
On 08/05/15 14:26, Alexey Dobriyan wrote:
> ctags is a single-threaded program. Split the list of files to be tagged into
> equal parts, one part for each CPU, and then merge the results.
>
> Speedup on one 2-way box I have is ~143 s => ~99 s (-31%).
> On another 4-way box: ~120 s => ~65 s (-46%!).
>
> Resulting "tags" files aren't byte-for-byte identical because the ctags
> program numbers anonymous struct and enum declarations with "__anonNNN"
> symbols. If those lines are removed, the "tags" file becomes byte-for-byte
> identical with the one generated by the current code.
>
> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
> ---
>
> scripts/tags.sh | 36 +++++++++++++++++++++++++++++++-----
> 1 file changed, 31 insertions(+), 5 deletions(-)
>
> --- a/scripts/tags.sh
> +++ b/scripts/tags.sh
> @@ -152,7 +152,19 @@ dogtags()
>
> exuberant()
> {
> - all_target_sources | xargs $1 -a \
> + rm -f .make-tags.*
> +
> + all_target_sources >.make-tags.src
> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
`nproc` is simpler and available since coreutils 8.1 (2009-11-18)
> + NR_LINES=$(wc -l <.make-tags.src)
> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> +
> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
`split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
> +
> + for i in .make-tags.src.*; do
> + N=$(echo $i | sed -e 's/.*\.//')
> + # -u: don't sort now, sort later
> + xargs <$i $1 -a -f .make-tags.$N -u \
> -I __initdata,__exitdata,__initconst, \
> -I __cpuinitdata,__initdata_memblock \
> -I __refdata,__attribute,__maybe_unused,__always_unused \
> @@ -211,7 +223,21 @@ exuberant()
> --regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/' \
> --regex-c='/(^\s)OFFSET\((\w*)/\2/v/' \
> --regex-c='/(^\s)DEFINE\((\w*)/\2/v/' \
> - --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
> + --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/' \
> + &
> + done
> + wait
> + rm -f .make-tags.src .make-tags.src.*
> +
> + # write header
> + $1 -f $2 /dev/null
> + # remove headers
> + for i in .make-tags.*; do
> + sed -i -e '/^!/d' $i &
> + done
> + wait
> + sort .make-tags.* >>$2
> + rm -f .make-tags.*
Using sort --merge would speed up significantly?
Even faster would be to get sort to skip the header lines, avoiding the need for sed.
It's a bit awkward and was discussed at:
http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
Summarising that: if not using merge, you can:
tlines=$(($(wc -l < "$2") + 1))
tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
Or if merge is appropriate then:
tlines=$(($(wc -l < "$2") + 1))
eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
Note eval is fine here as inputs are controlled within the script
cheers,
Pádraig.
p.s. To avoid temp files altogether you could wire everything up through fifos,
though that's probably overkill here TBH
p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
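The `tail`-based header skip suggested above can be tried on a toy input; the `!_TAG_` header line and the tag names below are stand-ins for real ctags output, and a single-line header is assumed for brevity:

```shell
tagdir=$(mktemp -d)
# two per-CPU partial tag files, each starting with the same header line
printf '!_TAG_FILE_FORMAT\t2\nzeta\nalpha\n' > "$tagdir/.make-tags.0"
printf '!_TAG_FILE_FORMAT\t2\nmid\nbeta\n'   > "$tagdir/.make-tags.1"
# header already written to the final file by "ctags -f tags /dev/null"
printf '!_TAG_FILE_FORMAT\t2\n' > "$tagdir/tags"
# first data line of each part = header length of the final file + 1
tlines=$(( $(wc -l < "$tagdir/tags") + 1 ))
tail -q -n +"$tlines" "$tagdir"/.make-tags.* | LC_ALL=C sort >> "$tagdir/tags"
cat "$tagdir/tags"    # header, then alpha beta mid zeta
```

This relies on every partial file carrying an identical header of the same length as the final one, which holds when all parts come from the same ctags binary.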
* Re: [PATCH v3] tags: much faster, parallel "make tags"
2015-05-09 5:07 ` Pádraig Brady
@ 2015-05-10 13:26 ` Alexey Dobriyan
2015-05-10 13:53 ` Alexey Dobriyan
2015-05-10 20:58 ` Pádraig Brady
0 siblings, 2 replies; 8+ messages in thread
From: Alexey Dobriyan @ 2015-05-10 13:26 UTC (permalink / raw)
To: Pádraig Brady; +Cc: akpm, mmarek, linux-kernel
On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> On 08/05/15 14:26, Alexey Dobriyan wrote:
> > exuberant()
> > {
> > - all_target_sources | xargs $1 -a \
> > + rm -f .make-tags.*
> > +
> > + all_target_sources >.make-tags.src
> > + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>
> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
nproc was discarded because getconf is standardized.
> > + NR_LINES=$(wc -l <.make-tags.src)
> > + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> > +
> > + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>
> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
-nl/ can't count and always makes the first file somewhat bigger, which is
suspicious. What else can't it do right?
> > + sort .make-tags.* >>$2
> > + rm -f .make-tags.*
>
> Using sort --merge would speed up significantly?
By ~1 second, yes.
> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> It's a bit awkward and was discussed at:
> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> Summarising that: if not using merge, you can:
>
> tlines=$(($(wc -l < "$2") + 1))
> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>
> Or if merge is appropriate then:
>
> tlines=$(($(wc -l < "$2") + 1))
> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
Might as well teach ctags to do real parallel processing.
LC_* are set by top level Makefile.
> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
The real question is how to kill ctags reliably.
Naive
trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
doesn't work.
Files are removed, but processes aren't.
* Re: [PATCH v3] tags: much faster, parallel "make tags"
2015-05-10 13:26 ` Alexey Dobriyan
@ 2015-05-10 13:53 ` Alexey Dobriyan
2015-05-10 20:58 ` Pádraig Brady
1 sibling, 0 replies; 8+ messages in thread
From: Alexey Dobriyan @ 2015-05-10 13:53 UTC (permalink / raw)
To: Pádraig Brady; +Cc: akpm, mmarek, linux-kernel
[fix Andrew's email]
On Sun, May 10, 2015 at 04:26:34PM +0300, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> > On 08/05/15 14:26, Alexey Dobriyan wrote:
>
> > > exuberant()
> > > {
> > > - all_target_sources | xargs $1 -a \
> > > + rm -f .make-tags.*
> > > +
> > > + all_target_sources >.make-tags.src
> > > + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >
> > `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standardized.
>
> > > + NR_LINES=$(wc -l <.make-tags.src)
> > > + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> > > +
> > > + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >
> > `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
>
> -nl/ can't count and always makes the first file somewhat bigger, which is
> suspicious. What else can't it do right?
>
> > > + sort .make-tags.* >>$2
> > > + rm -f .make-tags.*
> >
> > Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
> > Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> > It's a bit awkward and was discussed at:
> > http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> > Summarising that: if not using merge, you can:
> >
> > tlines=$(($(wc -l < "$2") + 1))
> > tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >
> > Or if merge is appropriate then:
> >
> > tlines=$(($(wc -l < "$2") + 1))
> > eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
> > p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> Naive
>
> trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.
* Re: [PATCH v3] tags: much faster, parallel "make tags"
2015-05-10 13:26 ` Alexey Dobriyan
2015-05-10 13:53 ` Alexey Dobriyan
@ 2015-05-10 20:58 ` Pádraig Brady
2015-05-11 20:20 ` Alexey Dobriyan
1 sibling, 1 reply; 8+ messages in thread
From: Pádraig Brady @ 2015-05-10 20:58 UTC (permalink / raw)
To: Alexey Dobriyan; +Cc: Michal Marek, linux-kernel, Andrew Morton
On 10/05/15 14:26, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
>> On 08/05/15 14:26, Alexey Dobriyan wrote:
>
>>> exuberant()
>>> {
>>> - all_target_sources | xargs $1 -a \
>>> + rm -f .make-tags.*
>>> +
>>> + all_target_sources >.make-tags.src
>>> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>>
>> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standardized.
Note getconf doesn't honor CPU affinity, which may be fine here?
$ taskset -c 0 getconf _NPROCESSORS_ONLN
4
$ taskset -c 0 nproc
1
>>> + NR_LINES=$(wc -l <.make-tags.src)
>>> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
>>> +
>>> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>>
>> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
>
> -nl/ can't count and always makes the first file somewhat bigger, which is
> suspicious. What else can't it do right?
It avoids the overhead of reading all data and counting the lines,
by splitting the data into approx equal numbers of lines as detailed at:
http://gnu.org/s/coreutils/split
>>> + sort .make-tags.* >>$2
>>> + rm -f .make-tags.*
>>
>> Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
>> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
>> It's a bit awkward and was discussed at:
>> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
>> Summarising that: if not using merge, you can:
>>
>> tlines=$(($(wc -l < "$2") + 1))
>> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>>
>> Or if merge is appropriate then:
>>
>> tlines=$(($(wc -l < "$2") + 1))
>> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
>> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> Naive
>
> trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.
Is $(jobs -p) generating the correct list?
On an interactive shell here it is.
Perhaps you need to explicitly use #!/bin/sh -m
at the start to enable job control like that?
Another option would be to append each background $! pid
to a list and kill that list.
Note also you may want to `wait` after the kill too.
cheers,
Pádraig.
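The pid-list alternative mentioned above can be sketched with dummy `sleep` jobs standing in for the ctags workers:

```shell
# Collect each background pid explicitly instead of relying on jobs -p.
pids=""
for i in 1 2 3; do
    sleep 100 &
    pids="$pids $!"
done
kill $pids 2>/dev/null
# Reap the killed jobs, as suggested; without this they linger as zombies
# and would still respond to "kill -0".
wait 2>/dev/null
```

Whether this interacts correctly with a `trap` handler mid-loop is exactly the reliability question raised in the thread; the sketch only shows the bookkeeping.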
* Re: [PATCH v3] tags: much faster, parallel "make tags"
2015-05-10 20:58 ` Pádraig Brady
@ 2015-05-11 20:20 ` Alexey Dobriyan
2015-05-11 20:25 ` [PATCH v4] " Alexey Dobriyan
0 siblings, 1 reply; 8+ messages in thread
From: Alexey Dobriyan @ 2015-05-11 20:20 UTC (permalink / raw)
To: Pádraig Brady; +Cc: Michal Marek, linux-kernel, Andrew Morton
On Sun, May 10, 2015 at 09:58:12PM +0100, Pádraig Brady wrote:
> On 10/05/15 14:26, Alexey Dobriyan wrote:
> > On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> >> On 08/05/15 14:26, Alexey Dobriyan wrote:
> >
> >>> exuberant()
> >>> {
> >>> - all_target_sources | xargs $1 -a \
> >>> + rm -f .make-tags.*
> >>> +
> >>> + all_target_sources >.make-tags.src
> >>> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >>
> >> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
> >
> > nproc was discarded because getconf is standardized.
>
> Note getconf doesn't honor CPU affinity which may be fine here?
>
> $ taskset -c 0 getconf _NPROCESSORS_ONLN
> 4
> $ taskset -c 0 nproc
> 1
Why would anyone tag files under affinity?
> >>> + NR_LINES=$(wc -l <.make-tags.src)
> >>> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> >>> +
> >>> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >>
> >> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
> >
> > -nl/ can't count and always makes the first file somewhat bigger, which is
> > suspicious. What else can't it do right?
>
> It avoids the overhead of reading all data and counting the lines,
> by splitting the data into approx equal numbers of lines as detailed at:
> http://gnu.org/s/coreutils/split
~1 second -- within statistical error.
> >>> + sort .make-tags.* >>$2
> >>> + rm -f .make-tags.*
> >>
> >> Using sort --merge would speed up significantly?
> >
> > By ~1 second, yes.
> >
> >> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> >> It's a bit awkward and was discussed at:
> >> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> >> Summarising that: if not using merge, you can:
> >>
> >> tlines=$(($(wc -l < "$2") + 1))
> >> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >>
> >> Or if merge is appropriate then:
> >>
> >> tlines=$(($(wc -l < "$2") + 1))
> >> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
> >
> > Might as well teach ctags to do real parallel processing.
> > LC_* are set by top level Makefile.
> >
> >> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
> >
> > The real question is how to kill ctags reliably.
> > Naive
> >
> > trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
> >
> > doesn't work.
> >
> > Files are removed, but processes aren't.
>
> Is $(jobs -p) generating the correct list?
It looks like it does.
> On an interactive shell here it is.
> Perhaps you need to explicitly use #!/bin/sh -m
> at the start to enable job control like that?
> Another option would be to append each background $! pid
> to a list and kill that list.
> Note also you may want to `wait` after the kill too.
All of this doesn't work reliably.
I switched to "xargs -P" and Ctrl+C became reliable, immediate, and
free for the programmer. See the updated patch.
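The `xargs -P` fan-out that v4 switches to, reduced to a toy example; the `sh -c` indirection mirrors the patch's trick of letting a shell re-split each prepared command-line piece (the item names are placeholders):

```shell
outdir=$(mktemp -d)
printf '%s\n' one two three four > "$outdir/items"
# -P 2: up to two workers at once; -I ITEM substitutes one input line
# into the command, including inside the sh -c string.
xargs -P 2 -I ITEM sh -c 'echo "got ITEM"' < "$outdir/items" > "$outdir/out"
sort "$outdir/out"
```

Because xargs itself owns the worker processes, an interrupt kills the whole pipeline without any `trap`/`jobs -p` bookkeeping in the script, which is the cleanup property the thread was after.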
* [PATCH v4] tags: much faster, parallel "make tags"
2015-05-11 20:20 ` Alexey Dobriyan
@ 2015-05-11 20:25 ` Alexey Dobriyan
2015-08-19 13:25 ` Michal Marek
0 siblings, 1 reply; 8+ messages in thread
From: Alexey Dobriyan @ 2015-05-11 20:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: Michal Marek, linux-kernel, Pádraig Brady
ctags is a single-threaded program. Split the list of files to be tagged into
almost equal parts, process them on every CPU, and merge the results.
Speedup is ~30-45% (!) (depending on number of cores).
Resulting "tags" files aren't byte-for-byte identical because the ctags
program numbers anonymous struct and enum declarations with "__anonNNN"
symbols. If those lines are removed, the "tags" file becomes byte-for-byte
identical with the one generated by the current code.
v4: switch from shell "&; wait" parallelism to "xargs -P" for reliable cleanup.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
scripts/tags.sh | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 53 insertions(+), 5 deletions(-)
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -152,7 +152,41 @@ dogtags()
exuberant()
{
- all_target_sources | xargs $1 -a \
+ trap 'rm -f .make-tags.*; exit 1' TERM INT
+ rm -f .make-tags.*
+
+ all_target_sources >.make-tags.0
+
+ # Default xargs(1) total command line size.
+ XARGS_ARG_MAX=$((128 * 1024))
+ # Split is unequal w.r.t file count, but asking for both size and
+ # line count limit is too much in 2015.
+ #
+ # Reserve room for fixed ctags(1) arguments.
+ split -a 6 -d -C $(($XARGS_ARG_MAX - 4 * 1024)) .make-tags.0 .make-tags.x
+ rm -f .make-tags.0
+
+ # xargs(1) appears to not support command line tweaking,
+ # so it has to be prepared in advance (see '-f').
+ NR_TAGS=$(ls -1 .make-tags.x* | wc -l)
+ touch .make-tags.1
+ for i in $(seq 0 $(($NR_TAGS - 1))); do
+ N=$(printf "%06u" $i)
+ echo -n "-f .make-tags.t$N " >>.make-tags.1
+ tr '\n' ' ' <.make-tags.x$N >>.make-tags.1
+ echo >>.make-tags.1
+ rm -f .make-tags.x$N
+ done
+
+ # Tag files in parallel.
+ #
+ # "xargs -I" puts command line piece as one argument,
+ # so shell is employed to split it back.
+ NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
+ # ctags -u: don't sort now, sort later
+ xargs -P $NR_CPUS -L 1 -I CMD -s $XARGS_ARG_MAX \
+ <.make-tags.1 \
+ sh -c "$1 -a -u \
-I __initdata,__exitdata,__initconst, \
-I __cpuinitdata,__initdata_memblock \
-I __refdata,__attribute,__maybe_unused,__always_unused \
@@ -211,7 +245,21 @@ exuberant()
--regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/' \
--regex-c='/(^\s)OFFSET\((\w*)/\2/v/' \
--regex-c='/(^\s)DEFINE\((\w*)/\2/v/' \
- --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
+ --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/' \
+ CMD"
+ rm -f .make-tags.1
+
+ # Remove headers.
+ for i in .make-tags.t*; do
+ sed -i -e '/^!/d' $i
+ done
+
+ # Write final header.
+ $1 -f $2 /dev/null
+
+ # Append sorted results.
+ sort .make-tags.t* >>$2
+ rm -f .make-tags.t*
all_kconfigs | xargs $1 -a \
--langdef=kconfig --language-force=kconfig \
@@ -276,7 +324,7 @@ emacs()
xtags()
{
if $1 --version 2>&1 | grep -iq exuberant; then
- exuberant $1
+ exuberant $1 $2
elif $1 --version 2>&1 | grep -iq emacs; then
emacs $1
else
@@ -322,13 +370,13 @@ case "$1" in
"tags")
rm -f tags
- xtags ctags
+ xtags ctags tags
remove_structs=y
;;
"TAGS")
rm -f TAGS
- xtags etags
+ xtags etags TAGS
remove_structs=y
;;
esac
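The `split -C` mode used by v4 caps bytes per chunk while keeping lines whole, which is what makes each chunk safe to paste into a single command line. A toy run with made-up 9-byte lines:

```shell
listdir=$(mktemp -d)
printf 'aaaaaaaa\nbbbbbbbb\ncccccccc\n' > "$listdir/list"   # 3 x 9-byte lines
# at most 20 bytes of whole lines per chunk: 2 lines fit, the 3rd spills over
split -a 6 -d -C 20 "$listdir/list" "$listdir/list.x"
wc -c "$listdir/list.x"*
```

In the patch the limit is `XARGS_ARG_MAX` minus headroom for the fixed ctags arguments, so each generated `.make-tags.1` line stays under the size xargs is told to enforce with `-s`.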
* Re: [PATCH v4] tags: much faster, parallel "make tags"
2015-05-11 20:25 ` [PATCH v4] " Alexey Dobriyan
@ 2015-08-19 13:25 ` Michal Marek
0 siblings, 0 replies; 8+ messages in thread
From: Michal Marek @ 2015-08-19 13:25 UTC (permalink / raw)
To: Alexey Dobriyan; +Cc: Andrew Morton, linux-kernel, Pádraig Brady
On 2015-05-11 22:25, Alexey Dobriyan wrote:
> ctags is a single-threaded program. Split the list of files to be tagged into
> almost equal parts, process them on every CPU, and merge the results.
Sorry, I missed v4 of the patch.
> + # Remove headers.
> + for i in .make-tags.t*; do
> + sed -i -e '/^!/d' $i
> + done
> +
> + # Write final header.
> + $1 -f $2 /dev/null
> +
> + # Append sorted results.
> + sort .make-tags.t* >>$2
> + rm -f .make-tags.t*
This still breaks Exuberant ctags in emacs mode:
$ ln -s /usr/bin/ctags ~/bin/etags
$ make TAGS
GEN TAGS
etags: "TAGS" doesn't look like a tag file; I refuse to overwrite it.
etags: "TAGS" doesn't look like a tag file; I refuse to overwrite it.
The TAGS file is corrupted because of the sorting.
Michal
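For illustration, the emacs TAGS format referred to here is section-structured: each per-file section begins with a form-feed line, so a whole-file line sort interleaves those markers. The file names and byte offsets below are made up:

```shell
tagsdir=$(mktemp -d)
# two minimal sections: "\f" line, then "FILE,SIZE", then TAG\x7fLINE,OFFSET
# (\177 is the octal escape for the 0x7f delimiter etags uses)
printf '\f\nalpha.c,10\nfoo\1773,20\n\f\nbeta.c,8\nbar\1777,44\n' \
    > "$tagsdir/TAGS"
# a plain line sort groups both \f marker lines at the top,
# destroying the per-file section structure
LC_ALL=C sort "$tagsdir/TAGS" | od -c | head -n 2
```

This is why the sort-based merge is only valid for the vi-style tags format, whose records are independent sorted lines.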