Discussion:
[Cocci] [PoC] coccinelle: make Coccinelle-related make targets more fine-grained
Julia Lawall
2018-08-03 06:22:24 UTC
Let's add a bit of Makefile metaprogramming to generate finer-grained
make targets applying one semantic patch to only a single source file,
and specify these as dependencies of the targets applying one semantic
patch to all source files. This way that shell loop can be avoided,
semantic patches will only be applied to changed source files, and the
same semantic patch can be applied in parallel to multiple source
files.
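For concreteness, the kind of $(eval)/$(call) metaprogramming being
proposed could look roughly like this (a sketch with made-up variable
and target names, not the actual patch; recipe lines need leading tabs):

    SPATCH ?= spatch
    COCCI = $(wildcard contrib/coccinelle/*.cocci)
    COCCI_SOURCES = $(wildcard *.c)

    # $(1) = semantic patch, $(2) = source file
    define cocci_per_file_rule
    $(1).patch+$(2): $(1) $(2)
            $$(SPATCH) --sp-file $(1) $(2) >$$@ 2>$$@.log
    endef

    # $(1) = semantic patch: concatenate its per-file results
    define cocci_agg_rule
    $(1).patch: $(foreach s,$(COCCI_SOURCES),$(1).patch+$(s))
            cat $$^ >$$@
    endef

    $(foreach c,$(COCCI),$(foreach s,$(COCCI_SOURCES),\
            $(eval $(call cocci_per_file_rule,$(c),$(s)))))
    $(foreach c,$(COCCI),$(eval $(call cocci_agg_rule,$(c))))

    coccicheck: $(addsuffix .patch,$(COCCI))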
This was already possible. Make coccicheck is not supposed to be used
with -j, but rather with J=n. That tells Coccinelle to parallelize the
treatment of the files internally. In this case, the semantic patch is
only parsed once, and then n worker processes are forked to treat the
different files.
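For example (assuming, as in the kernel's scripts/coccicheck, that J=n
is forwarded to spatch's --jobs option):

    make coccicheck J=4

which ends up invoking spatch per semantic patch roughly like:

    spatch --jobs 4 --sp-file contrib/coccinelle/array.cocci --dir .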
The only remaining sequential part is aggregating the
suggested transformations from the individual targets into a single
patch file, which is comparatively cheap (especially since ideally
there aren't any suggestions).
This change brings spectacular speedup in the scenario described in
point (1) above. When the results of a previous 'make coccicheck' are
there, the time needed to run
touch apply.c ; time make -j4 coccicheck
went from 3m42s to 1.848s, which is just over 99% speedup, yay!, and
'apply.c' is the second longest source file in our codebase. In the
scenario in point (2), running
touch contrib/coccinelle/array.cocci ; time make -j4 coccicheck
went from 56.364s to 35.883s, which is ~36% speedup.
I really like this direction. The slowness of coccicheck is one of the
things that has prevented me from running it more frequently. And I'm a
big fan of breaking steps down as much as possible into make targets. It
lets make do its job (avoiding repeated work and parallelizing).
Yeah, this is great. Also, CC-ing some of the recent contributors to
linux.git's coccinelle, perhaps they're interested / have comments.
I have extended the list of recipients with Nicolas Palix and the
Coccinelle mailing list. In particular, Nicolas should comment on any
changes.
All the above timings are best-of-five on a machine with 2 physical
and 2 logical cores. I don't have the hardware to bring any numbers
for point (3). The time needed to run 'make -j4 coccicheck' in a
clean state didn't change, it's ~3m42s both with and without this
patch.
On a 40-core (20+20) machine, doing "make -j40 coccicheck" from scratch,
without the patch:
real 1m25.520s
user 5m41.492s
sys 0m26.776s
and with it:
real 0m24.300s
user 14m35.084s
sys 0m50.108s
By default, when the -j option is not used, make coccicheck causes
Coccinelle to run on all of the cores of the machine. In my experience,
on a laptop (2 physical cores with hyperthreading), this is basically OK.
And on a server, it is a disaster. On a machine with 80 physical cores
and hyperthreading, make coccicheck will run each instance of Coccinelle
such that it parallelizes on 160 cores. But in reality, there is not much
difference between 20 and 40 cores, and after 40 cores the performance
starts to degrade. So basically, using more than half of the physical
cores on each socket is a loss.
I was surprised by the jump in CPU times. Doing "make -j1 coccicheck" gives:
real 5m34.887s
user 5m5.620s
sys 0m19.908s
so it's really the parallelizing that seems to be to blame (possibly
because this CPU boosts from 2.3GHz to 3.0GHz, and we're only using 8
threads in the first example).
- [RFC]
With this patch 'make coccicheck's output will look like this
('make' apparently doesn't iterate over the semantic patches in
alphabetical order):
SPATCH commit.cocci abspath.c
SPATCH commit.cocci advice.c
<... lots of lines ...>
SPATCH array.cocci http-walker.c
SPATCH array.cocci remote-curl.c
which means that the number of lines in the output grows from
"Nr-of-semantic-patches" to "Nr-of-semantic-patches *
Nr-of-source-files".
IMHO this is not really that big a deal. We are doing useful work for
each line, so to me it's just more eye candy (and it's really satisfying
to watch it zip by on the 40-core machine ;) ).
FWIW on the 8-cpu box I usually test on this went from 2m30s to 1m50s,
so not a huge improvement in time, but nice to have the per-file
progress.
Coccinelle already does this when make coccicheck is used with J=n. It
makes a subdirectory for the semantic patch that it is currently working
on, and this subdirectory contains stderr.n files that contain HANDLING
and the name of the file being treated. It also prints information about
performance bottlenecks. I think that make coccicheck turns off all this
reporting by default, but you can get it back with SPFLAGS="--quiet".
What if we inverted the current loop? That is, right now we iterate over
the cocci patches at the Makefile level, and then for each target we
iterate over the giant list of source files. Instead, we could teach the
Makefile to iterate over the source files, with a target for each, and
then hit each cocci patch inside there.
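A sketch of that inversion, using only ordinary pattern rules and
made-up names (one stamp file per source file, each recipe applying
every semantic patch to that one file):

    COCCI = $(wildcard contrib/coccinelle/*.cocci)
    COCCI_SOURCES = $(wildcard *.c)

    .cocci-checked/%: % $(COCCI)
            @mkdir -p $(dir $@)
            for c in $(COCCI); do \
                    spatch --sp-file $$c $< || exit 1; \
            done >$@

    coccicheck: $(addprefix .cocci-checked/,$(COCCI_SOURCES))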
That would give roughly the same output as a normal build. But moreover,
I wonder if we could make things faster by actually combining the cocci
files into a single set of rules to be applied to each source file. That
would mean invoking spatch 1/8 as much. It does give fewer opportunities
for "make" to reuse work, but that only matters when the cocci files
change (which is much less frequent than source files changing).
cat contrib/coccinelle/*.cocci >mega.cocci
make -j40 coccicheck COCCI_SEM_PATCHES=mega.cocci
There was already a COCCI=foo.cocci argument to focus on a single semantic
patch.
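For example, in the kernel (per Documentation/dev-tools/coccinelle.rst;
git's Makefile may spell this differently):

    make coccicheck COCCI=scripts/coccinelle/free/kfree.cocci MODE=report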

I'm surprised that the above cat command would work. Semantic patch rules
have names, and Coccinelle will not be happy if two rules have the same
name. Some may also have variables declared in initializers, although
perhaps the ones in the kernel don't do this. Causing these variables to
be shared would not have a good effect.

Putting everything together can also go counter to the optimizations that
Coccinelle provides. You can speed up spatch a lot on rules that mention
specific functions by running id-utils on your code base in advance (sh
coccinelle/scripts/idutils_index.sh). Then if you have one semantic patch
that only applies to files that contain foo and another that only applies
to files that contain bar, then each will only be applied to its
respective files. If you don't have an id-utils index this optimization
will be done by Coccinelle by first scanning files for foo and bar, but
the index is obviously much faster. If your semantic patch can be
relevant to files that contain foo or bar, then the rules from the foo
semantic patch (which could be very slow) will also be uselessly applied
to the bar files. Whether this is relevant in practice depends on the
specific semantic patches of course.
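Concretely, the id-utils optimization would be used roughly like this
(the index script path as given above; --use-idutils per spatch's
option list):

    sh coccinelle/scripts/idutils_index.sh    # build the ID database once
    spatch --use-idutils --sp-file contrib/coccinelle/array.cocci --dir .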
With the combined semantic patch, the same "make -j40 coccicheck" run
comes down to:
real 0m17.573s
user 10m56.724s
sys 0m11.548s
And that should show an improvement on more normal-sized systems, too,
because we really are eliminating some of the startup overhead.
The other obvious downside is that you don't get individual patches for
each class of transformation. Do we care? Certainly for a routine "make
coccicheck" run, what I want boils down to:
- is there something that needs fixing?
- give me the patch for all fixes
So I wonder if we'd want to have that be the default, and then perhaps
optionally give some targets to let people run single scripts (or not;
they could probably just run spatch themselves).
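For example, running a single semantic patch by hand could be as
simple as:

    spatch --sp-file contrib/coccinelle/array.cocci --dir .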
- [RFC]
The overhead of applying a semantic patch to all source files
became larger. 'make coccicheck' currently runs only one shell
process
I'm not sure I understand this. To my understanding it runs as many shell
processes as there are cores on the machine.

julia
and creates two output files for each semantic patch.
With this patch it will run approx. "Nr-of-semantic-patches *
Nr-of-source-files" shell processes and create twice as many
output files.
Doing the "one big .cocci" would help with this, too (and again puts us
in the same ballpark as a compile).
- [RFC]
This approach uses $(eval), which we haven't used in any of our
Makefiles yet. I wonder whether it's portable enough. And it's
ugly and complicated.
I looked into this a long time ago for another set of Makefile patches I
was considering. $(eval) was added to GNU make in 3.80, released in
2002. Which is probably fine by now.
If it isn't, one more exotic option would be to push the coccicheck
stuff into its own Makefile, and just run it via recursive make. Then
anybody doing a vanilla build can do so even with an antique make, but
you could only "make coccicheck" with something from the last 16 years
(but good luck getting ocaml running there).
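That fallback could be as small as the following, with
Makefile.coccinelle being a hypothetical name (the '+' prefix, or just
using $(MAKE), lets the sub-make share the parent's jobserver):

    coccicheck:
            +$(MAKE) -f Makefile.coccinelle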
I suspect if we go with the one-spatch-per-source route, though, that we
could do this just with regular make rules.
-Peff
Julia Lawall
2018-08-03 06:52:38 UTC
Hi,
Post by Julia Lawall
This was already possible. Make coccicheck is not supposed to be used
with -j, but rather with J=n. That tells Coccinelle to parallelize the
treatment of the files internally. In this case, the semantic patch is
only parsed once, and then n worker processes are forked to treat the
different files.
Thanks for this hint.
I wonder if we can make this happen automatically under suitable
conditions. We can parse -j<num> out of MAKEFLAGS and convert it into a
J=<num> argument,
It does happen automatically, in the sense that the default with no -j or
J= option is to use all the cores on the machine.

With -j, it seems to have to default to one core internally:

d7059ca0147adcd495f3c5b41f260e1ac55bb679

Also, if it didn't do that, it would end up with the product of the -j
option and the number of cores on the machine, which would give very bad
performance.
but that only solves half the problem: the "make"
parent process would think that the coccinelle run only deserves to
occupy one job slot.
Yes, it seems to be intended to be run by itself.
Technically we could do all this using a wrapper that pretends to be a
submake <https://www.gnu.org/software/make/manual/html_node/Job-Slots.html>
(prefixing the command with '+', parsing jobserver options from the
MAKEFLAGS envvar) but it gets ugly.
It's likely that the best we can do is just to advertise J more
prominently.
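For what it's worth, the MAKEFLAGS half of the idea might look like
this (hedged: whether -j<num> appears in MAKEFLAGS with its number
attached varies across GNU make versions, and $(COCCI) is a made-up
variable standing in for the semantic patches):

    # extract a numeric -j<n> from MAKEFLAGS, if present
    JOBS := $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS)))

    coccicheck:
            spatch $(if $(JOBS),--jobs $(JOBS)) --sp-file $(COCCI) --dir .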
[...]
Post by Julia Lawall
cat contrib/coccinelle/*.cocci >mega.cocci
make -j40 coccicheck COCCI_SEM_PATCHES=mega.cocci
There was already a COCCI=foo.cocci argument to focus on a single semantic
patch.
I'm surprised that the above cat command would work. Semantic patch rules
have names, and Coccinelle will not be happy if two rules have the same
name.
Yes, Git's semantic patches (in contrib/coccinelle) tend to be
relatively undemanding.
Post by Julia Lawall
Some may also have variables declared in initializers, although
perhaps the ones in the kernel don't do this. Causing these variables to
be shared would not have a good effect.
... oh! You're thinking of the Linux kernel.
It looks like Linux's scripts/coccicheck has a lot we can crib from.
That's where the J envvar and automatic parallelism you mentioned are
implemented, too.
So it sounds to me like at a minimum we should use all of that. ;-)
Yes.

julia
