Commit e758a33d authored by Paul Mackerras, committed by Ingo Molnar

perf_counter: call hw_perf_save_disable/restore around group_sched_in

I noticed that when enabling a group via the PERF_COUNTER_IOC_ENABLE
ioctl on the group leader, the counters weren't enabled and counting
immediately on return from the ioctl, but did start counting a little
while later (presumably after a context switch).

The reason was that __perf_counter_enable calls group_sched_in, which
calls hw_perf_group_sched_in, which on powerpc assumes that the caller
has already called hw_perf_save_disable.  Until commit 46d686c6
("perf_counter: put whole group on when enabling group leader") it was
true that all callers of group_sched_in had called
hw_perf_save_disable first, and the powerpc hw_perf_group_sched_in
relies on that (there is no x86 version).
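
In other words, the contract that the powerpc hw_perf_group_sched_in
relies on looks roughly like this (a minimal sketch of the expected
call pattern, not the exact kernel source; the locking around it is
omitted):

	unsigned long pmuflags;

	/* Disable the PMU and save its state before scheduling the group in. */
	pmuflags = hw_perf_save_disable();
	err = group_sched_in(counter, cpuctx, ctx, smp_processor_id());
	/* Restore the saved PMU state, re-enabling the counters. */
	hw_perf_restore(pmuflags);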

This fixes the problem by putting calls to hw_perf_save_disable /
hw_perf_restore around the calls to group_sched_in and
counter_sched_in in __perf_counter_enable.  Having the calls to
hw_perf_save_disable/restore around the counter_sched_in call is
harmless and makes this call consistent with the other call sites
of counter_sched_in, which have all called hw_perf_save_disable first.

[ Impact: more precise counter group disable/enable functionality ]
Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
LKML-Reference: <18953.25733.53359.147452@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 615a3f1e
kernel/perf_counter.c
@@ -663,6 +663,7 @@ static void __perf_counter_enable(void *info)
 	struct perf_cpu_context *cpuctx = &__get_cpu_var(perf_cpu_context);
 	struct perf_counter_context *ctx = counter->ctx;
 	struct perf_counter *leader = counter->group_leader;
+	unsigned long pmuflags;
 	unsigned long flags;
 	int err;
 
@@ -689,14 +690,18 @@ static void __perf_counter_enable(void *info)
 	if (leader != counter && leader->state != PERF_COUNTER_STATE_ACTIVE)
 		goto unlock;
 
-	if (!group_can_go_on(counter, cpuctx, 1))
+	if (!group_can_go_on(counter, cpuctx, 1)) {
 		err = -EEXIST;
-	else if (counter == leader)
-		err = group_sched_in(counter, cpuctx, ctx,
-				     smp_processor_id());
-	else
-		err = counter_sched_in(counter, cpuctx, ctx,
-				       smp_processor_id());
+	} else {
+		pmuflags = hw_perf_save_disable();
+		if (counter == leader)
+			err = group_sched_in(counter, cpuctx, ctx,
+					     smp_processor_id());
+		else
+			err = counter_sched_in(counter, cpuctx, ctx,
+					       smp_processor_id());
+		hw_perf_restore(pmuflags);
+	}
 
 	if (err) {
 		/*
...