author | Steven Rostedt <srostedt@redhat.com> | 2009-06-02 14:01:19 -0400
---|---|---
committer | Steven Rostedt <rostedt@goodmis.org> | 2009-06-02 14:42:17 -0400
commit | 26c01624a2a40f8a4ddf6449b65c9b1c418d0e72 (patch) |
tree | eeff81aa0fa56ba1f2c180d4ec6e64cb31af898e /kernel |
parent | 82310a3272d5a2a7652f5649ad8a55f58c8f74d9 (diff) |
function-graph: add memory barriers for accessing task's ret_stack
The code that handles each task's ret_stack allocation assumes that only
an interrupt can cause issues (even though interrupts are disabled).

In reality, the code is allocating the ret_stack for tasks that may be
running on other CPUs, and the memory barriers needed to handle this
case are missing.
[ Impact: prevent crash due to use of uninitialized ret_stack variables ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/trace/ftrace.c | 8 |
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 20e066065eb..1664d3f33d3 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2580,12 +2580,12 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 		}
 
 		if (t->ret_stack == NULL) {
-			t->curr_ret_stack = -1;
-			/* Make sure IRQs see the -1 first: */
-			barrier();
-			t->ret_stack = ret_stack_list[start++];
 			atomic_set(&t->tracing_graph_pause, 0);
 			atomic_set(&t->trace_overrun, 0);
+			t->curr_ret_stack = -1;
+			/* Make sure the tasks see the -1 first: */
+			smp_wmb();
+			t->ret_stack = ret_stack_list[start++];
 		}
 	} while_each_thread(g, t);
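
The essential change is replacing barrier(), which only stops the compiler from reordering the stores on the local CPU, with smp_wmb(), which also orders the stores as seen by other CPUs: a task that observes the newly published t->ret_stack pointer must also observe the fields initialized before it. Below is a minimal userspace sketch of that publish pattern, assuming nothing beyond the patch itself: the task_stub and graph_state types and the publisher/consumer threads are invented for illustration, and C11 release/acquire atomics stand in for smp_wmb() and its reader-side pairing; this is not the kernel code.

```c
/*
 * Sketch of the "initialize, then publish" pattern the patch enforces.
 * Build with: cc -pthread publish_sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct graph_state {
	int curr_ret_stack;		/* index into the return stack */
	int trace_overrun;
};

struct task_stub {
	struct graph_state *_Atomic ret_stack;	/* NULL until initialized */
};

static struct task_stub task;	/* static zero-init: ret_stack starts NULL */

/* Writer: the role alloc_retstack_tasklist() plays for each task. */
static void *publisher(void *arg)
{
	struct graph_state *gs = malloc(sizeof(*gs));

	(void)arg;
	gs->trace_overrun = 0;
	gs->curr_ret_stack = -1;	/* must be visible before the pointer */

	/* Analogue of smp_wmb(): order the stores above before the publish. */
	atomic_store_explicit(&task.ret_stack, gs, memory_order_release);
	return NULL;
}

/* Reader: another CPU that tests ret_stack and then trusts its contents. */
static void *consumer(void *arg)
{
	struct graph_state *gs;

	(void)arg;
	/* Acquire load pairs with the release store above. */
	while (!(gs = atomic_load_explicit(&task.ret_stack,
					   memory_order_acquire)))
		;	/* spin until the pointer is published */

	/* Guaranteed to print -1, never an uninitialized value. */
	printf("curr_ret_stack = %d\n", gs->curr_ret_stack);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, consumer, NULL);
	pthread_create(&w, NULL, publisher, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}
```

In the sketch, weakening the ordering to memory_order_relaxed (or, on a weakly ordered CPU, using only a compiler barrier) would let the consumer see the published pointer before the fields it guards, which is the kind of uninitialized read the patch is preventing.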