author    Frederik Schwarzer <schwarzerf@gmail.com>  2008-10-16 19:02:37 +0200
committer Jiri Kosina <jkosina@suse.cz>              2009-01-06 11:28:06 +0100
commit    025dfdafe77f20b3890981a394774baab7b9c827
tree      c4d514990d7a0673df5d32aa11fded95f9644ff0 /arch/powerpc
parent    0abb8b6a939b742f273edc68b64dba26c57331bc
trivial: fix then -> than typos in comments and documentation
- (better, more, bigger ...) then -> (...) than
Signed-off-by: Frederik Schwarzer <schwarzerf@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Diffstat (limited to 'arch/powerpc')
 arch/powerpc/kernel/kprobes.c             | 2 +-
 arch/powerpc/oprofile/cell/spu_profiler.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index de79915452c..b29005a5a8f 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -316,7 +316,7 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
 	/*
 	 * It is possible to have multiple instances associated with a given
 	 * task either because an multiple functions in the call path
-	 * have a return probe installed on them, and/or more then one return
+	 * have a return probe installed on them, and/or more than one return
 	 * return probe was registered for a target function.
 	 *
 	 * We can handle this because:
diff --git a/arch/powerpc/oprofile/cell/spu_profiler.c b/arch/powerpc/oprofile/cell/spu_profiler.c
index dd499c3e9da..83faa958b9d 100644
--- a/arch/powerpc/oprofile/cell/spu_profiler.c
+++ b/arch/powerpc/oprofile/cell/spu_profiler.c
@@ -49,7 +49,7 @@ void set_spu_profiling_frequency(unsigned int freq_khz, unsigned int cycles_rese
 	 * of precision. This is close enough for the purpose at hand.
 	 *
 	 * The value of the timeout should be small enough that the hw
-	 * trace buffer will not get more then about 1/3 full for the
+	 * trace buffer will not get more than about 1/3 full for the
 	 * maximum user specified (the LFSR value) hw sampling frequency.
 	 * This is to ensure the trace buffer will never fill even if the
 	 * kernel thread scheduling varies under a heavy system load.
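The kprobes.c hunk sits in the kretprobe trampoline handler, whose comment explains that a single task can carry several live return-probe instances at once, including when more than one return probe is registered for the same target function. As a rough sketch of that scenario using the standard kprobes API (the target symbol do_fork, the handler names, and the maxactive values are illustrative assumptions, not part of this commit):

#include <linux/module.h>
#include <linux/kprobes.h>

/* Illustrative sketch only: two kretprobes registered on the same
 * function, the situation the patched comment in kprobes.c says the
 * trampoline handler must cope with. */

static int first_ret_handler(struct kretprobe_instance *ri,
			     struct pt_regs *regs)
{
	pr_info("first return probe fired\n");
	return 0;
}

static int second_ret_handler(struct kretprobe_instance *ri,
			      struct pt_regs *regs)
{
	pr_info("second return probe fired\n");
	return 0;
}

static struct kretprobe first_krp = {
	.handler	= first_ret_handler,
	.kp.symbol_name	= "do_fork",	/* hypothetical target */
	.maxactive	= 20,		/* allow concurrent instances */
};

static struct kretprobe second_krp = {
	.handler	= second_ret_handler,
	.kp.symbol_name	= "do_fork",	/* same target, second probe */
	.maxactive	= 20,
};

static int __init demo_init(void)
{
	int ret = register_kretprobe(&first_krp);

	if (ret < 0)
		return ret;
	ret = register_kretprobe(&second_krp);
	if (ret < 0)
		unregister_kretprobe(&first_krp);
	return ret;
}

static void __exit demo_exit(void)
{
	unregister_kretprobe(&second_krp);
	unregister_kretprobe(&first_krp);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Here maxactive bounds how many instances of each probe may be outstanding at once; the trampoline handler must then match each return address back to the correct instance, which is exactly the bookkeeping the patched comment describes.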
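The spu_profiler.c hunk documents how the profiling timer interval is chosen: short enough that the hardware trace buffer stays under about 1/3 full at the highest user-requested sampling frequency. A minimal, self-contained sketch of that style of fixed-point interval calculation follows; SCALE_SHIFT, the function name, and the example numbers are assumptions for illustration, not the kernel's actual values:

#include <stdint.h>
#include <stdio.h>

#define SCALE_SHIFT 14	/* assumed fixed-point scale factor */

/* Derive a timer interval in nanoseconds from a sampling frequency in
 * kHz and a cycles-between-samples count, using shifts instead of
 * floating point, as the comment's "few % of precision" hints at. */
static uint64_t profiling_interval_ns(unsigned int freq_khz,
				      unsigned int cycles_reset)
{
	/* ns per hw cycle is 1e6 / freq_khz; pre-scale by 2^SCALE_SHIFT
	 * so the integer division keeps enough precision. */
	uint64_t ns_per_cyc_scaled = (1000000ULL << SCALE_SHIFT) / freq_khz;

	return (ns_per_cyc_scaled * (uint64_t)cycles_reset) >> SCALE_SHIFT;
}

int main(void)
{
	/* Hypothetical numbers: 100 kHz sampling, 256 cycles per sample. */
	uint64_t ns = profiling_interval_ns(100, 256);

	/* Per the comment's guideline, the timer that drains the trace
	 * buffer should fire well before the buffer passes ~1/3 full. */
	printf("timer interval: %llu ns\n", (unsigned long long)ns);
	return 0;
}

With these example inputs the sketch yields 2,560,000 ns (10,000 ns per cycle at 100 kHz times 256 cycles), matching the exact arithmetic despite the integer-only math.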