| author | hawkes@sgi.com <hawkes@sgi.com> | 2006-02-14 10:40:17 -0800 |
|---|---|---|
| committer | Tony Luck <tony.luck@intel.com> | 2006-02-15 13:37:04 -0800 |
| commit | defbb2c929cbe89dc92239b303cd33d3c85e9a83 (patch) | |
| tree | 85dbcfa407d4bfaecbce4f3556a73033b8f70caf /arch/ia64/kernel | |
| parent | 4c2cd96696ae0896ce4bcf725b9f0eaffafeb640 (diff) | |
[IA64] ia64: simplify and fix udelay()
The original ia64 udelay() was simple, but flawed for platforms without
synchronized ITCs: a preemption and migration to another CPU during the
while-loop likely resulted in too-early termination or very, very
lengthy looping.
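
In outline, the pattern being described is a single ITC busy-wait with nothing guarding against preemption. The sketch below is illustrative only; the name naive_itc_udelay is made up for this note, while ia64_get_itc(), local_cpu_data->cyc_per_usec and cpu_relax() are the same interfaces used in the diff further down.

```c
/*
 * Illustrative sketch of the original, single busy-wait udelay():
 * nothing prevents the task from being preempted and migrated while
 * it spins, so "start" may have been read on a different CPU's ITC.
 */
static void naive_itc_udelay(unsigned long usecs)
{
	unsigned long start = ia64_get_itc();
	unsigned long cycles = usecs * local_cpu_data->cyc_per_usec;

	/*
	 * After migrating to a CPU whose ITC runs behind the original
	 * CPU's, the unsigned subtraction becomes huge and the loop can
	 * spin for a very long time; if the new ITC runs ahead, the loop
	 * exits too early.
	 */
	while (ia64_get_itc() - start < cycles)
		cpu_relax();
}
```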
The first fix (now in 2.6.15) broke the delay loop into smaller,
non-preemptible chunks, reenabling preemption between the chunks. This
fix is flawed in that the total udelay is computed to be the sum of just
the non-preemptible while-loop pieces, i.e., not counting the time spent
in the interim preemptible periods. If an interrupt or a migration
occurs during one of these interim periods, then that time is invisible
and only serves to lengthen the effective udelay().
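
In outline, that interim fix looked like the sketch below (the real code it corresponds to appears as the removed '-' lines in the diff at the bottom of this page); the comment marks where the unaccounted-for time accumulates.

```c
/* Sketch of the 2.6.15 interim fix: <= 100 us non-preemptible chunks. */
static void chunked_udelay(unsigned long usecs)
{
	unsigned long start, cycles, chunk;

	while (usecs > 0) {
		chunk = (usecs > 100) ? 100 : usecs;
		preempt_disable();
		cycles = chunk * local_cpu_data->cyc_per_usec;
		start = ia64_get_itc();

		while (ia64_get_itc() - start < cycles)
			cpu_relax();

		preempt_enable();
		/*
		 * Any time spent preempted or in an interrupt right here
		 * is never subtracted from usecs, so it only stretches
		 * the effective delay.
		 */
		usecs -= chunk;
	}
}
```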
This new fix backs out the current flawed fix and returns to a simple
udelay(), fully preemptible and interruptible. It implements two simple
alternative udelay() routines: one a default generic version that uses
ia64_get_itc(), and the other an sn-specific version that uses that
platform's RTC.
Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
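
The sn-specific routine is not part of this diff; only the ia64_udelay function-pointer hook is. As a rough sketch of how a platform might install an RTC-based delay loop through that hook, assuming made-up names sn_rtc_udelay(), rtc_time(), sn_rtc_cycles_per_second and sn_timer_setup():

```c
/* Hook introduced by this patch (declared in the platform headers). */
extern void (*ia64_udelay)(unsigned long usecs);

/*
 * Hypothetical platform override (names assumed, not from this patch):
 * the RTC here is a single chassis-wide clock, so a delay loop based
 * on it is immune to CPU migration.
 */
static void sn_rtc_udelay(unsigned long usecs)
{
	unsigned long start = rtc_time();
	unsigned long end = start + usecs * sn_rtc_cycles_per_second / 1000000;

	while (time_before(rtc_time(), end))
		cpu_relax();
}

void __init sn_timer_setup(void)
{
	/* Route all udelay() calls on this platform to the RTC loop. */
	ia64_udelay = &sn_rtc_udelay;
}
```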
Diffstat (limited to 'arch/ia64/kernel')
-rw-r--r-- | arch/ia64/kernel/time.c | 39 |
1 file changed, 17 insertions, 22 deletions
diff --git a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c
index a094ec49ccf..307d01e15b2 100644
--- a/arch/ia64/kernel/time.c
+++ b/arch/ia64/kernel/time.c
@@ -250,32 +250,27 @@ time_init (void)
 	set_normalized_timespec(&wall_to_monotonic, -xtime.tv_sec, -xtime.tv_nsec);
 }
 
-#define SMALLUSECS 100
-
-void
-udelay (unsigned long usecs)
+/*
+ * Generic udelay assumes that if preemption is allowed and the thread
+ * migrates to another CPU, that the ITC values are synchronized across
+ * all CPUs.
+ */
+static void
+ia64_itc_udelay (unsigned long usecs)
 {
-	unsigned long start;
-	unsigned long cycles;
-	unsigned long smallusecs;
+	unsigned long start = ia64_get_itc();
+	unsigned long end = start + usecs*local_cpu_data->cyc_per_usec;
 
-	/*
-	 * Execute the non-preemptible delay loop (because the ITC might
-	 * not be synchronized between CPUS) in relatively short time
-	 * chunks, allowing preemption between the chunks.
-	 */
-	while (usecs > 0) {
-		smallusecs = (usecs > SMALLUSECS) ? SMALLUSECS : usecs;
-		preempt_disable();
-		cycles = smallusecs*local_cpu_data->cyc_per_usec;
-		start = ia64_get_itc();
+	while (time_before(ia64_get_itc(), end))
+		cpu_relax();
+}
 
-		while (ia64_get_itc() - start < cycles)
-			cpu_relax();
+void (*ia64_udelay)(unsigned long usecs) = &ia64_itc_udelay;
 
-		preempt_enable();
-		usecs -= smallusecs;
-	}
+void
+udelay (unsigned long usecs)
+{
+	(*ia64_udelay)(usecs);
 }
 EXPORT_SYMBOL(udelay);