path: root/arch
Age  Commit message  Author
2009-09-24  x86: early_printk: Protect against using the same device twice  (Jason Wessel)
If you use the kernel argument:

  earlyprintk=serial,ttyS0,115200

this will cause a recursive hang, printing the same line again and again:

  BIOS-e820: 000000003fff3000 - 0000000040000000 (ACPI data)
  BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
  BIOS-e820: 00000000fec00000 - 0000000100000000 (reserved)
  bootconsole [earlyser0] enabled
  Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
  Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
  Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
  Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
  Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009

Instead, warn the end user that they specified the device a second time, and ignore that second console. Reported-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Cc: Len Brown <lenb@kernel.org> Cc: Greg KH <gregkh@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <4ABAAB89.1080407@windriver.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
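A minimal sketch of the guard described above, assuming the early console goes through a registration helper that can see the console flags (the helper name and surrounding details are illustrative, not the exact patch):

  static void early_console_register(struct console *con, int keep_early)
  {
          /* the same device was already named once on earlyprintk= */
          if (con->flags & CON_ENABLED) {
                  printk(KERN_CRIT "ERROR: earlyprintk= %s already used\n",
                         con->name);
                  return;
          }
          early_console = con;
          if (keep_early)
                  early_console->flags &= ~CON_BOOT;
          else
                  early_console->flags |= CON_BOOT;
          register_console(early_console);
  }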
2009-09-24  Merge branch 'linus' into x86/urgent  (Ingo Molnar)
Merge reason: Queueing up dependent early-printk fix. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-24  x86: Reduce verbosity of "PAT enabled" kernel message  (Roland Dreier)
On modern systems, the kernel prints the message

  x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106

once for every CPU. This gets kind of ridiculous on huge systems; for example, on a 64-thread system I was lucky enough to get:

  dmesg | grep 'PAT enabled' | wc
       64     704    5174

There is already a BUG() if non-boot CPUs have PAT capabilities that don't match the boot CPU, so just print the message on the boot CPU. (I kept the print after the wrmsrl() that enables PAT, so that the log output continues to mean that the system survived enabling PAT on the boot CPU.) Signed-off-by: Roland Dreier <rolandd@cisco.com> Cc: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> LKML-Reference: <adavdj92sso.fsf@cisco.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
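A sketch of the approach described, keeping the print after the wrmsrl(); the boot_cpu flag and boot_pat_state variable follow the message text above but are assumptions about the surrounding code:

  /* enable PAT on this CPU in any case ... */
  wrmsrl(MSR_IA32_CR_PAT, pat);

  /* ... but only the boot CPU reports it; since the print comes after the
   * wrmsrl(), it still implies the system survived enabling PAT */
  if (boot_cpu)
          printk(KERN_INFO "x86 PAT enabled: cpu %d, old 0x%Lx, new 0x%Lx\n",
                 smp_processor_id(), boot_pat_state, pat);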
2009-09-24  x86: Reduce verbosity of "TSC is reliable" message  (Roland Dreier)
On modern systems, the kernel prints the message

  Skipping synchronization checks as TSC is reliable.

once for every non-boot CPU. This gets kind of ridiculous on huge systems; for example, on a 64-thread system I was lucky enough to get:

  $ dmesg | grep 'TSC is reliable' | wc
       63     567    4221

There's no point to doing this for every CPU, since the code is just checking the boot CPU anyway, so change this to a printk_once() to make the message appear only once. Signed-off-by: Roland Dreier <rolandd@cisco.com> LKML-Reference: <adazl8l2swc.fsf@cisco.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
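A minimal illustration of the printk_once() change; the guard shown around it is an assumption about the call site, while printk_once() itself is the real interface:

  if (tsc_clocksource_reliable) {
          /* printed at most once, no matter how many CPUs come up */
          printk_once(KERN_INFO
                  "Skipping synchronization checks as TSC is reliable.\n");
          return;
  }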
2009-09-24  sh: Fix up uninitialized variable use caught by gcc 4.4.  (Paul Mundt)
In the unaligned kernel exception fixup case the printk() was ordered before the copy_from_user(), resulting in a nonsensical instruction value. This fixes up the ordering properly. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
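An illustrative sketch of the ordering fix; the variable type and the message format are assumptions, the point is simply that copy_from_user() now runs before the printk() that reports the opcode:

  u16 instruction = 0;

  /* fetch the opcode first, so the report below prints real data
   * instead of an uninitialized stack value */
  if (copy_from_user(&instruction, (void __user *)regs->pc,
                     sizeof(instruction)))
          instruction = 0;

  printk(KERN_NOTICE "Fixing up unaligned access at pc=0x%08lx insn=0x%04x\n",
         regs->pc, (unsigned int)instruction);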
2009-09-24  sh: Handle unaligned 16-bit instructions on SH-2A.  (Paul Mundt)
This adds some sanity checking in the unaligned instruction handler to verify the instruction size, which enables basic support for 16-bit fixups on SH-2A parts. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-24  microblaze: Disable heartbeat/enable emaclite in defconfigs  (Michal Simek)
I need to disable the heartbeat function because this feature breaks testing in Qemu. Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-09-24  microblaze: Support simpleImage.dts make target  (Michal Simek)
Instead of remembering to specify DTB= on the make command line, this commit allows the much friendlier

  make simpleImage.<dts>

where <dts>.dts is expected to be found in arch/microblaze/boot/dts/. The resulting vmlinux, with the compiled DTS linked in, will be copied to boot/simpleImage.<dts>. This mirrors the same functionality as on PowerPC, albeit achieving it in a slightly different way.

Also strip the simpleImage file; the size of the output file is very similar to linux.bin:

  vmlinux                        - full ELF without FDT blob
  simpleImage.<dtb name>.unstrip - full ELF with FDT blob
  simpleImage.<dtb name>         - stripped ELF with FDT blob

Add a symlink to the generic system.dts in the platform folder.

Signed-off-by: John Williams <john.williams@petalogix.com> Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-09-24  sh: mach-ecovec24: Add active low setting for sh_eth  (Kuninori Morimoto)
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-24  sh: includecheck fix: dwarf.c  (Jaswinder Singh Rajput)
fix the following 'make includecheck' warning: arch/sh/kernel/dwarf.c: asm/dwarf.h is included more than once. Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-24  powerpc/8xx: Fix regression introduced by cache coherency rewrite  (Rex Feany)
After upgrading to the latest kernel on my mpc875 userspace started running incredibly slow (hours to get to a shell, even!). I tracked it down to commit 8d30c14cab30d405a05f2aaceda1e9ad57800f36, that patch removed a work-around for the 8xx. Adding it back makes my problem go away. Signed-off-by: Rex Feany <rfeany@mrv.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/4xx: Fix erroneous xmon warning on PowerPC 4xx  (Josh Boyer)
The xmon code relies on MSR_RI being non-zero to indicate that an exception is recoverable. If it is not, it prints a warning message. However, the PowerPC 4xx cores do not have an MSR_RI bit and this warning is produced for every xmon event. This introduces an unrecoverable_excp function to determine if an exception is recoverable or not. This gets rid of the erroneous warnings on 4xx. Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
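A hedged sketch of the helper described; the exact config test may differ, but the idea is that cores without an MSR_RI bit never trigger the warning:

  static int unrecoverable_excp(struct pt_regs *regs)
  {
  #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
          /* these cores have no MSR_RI bit to test */
          return 0;
  #else
          return ((regs->msr & MSR_RI) == 0);
  #endif
  }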
2009-09-24  powerpc/mm: Fix 40x and 8xx vs. _PAGE_SPECIAL  (Benjamin Herrenschmidt)
The test to check whether we have _PAGE_SPECIAL defined is broken, since we always define it, just not always to a meaningful value :-) That broke 8xx and 40x under some circumstances. This fixes it by adding _PAGE_SPECIAL for both of these, since they had a free PTE bit, and removing the condition around advertising it. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: Cleanup linker script using new linker script macros.  (Tim Abbott)
Signed-off-by: Tim Abbott <tabbott@ksplice.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: linuxppc-dev@ozlabs.org Cc: Sam Ravnborg <sam@ravnborg.org> Acked-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: Fix ibm,client-architecture-support printout  (Anton Blanchard)
On machines without the ibm,client-architecture-support call we were missing a newline. We may as well print the full name in all its glory too - it's ibm,client-architecture-support, not ibm,client-architecture as I mistakenly wrote (a name only an IBM architect could love). For my penance I will write out ibm,client-architecture-support 100 times.

Before:

  Calling ibm,client-architecture...command line: root=/dev/sda6 console=hvc0 quiet

After:

  Calling ibm,client-architecture-support... not implemented
  command line: root=/dev/sda6 console=hvc0

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: Increase NODES_SHIFT on 64bit from 4 to 8  (Anton Blanchard)
Some System p configurations can already have more than 16 nodes, so we need to increase NODES_SHIFT. I chose 256 to give us some room to grow in the future, although we can look at something smaller if the memory bloat is considered too much.

Unless we clamp MAX_ACTIVE_REGIONS we end up with 300kB of extra bloat in early_node_map in mm/page_alloc.c:

  <   6144 early_node_map
  > 307200 early_node_map

due to:

  #if MAX_NUMNODES >= 32
    /* If there can be many nodes, allow up to 50 holes per node */
    #define MAX_ACTIVE_REGIONS (MAX_NUMNODES*50)
  #else
    /* By default, allow up to 256 distinct regions */
    #define MAX_ACTIVE_REGIONS 256
  #endif

Since our memory is mostly contiguous it seems reasonable to keep this at 256 for now. I also set 32bit to 32 to save space (is there any chance a 32bit system will have more than 32 discontiguous memory ranges?).

Even with that fixed we have a few data structures that grow:

  <   896 bootmem_node_data
  > 14336 bootmem_node_data
  <  1280 node_devices
  > 20480 node_devices
  < 25088 kmalloc_caches
  > 59648 kmalloc_caches
  <  1632 hstates
  > 21792 hstates

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/perf_counter: Fix vdso detection  (Anton Blanchard)
perf_counter uses arch_vma_name() to detect a vdso region which in turn uses current->mm->context.vdso_base. We need to initialise this before doing the mmap or else we fail to detect the vdso. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: Move 64bit heap above 1TB on machines with 1TB segments  (Anton Blanchard)
If we are using 1TB segments and we are allowed to randomise the heap, we can put it above 1TB so it is backed by a 1TB segment. Otherwise the heap will be in the bottom 1TB which always uses 256MB segments and this may result in a performance penalty.

This functionality is disabled when heap randomisation is turned off:

  echo 1 > /proc/sys/kernel/randomize_va_space

which may be useful when trying to allocate the maximum amount of 16M or 16G pages.

On a microbenchmark that repeatedly touches 32GB of memory with a stride of 256MB + 4kB (designed to stress 256MB segments while still mapping nicely into the L1 cache), we see the improvement:

  Force malloc to use heap all the time:
  # export MALLOC_MMAP_MAX_=0 MALLOC_TRIM_THRESHOLD_=-1

  Disable heap randomization:
  # echo 1 > /proc/sys/kernel/randomize_va_space
  # time ./test
  12.51s

  Enable heap randomization:
  # echo 2 > /proc/sys/kernel/randomize_va_space
  # time ./test
  1.70s

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: Change archdata dma_data to a union  (Becky Bruce)
Sometimes this is used to hold a simple offset, and sometimes it is used to hold a pointer. This patch changes it to a union containing void * and dma_addr_t. get/set accessors are also provided, because it was getting a bit ugly to get to the actual data. Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
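A sketch of the shape described; the union member names and the omitted surrounding fields are chosen for illustration, not copied from the patch:

  struct dev_archdata {
          /* ... other archdata fields omitted ... */
          union {
                  dma_addr_t      dma_offset;     /* the "simple offset" users */
                  void            *iommu_table;   /* the pointer users, e.g. iommu */
          } dma_data;
  };

  /* accessors, so callers never poke at the union directly */
  static inline dma_addr_t get_dma_offset(struct device *dev)
  {
          return dev->archdata.dma_data.dma_offset;
  }

  static inline void set_dma_offset(struct device *dev, dma_addr_t off)
  {
          dev->archdata.dma_data.dma_offset = off;
  }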
2009-09-24  powerpc: Rename get_dma_direct_offset to get_dma_offset  (Becky Bruce)
The former is no longer really accurate with the swiotlb case now a possibility. I also move it into dma-mapping.h - it no longer needs to be in dma.c, and there are about to be some more accessors that should all end up in the same place. A comment is added to indicate that this function is not used in configs where there is no simple dma offset, such as the iommu case. Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/mm: Remove duplicated #include  (Huang Weiyi)
Remove duplicated #include('s) in arch/powerpc/mm/tlb_low_64e.S Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/book3e-64: Remove duplicated #include  (Huang Weiyi)
Remove duplicated #include('s) in arch/powerpc/kernel/exceptions-64e.S Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: Check for unsupported relocs when using CONFIG_RELOCATABLE  (Tony Breeds)
When using CONFIG_RELOCATABLE, we build the kernel as a position independent executable. The kernel then uses a little bit of relocation code to relocate itself. That code only deals with R_PPC64_RELATIVE relocations though. If for some reason you use assembly constructs such as LOAD_REG_IMMEDIATE() to load the address of a symbol, you'll generate different kinds of relocations that won't be processed properly and bad things will happen. (We have 2 such bugs today.) The perl script tries to filter out "known" bad ones. It's possible that we are missing some in the case of a weak function that nobody implements; we'll see if we get false positives and fix them. Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc/pmc: Don't access lppaca on Book3E  (Benjamin Herrenschmidt)
It doesn't exist ! Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-09-24  powerpc: kmalloc failure ignored in vio_build_iommu_table()  (roel kluin)
Prevent NULL dereference if kmalloc() fails. Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
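A minimal sketch of the kind of check added; the allocation shown and the return convention are assumptions, not the exact vio code:

  tbl = kzalloc(sizeof(*tbl), GFP_KERNEL);
  if (!tbl)
          return NULL;    /* bail out instead of dereferencing NULL below */

  /* ... initialise and register tbl ... */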
2009-09-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus: (39 commits)
  cpumask: Move deprecated functions to end of header.
  cpumask: remove unused deprecated functions, avoid accusations of insanity
  cpumask: use new-style cpumask ops in mm/quicklist.
  cpumask: use mm_cpumask() wrapper: x86
  cpumask: use mm_cpumask() wrapper: um
  cpumask: use mm_cpumask() wrapper: mips
  cpumask: use mm_cpumask() wrapper: mn10300
  cpumask: use mm_cpumask() wrapper: m32r
  cpumask: use mm_cpumask() wrapper: arm
  cpumask: Use accessors for cpu_*_mask: um
  cpumask: Use accessors for cpu_*_mask: powerpc
  cpumask: Use accessors for cpu_*_mask: mips
  cpumask: Use accessors for cpu_*_mask: m32r
  cpumask: remove arch_send_call_function_ipi
  cpumask: arch_send_call_function_ipi_mask: s390
  cpumask: arch_send_call_function_ipi_mask: powerpc
  cpumask: arch_send_call_function_ipi_mask: mips
  cpumask: arch_send_call_function_ipi_mask: m32r
  cpumask: arch_send_call_function_ipi_mask: alpha
  cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64
  ...
2009-09-23  headers: utsname.h redux  (Alexey Dobriyan)
* remove asm/atomic.h inclusion from linux/utsname.h -- not needed after kref conversion
* remove linux/utsname.h inclusion from files which do not need it

NOTE: it looks like fs/binfmt_elf.c does not need utsname.h, however due to some personality stuff it _is_ needed -- cowardly leave ELF-related headers and files alone.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-24  cpumask: use mm_cpumask() wrapper: x86  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask (to be a pointer). It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
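An illustrative before/after of the kind of change this series makes; the call site is hypothetical, while mm_cpumask() and the pointer-taking cpumask_*() ops are the real interfaces being adopted:

  /* before: direct access to the field, old-style ops */
  cpu_clear(cpu, prev->cpu_vm_mask);

  /* after: go through the wrapper, which keeps working once
   * mm->cpu_vm_mask becomes a pointer */
  cpumask_clear_cpu(cpu, mm_cpumask(prev));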
2009-09-24  cpumask: use mm_cpumask() wrapper: um  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: use mm_cpumask() wrapper: mips  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: use mm_cpumask() wrapper: mn10300  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask (to be a pointer). It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Also change the actual arg name here to "mm" (which it is), not "task". Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: use mm_cpumask() wrapper: m32r  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Hirokazu Takata <takata@linux-m32r.org> (fixes)
2009-09-24  cpumask: use mm_cpumask() wrapper: arm  (Rusty Russell)
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: Use accessors for cpu_*_mask: um  (Rusty Russell)
Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
2009-09-24  cpumask: Use accessors for cpu_*_mask: powerpc  (Rusty Russell)
Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
2009-09-24  cpumask: Use accessors for cpu_*_mask: mips  (Rusty Russell)
Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
2009-09-24  cpumask: Use accessors for cpu_*_mask: m32r  (Rusty Russell)
Use the accessors rather than frobbing bits directly (the new versions are const). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
2009-09-24  cpumask: remove arch_send_call_function_ipi  (Rusty Russell)
Now everyone is converted to arch_send_call_function_ipi_mask, remove the shim and the #defines. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: arch_send_call_function_ipi_mask: s390  (Rusty Russell)
We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: arch_send_call_function_ipi_mask: powerpc  (Rusty Russell)
We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi is defined by the core code. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: arch_send_call_function_ipi_mask: mips  (Rusty Russell)
We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi is defined by the core code. We also take the chance to wean the implementations off the obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer seemed the most natural way to ensure all implementations used for_each_cpu. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
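A sketch of the pointer-taking shape described; the low-level per-CPU send helper name is made up for illustration:

  void arch_send_call_function_ipi_mask(const struct cpumask *mask)
  {
          unsigned int cpu;

          /* walk the caller's mask directly; no on-stack cpumask copy */
          for_each_cpu(cpu, mask)
                  send_single_ipi(cpu);   /* illustrative helper */
  }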
2009-09-24  cpumask: arch_send_call_function_ipi_mask: m32r  (Rusty Russell)
We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi is defined by the core code. We also take the chance to wean the implementations off the obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer seemed the most natural way to ensure all implementations used for_each_cpu. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: arch_send_call_function_ipi_mask: alpha  (Rusty Russell)
We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask(). We also take the chance to wean the send_ipi_message off the obsolescent for_each_cpu_mask(): making it take a pointer seemed the most natural way to do this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64  (Rusty Russell)
They were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: powerpc  (Rusty Russell)
They were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: s390  (Rusty Russell)
They were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: sparc  (Rusty Russell)
They were replaced by topology_core_cpumask and topology_thread_cpumask. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  ia64: convert last user of smp_call_function_mask  (Rusty Russell)
smp_call_function_many is the new version: it takes a pointer. Also, use mm accessor macro while we're changing this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24  cpumask: remove last assignment to mask field of struct irqaction.  (Rusty Russell)
This snuck in after the patch which removed all the others. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Ingo Molnar <mingo@elte.hu>
2009-09-24  cpumask: remove dangerous CPU_MASK_ALL_PTR, &CPU_MASK_ALL.: mips  (Rusty Russell)
(Thanks to Al Viro for reminding me of this, via Ingo)

CPU_MASK_ALL is the (deprecated) "all bits set" cpumask, defined as so:

  #define CPU_MASK_ALL (cpumask_t) { { ... } }

Taking the address of such a temporary is questionable at best; unfortunately 321a8e9d (cpumask: add CPU_MASK_ALL_PTR macro) added CPU_MASK_ALL_PTR:

  #define CPU_MASK_ALL_PTR (&CPU_MASK_ALL)

which formalizes this practice. One day gcc could bite us over this usage (though we seem to have gotten away with it so far).

So replace everywhere which used &CPU_MASK_ALL or CPU_MASK_ALL_PTR with the modern "cpu_all_mask" (a real struct cpumask *), and remove CPU_MASK_ALL_PTR altogether.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Ingo Molnar <mingo@elte.hu> Reported-by: Al Viro <viro@zeniv.linux.org.uk> Cc: Mike Travis <travis@sgi.com>
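An illustrative call-site conversion; the call site itself is hypothetical (set_cpus_allowed_ptr() is a real kernel function, used here only as an example), while cpu_all_mask is the real replacement named above:

  /* before: takes the address of a compound-literal temporary */
  set_cpus_allowed_ptr(p, CPU_MASK_ALL_PTR);

  /* after: cpu_all_mask is a real const struct cpumask * */
  set_cpus_allowed_ptr(p, cpu_all_mask);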