2009-01-18  Merge branch 'tj-percpu' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc into core/percpu  (Ingo Molnar)
2009-01-19  x86-64: Use absolute displacements for per-cpu accesses.  (Brian Gerst)
Accessing memory through %gs should not use rip-relative addressing. Adding a P prefix to the argument tells gcc not to add (%rip) to the memory references.
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
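A minimal sketch of the accessor pattern this describes, assuming a simplified read-only macro (the real arch/x86/include/asm/percpu.h helpers are more general): the P operand modifier makes gcc print the bare symbol, so the generated access becomes "movq %gs:per_cpu__foo,%rax" rather than the rip-relative "movq %gs:per_cpu__foo(%rip),%rax".

    /* Illustrative only; not the exact kernel macro. */
    #define percpu_read_8(var)                     \
    ({                                             \
        unsigned long val__;                       \
        asm("movq %%gs:%P1, %0"                    \
            : "=r" (val__)                         \
            : "m" (var));                          \
        val__;                                     \
    })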
2009-01-19  x86-64: Move isidle from PDA to per-cpu.  (Brian Gerst)
tj: s/isidle/is_idle/ Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move nodenumber from PDA to per-cpu.  (Brian Gerst)
tj:
    * s/nodenumber/node_number/
    * removed now unused pda variable from pda_init()
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move irqcount from PDA to per-cpu.  (Brian Gerst)
tj: s/irqcount/irq_count/ Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move oldrsp from PDA to per-cpu.  (Brian Gerst)
tj:
    * in asm-offsets_64.c, pda.h inclusion shouldn't be removed as pda is still referenced in the file
    * s/oldrsp/old_rsp/
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move kernelstack from PDA to per-cpu.  (Brian Gerst)
Also clean up PER_CPU_VAR usage in xen-asm_64.S.
tj:
    * remove now unused stack_thread_info()
    * s/kernelstack/kernel_stack/
    * added FIXME comment in xen-asm_64.S
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move current task from PDA to per-cpu and consolidate with 32-bit.  (Brian Gerst)
Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move cpu number from PDA to per-cpu and consolidate with 32-bit.  (Brian Gerst)
tj: moved cpu_number definition out of CONFIG_HAVE_SETUP_PER_CPU_AREA for voyager. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Convert exception stacks to per-cpu  (Brian Gerst)
Move the exception stacks to per-cpu, removing specific allocation code. Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Convert irqstacks to per-cpu  (Brian Gerst)
Move the irqstackptr variable from the PDA to per-cpu. Make the stacks themselves per-cpu, removing some specific allocation code. Add a separate flag (is_boot_cpu) to simplify the per-cpu boot adjustments.
tj:
    * sprinkle some underbars around.
    * irq_stack_ptr is not used till traps_init(), no reason to initialize it early. On SMP, just leaving it NULL till proper initialization in setup_per_cpu_areas() works. Dropped is_boot_cpu and early irq_stack_ptr initialization.
    * do DECLARE/DEFINE_PER_CPU(char[IRQ_STACK_SIZE], irq_stack) instead of (char, irq_stack[IRQ_STACK_SIZE]).
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
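For reference, a hedged sketch of the two declaration styles the tj note above contrasts (illustrative, not the literal kernel lines):

    /* chosen form: the array type is the per-cpu type, so
     * per_cpu(irq_stack, cpu) names the whole IRQ_STACK_SIZE-byte stack */
    DEFINE_PER_CPU(char[IRQ_STACK_SIZE], irq_stack);

    /* rejected form: the dimension is buried in the name argument */
    /* DEFINE_PER_CPU(char, irq_stack[IRQ_STACK_SIZE]); */

    DEFINE_PER_CPU(char *, irq_stack_ptr);  /* replaces pda->irqstackptr */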
2009-01-19  x86-64: Move TLB state from PDA to per-cpu and consolidate with 32-bit.  (Brian Gerst)
Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-19  x86-64: Move irq stats from PDA to per-cpu and consolidate with 32-bit.  (Brian Gerst)
Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-17  linker script: add missing .data.percpu.page_aligned  (Tejun Heo)
arm, arm/mach-integrator and powerpc were missing .data.percpu.page_aligned in their percpu output section definitions. Add it. Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-17  linker script: add missing VMLINUX_SYMBOL  (Tejun Heo)
The newly added PERCPU_*() macros define and use __per_cpu_load, but VMLINUX_SYMBOL() was missing from its uses, causing build failures on archs where the linker-visible symbol differs from the C symbol (e.g. blackfin).
Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-16  x86_64: initialize this_cpu_off to __per_cpu_load  (Tejun Heo)
On x86_64, if get_per_cpu_var() is used before the per-cpu area is set up (which happens if lockdep is turned on), it needs this_cpu_off to point to __per_cpu_load. Initialize it accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org>
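A hedged sketch of the initialization being described (the exact file and surrounding ifdefs are approximate):

    #ifdef CONFIG_X86_64
    /* Point at the load-time copy of the per-cpu section until
     * setup_per_cpu_areas() installs the real per-cpu offsets. */
    DEFINE_PER_CPU(unsigned long, this_cpu_off) = (unsigned long)__per_cpu_load;
    #else
    DEFINE_PER_CPU(unsigned long, this_cpu_off);
    #endif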
2009-01-16  x86: fix build bug introduced during merge  (Tejun Heo)
EXPORT_PER_CPU_SYMBOL() got misplaced during merge leading to build failure. Fix it. Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-16  percpu: add optimized generic percpu accessors  (Ingo Molnar)
It is an optimization and a cleanup, and adds the following new generic percpu methods:

    percpu_read()
    percpu_write()
    percpu_add()
    percpu_sub()
    percpu_and()
    percpu_or()
    percpu_xor()

and implements support for them on x86. (Other architectures will fall back to a default implementation.) The advantage is that, for example, to read a local percpu variable, instead of this sequence:

    return __get_cpu_var(var);

    ffffffff8102ca2b: 48 8b 14 fd 80 09 74  mov    -0x7e8bf680(,%rdi,8),%rdx
    ffffffff8102ca32: 81
    ffffffff8102ca33: 48 c7 c0 d8 59 00 00  mov    $0x59d8,%rax
    ffffffff8102ca3a: 48 8b 04 10           mov    (%rax,%rdx,1),%rax

we can get a single instruction by using the optimized variants:

    return percpu_read(var);

    ffffffff8102ca3f: 65 48 8b 05 91 8f fd  mov    %gs:0x7efd8f91(%rip),%rax

I also cleaned up the x86-specific APIs and made the x86 code use these new generic percpu primitives.
tj:
    * fixed generic percpu_sub() definition as Roel Kluin pointed out
    * added percpu_and() for completeness's sake
    * made generic percpu ops atomic against preemption
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Tejun Heo <tj@kernel.org>
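A hedged sketch of what the generic fallback looks like (simplified; the real asm-generic definitions differ in detail): the fallback brackets the access with get_cpu_var()/put_cpu_var() so the operation is atomic against preemption, while the x86 versions compile down to a single %gs-prefixed instruction.

    /* Illustrative generic fallback, only used when the arch does
     * not provide its own percpu_*() operations. */
    #ifndef percpu_read
    # define percpu_read(var)                      \
      ({                                           \
        typeof(per_cpu_var(var)) tmp_var__;        \
        tmp_var__ = get_cpu_var(var);              \
        put_cpu_var(var);                          \
        tmp_var__;                                 \
      })
    #endif

    #define __percpu_generic_to_op(var, val, op)   \
    do {                                           \
        get_cpu_var(var) op val;                   \
        put_cpu_var(var);                          \
    } while (0)

    #ifndef percpu_add
    # define percpu_add(var, val)  __percpu_generic_to_op(var, (val), +=)
    #endif
    #ifndef percpu_sub
    # define percpu_sub(var, val)  __percpu_generic_to_op(var, (val), -=)
    #endif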
2009-01-16  x86: misc clean up after the percpu update  (Tejun Heo)
Do the following cleanups:
    * kill x86_64_init_pda() which now is equivalent to pda_init()
    * use per_cpu_offset() instead of cpu_pda() when initializing initial_gs
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-16  x86: convert pda ops to wrappers around x86 percpu accessors  (Tejun Heo)
pda is now a percpu variable and there's no reason it can't use plain x86 percpu accessors. Add x86_test_and_clear_bit_percpu() and replace pda op implementations with wrappers around x86 percpu accessors. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
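A hedged sketch of the wrapper style this describes (approximate, not the literal pda.h diff):

    /* pda ops become thin wrappers over the x86 percpu accessors */
    #define read_pda(field)            percpu_read(pda.field)
    #define write_pda(field, val)      percpu_write(pda.field, val)
    #define add_pda(field, val)        percpu_add(pda.field, val)
    #define sub_pda(field, val)        percpu_sub(pda.field, val)
    #define or_pda(field, val)         percpu_or(pda.field, val)
    /* and the new bit op */
    #define test_and_clear_bit_pda(bit, field)     \
        x86_test_and_clear_bit_percpu(bit, pda.field)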
2009-01-16  x86: make pda a percpu variable  (Tejun Heo)
[ Based on original patch from Christoph Lameter and Mike Travis. ]
As pda is now allocated in the percpu area, it can easily be made a proper percpu variable. Make it so by defining the per-cpu symbol from the linker script and declaring it in C code for SMP, and by simply defining it for UP. This change cleans up the code and brings SMP and UP a bit closer together.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-16  x86: merge 64 and 32 SMP percpu handling  (Tejun Heo)
Now that pda is allocated as part of percpu, percpu doesn't need to be accessed through pda. Unify x86_64 SMP percpu access with the x86_32 SMP one. Other than the segment register, operand size and the base of percpu symbols, they now behave identically.
This patch replaces the now-unnecessary pda->data_offset with a dummy field, which is needed to keep stack_canary in place. It also moves per_cpu_offset initialization out of init_gdt() into setup_per_cpu_areas(). Note that this change also necessitates explicit per_cpu_offset initializations in voyager_smp.c.
With this change, x86_OP_percpu()'s are as efficient on x86_64 as on x86_32, and x86_64 can also use the assembly PER_CPU macros.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
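For context, a hedged sketch of the assembly-side helpers mentioned above (the rough shape of arch/x86/include/asm/percpu.h after the change; details approximate), usable from both 32-bit and 64-bit entry code now that only the segment register and base differ:

    #ifdef CONFIG_SMP
    #define PER_CPU(var, reg)                                         \
        __percpu_mov_op %__percpu_seg:per_cpu__this_cpu_off, reg;     \
        lea per_cpu__##var(reg), reg
    #define PER_CPU_VAR(var)    %__percpu_seg:per_cpu__##var
    #else
    #define PER_CPU(var, reg)   __percpu_mov_op $per_cpu__##var, reg
    #define PER_CPU_VAR(var)    per_cpu__##var
    #endif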
2009-01-16  x86: fold pda into percpu area on SMP  (Tejun Heo)
[ Based on original patch from Christoph Lameter and Mike Travis. ]
Currently pdas and percpu areas are allocated separately: %gs points to the local pda, and the percpu area can be reached using pda->data_offset. This patch folds the pda into the percpu area.
Due to a strange gcc requirement, the pda needs to be at the beginning of the percpu area so that pda->stack_canary is at %gs:40. To achieve this, a new percpu output section macro - PERCPU_VADDR_PREALLOC() - is added and used to reserve a pda-sized chunk at the start of the percpu area.
After this change, for the boot cpu, %gs first points to the pda in the data.init area and later, during setup_per_cpu_areas(), gets updated to point to the actual pda. This means that setup_per_cpu_areas() needs to reload %gs for CPU0 while clearing the pda area for the other cpus, as cpu0 has already modified it by the time control reaches setup_per_cpu_areas().
This patch also removes the now-unnecessary get_local_pda() and its call sites.
A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into per cpu area" patch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-16  x86: use static _cpu_pda array  (Tejun Heo)
_cpu_pda array first uses statically allocated storage in data.init and then switches to allocated bootmem to conserve space. However, after folding pda area into percpu area, _cpu_pda array will be removed completely. Drop the reallocation part to simplify the code for soon-to-follow changes. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-16  x86: load pointer to pda into %gs while bringing up a CPU  (Tejun Heo)
[ Based on original patch from Christoph Lameter and Mike Travis. ]
CPU startup code in head_64.S loaded the address of a zero page into %gs for temporary use till the pda is loaded, but the address of the actual pda is already available at that point. Load the real address directly instead.
This will help unify percpu and pda handling later on.
This patch is mostly taken from Mike Travis' "x86_64: Fold pda into per cpu area" patch.
Signed-off-by: Tejun Heo <tj@kernel.org>
2009-01-16  x86: make percpu symbols zerobased on SMP  (Tejun Heo)
[ Based on original patch from Christoph Lameter and Mike Travis. ]
This patch makes percpu symbols zerobased on x86_64 SMP by adding PERCPU_VADDR() to vmlinux.lds.h, which helps set an explicit vaddr on the percpu output section, and by using it in vmlinux_64.lds.S. A new PHDR is added as the existing ones cannot contain sections near address zero. PERCPU_VADDR() also adds a new symbol, __per_cpu_load, which always points to the vaddr of the loaded percpu data.init region.
The following adjustments have been made to accommodate the address change:
    * code to locate the percpu gdt_page in head_64.S is updated to add the load address to the gdt_page offset.
    * __per_cpu_load is used in places where access to the init data area is necessary.
    * pda->data_offset is initialized soon after C code is entered, as a zero value doesn't work anymore.
This patch is mostly taken from Mike Travis' "x86_64: Base percpu variables at zero" patch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
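A hedged sketch of the output-section macro shape being added to include/asm-generic/vmlinux.lds.h (simplified; the real macro carries more section names and PHDR handling):

    #define PERCPU_VADDR(vaddr, phdr)                               \
        VMLINUX_SYMBOL(__per_cpu_load) = .;                         \
        .data.percpu vaddr : AT(VMLINUX_SYMBOL(__per_cpu_load)      \
                                - LOAD_OFFSET) {                    \
            VMLINUX_SYMBOL(__per_cpu_start) = .;                    \
            *(.data.percpu.page_aligned)                            \
            *(.data.percpu)                                         \
            *(.data.percpu.shared_aligned)                          \
            VMLINUX_SYMBOL(__per_cpu_end) = .;                      \
        } phdr                                                      \
        . = VMLINUX_SYMBOL(__per_cpu_load) + SIZEOF(.data.percpu);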
2009-01-16  x86: make vmlinux_32.lds.S use PERCPU() macro  (Tejun Heo)
Make vmlinux_32.lds.S use the generic PERCPU() macro instead of open coding it. This will ease future changes. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-16  x86: cleanup early setup_percpu references  (Mike Travis)
[ Based on original patch from Christoph Lameter and Mike Travis. ]
    * Ruggedize some calls in setup_percpu.c to prevent mishaps in early calls, particularly for non-critical functions.
    * Cleanup DEBUG_PER_CPU_MAPS usages and some comments.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-16  x86: make early_per_cpu() an lvalue and use it  (Tejun Heo)
Make early_per_cpu() an lvalue, as per_cpu() is, and use it where applicable.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
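A hedged sketch of the kind of definition that makes this work (names follow the existing early_per_cpu machinery; exact form approximate): expanding to a dereference of a pointer yields an lvalue, so callers can assign through it both before and after the per-cpu areas exist.

    #define early_per_cpu(_name, _cpu)                 \
        *(early_per_cpu_ptr(_name) ?                   \
            &early_per_cpu_ptr(_name)[_cpu] :          \
            &per_cpu(_name, _cpu))

    /* usage, assignable on both sides of per-cpu setup: */
    /* early_per_cpu(x86_cpu_to_apicid, cpu) = apicid; */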
2009-01-16  x86: fix pda_to_op()  (Tejun Heo)
There's no instruction to move a 64-bit immediate into a memory location. Drop "i".
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
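A hedged, simplified sketch of the 8-byte case after the fix (the pda_write64() name is hypothetical, for illustration; the real code lives in the pda_to_op() switch): because x86_64 cannot store a 64-bit immediate directly to memory, the value operand must be constrained to a register ("r") instead of register-or-immediate ("ri").

    #define pda_write64(field, val)                            \
    do {                                                       \
        asm("movq %1, %%gs:%c2"                                \
            : "+m" (_proxy_pda.field)                          \
            : "r" ((u64)(val)),    /* "i" alternative dropped */ \
              "i" (pda_offset(field)));                        \
    } while (0)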
2009-01-15  Merge branches 'cpus4096', 'x86/cleanups' and 'x86/urgent' into x86/percpu  (Ingo Molnar)
2009-01-15  x86: fix broken flush_tlb_others_ipi(), fix  (Ingo Molnar)
Impact: cleanup
Use the proper type.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-15  x86: avoid early crash in disable_local_APIC()  (Jan Beulich)
E.g. when called due to an early panic. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-15  x86: fully honor "nolapic"  (Jan Beulich)
Impact: widen the effect of the 'nolapic' boot parameter
"nolapic" should not only suppress SMP and use of the LAPIC, but also disable all IO-APIC related activity as well as PCI MSI and HT-IRQs.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-15  irq: update all arches for new irq_desc, fix  (Mike Travis)
Impact: fix build errors
Since the SPARSE IRQS changes redefined how the kstat irqs are organized, arches must use the new accessor function:
    kstat_incr_irqs_this_cpu(irq, DESC);
If CONFIG_SPARSE_IRQS is set, then DESC is a pointer to the irq_desc, which has a pointer to the kstat_irqs. If not, then the .irqs field of struct kernel_stat is used instead.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
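A hedged sketch of the call-site pattern (the account_irq_this_cpu() helper is hypothetical, for illustration only):

    #include <linux/irq.h>
    #include <linux/kernel_stat.h>

    /* Account one occurrence of `irq` on the current CPU using the
     * new accessor instead of indexing struct kernel_stat directly. */
    static void account_irq_this_cpu(unsigned int irq)
    {
        struct irq_desc *desc = irq_to_desc(irq);

        kstat_incr_irqs_this_cpu(irq, desc);
    }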
2009-01-14  x86, pat: fix reserve_memtype() for legacy 1MB range  (Suresh Siddha)
Thierry Vignaud reported:
> http://bugzilla.kernel.org/show_bug.cgi?id=12372
>
> On P4 with an SiS motherboard (video card is a SiS 651)
> X server fails to start with error:
> xf86MapVidMem: Could not mmap framebuffer (0x00000000,0x2000) (Invalid argument)
Here X is trying to map the first 8KB of memory using /dev/mem. Existing code treats the first 0-4KB of memory as non-RAM and 4KB-8KB as RAM. Recent code changes don't allow mapping memory with different attributes at the same time.
Fix this by treating the first 1MB legacy region as special and always tracking the attribute requests within this region using a linear linked list (and don't bother whether the range is RAM, non-RAM or mixed).
Reported-and-tested-by: Thierry Vignaud <tvignaud@mandriva.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-14  x86: make 32bit MAX_HARDIRQS_PER_CPU to be NR_VECTORS  (Yinghai Lu)
Impact: clean up, to be the same as 64bit
32-bit is using per-cpu vectors too, so don't use the default NR_IRQS.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-14  Merge branch 'master' of ssh://master.kernel.org/pub/scm/linux/kernel/git/travis/linux-2.6-cpus4096-for-ingo into cpus4096  (Ingo Molnar)
2009-01-14  x86: replacing mp_config_intsrc with mpc_intsrc  (Jaswinder Singh Rajput)
Impact: cleanup, solve 80-column wrap problems
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-14  x86: replacing mp_config_ioapic with mpc_ioapic  (Jaswinder Singh Rajput)
Impact: cleanup, solve 80-column wrap problems
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-14  x86: fix broken flush_tlb_others_ipi()  (Suresh Siddha)
This commit broke flush_tlb_others_ipi(), causing boot hangs on a 16 logical cpu system:
> commit 4595f9620cda8a1e973588e743cf5f8436dd20c6
> Author: Rusty Russell <rusty@rustcorp.com.au>
> Date: Sat Jan 10 21:58:09 2009 -0800
>
> x86: change flush_tlb_others to take a const struct cpumask
This change resulted in sending the invalidate tlb vector to the sender itself, causing the hang. flush_tlb_others_ipi() should exclude the sender itself from the destination list.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-13  Merge branch 'x86-pat-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'x86-pat-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86 PAT: remove CPA WARN_ON for zero pte
    x86 PAT: return compatible mapping to remap_pfn_range callers
    x86 PAT: change track_pfn_vma_new to take pgprot_t pointer param
    x86 PAT: consolidate old memtype new memtype check into a function
    x86 PAT: remove PFNMAP type on track_pfn_vma_new() error
2009-01-13  Merge master.kernel.org:/home/rmk/linux-2.6-arm  (Linus Torvalds)
* master.kernel.org:/home/rmk/linux-2.6-arm:
    TWL4030: fix clk API usage
    [ARM] 5364/1: allow flush_ioremap_region() to be used from modules
    [ARM] w90x900: fix build errors and warnings
    [ARM] i.MX add missing include
    [ARM] i.MX: fix breakage from commit 278892736e99330195c8ae5861bcd9d791bbf19e
    [ARM] i.MX: remove LCDC controller register definitions from imx-regs.h
2009-01-13  Fix timeouts in sys_pselect7  (Bernd Schmidt)
Since we (Analog Devices) updated our Blackfin kernel to 2.6.28, we've seen occasional 5-second hangs from telnet. telnetd calls select with a NULL timeout, but with the new kernel the system call occasionally returns 0, which causes telnet to call sleep(5). This did not happen with earlier kernels.
The code in sys_pselect7 looks a bit strange: in particular, the variable "to" is initialized to NULL, then changed if a non-null timeout was passed in, but not used further. It needs to be passed to core_sys_select instead of &end_time.
This bug was introduced by 8ff3e8e85fa6c312051134b3953e397feb639f51 ("select: switch select() and poll() over to hrtimers").
Signed-off-by: Bernd Schmidt <bernd.schmidt@analog.com>
Reviewed-by: Ulrich Drepper <drepper@redhat.com>
Tested-by: Robin Getz <rgetz@blackfin.uclinux.org>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
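A hedged, simplified sketch of the fix (the shape of the fs/select.c code is approximate; pselect_sketch() is an illustrative stand-in, not the real function):

    static long pselect_sketch(int n, fd_set __user *inp, fd_set __user *outp,
                               fd_set __user *exp, struct timespec __user *tsp)
    {
        struct timespec ts, end_time, *to = NULL;

        if (tsp) {
            if (copy_from_user(&ts, tsp, sizeof(ts)))
                return -EFAULT;

            to = &end_time;
            if (poll_select_set_timeout(to, ts.tv_sec, ts.tv_nsec))
                return -EINVAL;
        }

        /* The bug: this call passed &end_time even when tsp was NULL,
         * so a "wait forever" select was given a bogus timeout.
         * The fix passes `to`, which stays NULL for no timeout. */
        return core_sys_select(n, inp, outp, exp, to);
    }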
2009-01-13  fix early_serial_setup() regression  (Helge Deller)
Commit b430428a188e8a434325e251d0704af4b88b4711 ("8250: Don't clobber spinlocks.") introduced a regression on the parisc architecture, which broke the handover to the serial port at boottime. early_serial_setup() was changed to only copy a subset of the uart_port fields, and sadly the "type" and "line" fields were forgotten and thus the serial port was not initialized and could not be used for a handover. This patch fixes this by copying the missing fields. As this change to early_serial_setup() doesn't need an initialized spinlock in the uart_port struct any longer, we can drop the spinlock initialization in the superio driver. Cc: David Daney <ddaney@caviumnetworks.com> Cc: Tomaso Paoletti <tpaoletti@caviumnetworks.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Acked-by: Kyle McMartin <kyle@mcmartin.ca> Cc: linux-parisc@vger.kernel.org Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-13  TWL4030: fix clk API usage  (Russell King)
Always pass a struct device if one is available; there's really no reason for the processor-specific stuff in this file if people would just follow the API properly and use the struct device.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-13  x86 PAT: remove CPA WARN_ON for zero pte  (venkatesh.pallipadi@intel.com)
Impact: reduce scope of debug check - avoid warnings
The logic that uses high_memory or max_low_pfn_mapped/max_pfn_mapped to find whether an identity map exists is not complete, as the memory within the range may not be mapped if there is an unusable hole in e820. Specifically, on my test system I started seeing these warnings with tools like hwinfo and acpidump trying to map the ACPI region:
    [ 27.400018] ------------[ cut here ]------------
    [ 27.400344] WARNING: at /home/venkip/src/linus/linux-2.6/arch/x86/mm/pageattr.c:560 __change_page_attr_set_clr+0xf3/0x8b8()
    [ 27.400821] Hardware name: X7DB8
    [ 27.401070] CPA: called for zero pte. vaddr = ffff8800cff6a000 cpa->vaddr = ffff8800cff6a000
    [ 27.401569] Modules linked in:
    [ 27.401882] Pid: 4913, comm: dmidecode Not tainted 2.6.28-05716-gfe0bdec #586
    [ 27.402141] Call Trace:
    [ 27.402488] [<ffffffff80237c21>] warn_slowpath+0xd3/0x10f
    [ 27.402749] [<ffffffff80274ade>] ? find_get_page+0xb3/0xc9
    [ 27.403028] [<ffffffff80274a2b>] ? find_get_page+0x0/0xc9
    [ 27.403333] [<ffffffff80226425>] __change_page_attr_set_clr+0xf3/0x8b8
    [ 27.403628] [<ffffffff8028ec99>] ? __purge_vmap_area_lazy+0x192/0x1a1
    [ 27.403883] [<ffffffff8028eb52>] ? __purge_vmap_area_lazy+0x4b/0x1a1
    [ 27.404172] [<ffffffff80290268>] ? vm_unmap_aliases+0x1ab/0x1bb
    [ 27.404512] [<ffffffff80290105>] ? vm_unmap_aliases+0x48/0x1bb
    [ 27.404766] [<ffffffff80226d28>] change_page_attr_set_clr+0x13e/0x2e6
    [ 27.405026] [<ffffffff80698fa7>] ? _spin_unlock+0x26/0x2a
    [ 27.405292] [<ffffffff80227e6a>] ? reserve_memtype+0x19b/0x4e3
    [ 27.405590] [<ffffffff80226ffd>] _set_memory_wb+0x22/0x24
    [ 27.405844] [<ffffffff80225d28>] ioremap_change_attr+0x26/0x28
    [ 27.406097] [<ffffffff80228355>] reserve_pfn_range+0x1a3/0x235
    [ 27.406427] [<ffffffff80228430>] track_pfn_vma_new+0x49/0xb3
    [ 27.406686] [<ffffffff80286c46>] remap_pfn_range+0x94/0x32c
    [ 27.406940] [<ffffffff8022878d>] ? phys_mem_access_prot_allowed+0xb5/0x1a8
    [ 27.407209] [<ffffffff803e9bf4>] mmap_mem+0x75/0x9d
    [ 27.407523] [<ffffffff8028b3b4>] mmap_region+0x2cf/0x53e
    [ 27.407776] [<ffffffff8028b8cc>] do_mmap_pgoff+0x2a9/0x30d
    [ 27.408034] [<ffffffff8020f4a4>] sys_mmap+0x92/0xce
    [ 27.408339] [<ffffffff8020b65b>] system_call_fastpath+0x16/0x1b
    [ 27.408614] ---[ end trace 4b16ad70c09a602d ]---
    [ 27.408871] dmidecode:4913 reserve_pfn_range ioremap_change_attr failed write-back for cff6a000-cff6b000
This is with track_pfn_vma_new trying to keep the identity map in sync. The address cff6a000 is the ACPI region according to e820:
    [ 0.000000] BIOS-provided physical RAM map:
    [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009c000 (usable)
    [ 0.000000] BIOS-e820: 000000000009c000 - 00000000000a0000 (reserved)
    [ 0.000000] BIOS-e820: 00000000000cc000 - 00000000000d0000 (reserved)
    [ 0.000000] BIOS-e820: 00000000000e4000 - 0000000000100000 (reserved)
    [ 0.000000] BIOS-e820: 0000000000100000 - 00000000cff60000 (usable)
    [ 0.000000] BIOS-e820: 00000000cff60000 - 00000000cff69000 (ACPI data)
    [ 0.000000] BIOS-e820: 00000000cff69000 - 00000000cff80000 (ACPI NVS)
    [ 0.000000] BIOS-e820: 00000000cff80000 - 00000000d0000000 (reserved)
    [ 0.000000] BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
    [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
    [ 0.000000] BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
    [ 0.000000] BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)
    [ 0.000000] BIOS-e820: 0000000100000000 - 0000000230000000 (usable)
and it is not mapped, as per init_memory_mapping:
    [ 0.000000] init_memory_mapping: 0000000000000000-00000000cff60000
    [ 0.000000] init_memory_mapping: 0000000100000000-0000000230000000
We could add logic to check for this, but there can also be other holes in the identity map when we have 1GB of aligned reserved space in e820. This patch handles it by removing the WARN_ON and returning a specific error value (EFAULT) to indicate that the address does not have any identity mapping. The code that tries to keep the identity map in sync can ignore this error, while other callers of cpa still get an error here.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-13  x86 PAT: return compatible mapping to remap_pfn_range callers  (venkatesh.pallipadi@intel.com)
Impact: avoid warning message, potentially solve 3D performance regression
Change the x86 PAT code to return a compatible memtype if the exact memtype that was requested in remap_pfn_range and friends is not available due to some conflict. This is done by returning the compatible type in the pgprot parameter of track_pfn_vma_new(), and the caller uses that memtype for the page table.
Note that track_pfn_vma_copy(), which is basically called during fork, gets the prot from the existing page table and should not have any conflict. Hence we use a strict memtype check there and do not allow compatible memtypes.
This patch fixes the bug reported here:
    http://marc.info/?l=linux-kernel&m=123108883716357&w=2
Specifically the error message:
    X:5010 map pfn expected mapping type write-back for d0000000-d0101000, got write-combining
should go away.
Reported-and-bisected-by: Kevin Winchester <kjwinchester@gmail.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-13  x86 PAT: change track_pfn_vma_new to take pgprot_t pointer param  (venkatesh.pallipadi@intel.com)
Impact: cleanup
Change the protection parameter for track_pfn_vma_new() into a pgprot_t pointer. A subsequent patch changes the x86 PAT handling to return a compatible memtype in the pgprot_t, if what was requested cannot be allowed due to conflicts. No functionality change in this patch.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
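For clarity, a hedged sketch of the prototype change (argument names approximate):

    /* before: prot passed by value, caller's mapping type is fixed */
    int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t prot,
                          unsigned long pfn, unsigned long size);

    /* after: prot passed by pointer, so a compatible memtype can be
     * handed back for the caller to use in its page tables */
    int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t *prot,
                          unsigned long pfn, unsigned long size);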
2009-01-13  x86 PAT: consolidate old memtype new memtype check into a function  (venkatesh.pallipadi@intel.com)
Impact: cleanup
Move the check of whether a new memtype is allowed given the old memtype into a header so that it can be shared by other users. A subsequent patch uses this in the pat.c remap_pfn_range() code path. No functionality change in this patch.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
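A hedged sketch of the kind of helper this describes (shape approximate; the real one lives in an x86 header): the previously requested memtype and the newly requested one are compared, and write-back is rejected when the earlier request was uncached-minus or write-combine.

    static inline int is_new_memtype_allowed(unsigned long flags,
                                             unsigned long new_flags)
    {
        /* a UC_MINUS or WC request must not be satisfied with WB */
        if ((flags == _PAGE_CACHE_UC_MINUS && new_flags == _PAGE_CACHE_WB) ||
            (flags == _PAGE_CACHE_WC && new_flags == _PAGE_CACHE_WB))
            return 0;

        return 1;
    }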