path: root/arch/s390/kernel/smp.c
Age | Commit message | Author
2010-01-13 | [S390] Move __cpu_logical_map to smp.c | Heiko Carstens
Finally move it to the place where it belongs and get rid of it for !CONFIG_SMP. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2010-01-13 | [S390] smp: setup smp_processor_id early | Heiko Carstens
smp_processor_id() is supposed to work before setup_arch() gets called. Before that smp_processor_id() may return just an arbitrary value that is contained in the uninitialized boot lowcore. So provide the arch function which will override the weak function in init/main.c. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
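The hook in question is the weak smp_setup_processor_id() in init/main.c. A minimal sketch of an arch override, assuming the boot cpu is logical cpu 0 (the body is illustrative, not the literal patch):

    void __init smp_setup_processor_id(void)
    {
        /* Stamp a sane cpu number into the boot lowcore so that
         * smp_processor_id() works before setup_arch() runs.
         * Assumption for this sketch: the boot cpu is logical cpu 0. */
        S390_lowcore.cpu_nr = 0;
    }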
2009-10-29 | [S390] smp: fix sigp sense handling | Heiko Carstens
sigp sense only returns the status of a cpu if it is non-zero. If the status of the sensed cpu is all zeros, condition code 0 (accepted) is set and no status bits are returned. The current code however assumes that a status was returned and tests bits in it. This means uninitialized data is accessed, with random results. The worst case is that the code which checks whether a cpu is offline on cpu hotplug assumes the target cpu is offline while it is still running. This potentially leads to memory corruption, since resources that are still needed by the target cpu will be freed and could be reused while still in use. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
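A hedged sketch of the corrected check (the wrapper name and the status bit below are illustrative, not the exact smp.c symbols): the status word may only be examined when the condition code says one was actually stored.

    #define CPU_STATUS_STOPPED 0x40                     /* illustrative bit layout */

    static int cpu_is_stopped_sketch(int cpu)
    {
        unsigned int status = 0;

        switch (sigp_sense_sketch(cpu, &status)) {      /* hypothetical wrapper around sigp sense */
        case 0:  /* accepted: cpu status is all zero, nothing was stored */
            return 0;
        case 1:  /* status stored: only now are the bits valid */
            return (status & CPU_STATUS_STOPPED) != 0;
        default: /* busy or not operational */
            return 0;
        }
    }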
2009-10-29 | [S390] smp: fix sigp stop handling | Heiko Carstens
According to the architecture a cpu does not necessarily enter the stopped state immediately after completion of a sigp instruction with the "stop" order code. So remove the BUG() statement after a cpu sends sigp stop to itself, to avoid that it ever gets reached. Also add a sigp busy check to make sure that the order gets delivered. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
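A sketch of the resulting pattern, with illustrative names: retry while the target signals busy so the order really gets delivered, and do not assume the cpu is already stopped once the instruction completes.

    static void sigp_stop_sketch(int cpu)
    {
        /* condition code 2 means busy: the order was not accepted, retry */
        while (sigp_stop_order_sketch(cpu) == 2)        /* hypothetical wrapper */
            cpu_relax();
        /* Acceptance of the order does not mean the cpu has already
         * entered stopped state, so there is no BUG() here. */
    }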
2009-10-29 | [S390] smp: fix prefix handling of offlined cpus | Heiko Carstens
Offlined cpus still have valid prefix register contents. Dumpers will store the register contents of a cpu at the location its prefix register points to. For offlined cpus that area (the lowcore) has already been freed, so the dumper would write the uninteresting contents of the offline cpu to a memory location which might be in use by some other component and destroy valuable information. To fix this, set the prefix register of offline cpus back to absolute address zero. This prevents the current dumpers from writing to random memory locations. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-24 | cpumask: arch_send_call_function_ipi_mask: s390 | Rusty Russell
We're weaning the core code off handing cpumasks around on-stack. This introduces arch_send_call_function_ipi_mask(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-22 | [S390] hibernate: Do real CPU swap at resume time | Michael Holzheu
Currently, when the physical resume CPU is not equal to the physical suspend CPU, we swap the CPUs logically by modifying the logical/physical CPU mapping. This has two major drawbacks: first, the change is visible from user space (e.g. CPU sysfs files), and second, it is hard to ensure that nowhere in the kernel the physical CPU ID is stored before suspend. To fix this, we now really swap the physical CPUs if the resume CPU is not the physical suspend CPU. We restart the suspend CPU and stop the resume CPU using SIGP restart and SIGP stop. If the suspend CPU is no longer available, we write a message and load a disabled wait PSW. Signed-off-by: Michael Holzheu <michael.holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-22 | [S390] smp: introduce LC_ORDER and simplify lowcore handling | Heiko Carstens
Removes a couple of simple code duplications. But before I have to do this again, just simplify it. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 | [S390] Remove smp_cpu_not_running. | Heiko Carstens
smp_cpu_not_running() and cpu_stopped() do the same thing. Remove one of them and also get rid of the last hard_smp_processor_id() leftover. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-09-11 | [S390] Limit cpu detection to 256 physical cpus. | Heiko Carstens
Saves us more than 65k pointless IPIs. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-07-24 | [S390] vdso: fix per cpu area allocation | Heiko Carstens
vdso per cpu area allocation in smp_prepare_cpus() happens with GFP_KERNEL but irqs disabled. Triggers this one: Badness at kernel/lockdep.c:2280 Modules linked in: CPU: 0 Not tainted 2.6.30 #2 Process swapper (pid: 1, task: 000000003fe88000, ksp: 000000003fe87eb8) Krnl PSW : 0400c00180000000 0000000000083360 (lockdep_trace_alloc+0xec/0xf8) [...] Call Trace: ([<00000000000832b6>] lockdep_trace_alloc+0x42/0xf8) [<00000000000b1880>] __alloc_pages_internal+0x3e8/0x5c4 [<00000000000b1b4a>] __get_free_pages+0x3a/0xb0 [<0000000000026546>] vdso_alloc_per_cpu+0x6a/0x18c [<00000000005eff82>] smp_prepare_cpus+0x322/0x594 [<00000000005e8232>] kernel_init+0x76/0x398 [<000000000001bb1e>] kernel_thread_starter+0x6/0xc [<000000000001bb18>] kernel_thread_starter+0x0/0xc Fix this by moving the allocation out of the irqs disabled section. Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
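The general shape of such a fix, as a sketch with a placeholder helper: do the sleeping GFP_KERNEL allocation while interrupts are still enabled, and only then enter the irq-disabled section.

    static int vdso_percpu_setup_sketch(void)
    {
        unsigned long page;

        page = __get_free_pages(GFP_KERNEL, 0);  /* may sleep, so irqs must be on */
        if (!page)
            return -ENOMEM;

        local_irq_disable();
        /* ... hand the page over to the per-cpu vdso data here ... */
        local_irq_enable();
        return 0;
    }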
2009-06-22 | [S390] lockless idle time accounting | Martin Schwidefsky
Replace the spinlock used in the idle time accounting with a sequence counter mechanism analogous to seqlock. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
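For reference, the generic seqcount pattern the message alludes to (the s390 code hand-rolls its own counter; the names here are illustrative):

    #include <linux/seqlock.h>

    struct idle_acct_sketch {
        seqcount_t seq;
        u64 idle_time;
    };

    static void account_idle_sketch(struct idle_acct_sketch *a, u64 delta)
    {
        write_seqcount_begin(&a->seq);  /* writer: idle enter/exit path */
        a->idle_time += delta;
        write_seqcount_end(&a->seq);
    }

    static u64 read_idle_time_sketch(struct idle_acct_sketch *a)
    {
        unsigned int start;
        u64 val;

        do {                            /* lockless reader, e.g. the sysfs file */
            start = read_seqcount_begin(&a->seq);
            val = a->idle_time;
        } while (read_seqcount_retry(&a->seq, start));
        return val;
    }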
2009-06-16 | [S390] s390: hibernation support for s390 | Hans-Joachim Picht
This patch introduces the hibernation backend support to the s390 architecture. Now it is possible to suspend a mainframe Linux guest using the following command: echo disk > /sys/power/state Signed-off-by: Hans-Joachim Picht <hans@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-06-12 | [S390] ftrace: add dynamic ftrace support | Heiko Carstens
Dynamic ftrace support for s390. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-06-12 | [S390] merge cpu.h into cputime.h | Martin Schwidefsky
All definitions in cpu.h have to do with cputime accounting. Move them to cputime.h and remove the header file. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-04-14 | [S390] smp: fix cpu_possible_map initialization | Heiko Carstens
On s390 the cpu_possible_map is by default initialized with all ones. If the kernel parameter possible_cpus=<x> is passed, the cpu_possible_map is supposed to have x bits set. However the current code just sets the x bits without clearing the NR_CPUS bits that were already set, so we end up with an unchanged map that has all bits set. To fix this, just clear the map before setting any new bits. This broke with def6cfb70bab83c0094bc0cedd27c4eda563043e "[S390] cpumask: Use accessors code." Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
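A sketch of the corrected logic using the current cpumask accessors (not the literal 2009 patch): reset the all-ones default first, then mark exactly the requested cpus as possible.

    static void __init setup_possible_cpus_sketch(unsigned int wanted)
    {
        unsigned int cpu;

        init_cpu_possible(cpumask_of(0));   /* clear the all-ones default, keep cpu 0 */
        for (cpu = 1; cpu < wanted && cpu < NR_CPUS; cpu++)
            set_cpu_possible(cpu, true);
    }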
2009-04-14 | [S390] s390: move machine flags to lowcore | Christian Ehrhardt
Currently the storage of the machine flags is a globally exported unsigned long long variable. By moving the storage location into the lowcore struct we allow assembler code to check machine_flags directly, even without needing a register. Additionally the lowcore, and therefore the machine flags too, will be in cache most of the time. Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-03-26 | [S390] cpumask: Use accessors code. | Rusty Russell
Impact: use new API Use the accessors rather than frobbing bits directly. Most of this is in arch code I haven't even compiled, but is straightforward. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-03-26 | [S390] cpumask: prepare for iterators to only go to nr_cpu_ids/nr_cpumask_bits. | Rusty Russell
Impact: cleanup, futureproof In fact, all cpumask ops will only be valid (in general) for bit numbers < nr_cpu_ids. So use that instead of NR_CPUS in various places (I also updated the immediate sites to use the new cpumask_ operators). This is always safe: no cpu number can be >= nr_cpu_ids, and nr_cpu_ids is initialized to NR_CPUS at boot. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
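Illustration of the change in loop bounds (do_something() is a stand-in):

    static void do_something(unsigned int cpu) { /* placeholder */ }

    static void walk_cpus_sketch(void)
    {
        unsigned int cpu;

        /* old style: tests all NR_CPUS bit positions, most of which can
         * never hold a cpu on small machines */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
            if (cpu_possible(cpu))
                do_something(cpu);

        /* new style: nr_cpu_ids is the runtime upper bound ... */
        for (cpu = 0; cpu < nr_cpu_ids; cpu++)
            if (cpu_possible(cpu))
                do_something(cpu);

        /* ... or simply use the iterator, which stops there anyway */
        for_each_possible_cpu(cpu)
            do_something(cpu);
    }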
2009-03-26 | [S390] smp: perform initial cpu reset before starting a cpu | Heiko Carstens
Performing an initial cpu reset makes sure all registers and tlbs of the targeted cpu are initialized and flushed. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-03-26 | [S390] smp: fix memory leak on __cpu_up | Heiko Carstens
If sigp_set_prefix fails on __cpu_up we leak the lowcore structures and async+panic stacks for the failed cpu. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
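The usual shape of this kind of error-path fix, sketched with hypothetical helper names: whatever was allocated before the failing sigp_set_prefix has to be released again.

    static int cpu_up_sketch(int cpu)
    {
        if (alloc_lowcore_and_stacks(cpu))      /* hypothetical helper */
            return -ENOMEM;

        if (set_prefix_for_cpu(cpu)) {          /* hypothetical: the step that can fail */
            free_lowcore_and_stacks(cpu);       /* the fix: do not leak on failure */
            return -EIO;
        }
        return 0;
    }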
2009-03-26 | [S390] zfcpdump: Prevent zcore from being built as a kernel module. | Michael Holzheu
The zcore code switches to real addressing mode when creating a kernel dump. This is not possible if it is built as a kernel module. With this patch zcore (zfcpdump) can't be built as a kernel module any more. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-03-26 | [S390] eliminate ipl_device from lowcore | Martin Schwidefsky
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-03-26 | [S390] eliminate cpuinfo_S390 structure | Martin Schwidefsky
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-03-26 | [S390] lockdep: trace hardirq off in smp_send_stop | Christian Borntraeger
With lockdep we got the following trace after a panic: Badness at /home/autobuild/BUILD/linux-2.6.28-20090204/kernel/lockdep.c:2878 [...] Call Trace: [<0000000000176334>] lock_acquire+0x54/0xbc [<000000000050b4fe>] __atomic_notifier_call_chain+0x6e/0xdc [<000000000050b59c>] atomic_notifier_call_chain+0x30/0x44 [<0000000000504274>] panic+0xd0/0x1e8 [...] INFO: lockdep is turned off. Last Breaking-Event-Address: [<0000000000170e62>] check_flags+0xae/0x15c possible reason: unannotated irqs-off. lockdep is right. We missed a trace_hardirqs_off() in our smp_send_stop function, and smp_send_stop is called before the panic call chain. Reported-by: Mijo Safradin <mijo@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
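The kind of annotation the fix adds, sketched (the psw manipulation is replaced by a hypothetical stand-in): when interrupts are masked by hand instead of via local_irq_disable(), lockdep has to be told explicitly.

    static void smp_send_stop_sketch(void)
    {
        /* interrupts get disabled behind lockdep's back, so annotate it */
        trace_hardirqs_off();
        mask_irqs_via_psw_sketch();     /* hypothetical stand-in for the raw psw write */

        /* ... now sigp stop all other cpus and wait for them ... */
    }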
2009-03-26 | [S390] Automatic IPL after dump | Frank Munzert
Provide new shutdown action "dump_reipl" for automatic ipl after dump. Signed-off-by: Frank Munzert <munzert@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2009-01-09 | [S390] vdso: compile fix | Heiko Carstens
!CONFIG_SMP: arch/s390/kernel/vdso.c: In function 'vdso_init': arch/s390/kernel/vdso.c:325: error: incompatible type for argument 2 of 'vdso_alloc_per_cpu' Also move the code out of the BUG_ON statement since it won't be executed on !CONFIG_BUG. And that would be a bug. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
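The second half of the fix is a general pattern, shown here with a stand-in function: never put code with side effects inside BUG_ON(), because with CONFIG_BUG=n it may not be emitted at all.

    static int do_setup_sketch(void) { return 0; }      /* stand-in for the real setup call */

    static void init_sketch(void)
    {
        int rc;

        /* fragile: with CONFIG_BUG=n the whole expression may be compiled
         * away and the setup silently never happens:
         *   BUG_ON(do_setup_sketch() != 0);
         */

        /* robust: the work always happens, only the sanity check is optional */
        rc = do_setup_sketch();
        BUG_ON(rc);
    }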
2009-01-03 | Merge branch 'cputime' of git://git390.osdl.marist.edu/pub/scm/linux-2.6 | Linus Torvalds
* 'cputime' of git://git390.osdl.marist.edu/pub/scm/linux-2.6: [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID [PATCH] improve idle cputime accounting [PATCH] improve precision of idle time detection. [PATCH] improve precision of process accounting. [PATCH] idle cputime accounting [PATCH] fix scaled & unscaled cputime accounting
2009-01-02 | Merge branch 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds
* 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (66 commits) x86: export vector_used_by_percpu_irq x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and() sched: nominate preferred wakeup cpu, fix x86: fix lguest used_vectors breakage, -v2 x86: fix warning in arch/x86/kernel/io_apic.c sched: fix warning in kernel/sched.c sched: move test_sd_parent() to an SMP section of sched.h sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0 sched: activate active load balancing in new idle cpus sched: bias task wakeups to preferred semi-idle packages sched: nominate preferred wakeup cpu sched: favour lower logical cpu number for sched_mc balance sched: framework for sched_mc/smt_power_savings=N sched: convert BALANCE_FOR_xx_POWER to inline functions x86: use possible_cpus=NUM to extend the possible cpus allowed x86: fix cpu_mask_to_apicid_and to include cpu_online_mask x86: update io_apic.c to the new cpumask code x86: Introduce topology_core_cpumask()/topology_thread_cpumask() x86: xen: use smp_call_function_many() x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c ... Fixed up trivial conflict in kernel/time/tick-sched.c manually
2008-12-31 | [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID | Martin Schwidefsky
The extract cpu time (ectg) instruction allows a user process to get the current thread cputime without calling into the kernel. The code that uses the instruction needs to switch to access-register mode to get access to the per-cpu info page that contains the two base values needed to calculate the current cputime from the CPU timer with the ectg instruction. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-31 | [PATCH] improve precision of idle time detection. | Martin Schwidefsky
Increase the precision of the idle time calculation that is exported to user space via /sys/devices/system/cpu/cpu<x>/idle_time_us Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-25 | [S390] convert cpu related printks to pr_xxx macros. | Martin Schwidefsky
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-25 | [S390] panic_stack leak in smp_alloc_lowcore | Martin Schwidefsky
Fix freeing of the panic_stack if the allocation of async_stack failed. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-25 | [S390] Remove config options. | Martin Schwidefsky
On s390 we always want to run with precise cputime accounting. Remove the config options VIRT_TIMER and VIRT_CPU_ACCOUNTING. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-25 | [S390] convert s390 to generic IPI infrastructure | Heiko Carstens
Since etr/stp don't need the old smp_call_function semantics anymore we can convert s390 to the generic IPI infrastructure. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-13 | cpumask: centralize cpu_online_map and cpu_possible_map | Rusty Russell
Impact: cleanup Each SMP arch defines these themselves. Move them to a central location. Twists: 1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a CONFIG_INIT_ALL_POSSIBLE for this rather than break them. 2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'. Those archs simply have phys_cpu_present_map replaced everywhere. 3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky so I just manipulate them both in sync. 4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map' declarations. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Grant Grundler <grundler@parisc-linux.org> Tested-by: Tony Luck <tony.luck@intel.com> Acked-by: Ingo Molnar <mingo@elte.hu> Cc: Mike Travis <travis@sgi.com> Cc: ink@jurassic.park.msu.ru Cc: rmk@arm.linux.org.uk Cc: starvik@axis.com Cc: tony.luck@intel.com Cc: takata@linux-m32r.org Cc: ralf@linux-mips.org Cc: grundler@parisc-linux.org Cc: paulus@samba.org Cc: schwidefsky@de.ibm.com Cc: lethal@linux-sh.org Cc: wli@holomorphy.com Cc: davem@davemloft.net Cc: jdike@addtoit.com Cc: mingo@redhat.com
2008-10-28 | [S390] Fix sysdev class file creation. | Heiko Carstens
Use sysdev_class_create_file() to create sysdev class attributes instead of sysfs_create_file(). Using sysfs_create_file() wasn't a very good idea, since the show and store functions take a different number of parameters for sysfs files and sysdev class files. In particular the pointer to the buffer is the last argument, and therefore accesses to random memory regions happened. It still worked surprisingly well, until we got a kernel panic. Cc: stable@kernel.org Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-09-08 | kernel/cpu.c: create a CPU_STARTING cpu_chain notifier | Manfred Spraul
Right now there is no notifier that is called on a new cpu before the new cpu begins processing interrupts/softirqs. Various kernel functions would need that notification; e.g. kvm works around it by calling smp_call_function_single(), and rcu polls cpu_online_map. The patch adds a CPU_STARTING notification. It also adds a helper function that sends the message to all cpu_chain handlers. Tested on x86-64. All other archs are untested. Especially on sparc, I'm not sure if I got it right. Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
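A sketch of how a 2.6-era consumer would use the new notification (the callback body is illustrative, and this notifier API has since been replaced by the cpuhp state machine):

    static int my_cpu_callback(struct notifier_block *nb,
                               unsigned long action, void *hcpu)
    {
        unsigned int cpu = (unsigned long)hcpu;

        if ((action & ~CPU_TASKS_FROZEN) == CPU_STARTING) {
            /* runs on the incoming cpu, before it handles interrupts/softirqs */
            prepare_percpu_state_sketch(cpu);   /* hypothetical per-subsystem setup */
        }
        return NOTIFY_OK;
    }

    static struct notifier_block my_cpu_nb = {
        .notifier_call = my_cpu_callback,
    };

    static int __init my_cpu_notifier_init(void)
    {
        return register_cpu_notifier(&my_cpu_nb);
    }
    early_initcall(my_cpu_notifier_init);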
2008-08-21 | [S390] Remove unneeded spinlock initialization. | Heiko Carstens
Remove the now unneeded s390_idle.lock spinlock initialization after Josef Sipek did it the right way in arch/s390/kernel/process.c. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-07-21 | sysdev: Pass the attribute to the low level sysdev show/store function | Andi Kleen
This allows attributes to be generated dynamically and show/store functions to be shared between attributes. Right now most attributes are generated by special macros and lots of duplicated code. With the attribute passed in, it is instead possible to attach some data to the attribute and then use that in shared low level functions to do different things. I need this for the dynamically generated bank attributes in the x86 machine check code, but it'll allow some further cleanups. I converted all users in tree to the new show/store prototype. It's a single huge patch to avoid unbisectable sections. Runtime tested: x86-32, x86-64. Compiled only: ia64, powerpc. Not compile tested/only grep converted: sh, arm, avr32. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-06-26 | on_each_cpu(): kill unused 'retry' parameter | Jens Axboe
It's not even passed on to smp_call_function() anymore, since that was removed. So kill it. Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 | smp_call_function: get rid of the unused nonatomic/retry argument | Jens Axboe
It's never used and the comments refer to nonatomic and retry interchangeably. So get rid of it. Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
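Together with the on_each_cpu() change above, callers simply drop one argument; roughly (2.6.26-era prototypes, do_flush() is a stand-in):

    static void do_flush(void *info) { /* stand-in IPI handler */ }

    static void flush_everywhere_sketch(void)
    {
        /* before the series (nonatomic/retry = 0, wait = 1):
         *   smp_call_function(do_flush, NULL, 0, 1);
         *   on_each_cpu(do_flush, NULL, 0, 1);
         */

        /* after: the dead parameter is gone */
        smp_call_function(do_flush, NULL, 1);   /* wait == 1 */
        on_each_cpu(do_flush, NULL, 1);         /* wait == 1 */
    }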
2008-06-10 | [S390] Fix build failure in __cpu_up() | Segher Boessenkool
The first argument to __ctl_store() should be the array to store stuff in, not just the first element of that array. With the current code in __cpu_up(), mainline GCC dies with an internal compiler error. I didn't diagnose that further, but just fixed the kernel bug. Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
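A hedged illustration of the call in question (surrounding code simplified; __ctl_store() is the s390 inline-asm macro of that era): the macro wants the array itself, not its first element.

    static void read_control_registers_sketch(void)
    {
        unsigned long cregs[16];

        /* broken: passes only the first element of the array
         *   __ctl_store(cregs[0], 0, 15);
         */

        /* fixed: pass the array the 16 control registers are stored into */
        __ctl_store(cregs, 0, 15);
    }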
2008-05-30 | [S390] Fix section mismatch warnings. | Heiko Carstens
This fixes the last remaining section mismatch warnings in s390 architecture code. It also reveals a real bug introduced by... me, with git commit 2069e978d5a6e7b45d58027e3de7f879b8c5e488 ("[S390] sparsemem vmemmap: initialize memmap."). Calling the generic vmemmap_alloc_block() function to get initialized memory is a nice idea; however that function is __meminit annotated and therefore might be gone if we try to call it later. This can happen if a DCSS segment gets added. So basically revert the patch and clear the memmap explicitly to fix the original bug. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-05-15 | [S390] smp: __smp_call_function_map vs cpu_online_map fix. | Heiko Carstens
Both smp_call_function() and __smp_call_function_map() access cpu_online_map. Both functions run with preemption disabled, which protects against cpus going offline. However new cpus can be added, and therefore the cpu_online_map can change unexpectedly. So use the call_lock to protect against changes to the cpu_online_map in start_secondary() and all smp_call_* functions. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-04-30 | [S390] Automatically detect added cpus. | Heiko Carstens
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-04-30 | [S390] smp: Fix locking order. | Heiko Carstens
In some smp sysfs store attribute handlers get_online_cpus() may block on cpu_hotplug.lock while we already hold smp_cpu_state_mutex. Since the locking order on cpu hotplug via arch_update_cpu_topology is the inverse, this might lead to deadlocks. So make sure the locking order is always the same. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
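The rule being enforced, sketched with the two lock names from the message: every path takes the cpu hotplug lock first and the arch mutex second.

    static ssize_t cpu_configure_store_sketch(void)
    {
        get_online_cpus();                      /* always first ... */
        mutex_lock(&smp_cpu_state_mutex);       /* ... then the s390 mutex */

        /* ... change the cpu configuration here ... */

        mutex_unlock(&smp_cpu_state_mutex);
        put_online_cpus();
        return 0;
    }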
2008-04-17 | [S390] Fix a lot of sparse warnings. | Heiko Carstens
The most notable part of this commit is the new local header file entry.h, which contains the declarations of all functions that are only called from asm code or are arch internal. That way we can avoid extern declarations in C files. This is more or less the same as what was done for sparc64. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2008-04-17 | [S390] Convert monitor calls to function calls. | Heiko Carstens
Remove the program check generating monitor calls and use function calls instead. There is no real advantage in using monitor calls, but they do make debugging harder because of all the program checks they generate. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2008-04-17 | [S390] Vertical cpu management. | Heiko Carstens
If vertical cpu polarization is active then the hypervisor will dispatch certain cpus for a longer time than other cpus for maximum performance. For example if a guest has three virtual cpus, each with a share of 33 percent, then with vertical cpu polarization all of the processing time would be combined onto a single cpu which runs all the time, while the other two cpus get nearly no cpu time. There are three different types of vertical cpus: high, medium and low. Low cpus hardly get any real cpu time, while high cpus get a full real cpu. Medium cpus get something in between. In order to switch between the two possible modes (default is horizontal), a 0 for horizontal polarization or a 1 for vertical polarization must be written to the dispatching sysfs attribute /sys/devices/system/cpu/dispatching. The polarization of each single cpu can be read from the per-cpu sysfs attribute /sys/devices/system/cpu/cpuX/polarization, which reports horizontal, vertical:high, vertical:medium, vertical:low or unknown. When switching polarization the polarization attribute may contain the value unknown until the configuration change is done and the kernel has figured out the new polarization of each cpu. Note that running a system with different types of vertical cpus may result in significant performance regressions. If possible only one type of vertical cpus should be used. All other cpus should be offlined. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>