Age | Commit message | Author |
|
The first argument is the LDC channel ID, then the mapping cookie,
then the MTE revoke cookie.
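For reference, the corrected argument order corresponds to a wrapper
declared roughly as follows (a sketch; the wrapper name
sun4v_ldc_revoke and the parameter names are illustrative):

    /* LDC channel ID first, then the mapping cookie, then the
     * MTE revoke cookie.
     */
    unsigned long sun4v_ldc_revoke(unsigned long channel,
                                   unsigned long mapping_cookie,
                                   unsigned long mte_revoke_cookie);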
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If the system supports hypervisor-based MMU statistics, allow them to
be fetched, enabled, and disabled via sysfs.
Enable and disable via the boolean:
/sys/devices/system/cpu/cpuN/mmustat_enable
Statistic values are provided under:
/sys/devices/system/cpu/cpuN/mmu_status/
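For example, a minimal userspace sketch that enables statistics
collection on cpu0, assuming only the sysfs layout described above:

    #include <stdio.h>

    int main(void)
    {
            /* Write "1" to the boolean to enable MMU statistics on
             * cpu0; writing "0" disables them again.
             */
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/mmustat_enable",
                            "w");

            if (!f) {
                    perror("mmustat_enable");
                    return 1;
            }
            fputs("1\n", f);
            return fclose(f) ? 1 : 0;
    }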
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Several interfaces were missing, and others were misnumbered or
improperly documented.
Also, make sure to check the return value when registering
the kernel TSBs with the hypervisor. This helped to find
the 4MB kernel TSB alignment bug fixed in a previous changeset.
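The added check amounts to something like the following sketch
(the function shape and names here are illustrative; the wrapper
sun4v_mmu_tsb_ctx0() and HV_EOK are assumed to come from
asm/hypervisor.h):

    /* Register the kernel TSB descriptors with the hypervisor and
     * report a non-HV_EOK status instead of silently ignoring it.
     */
    static void ktsb_register(unsigned long descr_pa,
                              unsigned long num_descr)
    {
            unsigned long ret = sun4v_mmu_tsb_ctx0(num_descr, descr_pa);

            if (ret != HV_EOK)
                    prom_printf("KERNEL TSB register failed, err[%lu]\n",
                                ret);
    }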
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Hypervisor interfaces need to be negotiated in order to use
some API calls reliably. So add a small set of interfaces
to request API versions and query current settings.
This allows us to fix some bugs in the hypervisor console:
1) If we can negotiate the CORE API group at major 1, minor 1 or
later, we can use con_read and con_write, which improve console
performance quite a bit. A sketch of the negotiation follows the
list below.
2) When we do a console write request, hold the spinlock around
the whole request, not around one byte at a time. Otherwise it
is easy for output from different cpus to get interleaved.
3) Use consistent udelay()-based polling: udelay(1) per loop,
with a limit of 1000 polls, to handle a stuck hypervisor
console.
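Roughly, the console side of the negotiation in (1) looks like
this sketch (a fragment; the flag variable is illustrative, and
the hvapi wrapper name and HV_GRP_CORE are assumptions about the
new interfaces):

    /* Ask for CORE API major 1; if the hypervisor grants minor >= 1,
     * the faster con_read/con_write calls may be used.
     */
    unsigned long minor = 1;

    if (sun4v_hvapi_register(HV_GRP_CORE, 1, &minor) == 0 && minor >= 1)
            use_bulk_console_io = 1;    /* con_read/con_write available */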
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There were several bugs in the SUN4V cpu mondo dispatch code.
In fact, if we ever got an EWOULDBLOCK or other error from the
hypervisor call, we could potentially send a cpu mondo multiple
times to the same cpu and, even worse, loop until the timeout
resending the same mondo over and over to such cpus.
So let's bulletproof this thing as follows:
1) Implement cpu_mondo_send() and cpu_state() hypervisor calls in
arch/sparc64/kernel/entry.S, and add prototypes to asm/hypervisor.h.
2) Don't build and update the cpu list using inline functions; this
was causing the cpu mask not to get updated in the caller.
3) Disable interrupts during the entire mondo send, otherwise our
cpu list and/or mondo block could get overwritten if we take
an interrupt and do a cpu mondo send on the current cpu.
4) Check for all possible error return types from the cpu_mondo_send()
hypervisor call. In particular:
HV_EOK) Our work is done, all cpus have received the mondo.
HV_ECPUERROR) One or more of the cpus in the cpu list we passed
to the hypervisor are in error state. Use cpu_state()
calls over the entries in the cpu list to see which
ones. Record them in "error_mask" and report this
after we are done sending the mondo to cpus which are
not in error state.
HV_EWOULDBLOCK) We need to keep trying.
Any other error is considered fatal; we report the event and exit
immediately.
5) We only time out if forward progress is not made. Forward
progress is defined as having at least one cpu get the mondo
successfully in a given cpu_mondo_send() call. Otherwise we bump
a counter and delay a little. If the counter hits a limit, we
signal an error and report the event. A sketch of this retry
policy follows below.
Also, smp_call_function_mask() error handling reports the number
of cpus incorrectly.
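Putting (4) and (5) together, the dispatch loop follows a policy
roughly like the sketch below. Only cpu_mondo_send(), cpu_state()
and the HV_* codes come from the description above; the helper
prune_cpu_list(), the retry limit, and the overall function shape
are invented for illustration:

    /* Returns 0 once every cpu not in error state has taken the
     * mondo, -ETIMEDOUT when no forward progress is made for too
     * long, -EINVAL on an unexpected hypervisor status.
     */
    static int mondo_send_all(u16 *cpu_list, int cnt,
                              unsigned long cpu_list_pa,
                              unsigned long mondo_block_pa,
                              cpumask_t *error_mask)
    {
            int no_progress = 0;

            while (cnt > 0) {
                    unsigned long status;
                    int new_cnt;

                    status = cpu_mondo_send(cnt, cpu_list_pa,
                                            mondo_block_pa);
                    if (status == HV_EOK)
                            return 0;           /* everyone got it */
                    if (status != HV_EWOULDBLOCK &&
                        status != HV_ECPUERROR)
                            return -EINVAL;     /* unexpected, fatal */

                    /* Drop cpus that took delivery, plus cpus that
                     * cpu_state() reports in error state (recorded in
                     * error_mask), then retry with the remainder.
                     */
                    new_cnt = prune_cpu_list(cpu_list, cnt, error_mask);

                    if (new_cnt == cnt) {
                            /* No forward progress on this pass. */
                            if (++no_progress > 10000)
                                    return -ETIMEDOUT;
                            udelay(1);
                    } else {
                            no_progress = 0;
                    }
                    cnt = new_cnt;
            }
            return 0;
    }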
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Call it from register_one_mondo().
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
And check for errors at call sites.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It is not PCI-specific; it is for all system interrupts.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For constructing hypervisor PCI TSB IDs.
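If the helper is a simple macro, it would pack the TSB number and
the TSB index into a single 64-bit identifier along these lines
(the macro name and the exact field layout here are assumptions,
not taken from this changeset):

    /* Assumed layout: TSB number in the upper 32 bits, TSB index
     * (entry number within that TSB) in the lower 32 bits.
     */
    #define HV_PCI_TSBID(__tsb_num, __tsb_index) \
            ((((u64)(__tsb_num)) << 32UL) | ((u64)(__tsb_index)))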
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|