Age | Commit message | Author |
|
Introduce a mutex for struct ccwgroup to prevent simultaneous
register/unregister on the same ccwgroup device.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
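A minimal user-space sketch of the locking idea, using pthreads instead of the kernel's struct mutex; the grpdev structure and function names are invented for illustration and are not the actual ccwgroup code:

#include <pthread.h>
#include <stdbool.h>

/* Illustrative stand-in for the grouped device; not the real ccwgroup layout.
 * reg_mutex is assumed to be pthread_mutex_init()ed when the device is set up. */
struct grpdev {
    pthread_mutex_t reg_mutex;  /* serializes register/unregister */
    bool registered;
};

static int grpdev_register(struct grpdev *gdev)
{
    int ret = 0;

    pthread_mutex_lock(&gdev->reg_mutex);
    if (gdev->registered)
        ret = -1;                 /* already registered */
    else
        gdev->registered = true;  /* real code would create the device here */
    pthread_mutex_unlock(&gdev->reg_mutex);
    return ret;
}

static void grpdev_unregister(struct grpdev *gdev)
{
    pthread_mutex_lock(&gdev->reg_mutex);
    gdev->registered = false;     /* real code would tear the device down */
    pthread_mutex_unlock(&gdev->reg_mutex);
}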
|
|
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Ensure that channel-path related subchannel data is only retrieved and
used when it is valid and that it is updated when it may have changed.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Use a bitmap for indicating which subchannels require evaluation
instead of allocating memory for each evaluation request. This
approach reduces memory consumption during recovery in case of
massive evaluation request occurrence and removes the need for
memory allocation failure handling.
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
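A compilable user-space sketch of the bitmap approach, assuming an invented, flat MAX_SUBCHANNELS number space; the real subchannel-ID layout and the evaluation loop are more involved, but the point is that marking a request can never fail:

#include <limits.h>

#define MAX_SUBCHANNELS 65536
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

/* One bit per subchannel: no allocation per request, so no failure handling. */
static unsigned long evaluation_map[MAX_SUBCHANNELS / BITS_PER_WORD];

static void mark_for_evaluation(unsigned int schid)
{
    evaluation_map[schid / BITS_PER_WORD] |= 1UL << (schid % BITS_PER_WORD);
}

static int needs_evaluation(unsigned int schid)
{
    return (evaluation_map[schid / BITS_PER_WORD] >> (schid % BITS_PER_WORD)) & 1;
}

static void clear_evaluation(unsigned int schid)
{
    evaluation_map[schid / BITS_PER_WORD] &= ~(1UL << (schid % BITS_PER_WORD));
}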
|
|
Path verification triggered by changes to the available CHPIDs will be
interrupted by another change but not re-started. This results in an
invalid path mask.
To solve this make sure to completely re-start path verification when
changing the available paths.
Signed-off-by: Stefan Bader <shbader@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Convert ccw_uevent to use add_uevent_var and adapt snprint_alias.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Add a new attribute to the channel-path sysfs directory through which
channel-path configure operations can be triggered. Also listen for
hardware events requesting channel-path configure operations and
process them accordingly.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
|
|
Detangle the online_store code and make it more readable.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Check validity flag of CHPID description data before continuing with
channel-path initialization.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
|
|
Channel path status can now be modified by writing '0' and '1'
to the sysfs status attribute in addition to 'offline' and
'online' respectively.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
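A hedged sketch of the accepted inputs as a plain C parser; parse_status is a hypothetical helper, not the actual sysfs store function:

#include <string.h>

/* Map the accepted strings to 0 (offline) / 1 (online); -1 for anything else.
 * The real attribute code differs in detail. */
static int parse_status(const char *buf)
{
    if (!strncmp(buf, "0", 1) || !strncmp(buf, "offline", 7))
        return 0;
    if (!strncmp(buf, "1", 1) || !strncmp(buf, "online", 6))
        return 1;
    return -1;
}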
|
|
Introduce data type for channel-path IDs.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
|
|
Clean interface between cio and ipl code, so Peter stops complaining.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
|
|
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
UBI (Latin: "where?") manages multiple logical volumes on a single
flash device, specifically supporting NAND flash devices. UBI provides
a flexible partitioning concept which still allows for wear-levelling
across the whole flash device.
In a sense, UBI may be compared to the Logical Volume Manager
(LVM). Whereas LVM maps logical sector numbers to physical HDD sector
numbers, UBI maps logical eraseblocks to physical eraseblocks.
More information may be found at
http://www.linux-mtd.infradead.org/doc/ubi.html
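The LVM analogy can be pictured as a per-volume table mapping logical eraseblock numbers (LEBs) to physical ones (PEBs). The toy_volume type below is a deliberately simplified user-space model, not UBI's internal data structure:

#include <stdlib.h>

struct toy_volume {
    int  leb_count;
    int *leb_to_peb;    /* leb_to_peb[leb] == physical eraseblock, or -1 */
};

static struct toy_volume *toy_volume_create(int leb_count)
{
    struct toy_volume *vol = malloc(sizeof(*vol));
    if (!vol)
        return NULL;
    vol->leb_count  = leb_count;
    vol->leb_to_peb = malloc(sizeof(int) * leb_count);
    if (!vol->leb_to_peb) {
        free(vol);
        return NULL;
    }
    for (int i = 0; i < leb_count; i++)
        vol->leb_to_peb[i] = -1;   /* unmapped until first write */
    return vol;
}

/* Remapping a LEB to a fresh PEB is how UBI spreads wear and hides bad blocks. */
static void toy_remap(struct toy_volume *vol, int leb, int new_peb)
{
    vol->leb_to_peb[leb] = new_peb;
}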
Partitioning/Re-partitioning
An UBI volume occupies a certain number of erase blocks. This is
limited by a configured maximum volume size, which could also be
viewed as the partition size. Each individual UBI volume's size can
be changed independently of the other UBI volumes, provided that the
sum of all volume sizes doesn't exceed a certain limit.
UBI supports dynamic volumes and static volumes. Static volumes are
read-only and their contents are protected by CRC check sums.
Bad eraseblocks handling
UBI transparently handles bad eraseblocks. When a physical
eraseblock becomes bad, it is substituted by a good physical
eraseblock, and the user does not even notice this.
Scrubbing
On NAND flash, bit flips can occur on any write operation, and
sometimes also on reads. If bit flips persist on the device, at first
they can still be corrected by ECC, but once they accumulate,
correction will become impossible. Thus it is best to actively scrub
the affected eraseblock, by first copying it to a free eraseblock
and then erasing the original. The UBI layer performs this type of
scrubbing under the covers, transparently to the UBI volume users.
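A toy user-space model of the scrubbing step described above (copy the suspect block to a free one, switch the mapping, erase the original); all names and the fixed geometry are invented:

#include <string.h>

#define EB_SIZE 4096   /* illustrative eraseblock size */

struct toy_flash {
    unsigned char blocks[16][EB_SIZE];
    int map[16];                        /* map[leb] = peb */
};

static void toy_scrub(struct toy_flash *f, int leb, int free_peb)
{
    int old_peb = f->map[leb];

    memcpy(f->blocks[free_peb], f->blocks[old_peb], EB_SIZE); /* copy data out */
    f->map[leb] = free_peb;                                   /* switch mapping */
    memset(f->blocks[old_peb], 0xff, EB_SIZE);                /* "erase" original */
}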
Erase Counts
UBI maintains an erase count header per eraseblock. This frees
higher-level layers (like file systems) from doing this and allows
for centralized erase count management instead. The erase counts are
used by the wear-levelling algorithm in the UBI layer. The algorithm
itself is exchangeable.
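A sketch of one possible wear-levelling policy that per-eraseblock erase counts enable: always hand out the free physical eraseblock with the lowest count. UBI's real, pluggable algorithm is more sophisticated:

static int pick_least_worn(const unsigned long *erase_count,
                           const int *is_free, int peb_count)
{
    int best = -1;

    for (int i = 0; i < peb_count; i++) {
        if (!is_free[i])
            continue;
        if (best < 0 || erase_count[i] < erase_count[best])
            best = i;
    }
    return best;   /* -1 if no free eraseblock */
}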
Booting from NAND
For booting directly from NAND flash the hardware must at least be
capable of fetching and executing a small portion of the NAND
flash. Some NAND flash controllers have this kind of support. They
usually limit the window to a few kilobytes in erase block 0. This
"initial program loader" (IPL) must then contain sufficient logic to
load and execute the next boot phase.
Due to bad eraseblocks, which may be randomly scattered over the
flash device, it is problematic to store the "secondary program
loader" (SPL) statically. Also, due to bit-flips it may become
corrupted over time. UBI solves this problem gracefully by
storing the SPL in a small static UBI volume.
UBI volumes vs. static partitions
UBI volumes are still very similar to static MTD partitions:
* both consist of eraseblocks (logical eraseblocks in the case of UBI
volumes, and physical eraseblocks in the case of static partitions);
* both support three basic operations - read, write, erase.
But UBI volumes have the following advantages over traditional
static MTD partitions:
* there are no eraseblock wear-levelling constraints in the case of UBI
volumes, so the user does not have to care about this;
* there are no bit-flips and bad eraseblocks in the case of UBI volumes.
So, UBI volumes may be considered flash devices with relaxed
restrictions.
Where can it be found?
Documentation, kernel code and applications can be found in the MTD
git repositories.
What are the applications for?
The applications help to create binary flash images for two purposes: pfi
files (partial flash images) for in-system update of UBI volumes, and plain
binary images, with or without OOB data in case of NAND, for a manufacturing
step. Furthermore, some tools are and will be created that allow flash content
analysis after a system has crashed.
Who did UBI?
The original ideas on which UBI is based were developed by Andreas
Arnez, Frank Haverkamp and Thomas Gleixner. Josh W. Boyer and some others
were involved too. The implementation of the kernel layer was done by Artem
B. Bityutskiy. The user-space applications and tools were written by Oliver
Lohmann with contributions from Frank Haverkamp, Andreas Arnez, and Artem.
Joern Engel contributed a patch which modifies JFFS2 so that it can be run on
a UBI volume. Thomas Gleixner did modifications to the NAND layer. Alexander
Schmidt made some testing work as well as core functionality improvements.
Signed-off-by: Artem B. Bityutskiy <dedekind@linutronix.de>
Signed-off-by: Frank Haverkamp <haver@vnet.ibm.com>
|
|
Major features:
1) Tagged queuing support.
2) Will properly negotiate for synchronous transfers even on
devices that reject the wide negotiation message, such as
CDROMs
3) Significantly lower kernel stack usage in interrupt
handler path by elimination of function vector arrays,
replaced by a top-level switch statement state machine.
4) Uses generic scsi infrastructure as much as possible to
avoid code duplication.
5) Automatic request of sense data in response to CHECK_CONDITION
6) Portable to other platforms using ESP such as DEC and Sun3
systems.
Signed-off-by: David S. Miller <davem@davemloft.net>
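The switch-statement state machine mentioned in point 3 can be pictured as below; the states are made up and the real driver's phases differ, but the dispatch shape (one handler, one switch, no function-pointer tables on the interrupt stack) is the point:

enum esp_like_state {
    ST_IDLE,
    ST_SELECT,
    ST_DATA,
    ST_STATUS,
    ST_DONE,
};

static enum esp_like_state step(enum esp_like_state cur)
{
    switch (cur) {
    case ST_IDLE:
        return ST_SELECT;   /* start a new command */
    case ST_SELECT:
        return ST_DATA;     /* target selected, move data */
    case ST_DATA:
        return ST_STATUS;   /* data done, fetch status */
    case ST_STATUS:
        return ST_DONE;     /* command complete */
    case ST_DONE:
    default:
        return ST_IDLE;
    }
}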
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Also __sparc__ --> CONFIG_SPARC
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
And use CONFIG_SPARC instead of CONFIG_SPARC64 as the
test.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Just like powerpc does.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The only unfortunate bit here is that the name field of struct map_info
is not const, so for now we put a cast on the assignment of it.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
(akpm: remove CVS control string too)
Signed-off-by: Matthias Kaehlcke <matthias.kaehlcke@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
SPIN_LOCK_UNLOCKED cleanup, use __SPIN_LOCK_UNLOCKED instead
Signed-off-by: Milind Arun Choudhary <milindchoudhary@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
ROUND_UP macro cleanup, use DIV_ROUND_UP
Signed-off-by: Milind Arun Choudhary <milindchoudhary@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
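The kernel helper being substituted is essentially the following macro; a small user-space demonstration of how an open-coded round-up-and-divide collapses into it:

#include <stdio.h>

/* Round n up to the next multiple of d, then divide. Open-coded ROUND_UP
 * macros scattered around drivers did the same thing with varying spellings. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    printf("%d\n", DIV_ROUND_UP(10, 4));  /* 3 */
    printf("%d\n", DIV_ROUND_UP(12, 4));  /* 3 */
    return 0;
}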
|
|
Fixed tun/tap driver's handling of hw addresses. The hw address is stored
in both the net_device.dev_addr and tun.dev_addr fields. These fields were
not kept synchronized, and in fact weren't even initialized to the same
value. Now during both init and when performing SIOCSIFHWADDR on the tun
device these values are both updated. However, if SIOCSIFHWADDR is
performed on the net device directly (for instance, setting the hw address
using ifconfig), the tun device does not get updated. Perhaps the
tun.dev_addr field should be removed completely at some point, as it is
redundant and net_device.dev_addr can be used in its place.
Signed-off-by: Brian Braunstein <linuxkernel@bristyle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
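A toy model of keeping the two copies coherent: any code path that changes the hardware address updates both the driver-private field and the generic one. Structure and function names are illustrative only:

#include <string.h>

#define ETH_ALEN 6

struct toy_netdev { unsigned char dev_addr[ETH_ALEN]; };
struct toy_tun    { struct toy_netdev *dev; unsigned char dev_addr[ETH_ALEN]; };

static void toy_set_hwaddr(struct toy_tun *tun, const unsigned char *addr)
{
    memcpy(tun->dev_addr, addr, ETH_ALEN);       /* private copy */
    memcpy(tun->dev->dev_addr, addr, ETH_ALEN);  /* generic net_device copy */
}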
|
|
drivers/net/hamradio/yam.c: In function `yam_tx_byte':
drivers/net/hamradio/yam.c:643: warning: passing arg 1 of `skb_copy_from_linear_data_offset' from incompatible pointer type
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Current e1000_xmit_frame spews raw interrupt disabled nag messages when
used with RT kernel patches. This patch uses spin_trylock_irqsave,
which allows RT patches to properly manage the irq semantics.
Signed-off-by: Mark Huth <mhuth@mvista.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
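A user-space sketch of the trylock pattern using pthreads; the return values only mirror the idea of "busy, requeue" versus "sent" and are not the driver's constants:

#include <pthread.h>

static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;

static int toy_xmit(void)
{
    if (pthread_mutex_trylock(&tx_lock) != 0)
        return 1;   /* busy: caller requeues the packet instead of spinning */

    /* ... queue the frame to hardware here ... */

    pthread_mutex_unlock(&tx_lock);
    return 0;       /* transmitted */
}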
|
|
Upon code inspection it was spotted that the firmware handover bit get/set
were mismatched, which may have resulted in management issues on PCI-E
adapters. Setting them correctly may fix some management issues such
as ARP routing.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
DEBUG_SHIRQ code exposed that e1000 was not ready for incoming interrupts
after having called pci_request_irq. This obviously requires us to finish
our software setup which assigns the irq handler before we request the
irq.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
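A sketch of the ordering rule: once a handler is visible it may be invoked immediately (which is what DEBUG_SHIRQ's test interrupt simulates), so all state it touches must be initialized first. The structure below is invented for illustration:

struct toy_adapter {
    int ready;
    void (*handler)(struct toy_adapter *);
};

static void toy_isr(struct toy_adapter *ad)
{
    if (!ad->ready)
        return;   /* hitting this would be the bug in the real driver */
    /* ... acknowledge and process the interrupt ... */
}

static void toy_probe(struct toy_adapter *ad)
{
    ad->ready = 1;           /* finish software setup first ... */
    ad->handler = toy_isr;   /* ... and only then expose the handler */
    ad->handler(ad);         /* a DEBUG_SHIRQ-style test invocation is now safe */
}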
|
|
Correct minor typo in drivers/net/wireless/Kconfig identified by
Stefano Brivio <stefano.brivio@polimi.it>.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch refactors the wireless Kconfig all over and already
introduces net/wireless/Kconfig with just the WEXT bit for now;
the cfg80211 patch will add to that as well.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Samuel Ortiz <samuel@sortiz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
pppoe_flush_dev() kicks all sockets bound to a device that is going down.
In doing so, locks must be taken in the right order consistently (sock lock,
followed by the pppoe_hash_lock). However, the scan process is based on
us holding the sock lock. So, when something is found in the scan we must
release the lock we're holding and grab the sock lock.
This patch fixes race conditions between this code and pppoe_release(),
both of which perform similar functions but would naturally prefer to grab
locks in opposing orders. Both code paths are now going after these locks
in a consistent manner.
pppoe_hash_lock protects the contents of the "pppox_sock" objects that reside
inside the hash. Thus, NULL'ing out the pppoe_dev field should be done
under the protection of this lock.
Signed-off-by: Michal Ostrowski <mostrows@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
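A pthreads sketch of the agreed ordering (per-socket lock first, then the hash lock) and of clearing the hash-protected pppoe_dev field only while the hash lock is held; names are illustrative, not the pppoe code:

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;

struct toy_sock {
    pthread_mutex_t lock;
    void *pppoe_dev;
};

static void toy_detach(struct toy_sock *sk)
{
    pthread_mutex_lock(&sk->lock);     /* sock lock first ... */
    pthread_mutex_lock(&hash_lock);    /* ... then the hash lock */
    sk->pppoe_dev = NULL;              /* field protected by hash_lock */
    pthread_mutex_unlock(&hash_lock);
    pthread_mutex_unlock(&sk->lock);
}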
|
|
Below you find a patch that fixes a memory leak when a PPPoE socket is
release()d after it has been connect()ed, but before the PPPIOCGCHAN ioctl
has ever been called on it.
This is somewhat of a security problem, too, since PPPoE sockets can be
created by any user, so any user can easily allocate all the machine's
RAM to non-swappable address space and thus DoS the system.
Is there any specific reason for PPPoE sockets being available to any
unprivileged process, BTW? After all, you need a packet socket for the
discovery stage anyway, so it's unlikely that any unprivileged process
will ever need to create a PPPoE socket, no? Allocating all session IDs
for a known AC is a kind of DoS, too, after all - with Juniper ERXes,
this is really easy, actually, since they don't ever assign session ids
above 8000 ...
Signed-off-by: Florian Zumbiehl <florz@florz.de>
Acked-by: Michal Ostrowski <mostrows@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Below you find a patch that (hopefully) fixes a race between an interface
going down and a connect() to a peer on that interface. Before,
connect() would determine that an interface is up, then the interface
could go down and all entries referring to that interface in the
item_hash_table would be marked as ZOMBIEs and their references to
the device would be freed, and after that, connect() would put a new
entry into the hash table referring to the device that meanwhile is
down already - which also would cause unregister_netdevice() to wait
until the socket has been release()d.
This patch does not suffice if we are not allowed to accept connect()s
referring to a device that we already acked a NETDEV_GOING_DOWN for
(that is: all references are only guaranteed to be freed after
NETDEV_DOWN has been acknowledged, not necessarily after the
NETDEV_GOING_DOWN already). And if we are allowed to, we could avoid
looking through the hash table upon NETDEV_GOING_DOWN completely and
only do that once we get the NETDEV_DOWN ...
mostrows:
pppoe_flush_dev is called on NETDEV_GOING_DOWN and NETDEV_DOWN to deal with
this "late connect" issue. Ideally one would hope to notify users at the
"NETDEV_GOING_DOWN" phase (just to pretend to be nice). However, it is the
NETDEV_DOWN scan that takes all the responsibility for ensuring nobody is
hanging around at that time.
Signed-off-by: Florian Zumbiehl <florz@florz.de>
Acked-by: Michal Ostrowski <mostrows@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Below is a patch that just removes dead code/initializers without any
effect (the first access is an assignment) that I stumbled across while
reading the source.
Signed-off-by: Florian Zumbiehl <florz@florz.de>
Acked-by: Michal Ostrowski <mostrows@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Switch cb_lock to mutex and allow netlink kernel users to override it
with a subsystem specific mutex for consistent locking in dump callbacks.
All netlink_dump_start users have been audited not to rely on any
side-effects of the previously used spinlock.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rusty added a new 'stats' field to struct net_device.
loopback driver can use it instead of declaring another struct
net_device_stats. This saves some memory.
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When csum_offset was introduced we did a conversion from csum to
csum_offset where applicable. A couple of drivers were missed in
this process.
It was harmless to begin with since the two fields coincided. Now
that we've made them different with the addition of csum_start, the
missed drivers must be converted, or they can't send out any packets
that require checksum offload.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
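A simplified model of the two fields: the checksum result must land at csum_start + csum_offset, so a driver still writing to the old csum location stores it in the wrong place once the fields diverge. The toy_skb layout below is illustrative, not the real sk_buff:

#include <stdint.h>

struct toy_skb {
    uint8_t  data[256];
    uint16_t csum_start;    /* where checksum coverage begins, from data[] */
    uint16_t csum_offset;   /* where to store the result, from csum_start */
};

static void store_checksum(struct toy_skb *skb, uint16_t csum)
{
    uint8_t *p = skb->data + skb->csum_start + skb->csum_offset;

    p[0] = csum >> 8;       /* big-endian store, as on the wire */
    p[1] = csum & 0xff;
}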
|
|
To clearly state the intent of copying to linear sk_buffs, _offset being an
overly long variant but interesting for the sake of saving some bytes.
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
|
|
We have to put back the cast to "char *" because these
pointers are volatile.
Reported by Andrew Morton.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Network drivers which keep stats allocate their own stats structure
then write a get_stats() function to return them. It would be nice if
this were done by default.
1) Add a new "stats" field to "struct net_device".
2) Add a new feature field to say "this driver uses the internal one"
3) Have a default "get_stats" which returns NULL if that feature not set.
4) Change callers to check result of get_stats call for NULL, not if
->get_stats is set.
This should not break backwards compatibility with older drivers, yet
allow modern drivers to shed some boilerplate code.
Lightly tested: works for a modified lguest network driver.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
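A rough user-space model of the scheme in the list above; names only loosely mirror the description and the real net_device plumbing differs:

#include <stddef.h>

struct toy_stats  { unsigned long rx_packets, tx_packets; };

struct toy_device {
    unsigned int feature_internal_stats;   /* item 2: "uses the internal one" */
    struct toy_stats stats;                /* item 1: built-in stats field */
    struct toy_stats *(*get_stats)(struct toy_device *);
};

static struct toy_stats *toy_get_stats(struct toy_device *dev)
{
    if (dev->get_stats) {
        struct toy_stats *s = dev->get_stats(dev);
        if (s)
            return s;                      /* driver-provided stats */
    }
    if (dev->feature_internal_stats)
        return &dev->stats;                /* default path for modern drivers */
    return NULL;                           /* item 4: callers must check for NULL */
}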
|
|
In order to get more randomness for secure_tcpv6_sequence_number(),
secure_tcp_sequence_number(), secure_dccp_sequence_number() functions,
we can use the high resolution time services, providing nanosec
resolution.
I've also done two kmalloc()/kzalloc() conversions.
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
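A user-space illustration of folding nanosecond-resolution time into a sequence-number seed; the mixing below is a placeholder, not the kernel's secure hash:

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

/* Two connections set up within the same jiffy still get different values
 * because the nanosecond clock differs. */
static uint32_t seq_with_hrtime(uint32_t hashed_tuple)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return hashed_tuple + (uint32_t)ts.tv_nsec + (uint32_t)ts.tv_sec * 1000000000u;
}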
|