Instead of all drivers reading pci config space to get the revision
ID, they can now use the pci_device->revision member.
This exposes some issues where drivers were reading a word or a dword
for the revision number and adding useless error handling around the
read. Some drivers even just read it for no purpose at all.
In devices where the revision ID is being copied over and used in what
appears to be the equivalent of a hot path, I have left the copy code
and the cached copy so as not to influence the driver's performance.
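For illustration, a rough before/after sketch with made-up variable names
(not taken from any particular driver):

    u8 rev;

    /* before: every driver reads the revision from config space itself */
    pci_read_config_byte(pdev, PCI_REVISION_ID, &rev);

    /* after: use the value the PCI core has already cached */
    rev = pdev->revision;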
Compile tested with make all{yes,mod}config on x86_64 and i386.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
To ensure the symmetry of poll enable/disable in up/down, we should
initialize the netdevice to be poll_disabled at load time. Doing
this after register_netdevice leaves us open to another race, so
let's move all the netif_* calls above register_netdevice so the
stack starts out how we expect it to be.
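Roughly, the intended probe-time ordering looks like this (a sketch with
error handling trimmed; these are the usual netdev helpers, not necessarily
the exact driver code):

    /* put the stack-visible state into its expected initial shape ... */
    netif_carrier_off(netdev);
    netif_stop_queue(netdev);
    netif_poll_disable(netdev);

    /* ... and only then make the device visible to the stack */
    err = register_netdev(netdev);
    if (err)
            goto err_register;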
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
This restores the previously removed netif_poll_enable call in e1000_open.
It's needed on all but the first call to e1000_open for a NIC as
e1000_close always calls netif_poll_disable.
netif_poll_enable can only be called safely if no polls have been
scheduled. This should be the case as long as we don't enter our IRQ
handler.
In order to guarantee this we explicitly disable IRQs as early as possible
when we're probing the NIC.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Kok, Auke" <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
e1000_clean(), kernel 2.6.21.1)
Herbert Xu wrote:
"netif_poll_enable can only be called if you've previously called
netif_poll_disable. Otherwise a poll might already be in action
and you may get a crash like this."
Removing the call to netif_poll_enable in e1000_open should fix this issue;
the only other call to netif_poll_enable is in e1000_up(), which is only
reached after a device reset or resume.
Bugzilla: http://bugzilla.kernel.org/show_bug.cgi?id=8455
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240339
Tested by Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
pci_enable_msi failure is a normal event so we should not print any error.
Going over the code I spotted a leak caused by a missing pci_disable_msi()
when irq allocation fails. The whole code also needed a cleanup, so I combined
the two different calls to pci_request_irq into a single call, making this
look a lot better. All #ifdef CONFIG_PCI_MSI's have been removed.
Compile tested with both CONFIG_PCI_MSI enabled and disabled.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
flush_work(wq, work) doesn't need the first parameter; we can use cwq->wq
(this was possible from the very beginning, I missed this). So we can unify
flush_work_keventd and flush_work.
Also, rename flush_work() to cancel_work_sync() and fix all callers.
Perhaps this is not the best name, but "flush_work" is really bad.
(akpm: this is why the earlier patches bypassed maintainers)
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>,
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Switch e1000 over to flush_work_keventd(). This probably fixes a netdev-close
versus linkwatch rtnl_lock() deadlock which nobody knew about.
(akpm: bypassed maintainers, sorry. There are other patches which depend on
this)
Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jeff@garzik.org>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
E1000_ROUNDUP macro cleanup, use ALIGN
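For illustration only (the alignment value shown is the usual ring-size
alignment and may differ per call site), the conversion has this flavor for
power-of-two units:

    /* before: driver-private, assignment-style rounding macro */
    E1000_ROUNDUP(txdr->size, 4096);

    /* after: the generic helper from <linux/kernel.h> */
    txdr->size = ALIGN(txdr->size, 4096);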
Signed-off-by: Milind Arun Choudhary <milindchoudhary@gmail.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Replace kmalloc+memset with kzalloc throughout the driver. Slightly modified by Auke Kok.
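The pattern being replaced, as a generic sketch (names made up):

    /* before: allocate, then zero by hand */
    buf = kmalloc(size, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;
    memset(buf, 0, size);

    /* after: allocate pre-zeroed memory in one call */
    buf = kzalloc(size, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;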
Signed-off-by: Yan Burman <burman.yan@gmail.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Use the round_jiffies() function in e1000.
These timers all were of the "about once a second" or "about once every X
seconds" variety and several showed up in the "what wakes the cpu up" profiles
that the tickless patches provide. Some timers are highly dynamic based on
network load; but even on low-activity systems they still show up, so the
rounding is done only in cases of low activity, allowing higher-frequency
timers in the high-activity case.
The various hardware watchdogs are an obvious case; they run every 2 seconds
but aren't otherwise particular about exactly when they need to run.
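For a once-every-two-seconds timer such as the watchdog, the change is
roughly this (a sketch, not necessarily every call site):

    /* before: expires at an arbitrary point within the interval */
    mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ);

    /* after: rounded so many such timers expire in the same tick */
    mod_timer(&adapter->watchdog_timer, round_jiffies(jiffies + 2 * HZ));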
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Merge branch 'e1000-fixes' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6
* 'e1000-fixes' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6:
e1000: FIX: Stop raw interrupts disabled nag from RT
e1000: FIX: firmware handover bits
e1000: FIX: be ready for incoming irq at pci_request_irq
|
|
Currently, e1000_xmit_frame spews raw-interrupts-disabled nag messages when
used with RT kernel patches. This patch uses spin_trylock_irqsave,
which allows RT patches to properly manage the irq semantics.
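Roughly, the kind of change involved (a sketch; the exact transmit-path code
may differ):

    /* before: open-coded irq save + trylock upsets the RT lock machinery */
    local_irq_save(flags);
    if (!spin_trylock(&tx_ring->tx_lock)) {
            local_irq_restore(flags);
            return NETDEV_TX_LOCKED;
    }

    /* after: let the locking primitive manage the irq state itself */
    if (!spin_trylock_irqsave(&tx_ring->tx_lock, flags))
            return NETDEV_TX_LOCKED;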
Signed-off-by: Mark Huth <mhuth@mvista.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Upon code inspection it was spotted that the firmware handover bits were
mismatched between get and set, which may have resulted in management
issues on PCI-E adapters. Setting them correctly may fix some management
issues such as ARP routing.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
DEBUG_SHIRQ code exposed that e1000 was not ready for incoming interrupts
after having called pci_request_irq. This obviously requires us to finish
our software setup which assigns the irq handler before we request the
irq.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
When csum_offset was introduced we did a conversion from csum to
csum_offset where applicable. A couple of drivers were missed in
this process.
It was harmless to begin with since the two fields coincided. Now
that we've made them different with the addition of csum_start, the
missed drivers must be converted or they cannot send out any packets
that require checksum offload.
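The missed spots are of this form (an illustrative sketch, not any one
driver's exact code), where the hardware is told at which offset to store
the computed checksum:

    /* before: csum doubled as the offset from the transport header */
    cso = (skb->h.raw - skb->data) + skb->csum;

    /* after: csum_offset is now the explicit offset field */
    cso = (skb->h.raw - skb->data) + skb->csum_offset;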
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To clearly state the intent of copying to linear sk_buffs, _offset being an
overly long variant but interesting for the sake of saving some bytes.
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
|
|
So that it is also an offset from skb->head, reduces its size from 8 to 4 bytes
on 64-bit architectures, allowing us to combine the 4-byte hole left by the
layer-headers conversion, reducing struct sk_buff size to 256 bytes, i.e. four
64-byte cachelines, and since the sk_buff slab cache is SLAB_HWCACHE_ALIGN...
:-)
Many calculations that previously required that skb->{transport,network,
mac}_header be first converted to a pointer now can be done directly, being
meaningful as offsets or pointers.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to
avoid the longer, open coded equivalent.
Ditched a no-op in bnx2 in the process.
I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()...
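Assuming the helpers in question are tcp_hdrlen() and tcp_optlen() (the
latter is named above), typical driver code changes roughly like this:

    /* open-coded, as drivers used to do it */
    hdr_len = skb->h.th->doff << 2;
    opt_len = (skb->h.th->doff - 5) << 2;

    /* with the helpers */
    hdr_len = tcp_hdrlen(skb);
    opt_len = tcp_optlen(skb);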
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For the quite common 'skb->h.raw - skb->data' sequence.
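Presumably this is the skb_transport_offset() helper; the replacement then
looks like this (a sketch):

    /* before */
    offset = skb->h.raw - skb->data;

    /* after */
    offset = skb_transport_offset(skb);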
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now the skb->nh union has just one member, .raw, i.e. it is just like the
skb->mac union, strange, no? I'm just leaving it like that till the transport
layer is done, when we'll rename skb->mac.raw to skb->mac_header (or
->mac_header_offset?), ditto for ->{h,nh}.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For the quite common 'skb->nh.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit 60cba200f11b6f90f35634c5cd608773ae3721b7. It's been
linked to lockups of the e1000 hardware, see for example
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=229603
but it's likely that the commit itself is not really introducing the
bug, but just allowing an unrelated problem to rear its ugly head (i.e.
one current working theory is that the code exposes us to a hardware
race condition by decreasing the amount of time we spend in each NAPI
poll cycle).
We'll revert it until the root cause is known. Intel has a repeatable
reproduction on two different machines and bus traces of the hardware
doing something bad.
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@suse.de>
Cc: Dave Jones <davej@redhat.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This patch splits the vlan_group struct into a multi-allocated struct. On
x86_64, the size of the original struct is a little more than 32KB, causing
a 4-order allocation, which is prune to problems caused by buddy-system
external fragmentation conditions.
I couldn't just use vmalloc() because vfree() cannot be called in the
softirq context of the RCU callback.
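A sketch of the general idea, with made-up names and chunk size (the real
layout may differ): the single 4096-entry pointer array (about 32KB on
64-bit, an order-4 allocation) becomes a small top-level array of pointers
to separately allocated chunks:

    #define N_VLANS   4096
    #define CHUNK     512                        /* illustrative chunk size */
    #define N_CHUNKS  (N_VLANS / CHUNK)

    struct vlan_group_sketch {
            struct net_device **devs[N_CHUNKS];  /* each chunk kmalloc'ed on demand */
    };

    static struct net_device *vlan_dev_get(struct vlan_group_sketch *vg,
                                           unsigned int vid)
    {
            struct net_device **chunk = vg->devs[vid / CHUNK];

            return chunk ? chunk[vid % CHUNK] : NULL;
    }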
Signed-off-by: Dan Aloni <da-x@monatomic.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit d2ed16356ff4fb9de23fbc5e5d582ce580390106.
As Thomas Gleixner reports:
"e1000 is not working anymore. ifup fails permanentely.
ADDRCONF(NETDEV_UP): eth0: link is not ready
nothing else"
The broken commit was identified with "git bisect".
Auke Kok says:
"I think we need to drop this now. The report that says that this
*fixes* something might have been on regular interrupts only. I
currently suspect that it breaks all MSI interrupts, which would make
sense if I look at the code. Very bad indeed."
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Now that 2.6.19 provides a proper implementation that saves MSI and PCI-E
config space, we can have the e1000 driver use it instead of its
custom implementation.
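The resulting suspend/resume shape, heavily abbreviated (a sketch, not the
full driver paths):

    static int e1000_suspend(struct pci_dev *pdev, pm_message_t state)
    {
            /* the core now saves MSI and PCI Express state as well */
            pci_save_state(pdev);
            /* ... device-specific shutdown ... */
            return 0;
    }

    static int e1000_resume(struct pci_dev *pdev)
    {
            pci_restore_state(pdev);
            /* ... device-specific bring-up ... */
            return 0;
    }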
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
* master.kernel.org:/pub/scm/linux/kernel/git/gregkh/pci-2.6: (41 commits)
Revert "PCI: remove duplicate device id from ata_piix"
msi: Make MSI useable more architectures
msi: Kill the msi_desc array.
msi: Remove attach_msi_entry.
msi: Fix msi_remove_pci_irq_vectors.
msi: Remove msi_lock.
msi: Kill msi_lookup_irq
MSI: Combine pci_(save|restore)_msi/msix_state
MSI: Remove pci_scan_msi_device()
MSI: Replace pci_msi_quirk with calls to pci_no_msi()
PCI: remove duplicate device id from ipr
PCI: remove duplicate device id from ata_piix
PCI: power management: remove noise on non-manageable hw
PCI: cleanup MSI code
PCI: make isa_bridge Alpha-only
PCI: remove quirk_sis_96x_compatible()
PCI: Speed up the Intel SMBus unhiding quirk
PCI Quirk: 1k I/O space IOBL_ADR fix on P64H2
shpchp: delete trailing whitespace
shpchp: remove DBG_XXX_ROUTINE
...
|
|
Use the newly minted routine to access the PCI channel state.
Signed-off-by: Linas Vepstas <linas@linas.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Remove the NETIF_F_TSO #ifdef-ery in drivers/net; this was
for old-old-2.4 compat (even current 2.4 has NETIF_F_TSO),
but it's time to get rid of it now.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
|
|
The driver was still miscalculating the number of bytes sent during
transmit; now the driver computes what appears to be an exactly
correct byte count (not including CRC) when figuring out how many
bytes and frames were sent for the current transmit packet.
|
|
Since the driver sets the IP checksum insertion bit (IXSM in Status
field) in transmit context descriptors, it should clear the IP checksum
bits of any garbage so as not to confuse the hardware.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
|
|
Print RX/TX flow control setting at link up time to display the
actual link FC properties instead of the advertised values.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
|
|
This fix attempts to solve a customer-reported (IBM) issue with NAPI-enabled
e1000 having bad performance when transmitting simultaneously
on four ports. The issue comes down to an interaction between NAPI,
hardware interrupt balancing, and the driver rescheduling poll on
the same processor. Try to fix this by allowing the driver to re-enable
interrupts sooner, instead of polling one more time, when all the work
was recently completed in cleanup.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
|
|
Unfortunately the read-free MSI interrupt handler needs to flush its write
to the ICR register, and thus we can't be read-free. Our MSI irq routine
thus becomes a lot simpler since we don't need to track link state
anymore.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
|
|
This reverts commit 72f3ab7462f4e153d1e8ac78e379716ad71d6923, which was
superseded by commit 683a2aa339f607c8a422835161ceab68b2a5a18a
("e1000: Do not truncate TSO TCP header with 82544 workaround"), which
fixed the real problem.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
The e1000 driver has a workaround for 82544 on PCI-X where if the
terminating byte of a buffer is at addresses 0-3 mod 8, then 4 bytes
are shaved off it and deferred to a new segment. This is due to an
erratum that could otherwise cause TX hangs.
Unfortunately this breaks TSO because it may cause the TCP header to
be split over two segments which itself causes TX hangs. The solution
is to pull 4 bytes of data up from the next segment rather than pushing
4 bytes off. This ensures the TCP header remains in one piece and
works around the PCI-X hang.
This patch is based on one from Jesse Brandeburg.
This bug has been triggered by both CONFIG_DEBUG_SLAB and Xen.
Note that the only reason we don't see this normally is because the
TCP stack starts writing from the end, i.e., it writes the TCP header
first then slaps on the IP header, etc. So the end of the TCP header
(skb->tail - 1 here) is always aligned correctly.
Had we made the start of the IP header (e.g., IPv6) 8-byte aligned
instead, this would happen for normal TCP traffic as well.
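The triggering condition, as a minimal illustrative helper (not the driver's
actual code):

    /* does this buffer's last byte land at addresses 0-3 mod 8? */
    static bool e1000_82544_ends_in_hazard(dma_addr_t addr, unsigned int len)
    {
            return ((addr + len - 1) & 7) < 4;
    }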
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Currently after an interface up, the link state is detected 2 seconds later
when the first watchdog timer runs. This patch changes that by triggering
the hardware to generate a link-change interrupt from the up() function
instead. This has the result that the link state gets detected immediately
and without races. This has the potential to speed up booting since a normal
distribution boot process waits for a link before DHCP is attempted.
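The mechanism is roughly this (a sketch; the register-access macro spelling
varies across driver versions): have up() seed a Link Status Change
interrupt so the normal interrupt path performs the first link check
immediately:

    /* fire an LSC interrupt now instead of waiting for the first
     * 2-second watchdog run to notice the link */
    E1000_WRITE_REG(&adapter->hw, ICS, E1000_ICS_LSC);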
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Add 3 extra packet redirect counters for tracking purposes to make sure
we can test that all packets arrive properly.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>,
rewritten to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Allow the user to vary the size at which copybreak operates. Currently
copybreak is enabled for packets < 256 bytes, but various tests indicate that this should be
configurable for specific use cases. In addition, this parameter allows us
to force never/always during testing to get full and predictable coverage of
both code paths.
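One plausible shape for such a knob (parameter name, default and permissions
are illustrative, not necessarily what the patch uses):

    static unsigned int copybreak __read_mostly = 256;
    module_param(copybreak, uint, 0644);
    MODULE_PARM_DESC(copybreak,
            "Maximum size of packet that is copied to a new buffer on receive");

    /* in the receive cleanup path */
    if (length < copybreak) {
            /* copy the frame into a freshly allocated small skb and
             * recycle the original receive buffer */
    }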
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Assign the PBA to be large enough to contain at least 2 jumbo frames on
all adapters. This dramatically increases performance on several adapters
and fixes TX performance degradation issues where the PBA was misallocated
in the old algorithm.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
The driver has (ancient) code for messing with TIPG from the 82542 days.
Unfortunately this code was running on our current adapters and setting
TIPG for fiber to be +1 over the copper value. This caused 1.45Mpps
to be sent instead of 1.487Mpps.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
This bugfix makes sure that the driver data reflects the full new situation
before the adapter is reinitialized.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
In rare occasions, ESB2 systems would end up started without the RX
unit being turned on. Add a check that runs post-init to work around
this issue.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>,
rewritten to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
CONFIG_DEBUG_SLAB changes alignments of the data structures the slab
allocators return. These break certain workarounds for TSO on the 82544.
Since DEBUG_SLAB is relatively rare and not used for performance-sensitive
cases, the simplest fix is to disable TSO in this special situation.
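One way "disable TSO in this special situation" could look at probe time
(a hedged sketch; the actual patch may gate this differently):

    #ifdef CONFIG_DEBUG_SLAB
            /* slab debugging changes object alignment and defeats the
             * 82544 TSO workaround, so don't advertise TSO there */
            if (adapter->hw.mac_type == e1000_82544)
                    netdev->features &= ~NETIF_F_TSO;
    #endif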
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
If the user has forced gigabit speed, phy power management must be disabled;
otherwise the NIC would try to negotiate to a link speed of 10/100 Mbit on
shutdown, which would lead to a total loss of link. This loss of link breaks
Wake-on-LAN and IPMI.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
Several bugs existed in how we handle manageability issues all
over the driver. This patch consolidates all the manageability
release and init code into two functions and calls them from the
appropriate locations. This fixes several BMC packet redirect issues
and power-up/down hiccups.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>, rewritten
to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|
|
The 82543 chip does not count tx_carrier_errors properly in full-duplex mode;
report zeros instead of garbage.
Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>, rewritten
to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
|