|
According to https://bugs.launchpad.net/bugs/294391
- 3rd generation iPods need the "fix capacity" workaround after all
(apparently they crash after the last sector was accessed),
- 2nd generation iPods need the "128 kB maximum request size"
workaround.
Alas both iPod generations feature the same model ID in the config ROM,
hence we can only define a shared quirks list entry for them. Luckily
the fix capacity workaround showed no negative effect in Jarod's tests
with a 2nd generation iPod.
A side note: Apple computers in target mode (or at least an x86 Mac
mini) don't have firmware_version and model_id, hence none of the iPod
quirks list entries is active for them.
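For illustration, a sketch of what such a shared quirks list entry could look
like (not the actual hunk; the workaround flag names follow fw-sbp2, while the
firmware_revision and model values below are placeholders, not the IDs from
the iPods' config ROM):
  static const struct {
          u32 firmware_revision;
          u32 model;
          unsigned int workarounds;
  } sbp2_workarounds_table[] = {
          /* 2nd and 3rd generation iPods share one model ID */
          {
                  .firmware_revision = 0x0a2700,  /* placeholder */
                  .model             = 0x000000,  /* placeholder */
                  .workarounds       = SBP2_WORKAROUND_FIX_CAPACITY |
                                       SBP2_WORKAROUND_128K_MAX_TRANS,
          },
  };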
Tested-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Reported-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
who also provided a first version of the fix.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
1394-2008 clause 16.3.4.1 (1394b-2002 clause 16.3.1.1) defines tighter
limits than 1394-2008 clause 6.2.2.3 (1394a-2000 clause 6.2.2.3).
Our previous, overly large limit doesn't matter though, as long as the
controller reports its max_receive correctly.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
This fixes a regression by "firewire: keep highlevel drivers attached
during brief connection loss": That change added two seconds of unnecessary
waiting to the shutdown procedure of each controller.
We now use card->link as a status flag to signal the device handler that
there is no point in waiting for a come-back.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
camcorders and others
Camcorders have a tendency to fail read requests to their config ROM and
write requests to their FCP command register with ack_busy_X. This has
become a problem with newer kernels and especially Panasonic camcorders,
causing AV/C in dvgrab and kino to fail. Dvgrab for example frequently
logs "send oops"; kino reports loss of AV/C control. I suspect that
lower CPU scheduling latencies in newer kernels made this issue more
prominent now.
According to
https://sourceforge.net/tracker/?func=detail&atid=114103&aid=2492640&group_id=14103
this can be fixed by configuring the FireWire controller for more
hardware retries for request transmission; these retries are evidently
more successful than libavc1394's own retry loop (typically 3 tries on
top of hardware retries).
Presumably the same issue has been reported at
https://bugzilla.redhat.com/show_bug.cgi?id=449252 and
https://bugzilla.redhat.com/show_bug.cgi?id=477279 .
In a quick test with a JVC camcorder (which didn't malfunction like the
reported camcorders), this change decreased the number of ack_busy_X
from 16 in three runs of dvgrab to 4 in three runs of the same capture
duration.
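For reference, a sketch of how fw-ohci programs the OHCI ATRetries register;
the retry counts below are illustrative, not necessarily the values chosen by
this change:
  #define OHCI1394_MAX_AT_REQ_RETRIES     0xf     /* illustrative */
  #define OHCI1394_MAX_AT_RESP_RETRIES    0x2     /* illustrative */
  #define OHCI1394_MAX_PHYS_RESP_RETRIES  0x8     /* illustrative */

  reg_write(ohci, OHCI1394_ATRetries,
            OHCI1394_MAX_AT_REQ_RETRIES |
            (OHCI1394_MAX_AT_RESP_RETRIES << 4) |
            (OHCI1394_MAX_PHYS_RESP_RETRIES << 8));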
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
The present message is mostly just noise. We only need to be notified
if the "active" flag does not go off before the retry loop terminates.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
There are situations when nodes vanish from the bus and come back
quickly thereafter:
- When certain bus-powered hubs are plugged in,
- when certain devices are plugged into 6-port hubs,
- when certain disk enclosures are switched from self-power to bus
power or vice versa and break the daisy chain during the transition,
- when the user plugs a cable out and quickly plugs it back in, e.g.
to reorder a daisy chain (works on Mac OS X if done quickly enough),
- when certain hubs temporarily malfunction during high bus traffic.
Until now, firewire-core reported affected nodes as lost to the
highlevel drivers (firewire-sbp2 and userspace drivers). We now delay
the destruction of device representations until at least two seconds after
the last bus reset. If a "new" device is detected in this period whose bus
information block and root directory header match those of a device pending
deletion, we resurrect that device and
send update calls to highlevel drivers.
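A rough sketch of the matching criterion (not the actual code; the number of
quadlets compared here is an assumption):
  static bool is_same_device(const u32 *old_rom, const u32 *new_rom)
  {
          /* quadlets 1..4: bus information block, quadlet 5: root dir header */
          return memcmp(&old_rom[1], &new_rom[1], 5 * sizeof(u32)) == 0;
  }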
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Noticed by Jarod Wilson: The bus manager work was unnecessarily delayed
each time the bus generation counter rolled over.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
|
|
The whole topology code only works if the old and new topologies which
are compared come from immediately successive self ID complete events.
If bus resets occurred without self ID complete events in between, or if
self ID complete events arrived with invalid self IDs, the topology
comparison could misidentify nodes, or more likely just corrupt
kernel memory or panic right away.
We now discard all nodes of the old topology and treat all current nodes
as new ones if the current self ID generation is not the previous one
plus 1.
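In other words (sketch only; identifier names are assumptions):
  /* The self ID generation counter wraps, hence the masking. */
  if (new_generation != ((old_generation + 1) & 0xff)) {
          /*
           * Bus resets happened without (valid) self ID complete events
           * in between; the topologies are not comparable.
           */
          discard_old_topology(card);     /* treat all current nodes as new */
  }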
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
|
|
Due to commit 2831fe6f9cc4e16c103504ee09a47a084297c0f3, "driver core:
create a private portion of struct device", device_initialize() can no
longer be called from atomic contexts.
We now defer it until after config ROM probing. This requires changes
to the bus manager code because the latter may use a device before it has
been probed.
Reported-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
topology_map is by far the largest member in struct fw_card. Move it to
the very end of the struct so that card pointer dereferences have a better
chance of hitting the CPU cache.
This requires increasing the topology_map backing store to the size
specified in IEEE 1394, i.e. 256 rather than 255 quadlets. Otherwise
the topology_map response handler may access invalid memory.
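The resulting layout, abridged to a sketch (member selection and types are
illustrative; only the placement and the 256-quadlet size matter):
  struct fw_card {
          struct kref kref;
          const struct fw_card_driver *driver;
          int node_id;
          int generation;
          /* ... further small, frequently used members ... */
          u32 topology_map[256];  /* largest member, now at the very end */
  };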
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
An earlier change, maybe long ago, removed the copying of self_id_count
into card->self_id_count. Since then, each bus reset cleared
card->bm_retries even when it shouldn't have.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Take a reference to the card whenever fw_card_bm_work() is scheduled on
that card and release it when the work is done. This allows us to
remove the cancel_delayed_work_sync() in fw_core_remove_card().
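The pattern, as a sketch (helper names are assumptions):
  /* When scheduling the bus manager work: */
  fw_card_get(card);
  if (!schedule_delayed_work(&card->work, delay))
          fw_card_put(card);      /* already queued, drop our extra reference */

  /* And in the work function itself: */
  static void fw_card_bm_work(struct work_struct *work)
  {
          struct fw_card *card = container_of(work, struct fw_card, work.work);

          /* ... bus manager duties ... */

          fw_card_put(card);      /* release the reference taken at schedule time */
  }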
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de> (patch update)
|
|
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
What was I thinking when I added sbp2_set_generation()? Its locking did
nothing (except for implicitly providing the necessary barrier between
the node IDs update and the generation update).
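What the removed function boiled down to, as a sketch (field names follow
fw-sbp2; details may differ):
  tgt->node_id      = node_id;
  tgt->address_high = local_node_id << 16;
  smp_wmb();              /* pairs with an smp_rmb() on the reader side */
  lu->generation    = generation;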
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
There is a DMA map/unmap imbalance whenever a block write request
packet is sent and then dequeued with ohci_cancel_packet. The latter
may happen frequently if the AR resp tasklet is executed before the AT
req tasklet for the same transaction.
Add the missing dma_unmap_single. This fixes
https://bugzilla.redhat.com/show_bug.cgi?id=475156
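Roughly (not the literal hunk; variable names are assumptions):
  /* In the dequeue path, undo the mapping done at AT queue time: */
  if (packet->payload_length > 0)
          dma_unmap_single(ohci->card.device, packet->payload_bus,
                           packet->payload_length, DMA_TO_DEVICE);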
Reported-by: Emmanuel Kowalski
Tested-by: Emmanuel Kowalski
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Add another model ID of a broken firmware to prevent early I/O errors
by accesses at the end of the disk. Reported at linux1394-user,
http://marc.info/?t=122670842900002
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
1: There is a small race between queue_delayed_work() and its
corresponding kref_get(). Do the kref_get first, and _put it again
if the queue_delayed_work() failed, so there is no chance of the
kref going to zero while the work is scheduled.
2: An SBP2_LOGOUT_REQUEST could be sent out with a login_id full of
garbage. Initialize it to an invalid value so we can tell if we
ever got a valid login_id.
3: The node ID and generation may have changed but the new values may
not yet have been recorded in lu and tgt when the final logout is
attempted. Use the latest values from the device in
sbp2_release_target().
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
This optimizes firewire-sbp2's device probe for the case that the local
node and the SBP-2 node were discovered at the same time. In this case,
fw-core's bus management work and fw-sbp2's login and SCSI probe work
are scheduled in parallel (in the globally shared workqueue and in
fw-sbp2's workqueue, respectively). The bus reset from fw-core may then
disturb and severely delay the login and SCSI probe, because the latter
fails with several command timeouts and retries and has to be restarted
from scratch.
We avoid this particular situation of sbp2_login() and fw_card_bm_work()
running in parallel by delaying the first sbp2_login() a little bit.
This is meant to be a short-term fix for
https://bugzilla.redhat.com/show_bug.cgi?id=466679. In the long run,
the SCSI probe, i.e. fw-sbp2's call of __scsi_add_device(), should be
parallelized with sbp2_reconnect().
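The short-term fix, as a sketch (the concrete delay below is illustrative,
not necessarily the value used):
  /* Let a concurrently scheduled fw_card_bm_work() win the race. */
  queue_delayed_work(sbp2_wq, &lu->work, msecs_to_jiffies(2000));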
Problem reported and fix tested and confirmed by Alex Kanavin.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Fix leaks when pci_probe fails. Simplify error log strings.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
The transmit and receive context dma memory was not being freed on
module removal. Neither was the config rom memory. Fix that.
The ab->next assignment is pure paranoia.
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
With the bus_resets patch applied, it is easy to see this memory leak
by repeatedly resetting the firewire bus while running slabtop in
another window. Just watch kmalloc-32 grow and grow...
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
The "color" is used during the topology building after a bus reset,
hovever in "struct fw_node"s it is stored in a u8, but in struct fw_card
it is stored in an int. When the value wraps in one struct, but not
the other, disaster strikes.
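The mismatch in a nutshell (sketch, not the actual definitions):
  struct fw_node { u8  color; /* ... */ };        /* wraps after 256 increments */
  struct fw_card { int color; /* ... */ };        /* wraps much later           */

  /*
   * Once card->color has grown past 255, a comparison like
   * node->color == card->color can never be true again unless both
   * fields use the same width or the same masking.
   */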
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Fixes http://bugzilla.kernel.org/show_bug.cgi?id=10922.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Reported by Jay Fenlason: ioctl() did not return as intended
- the size of data read into ioctl_send_request,
- the number of datagrams enqueued by ioctl_queue_iso.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Reported by Jay Fenlason:
The iso packet control accessors in fw-cdev.c had bogus masks.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
queuecommand() looked at the remote and local node IDs before it read
the bus generation. The corresponding race with sbp2_reconnect updating
these data was probably impossible though, because the current code
blocks the SCSI layer during reconnection. However, better safe
than sorry, especially if someone later improves the code to not block
the SCSI layer.
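The required order, as a sketch (field names are assumptions):
  generation = device->generation;
  smp_rmb();              /* node IDs must not be older than the generation */
  node_id    = tgt->node_id;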
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
1. We don't need to round the SBP-2 segment size limit down to a
multiple of 4 kB (0xffff -> 0xf000). It is only necessary to
ensure quadlet alignment (0xffff -> 0xfffc).
2. Use dma_set_max_seg_size() to tell the DMA mapping infrastructure
and the block IO layer about the restriction. This way we can
remove the size checks and segment splitting in the queuecommand
path.
This assumes that no other code in the firewire stack uses
dma_map_sg() with conflicting requirements. It furthermore assumes
that the controller device's platform actually allows us to set the
segment size to our liking. Assert the latter with a BUG_ON().
3. Also use blk_queue_max_segment_size() to tell the block IO layer
about it. It cannot know it because our scsi_add_host() does not
point to the FireWire controller's device.
Thanks to Grant Grundler and FUJITA Tomonori for advice.
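Roughly, points 2 and 3 come down to the following sketch (names of constants
and variables follow fw-sbp2; details may differ):
  #define SBP2_MAX_SEG_SIZE       0xfffc  /* quadlet-aligned, see point 1 */

  /* point 2: tell the DMA mapping layer via the FireWire controller's device */
  BUG_ON(dma_set_max_seg_size(device->card->device, SBP2_MAX_SEG_SIZE));

  /* point 3: tell the block layer, which only sees the SBP-2 Scsi_Host */
  blk_queue_max_segment_size(sdev->request_queue, SBP2_MAX_SEG_SIZE);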
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Share code between fw_send_request + wait_for_completion callers.
Signed-off-by: Jay Fenlason <fenlason@redhat.com>
Addendum:
Removes an unnecessary struct and an unused retry loop.
Calls it fw_run_transaction() instead of fw_send_request_sync().
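A sketch of the shared helper (close to, but not necessarily identical with,
the final code):
  struct transaction_data {
          struct completion done;
          int rcode;
  };

  static void transaction_callback(struct fw_card *card, int rcode,
                                   void *payload, size_t length, void *data)
  {
          struct transaction_data *d = data;

          /* a real helper would also copy read-response payloads here */
          d->rcode = rcode;
          complete(&d->done);
  }

  int fw_run_transaction(struct fw_card *card, int tcode, int destination_id,
                         int generation, int speed, unsigned long long offset,
                         void *payload, size_t length)
  {
          struct transaction_data d;
          struct fw_transaction t;

          init_completion(&d.done);
          fw_send_request(card, &t, tcode, destination_id, generation, speed,
                          offset, payload, length, transaction_callback, &d);
          wait_for_completion(&d.done);

          return d.rcode;
  }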
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Acked-by: Kristian Høgsberg <krh@redhat.com>
|
|
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6:
firewire: Preserve response data alignment bug when it is harmless
|
|
Recently, a bug having to do with the alignment of transaction response
data was fixed. However, some apps such as libdc1394 relied on the
presence of that bug in order to function correctly. In order to stay
compatible with old versions of those apps, this patch preserves the bug
in cases where it is harmless to normal operation (such as the single
quadlet read) due to a simple duplication of data. This guarantees
maximum compatibility for those users who are using the old app with the
fixed kernel.
Signed-off-by: David Moore <dcm@acm.org>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6:
firewire: state userland requirements in Kconfig help
firewire: avoid memleak after phy config transmit failure
firewire: fw-ohci: TSB43AB22/A dualbuffer workaround
firewire: queue the right number of data
firewire: warn on unfinished transactions during card removal
firewire: small fw_fill_request cleanup
firewire: fully initialize fw_transaction before marking it pending
firewire: fix race of bus reset with request transmission
|
|
Add per-device dma_mapping_ops support for CONFIG_X86_64 as POWER
architecture does:
This enables us to cleanly fix the Calgary IOMMU issue that some devices
are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
I think that per-device dma_mapping_ops support would be also helpful for
KVM people to support PCI passthrough but Andi thinks that this makes it
difficult to support the PCI passthrough (see the above thread). So I
CC'ed this to KVM camp. Comments are appreciated.
A pointer to dma_mapping_ops is added to struct dev_archdata. If the
pointer is non NULL, DMA operations in asm/dma-mapping.h use it. If it's
NULL, the system-wide dma_ops pointer is used as before.
If it's useful for KVM people, I plan to implement a mechanism to register
a hook called when a new pci (or dma capable) device is created (it works
with hot plugging). It enables IOMMUs to set up an appropriate
dma_mapping_ops per device.
The major obstacle is that dma_mapping_error doesn't take a pointer to the
device unlike other DMA operations. So x86 can't have dma_mapping_ops per
device. Note all the POWER IOMMUs use the same dma_mapping_error function
so this is not a problem for POWER but x86 IOMMUs use different
dma_mapping_error functions.
The first patch adds the device argument to dma_mapping_error. The patch
is trivial but large since it touches lots of drivers and dma-mapping.h in
all the architectures.
This patch:
dma_mapping_error() doesn't take a pointer to the device unlike other DMA
operations. So we can't have dma_mapping_ops per device.
Note that POWER already has dma_mapping_ops per device, but all the POWER
IOMMUs use the same dma_mapping_error function, whereas x86 IOMMUs use
different dma_mapping_error functions, hence the need for the device
argument.
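The interface change in a nutshell: dma_mapping_error(dma_addr) becomes
dma_mapping_error(dev, dma_addr), so the per-device ops can be consulted.
A sketch of the resulting shape on x86 (abridged, not the exact code):
  static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
  {
          /* per-device ops if set, otherwise the system-wide dma_ops */
          return dev->archdata.dma_ops ? dev->archdata.dma_ops : dma_ops;
  }

  static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
  {
          struct dma_mapping_ops *ops = get_dma_ops(dev);

          if (ops->mapping_error)
                  return ops->mapping_error(dev, dma_addr);
          return dma_addr == bad_dma_address;
  }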
[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Use only statically allocated data for PHY config packet transmission.
With the previous incarnation, some data wouldn't be freed if the packet
transmit callback was never called.
A theoretical drawback now is that, in PCs with more than one card,
card A may complete() for a waiter on card B. But this is highly
unlikely and its impact not serious. Bus manager B may reset bus B
before the PHY config went out, but the next phy config on B should be
fine. However, with a timeout of 100ms, this situation is close to
impossible.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Isochronous reception in dualbuffer mode is reportedly broken with
TI TSB43AB22A on x86-64. Descriptor addresses above 2G have been
determined as the trigger:
https://bugzilla.redhat.com/show_bug.cgi?id=435550
Two fixes are possible:
- pci_set_consistent_dma_mask(pdev, DMA_31BIT_MASK);
at least when IR descriptors are allocated, or
- simply don't use dualbuffer.
This fix implements the latter workaround.
But we keep using dualbuffer on x86-32 which won't give us highmem (and
thus physical addresses outside the 31bit range) in coherent DMA memory
allocations. Right now we could for example also whitelist PPC32, but
DMA mapping implementation details are expected to change there.
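As a sketch (identifiers roughly follow fw-ohci and pci_ids.h; treat the
details as assumptions):
  ohci->use_dualbuffer = version >= OHCI_VERSION_1_1;
  #ifndef CONFIG_X86_32
  /* dual-buffer IR is broken with descriptor addresses above 2G */
  if (dev->vendor == PCI_VENDOR_ID_TI &&
      dev->device == PCI_DEVICE_ID_TI_TSB43AB22)
          ohci->use_dualbuffer = false;
  #endif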
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
|
|
On some platforms there will be 4 padding bytes at the end of struct
fw_cdev_event_response, and the member __u32 data will point at these
padding bytes. When the response and data are queued in
complete_transaction() in fw-cdev.c, they are queued like this:
|response (excluding padding bytes)|4 padding bytes|4 padding bytes|data.
That is, 4 extra bytes are queued: the code uses "&response +
sizeof(response)" while other places in the kernel and the userspace
library use "&response + offsetof(typeof(response), data)". As a result
the last 4 bytes of data are lost. This patch fixes that without
changing the struct definition.
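The root of the problem, illustrated (the struct is quoted for reference; the
padding analysis assumes a 64-bit ABI):
  struct fw_cdev_event_response {
          __u64 closure;
          __u32 type;
          __u32 rcode;
          __u32 length;
          __u32 data[0];
  };

  /*
   * With 8-byte alignment of the struct, sizeof() is rounded up to 24
   * while offsetof(..., data) is 20.  Userspace reads the payload at the
   * offsetof() position, so queueing the header with sizeof() shifts the
   * payload by 4 bytes and truncates its tail.
   */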
Signed-off-by: JiSheng Zhang <jszhang3@mail.ustc.edu.cn>
This fixes responses to outbound block read requests on 64bit architectures.
Tested on i686, x86-64, and x86-64 with i686 userland, using firecontrol and
gscanbus.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6
* 'sbp2-spindown' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6:
ieee1394: sbp2: spin disks down on suspend and shutdown
firewire: fw-sbp2: spin disks down on suspend and shutdown
ieee1394: sbp2: fix spindown for PL-3507 and TSB42AA9 firmwares
firewire: fw-sbp2: fix spindown for PL-3507 and TSB42AA9 firmwares
scsi: sd: optionally set power condition in START STOP UNIT
|
|
After card->done and card->work are completed, any remaining pending
request would be a bug. We cannot safely complete a transaction at
that point anymore.
IOW card users must not drop their last fw_card reference (usually
indirect references through fw_device references) before their last
outbound transaction through that card has finished.
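As a sketch (the list name follows struct fw_card; not the literal hunk):
  wait_for_completion(&card->done);
  WARN_ON(!list_empty(&card->transaction_list));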
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
- better name for a function argument
- removal of a local variable which became unnecessary after
"fully initialize fw_transaction before marking it pending"
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
In theory, card->flush_timer could already access a transaction between
fw_send_request()'s spin_unlock_irqrestore and the rest of what happens
in fw_send_request(). This would happen if the process which sends the
request is preempted and put to sleep right after spin_unlock_irqrestore
for longer than 100ms.
Therefore we fill in everything in struct fw_transaction which the
flush_timer might look at before we lift the lock.
To do: Ensure that the timer does not pick up the transaction before
the time of the AT request event plus split transaction timeout.
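The ordering, as a sketch (field names follow struct fw_transaction; not the
literal diff):
  spin_lock_irqsave(&card->lock, flags);

  t->node_id       = destination_id;
  t->tlabel        = tlabel;
  t->callback      = callback;
  t->callback_data = callback_data;
  list_add_tail(&t->link, &card->transaction_list);

  spin_unlock_irqrestore(&card->lock, flags);

  /* only now hand the packet to the hardware */
  card->driver->send_request(card, &t->packet);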
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Reported by Jay Fenlason: A bus reset tasklet may call
fw_flush_transactions and touch transactions (call their callback which
will free them) while the context which submitted the transaction is
still inserting it into the transmission queue.
A simple solution to this problem is to _not_ "flush" the transactions
because of a bus reset, i.e. to not complete the transactions as 'cancelled'
right away. They will now simply time out and be completed as 'cancelled'
by the split-timeout timer.
Jay Fenlason thought of this fix too but I was quicker to type it out.
:-)
Background:
Contexts which access an instance of struct fw_transaction are:
1. the submitter, until it inserted the packet which is embedded in the
transaction into the AT req DMA,
2. the AsReqTrContext tasklet when the request packet was acked by the
responder node or transmission to the responder failed,
3. the AsRspRcvContext tasklet when it found a request which matched
an incoming response,
4. the card->flush_timer when it picks up timed-out transactions to
cancel them,
5. the bus reset tasklet when it cancels transactions (this access is
eliminated by this patch),
6. a process which shuts down an fw_card (unregisters it from fw-core
when the controller is unbound from fw-ohci) --- although in this
case there shouldn't really be any transactions anymore because we
wait until all card users finished their business with the card.
All of these contexts run concurrently (except for the 6th, presumably).
The 1st is safe against the 2nd and 3rd because of the way how a request
packet is carefully submitted to the hardware. A race between 2nd and
3rd has been fixed a while ago (bug 9617). The 4th is almost safe
against 1st, 2nd, 3rd; there are issues with it if huge scheduling
latencies occur, to be fixed separately. The 5th looks safe against
2nd, 3rd, and 4th but is unsafe against 1st. Maybe this could be fixed
with an explicit state variable in struct fw_transaction. But this
would require fw_transaction to be rewritten as an object which can only be
allocated dynamically and is reference counted --- not a good solution if we
can also simply kill this 5th accessing context (replace it by the 4th).
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Contrary to a comment in the source, request->ack of a broadcast write
request can be ACK_PENDING. Hence the existing check is insufficient.
Debug dmesg before:
AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
AT spd 0 tl 00, ffff -> ffc0, ack_complete, W resp
And the requesting node (linux1394) reports an unsolicited response.
Debug dmesg after:
AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
This is a functionally equivalent replacement of the current reference
counting of struct fw_card instances. It only converts it to common
idioms as suggested by Kristian Høgsberg:
- struct kref replaces atomic_t as the counter.
- wait_for_completion is used to wait for all card users to complete.
BTW, it may make sense to count card->flush_timer and card->work as
card users too.
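The two idioms, as a sketch (close to, but not necessarily identical with,
the final code):
  static void fw_card_release(struct kref *kref)
  {
          struct fw_card *card = container_of(kref, struct fw_card, kref);

          complete(&card->done);  /* wake up whoever waits in card shutdown */
  }

  void fw_card_put(struct fw_card *card)
  {
          kref_put(&card->kref, fw_card_release);
  }

  /* card shutdown: wait until the last user has dropped its reference */
  fw_card_put(card);
  wait_for_completion(&card->done);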
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|
|
See IEEE 1394a clause 8.3.2.3.11.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
|