Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/SM501.txt                          66
-rw-r--r--  Documentation/volatile-considered-harmful.txt   119
2 files changed, 185 insertions, 0 deletions
diff --git a/Documentation/SM501.txt b/Documentation/SM501.txt
new file mode 100644
index 00000000000..3a1bd95d376
--- /dev/null
+++ b/Documentation/SM501.txt
@@ -0,0 +1,66 @@
+ SM501 Driver
+ ============
+
+Copyright 2006, 2007 Simtec Electronics
+
+Core
+----
+
+The core driver in drivers/mfd provides common services for the
+drivers which manage the specific hardware blocks. These services
+include locking for common registers, clock control and resource
+management.
+
+The core registers drivers for both PCI and generic bus-based
+chips via the platform device and driver system.
+
+On detection of a device, the core initialises the chip (the
+initialisation values may be supplied via the platform data) and then
+exports the selected peripheral set as platform devices for the
+specific drivers.
+
+The core re-uses the platform device system because it provides
+enough features to support the drivers without the need to create a
+new bus type and the associated code to go with it.
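+
+As an illustration, a minimal sketch of how board support code might
+register the SM501 as a memory-mapped platform device is shown below.
+The base address, window size, IRQ number and device name used here
+are assumptions made for the example; the exact set of resources the
+core expects is defined by the driver itself.
+
+    #include <linux/platform_device.h>
+    #include <linux/ioport.h>
+
+    /* Hypothetical board support sketch: addresses, size and IRQ are
+     * example values only, not taken from any real board. */
+    static struct resource sm501_resources[] = {
+        [0] = {
+            .start = 0x10000000,        /* chip register/memory window */
+            .end   = 0x13ffffff,        /* 64MB window, example only   */
+            .flags = IORESOURCE_MEM,
+        },
+        [1] = {
+            .start = 42,                /* interrupt wired to the SM501 */
+            .end   = 42,
+            .flags = IORESOURCE_IRQ,
+        },
+    };
+
+    static struct platform_device sm501_device = {
+        .name          = "sm501",       /* assumed generic driver name */
+        .id            = 0,
+        .num_resources = ARRAY_SIZE(sm501_resources),
+        .resource      = sm501_resources,
+    };
+
+    /* typically called from the board's init code */
+    static int __init example_board_init(void)
+    {
+        return platform_device_register(&sm501_device);
+    }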
+
+
+Resources
+---------
+
+Each peripheral has a view of the device which is implicitly narrowed to
+the specific set of resources that peripheral requires in order to
+function correctly.
+
+The centralised memory allocation allows the driver to ensure that the
+maximum possible resource allocation can be made to the video subsystem
+as this is by far the most resource-sensitive of the on-chip functions.
+
+The primary issue with memory allocation is that of moving the video
+buffers once a display mode is chosen: when a video mode change occurs,
+the memory footprint of the video subsystem changes.
+
+Since video memory is difficult to move without changing the display
+(unless sufficient contiguous memory can be provided for the old and new
+modes simultaneously), the video driver fully utilises the memory area
+given to it by aligning fb0 to the start of the area and fb1 to the end
+of it. Any memory left over in the middle is used for the acceleration
+functions, whose data is transient and can therefore be moved, making
+its exact location less critical.
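+
+As an illustration of that layout policy, the following sketch computes
+where each consumer would sit inside the video memory window; the sizes
+and variable names are hypothetical example values, not the driver's
+actual arithmetic.
+
+    /* Illustrative layout only: fb0 pinned to the bottom of the window,
+     * fb1 pinned to the top, acceleration data in the gap between them. */
+    unsigned long mem_size  = 8 * 1024 * 1024;  /* total video memory   */
+    unsigned long fb0_size  = 1280 * 1024 * 2;  /* fb0 mode footprint   */
+    unsigned long fb1_size  =  640 *  480 * 2;  /* fb1 mode footprint   */
+
+    unsigned long fb0_start   = 0;                    /* start of area  */
+    unsigned long fb1_start   = mem_size - fb1_size;  /* end of area    */
+    unsigned long accel_start = fb0_start + fb0_size; /* leftover space */
+    unsigned long accel_size  = fb1_start - accel_start;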
+
+
+Configuration
+-------------
+
+The platform device driver uses a set of platform data to pass
+configuration through to the core and the subsidiary drivers, so that
+a single kernel image can support more than one system carrying
+an SM501.
+
+The PCI driver assumes that the PCI card behaves as per the Silicon
+Motion reference design.
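+
+Extending the registration sketch shown in the Core section above, a
+board might attach such configuration as shown below. The structure
+and field names here are invented purely for illustration; the real
+platform data definitions live in the kernel's sm501 header
+(include/linux/sm501.h) and differ from this sketch.
+
+    /* Hypothetical example: struct and field names are made up. */
+    static struct example_sm501_config board_sm501_config = {
+        .mclk_hz   = 72000000,          /* example clock rates */
+        .m1xclk_hz = 144000000,
+    };
+
+    static struct platform_device sm501_device = {
+        .name = "sm501",
+        .id   = 0,
+        .dev  = {
+            .platform_data = &board_sm501_config,
+        },
+    };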
+
+There is an erratum (AB-5) affecting the selection of the
+M1XCLK and M1CLK frequencies. These two clocks
+must be sourced from the same PLL, although they can then
+be divided down individually. If this constraint is not met, the
+SM501 may lock and hang the whole system. The driver will refuse to
+attach if the PLL selection differs between the two clocks.
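+
+The kind of sanity check described above might look roughly like the
+sketch below; the helper and structure names are invented for
+illustration, and the real test in the core driver differs.
+
+    /* Hypothetical sketch of the AB-5 check: example_* names are made up. */
+    static int example_check_clock_sources(struct example_sm501 *sm)
+    {
+        /* both clocks must be derived from the same PLL (erratum AB-5) */
+        if (example_pll_for(sm, EXAMPLE_CLK_M1XCLK) !=
+            example_pll_for(sm, EXAMPLE_CLK_M1CLK)) {
+            dev_err(sm->dev, "M1XCLK and M1CLK use different PLLs\n");
+            return -EINVAL;             /* refuse to attach */
+        }
+
+        return 0;
+    }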
diff --git a/Documentation/volatile-considered-harmful.txt b/Documentation/volatile-considered-harmful.txt
new file mode 100644
index 00000000000..10c2e411cca
--- /dev/null
+++ b/Documentation/volatile-considered-harmful.txt
@@ -0,0 +1,119 @@
+Why the "volatile" type class should not be used
+------------------------------------------------
+
+C programmers have often taken volatile to mean that the variable could be
+changed outside of the current thread of execution; as a result, they are
+sometimes tempted to use it in kernel code when shared data structures are
+being used. In other words, they have been known to treat volatile types
+as a sort of easy atomic variable, which they are not. The use of volatile in
+kernel code is almost never correct; this document describes why.
+
+The key point to understand with regard to volatile is that its purpose is
+to suppress optimization, which is almost never what one really wants to
+do. In the kernel, one must protect shared data structures against
+unwanted concurrent access, which is very much a different task. The
+process of protecting against unwanted concurrency will also avoid almost
+all optimization-related problems in a more efficient way.
+
+Like volatile, the kernel primitives which make concurrent access to data
+safe (spinlocks, mutexes, memory barriers, etc.) are designed to prevent
+unwanted optimization. If they are being used properly, there will be no
+need to use volatile as well. If volatile is still necessary, there is
+almost certainly a bug in the code somewhere. In properly-written kernel
+code, volatile can only serve to slow things down.
+
+Consider a typical block of kernel code:
+
+ spin_lock(&the_lock);
+ do_something_on(&shared_data);
+ do_something_else_with(&shared_data);
+ spin_unlock(&the_lock);
+
+If all the code follows the locking rules, the value of shared_data cannot
+change unexpectedly while the_lock is held. Any other code which might
+want to play with that data will be waiting on the lock. The spinlock
+primitives act as memory barriers - they are explicitly written to do so -
+meaning that data accesses will not be optimized across them. So the
+compiler might think it knows what will be in shared_data, but the
+spin_lock() call, since it acts as a memory barrier, will force it to
+forget anything it knows. There will be no optimization problems with
+accesses to that data.
+
+If shared_data were declared volatile, the locking would still be
+necessary. But the compiler would also be prevented from optimizing access
+to shared_data _within_ the critical section, when we know that nobody else
+can be working with it. While the lock is held, shared_data is not
+volatile. When dealing with shared data, proper locking makes volatile
+unnecessary - and potentially harmful.
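+
+A declaration-level illustration of the same point (the names are made
+up): the shared data is declared as a perfectly ordinary object, and
+the lock alone is what protects it.
+
+    static DEFINE_SPINLOCK(the_lock);
+    static int shared_data;             /* plain type: no volatile needed */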
+
+The volatile type qualifier was originally meant for memory-mapped I/O
+registers. Within the kernel, register accesses, too, should be protected
+by locks, but one also does not want the compiler "optimizing" register
+accesses within a critical section. But, within the kernel, I/O memory
+accesses are always done through accessor functions; accessing I/O memory
+directly through pointers is frowned upon and does not work on all
+architectures. Those accessors are written to prevent unwanted
+optimization, so, once again, volatile is unnecessary.
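+
+For example, a driver touches its registers through ioremap() and the
+readl()/writel() accessors rather than through a volatile-qualified
+pointer; the register offsets and the example_ name below are made up.
+
+    /* Sketch only: offsets and the function name are hypothetical. */
+    static u32 example_read_status(unsigned long phys_base)
+    {
+        void __iomem *regs = ioremap(phys_base, 0x100);
+        u32 status;
+
+        if (!regs)
+            return 0;
+
+        /* the accessors, not volatile, give well-defined device access */
+        writel(0x1, regs + 0x00);       /* hypothetical control register */
+        status = readl(regs + 0x04);    /* hypothetical status register  */
+
+        iounmap(regs);
+        return status;
+    }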
+
+Another situation where one might be tempted to use volatile is
+when the processor is busy-waiting on the value of a variable. The right
+way to perform a busy wait is:
+
+ while (my_variable != what_i_want)
+ cpu_relax();
+
+The cpu_relax() call can lower CPU power consumption or yield to a
+hyperthreaded twin processor; it also happens to serve as a compiler barrier,
+so, once again, volatile is unnecessary. Of course, busy-waiting is
+generally an anti-social act to begin with.
+
+There are still a few rare situations where volatile makes sense in the
+kernel:
+
+ - The above-mentioned accessor functions might use volatile on
+ architectures where direct I/O memory access does work. Essentially,
+ each accessor call becomes a little critical section on its own and
+ ensures that the access happens as expected by the programmer.
+
+ - Inline assembly code which changes memory, but which has no other
+   visible side effects, risks being deleted by GCC. Adding the volatile
+   keyword to asm statements will prevent this removal (see the sketch
+   after this list).
+
+ - The jiffies variable is special in that it can have a different value
+ every time it is referenced, but it can be read without any special
+ locking. So jiffies can be volatile, but the addition of other
+ variables of this type is strongly frowned upon. Jiffies is considered
+ to be a "stupid legacy" issue (Linus's words) in this regard; fixing it
+ would be more trouble than it is worth.
+
+ - Pointers to data structures in coherent memory which might be modified
+ by I/O devices can, sometimes, legitimately be volatile. A ring buffer
+ used by a network adapter, where that adapter changes pointers to
+ indicate which descriptors have been processed, is an example of this
+ type of situation.
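+
+As a concrete instance of the inline-assembly case above, the kernel's
+barrier() macro is defined roughly like this for gcc; the volatile
+marking plus the "memory" clobber tells the compiler that the statement
+may not be removed and that memory accesses may not be reordered across
+it:
+
+    #define barrier() __asm__ __volatile__("" : : : "memory")
+
+(An asm with no output operands is already implicitly volatile, but the
+kernel marks it explicitly; for an asm that does have outputs yet is
+used for its side effect on memory, the explicit volatile is what keeps
+gcc from dropping it as dead code.)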
+
+For most code, none of the above justifications for volatile apply. As a
+result, the use of volatile is likely to be seen as a bug and will bring
+additional scrutiny to the code. Developers who are tempted to use
+volatile should take a step back and think about what they are truly trying
+to accomplish.
+
+Patches to remove volatile variables are generally welcome - as long as
+they come with a justification which shows that the concurrency issues have
+been properly thought through.
+
+
+NOTES
+-----
+
+[1] http://lwn.net/Articles/233481/
+[2] http://lwn.net/Articles/233482/
+
+CREDITS
+-------
+
+Original impetus and research by Randy Dunlap
+Written by Jonathan Corbet
+Improvements via comments from Satyam Sharma, Johannes Stezenbach, Jesper
+ Juhl, Heikki Orsila, H. Peter Anvin, Philipp Hahn, and Stefan
+ Richter.