Merge branch 'master' of /home/davem/src/GIT/linux-2.6/

Conflicts:
	arch/sparc/Kconfig
This commit is contained in:
David S. Miller 2009-09-11 20:35:13 -07:00
Parents b73d884756 86d710146f
Commit cabc5c0f7f
970 changed files with 65085 additions and 24077 deletions


@ -743,3 +743,80 @@ Revised:
RCU, realtime RCU, sleepable RCU, performance.
"
}
@article{PaulEMcKenney2008RCUOSR
,author="Paul E. McKenney and Jonathan Walpole"
,title="Introducing technology into the {Linux} kernel: a case study"
,Year="2008"
,journal="SIGOPS Oper. Syst. Rev."
,volume="42"
,number="5"
,pages="4--17"
,issn="0163-5980"
,doi={http://doi.acm.org/10.1145/1400097.1400099}
,publisher="ACM"
,address="New York, NY, USA"
,annotation={
Linux changed RCU to a far greater degree than RCU has changed Linux.
}
}
@unpublished{PaulEMcKenney2008HierarchicalRCU
,Author="Paul E. McKenney"
,Title="Hierarchical {RCU}"
,month="November"
,day="3"
,year="2008"
,note="Available:
\url{http://lwn.net/Articles/305782/}
[Viewed November 6, 2008]"
,annotation="
RCU with combining-tree-based grace-period detection,
permitting it to handle thousands of CPUs.
"
}
@conference{PaulEMcKenney2009MaliciousURCU
,Author="Paul E. McKenney"
,Title="Using a Malicious User-Level {RCU} to Torture {RCU}-Based Algorithms"
,Booktitle="linux.conf.au 2009"
,month="January"
,year="2009"
,address="Hobart, Australia"
,note="Available:
\url{http://www.rdrop.com/users/paulmck/RCU/urcutorture.2009.01.22a.pdf}
[Viewed February 2, 2009]"
,annotation="
Realtime RCU and torture-testing RCU uses.
"
}
@unpublished{MathieuDesnoyers2009URCU
,Author="Mathieu Desnoyers"
,Title="[{RFC} git tree] Userspace {RCU} (urcu) for {Linux}"
,month="February"
,day="5"
,year="2009"
,note="Available:
\url{http://lkml.org/lkml/2009/2/5/572}
\url{git://lttng.org/userspace-rcu.git}
[Viewed February 20, 2009]"
,annotation="
Mathieu Desnoyers's user-space RCU implementation.
git://lttng.org/userspace-rcu.git
"
}
@unpublished{PaulEMcKenney2009BloatWatchRCU
,Author="Paul E. McKenney"
,Title="{RCU}: The {Bloatwatch} Edition"
,month="March"
,day="17"
,year="2009"
,note="Available:
\url{http://lwn.net/Articles/323929/}
[Viewed March 20, 2009]"
,annotation="
Uniprocessor assumptions allow simplified RCU implementation.
"
}


@ -2,14 +2,13 @@ RCU on Uniprocessor Systems
A common misconception is that, on UP systems, the call_rcu() primitive
may immediately invoke its function. The basis of this misconception
is that since there is only one CPU, it should not be necessary to
wait for anything else to get done, since there are no other CPUs for
anything else to be happening on. Although this approach will -sort- -of-
work a surprising amount of the time, it is a very bad idea in general.
This document presents three examples that demonstrate exactly how bad
an idea this is.
Example 1: softirq Suicide
@ -82,11 +81,18 @@ Quick Quiz #2: What locking restriction must RCU callbacks respect?
Summary
Permitting call_rcu() to immediately invoke its arguments breaks RCU,
even on a UP system. So do not do it! Even on a UP system, the RCU
infrastructure -must- respect grace periods, and -must- invoke callbacks
from a known environment in which no locks are held.
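To make the deadlock concrete, here is a hedged sketch (all names
hypothetical, not taken from this document) of an updater that holds a
lock across call_rcu(); if call_rcu() were to invoke its callback
immediately, the callback's spin_lock() would self-deadlock:

	struct myobj {
		struct list_head list;
		struct rcu_head rcu;
	};

	static DEFINE_SPINLOCK(mylock);

	static void myobj_reclaim(struct rcu_head *head)
	{
		struct myobj *p = container_of(head, struct myobj, rcu);

		spin_lock(&mylock);	/* deadlocks if run synchronously */
		/* ... final bookkeeping ... */
		spin_unlock(&mylock);
		kfree(p);
	}

	static void myobj_remove(struct myobj *p)
	{
		spin_lock(&mylock);
		list_del_rcu(&p->list);
		call_rcu(&p->rcu, myobj_reclaim);	/* must be deferred, even on UP */
		spin_unlock(&mylock);
	}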
It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return
immediately on an UP system. It is also safe for synchronize_rcu()
to return immediately on UP systems, except when running preemptable
RCU.
Quick Quiz #3: Why can't synchronize_rcu() return immediately on
UP systems running preemptable RCU?
Answer to Quick Quiz #1:
@ -117,3 +123,13 @@ Answer to Quick Quiz #2:
callbacks acquire locks directly. However, a great many RCU
callbacks do acquire locks -indirectly-, for example, via
the kfree() primitive.
Answer to Quick Quiz #3:
Why can't synchronize_rcu() return immediately on UP systems
running preemptable RCU?
Because some other task might have been preempted in the middle
of an RCU read-side critical section. If synchronize_rcu()
simply immediately returned, it would prematurely signal the
end of the grace period, which would come as a nasty shock to
that other thread when it started running again.


@ -11,7 +11,10 @@ over a rather long period of time, but improvements are always welcome!
structure is updated more than about 10% of the time, then
you should strongly consider some other approach, unless
detailed performance measurements show that RCU is nonetheless
the right tool for the job. Yes, you might think of RCU
as simply cutting overhead off of the readers and imposing it
on the writers. That is exactly why normal uses of RCU will
do much more reading than updating.
Another exception is where performance is not an issue, and RCU
provides a simpler implementation. An example of this situation
@ -240,10 +243,11 @@ over a rather long period of time, but improvements are always welcome!
instead need to use synchronize_irq() or synchronize_sched().
12. Any lock acquired by an RCU callback must be acquired elsewhere
with softirq disabled, e.g., via spin_lock_irqsave(),
spin_lock_bh(), etc. Failing to disable irq on a given
acquisition of that lock will result in deadlock as soon as the
RCU callback happens to interrupt that acquisition's critical
section.
13. RCU callbacks can be and are executed in parallel. In many cases,
the callback code is simply a wrapper around kfree(), so that this
@ -310,3 +314,9 @@ over a rather long period of time, but improvements are always welcome!
Because these primitives only wait for pre-existing readers,
it is the caller's responsibility to guarantee safety to
any subsequent readers.
16. The various RCU read-side primitives do -not- contain memory
barriers. The CPU (and in some cases, the compiler) is free
to reorder code into and out of RCU read-side critical sections.
It is the responsibility of the RCU update-side primitives to
deal with this.
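As a hedged illustration of this last rule (identifiers hypothetical):
the read-side primitives below supply no ordering of their own; the
ordering comes from rcu_assign_pointer() on the update side paired with
rcu_dereference() on the read side.

	struct foo {
		int a;
	};

	static struct foo __rcu *gp;

	static void publish_foo(struct foo *newp)
	{
		newp->a = 42;
		rcu_assign_pointer(gp, newp);	/* orders init before publish */
	}

	static int read_foo(void)
	{
		struct foo *p;
		int val = -1;

		rcu_read_lock();		/* no memory barrier implied */
		p = rcu_dereference(gp);	/* dependency-ordered load */
		if (p)
			val = p->a;
		rcu_read_unlock();		/* none here either */
		return val;
	}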


@ -36,7 +36,7 @@ o How can the updater tell when a grace period has completed
executed in user mode, or executed in the idle loop, we can
safely free up that item.
Preemptible variants of RCU (CONFIG_TREE_PREEMPT_RCU) get the
same effect, but require that the readers manipulate CPU-local
counters. These counters allow limited types of blocking
within RCU read-side critical sections. SRCU also uses
@ -79,10 +79,10 @@ o I hear that RCU is patented? What is with that?
o I hear that RCU needs work in order to support realtime kernels?
This work is largely completed. Realtime-friendly RCU can be
enabled via the CONFIG_TREE_PREEMPT_RCU kernel configuration
parameter. However, work is in progress for enabling priority
boosting of preempted RCU read-side critical sections. This is
needed if you have CPU-bound realtime threads.
o Where can I find more information on RCU?


@ -170,6 +170,13 @@ module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading. Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().
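A hedged sketch of such an unload path, for a hypothetical module that
posts callbacks from a timer and uses both call_rcu() and call_rcu_bh()
(all names are invented for illustration):

	static void __exit mymod_exit(void)
	{
		del_timer_sync(&mymod_timer);	/* stop posting new callbacks */
		rcu_barrier();			/* wait out call_rcu() callbacks */
		rcu_barrier_bh();		/* wait out call_rcu_bh() callbacks */
		kmem_cache_destroy(mymod_cache);	/* now safe to tear down */
	}
	module_exit(mymod_exit);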
Implementing rcu_barrier()


@ -76,8 +76,10 @@ torture_type The type of RCU to test: "rcu" for the rcu_read_lock() API,
"rcu_sync" for rcu_read_lock() with synchronous reclamation,
"rcu_bh" for the rcu_read_lock_bh() API, "rcu_bh_sync" for
rcu_read_lock_bh() with synchronous reclamation, "srcu" for
the "srcu_read_lock()" API, and "sched" for the use of
preempt_disable() together with synchronize_sched().
the "srcu_read_lock()" API, "sched" for the use of
preempt_disable() together with synchronize_sched(),
and "sched_expedited" for the use of preempt_disable()
with synchronize_sched_expedited().
verbose Enable debug printk()s. Default is disabled.
@ -162,6 +164,23 @@ of the "old" and "current" counters for the corresponding CPU. The
"idx" value maps the "old" and "current" values to the underlying array,
and is useful for debugging.
Similarly, sched_expedited RCU provides the following:
sched_expedited-torture: rtc: d0000000016c1880 ver: 1090796 tfle: 0 rta: 1090796 rtaf: 0 rtf: 1090787 rtmbe: 0 nt: 27713319
sched_expedited-torture: Reader Pipe: 12660320201 95875 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Reader Batch: 12660424885 0 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Free-Block Circulation: 1090795 1090795 1090794 1090793 1090792 1090791 1090790 1090789 1090788 1090787 0
state: -1 / 0:0 3:0 4:0
As before, the first four lines are similar to those for RCU.
The last line shows the task-migration state. The first number is
-1 if synchronize_sched_expedited() is idle, -2 if in the process of
posting wakeups to the migration kthreads, and N when waiting on CPU N.
Each of the colon-separated fields following the "/" is a CPU:state pair.
Valid states are "0" for idle, "1" for waiting for quiescent state,
"2" for passed through quiescent state, and "3" when a race with a
CPU-hotplug event forces use of the synchronize_sched() primitive.
USAGE


@ -191,8 +191,7 @@ rcu/rcuhier (which displays the struct rcu_node hierarchy).
The output of "cat rcu/rcudata" looks as follows:
rcu_sched:
0 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=10951/1 dn=0 df=1101 of=0 ri=36 ql=0 b=10
1 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=16117/1 dn=0 df=1015 of=0 ri=0 ql=0 b=10
2 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=1445/1 dn=0 df=1839 of=0 ri=0 ql=0 b=10
@ -306,7 +305,7 @@ comma-separated-variable spreadsheet format.
The output of "cat rcu/rcugp" looks as follows:
rcu_sched: completed=33062 gpnum=33063
rcu_bh: completed=464 gpnum=464
Again, this output is for both "rcu" and "rcu_bh". The fields are
@ -413,7 +412,7 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
The output of "cat rcu/rcu_pending" looks as follows:
rcu_sched:
0 np=255892 qsp=53936 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
1 np=261224 qsp=54638 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
2 np=237496 qsp=49664 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629


@ -136,10 +136,10 @@ rcu_read_lock()
Used by a reader to inform the reclaimer that the reader is
entering an RCU read-side critical section. It is illegal
to block while in an RCU read-side critical section, though
kernels built with CONFIG_TREE_PREEMPT_RCU can preempt RCU
read-side critical sections. Any RCU-protected data structure
accessed during an RCU read-side critical section is guaranteed to
remain unreclaimed for the full duration of that critical section.
Reference counts may be used in conjunction with RCU to maintain
longer-term references to data structures.
@ -785,6 +785,7 @@ RCU pointer/list traversal:
rcu_dereference
list_for_each_entry_rcu
hlist_for_each_entry_rcu
hlist_nulls_for_each_entry_rcu
list_for_each_continue_rcu (to be deprecated in favor of new
list_for_each_entry_continue_rcu)
@ -807,19 +808,23 @@ RCU: Critical sections Grace period Barrier
	rcu_read_lock		synchronize_net		rcu_barrier
	rcu_read_unlock		synchronize_rcu
				synchronize_rcu_expedited
				call_rcu

bh:	Critical sections	Grace period		Barrier

	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
	rcu_read_unlock_bh	synchronize_rcu_bh
				synchronize_rcu_bh_expedited

sched:	Critical sections	Grace period		Barrier

	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
	rcu_read_unlock_sched	call_rcu_sched
	[preempt_disable]	synchronize_sched_expedited
	[and friends]
SRCU: Critical sections Grace period Barrier
@ -827,6 +832,9 @@ SRCU: Critical sections Grace period Barrier
	srcu_read_lock		synchronize_srcu	N/A
	srcu_read_unlock

SRCU:	Initialization/cleanup
	init_srcu_struct
	cleanup_srcu_struct
See the comment headers in the source code (or the docbook generated
from them) for more information.


@ -206,24 +206,6 @@ Who: Len Brown <len.brown@intel.com>
---------------------------
What: libata spindown skipping and warning
When: Dec 2008
Why: Some halt(8) implementations synchronize caches for, and spin
down, libata disks because libata didn't use to spin down disks on
system halt (it only synchronized caches).
Spin down on system halt is now implemented. sysfs node
/sys/class/scsi_disk/h:c:i:l/manage_start_stop is present if
spin down support is available.
Because issuing a spin-down command to an already spun-down disk
makes some disks spin up just to spin down again, libata tracks
device spindown status to skip the extra spindown command and
warn about it.
This is to give userspace tools the time to get updated and will
be removed after userspace is reasonably updated.
Who: Tejun Heo <htejun@gmail.com>
---------------------------
What: i386/x86_64 bzImage symlinks
When: April 2010
@ -394,15 +376,6 @@ Who: Thomas Gleixner <tglx@linutronix.de>
-----------------------------
What: obsolete generic irq defines and typedefs
When: 2.6.30
Why: The defines and typedefs (hw_interrupt_type, no_irq_type, irq_desc_t)
have been kept around for migration reasons. After more than two years
it's time to remove them finally
Who: Thomas Gleixner <tglx@linutronix.de>
---------------------------
What: fakephp and associated sysfs files in /sys/bus/pci/slots/
When: 2011
Why: In 2.6.27, the semantics of /sys/bus/pci/slots was redefined to
@ -468,3 +441,27 @@ Why: cpu_policy_rwsem has a new cleaner definition making it local to
cpufreq core and contained inside cpufreq.c. Other dependent
drivers should not use it in order to safely avoid lockdep issues.
Who: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
----------------------------
What: sound-slot/service-* module aliases and related clutters in
sound/sound_core.c
When: August 2010
Why: OSS sound_core grabs all legacy minors (0-255) of SOUND_MAJOR
(14) and requests modules using custom sound-slot/service-*
module aliases. The only benefit of doing this is allowing
use of custom module aliases which might as well be considered
a bug at this point. This preemptive claiming prevents
alternative OSS implementations.
Until the feature is removed, the kernel will request
both sound-slot/service-* and the standard char-major-* module
aliases, and will allow turning off the pre-claiming selectively
via CONFIG_SOUND_OSS_CORE_PRECLAIM and the soundcore.preclaim_oss
kernel parameter.
After the transition phase is complete, both the custom module
aliases and switches to disable it will go away. This removal
will also allow making ALSA OSS emulation independent of
sound_core. The dependency will be broken then too.
Who: Tejun Heo <tj@kernel.org>


@ -0,0 +1,98 @@
The NFS client
==============
The NFS version 2 protocol was first documented in RFC1094 (March 1989).
Since then two more major releases of NFS have been published, with NFSv3
being documented in RFC1813 (June 1995), and NFSv4 in RFC3530 (April
2003).
The Linux NFS client currently supports all the above published versions,
and work is in progress on adding support for minor version 1 of the NFSv4
protocol.
The purpose of this document is to provide information on some of the
upcall interfaces that are used in order to provide the NFS client with
some of the information that it requires in order to fully comply with
the NFS spec.
The DNS resolver
================
NFSv4 allows for one server to refer the NFS client to data that has been
migrated onto another server by means of the special "fs_locations"
attribute. See
http://tools.ietf.org/html/rfc3530#section-6
and
http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
The fs_locations information can take the form of either an ip address and
a path, or a DNS hostname and a path. The latter requires the NFS client to
do a DNS lookup in order to mount the new volume, and hence the need for an
upcall to allow userland to provide this service.
Assuming that the user has the 'rpc_pipefs' filesystem mounted in the usual
/var/lib/nfs/rpc_pipefs, the upcall consists of the following steps:
(1) The process checks the dns_resolve cache to see if it contains a
valid entry. If so, it returns that entry and exits.
(2) If no valid entry exists, the helper script '/sbin/nfs_cache_getent'
(may be changed using the 'nfs.cache_getent' kernel boot parameter)
is run, with two arguments:
- the cache name, "dns_resolve"
- the hostname to resolve
(3) After looking up the corresponding ip address, the helper script
writes the result into the rpc_pipefs pseudo-file
'/var/lib/nfs/rpc_pipefs/cache/dns_resolve/channel'
in the following (text) format:
"<ip address> <hostname> <ttl>\n"
Where <ip address> is in the usual IPv4 (123.456.78.90) or IPv6
(ffee:ddcc:bbaa:9988:7766:5544:3322:1100, ffee::1100, ...) format.
<hostname> is identical to the second argument of the helper
script, and <ttl> is the 'time to live' of this cache entry (in
units of seconds).
Note: If <ip address> is invalid, say the string "0", then a negative
entry is created, which will cause the kernel to treat the hostname
as having no valid DNS translation.
A basic sample /sbin/nfs_cache_getent
=====================================
#!/bin/bash
#
ttl=600
#
cut=/usr/bin/cut
getent=/usr/bin/getent
rpc_pipefs=/var/lib/nfs/rpc_pipefs
#
die()
{
echo "Usage: $0 cache_name entry_name"
exit 1
}
[ $# -lt 2 ] && die
cachename="$1"
cache_path=${rpc_pipefs}/cache/${cachename}/channel
case "${cachename}" in
dns_resolve)
name="$2"
result="$(${getent} hosts ${name} | ${cut} -f1 -d\ )"
[ -z "${result}" ] && result="0"
;;
*)
die
;;
esac
echo "${result} ${name} ${ttl}" >${cache_path}


@ -1503,6 +1503,14 @@ and is between 256 and 4096 characters. It is defined in the file
[NFS] set the TCP port on which the NFSv4 callback
channel should listen.
nfs.cache_getent=
[NFS] sets the pathname to the program which is used
to update the NFS client cache entries.
nfs.cache_getent_timeout=
[NFS] sets the timeout after which an attempt to
update a cache entry is deemed to have failed.
nfs.idmap_cache_timeout=
[NFS] set the maximum lifetime for idmapper cache
entries.
@ -2395,6 +2403,18 @@ and is between 256 and 4096 characters. It is defined in the file
stifb= [HW]
Format: bpp:<bpp1>[:<bpp2>[:<bpp3>...]]
sunrpc.min_resvport=
sunrpc.max_resvport=
[NFS,SUNRPC]
SunRPC servers often require that client requests
originate from a privileged port (i.e. a port in the
range 0 < portnr < 1024).
An administrator who wishes to reserve some of these
ports for other uses may adjust the range that the
kernel's sunrpc client considers to be privileged
using these two parameters to set the minimum and
maximum port values.
sunrpc.pool_mode=
[NFS]
Control how the NFS server code allocates CPUs to
@ -2411,6 +2431,15 @@ and is between 256 and 4096 characters. It is defined in the file
pernode one pool for each NUMA node (equivalent
to global on non-NUMA machines)
sunrpc.tcp_slot_table_entries=
sunrpc.udp_slot_table_entries=
[NFS,SUNRPC]
Sets the upper limit on the number of simultaneous
RPC calls that can be sent from the client to a
server. Increasing these values may allow you to
improve throughput, but will also increase the
amount of memory reserved for use by the client.
swiotlb= [IA-64] Number of I/O TLB slabs
switches= [HW,M68k]
@ -2480,6 +2509,11 @@ and is between 256 and 4096 characters. It is defined in the file
trace_buf_size=nn[KMG]
[FTRACE] will set tracing buffer size.
trace_event=[event-list]
[FTRACE] Set and start specified trace events in order
to facilitate early boot debugging.
See also Documentation/trace/events.txt
trix= [HW,OSS] MediaTrix AudioTrix Pro
Format:
<io>,<irq>,<dma>,<dma2>,<sb_io>,<sb_irq>,<sb_dma>,<mpu_io>,<mpu_irq>


@ -26,7 +26,7 @@ This document has the following sections:
- Notes on accessing payload contents
- Defining a key type
- Request-key callback service
- Garbage collection
============
@ -113,6 +113,9 @@ Each key has a number of attributes:
(*) Dead. The key's type was unregistered, and so the key is now useless.
Keys in the last three states are subject to garbage collection. See the
section on "Garbage collection".
====================
KEY SERVICE OVERVIEW
@ -754,6 +757,26 @@ The keyctl syscall functions are:
successful.
(*) Install the calling process's session keyring on its parent.
long keyctl(KEYCTL_SESSION_TO_PARENT);
This function attempts to install the calling process's session keyring
on to the calling process's parent, replacing the parent's current session
keyring.
The calling process must have the same ownership as its parent, the
keyring must have the same ownership as the calling process, the calling
process must have LINK permission on the keyring and the active LSM module
mustn't deny permission, otherwise error EPERM will be returned.
Error ENOMEM will be returned if there was insufficient memory to complete
the operation, otherwise 0 will be returned to indicate success.
The keyring will be replaced next time the parent process leaves the
kernel and resumes executing userspace.
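A hedged user-space sketch of the sequence this call is designed for,
assuming the keyutils library (error handling kept minimal):

	#include <keyutils.h>
	#include <stdio.h>

	int main(void)
	{
		/* Join a fresh anonymous session keyring... */
		if (keyctl_join_session_keyring(NULL) < 0) {
			perror("join_session_keyring");
			return 1;
		}

		/* ...then install it as the parent's session keyring.
		 * The swap takes effect when the parent next returns
		 * to userspace. */
		if (keyctl(KEYCTL_SESSION_TO_PARENT) < 0) {
			perror("KEYCTL_SESSION_TO_PARENT");
			return 1;
		}
		return 0;
	}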
===============
KERNEL SERVICES
===============
@ -1231,3 +1254,17 @@ by executing:
In this case, the program isn't required to actually attach the key to a ring;
the rings are provided for reference.
==================
GARBAGE COLLECTION
==================
Dead keys (for which the type has been removed) will be automatically unlinked
from those keyrings that point to them and deleted as soon as possible by a
background garbage collector.
Similarly, revoked and expired keys will be garbage collected, but only after a
certain amount of time has passed. This time is set as a number of seconds in:
/proc/sys/kernel/keys/gc_delay


@ -27,6 +27,13 @@ To trigger an intermediate memory scan:
# echo scan > /sys/kernel/debug/kmemleak
To clear the list of all current possible memory leaks:
# echo clear > /sys/kernel/debug/kmemleak
New leaks will then come up upon reading /sys/kernel/debug/kmemleak
again.
Note that the orphan objects are listed in the order they were allocated
and one object at the beginning of the list may cause other subsequent
objects to be reported as orphan.
@ -42,6 +49,9 @@ Memory scanning parameters can be modified at run-time by writing to the
scan=<secs> - set the automatic memory scanning period in seconds
(default 600, 0 to stop the automatic scanning)
scan - trigger a memory scan
clear - clear list of current memory leak suspects, done by
marking all current reported unreferenced objects grey
dump=<addr> - dump information about the object found at <addr>
Kmemleak can also be disabled at boot-time by passing "kmemleak=off" on
the kernel command line.
@ -86,6 +96,27 @@ avoid this, kmemleak can also store the number of values pointing to an
address inside the block address range that need to be found so that the
block is not considered a leak. One example is __vmalloc().
Testing specific sections with kmemleak
---------------------------------------
Upon initial bootup your /sys/kernel/debug/kmemleak output page may be
quite extensive. This can also be the case if you have very buggy code
when doing development. To work around these situations you can use the
'clear' command to clear all reported unreferenced objects from the
/sys/kernel/debug/kmemleak output. By issuing a 'scan' after a 'clear'
you can find new unreferenced objects; this should help with testing
specific sections of code.
To test a critical section on demand with a clean kmemleak do:
# echo clear > /sys/kernel/debug/kmemleak
... test your kernel or modules ...
# echo scan > /sys/kernel/debug/kmemleak
Then, as usual, get your report with:
# cat /sys/kernel/debug/kmemleak
Kmemleak API
------------


@ -495,6 +495,13 @@ and for each vararg a long value. So e.g. for a debug entry with a format
string plus two varargs one would need to allocate a (3 * sizeof(long))
byte data area in the debug_register() function.
IMPORTANT: Using "%s" in sprintf event functions is dangerous. You can only
use "%s" in the sprintf event functions, if the memory for the passed string is
available as long as the debug feature exists. The reason behind this is that
due to performance considerations only a pointer to the string is stored in
the debug feature. If you log a string that is freed afterwards, you will get
an OOPS when inspecting the debug feature, because then the debug feature will
access the already freed memory.
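A hedged sketch of the distinction (helper and variable names are
hypothetical; debug_sprintf_event() and debug_register() are the
functions discussed above):

	static debug_info_t *dbf;	/* assumed set up via debug_register() */

	static void log_state(const char *transient)
	{
		/* Safe: a string literal stays in memory for good. */
		debug_sprintf_event(dbf, 1, "state: %s\n", "running");

		/*
		 * Unsafe if "transient" may be freed before the debug
		 * feature goes away: only the pointer is recorded.
		 *
		 * debug_sprintf_event(dbf, 1, "name: %s\n", transient);
		 */
	}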
NOTE: If using the sprintf view do NOT use other event/exception functions
than the sprintf-event and -exception functions.


@ -60,6 +60,12 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
slots - Reserve the slot index for the given driver.
This option takes multiple strings.
See "Module Autoloading Support" section for details.
debug - Specifies the debug message level
(0 = disable debug prints, 1 = normal debug messages,
2 = verbose debug messages)
This option appears only when CONFIG_SND_DEBUG=y.
This option can be dynamically changed via the sysfs file
/sys/modules/snd/parameters/debug.
Module snd-pcm-oss
------------------
@ -513,6 +519,26 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
or input, but you may use this module for any application which
requires a sound card (like RealPlayer).
pcm_devs - Number of PCM devices assigned to each card
(default = 1, up to 4)
pcm_substreams - Number of PCM substreams assigned to each PCM
(default = 8, up to 16)
hrtimer - Use hrtimer (=1, default) or system timer (=0)
fake_buffer - Fake buffer allocations (default = 1)
When multiple PCM devices are created, snd-dummy gives different
behavior to each PCM device:
0 = interleaved with mmap support
1 = non-interleaved with mmap support
2 = interleaved without mmap
3 = non-interleaved without mmap
By default, the snd-dummy driver doesn't allocate the real buffers
but either ignores read/write or mmaps a single dummy page to all
buffer pages, in order to save resources. If your apps need the
read/written buffer data to be consistent, pass the fake_buffer=0
option.
The power-management is supported.
Module snd-echo3g
@ -768,6 +794,10 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
bdl_pos_adj - Specifies the DMA IRQ timing delay in samples.
Passing -1 will make the driver choose the appropriate
value based on the controller chip.
patch - Specifies the early "patch" files to modify the HD-audio
setup before initializing the codecs. This option is
available only when CONFIG_SND_HDA_PATCH_LOADER=y is set.
See HD-Audio.txt for details.
[Single (global) options]
single_cmd - Use single immediate commands to communicate with


@ -114,8 +114,8 @@ ALC662/663/272
samsung-nc10 Samsung NC10 mini notebook
auto auto-config reading BIOS (default)
ALC882/883/885/888/889
======================
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack digital with SPDIF I/O
arima Arima W820Di1
@ -127,12 +127,8 @@ ALC882/885
mbp3 Macbook Pro rev3
imac24 iMac 24'' with jack detection
w2jc ASUS W2JC
3stack-2ch-dig 3-jack with SPDIF I/O (ALC883)
alc883-6stack-dig 6-jack digital with SPDIF I/O (ALC883)
3stack-6ch 3-jack 6-channel
3stack-6ch-dig 3-jack 6-channel with SPDIF I/O
6stack-dig-demo 6-jack digital for Intel demo board
@ -140,6 +136,7 @@ ALC883/888
acer-aspire Acer Aspire 9810
acer-aspire-4930g Acer Aspire 4930G
acer-aspire-6530g Acer Aspire 6530G
acer-aspire-7730g Acer Aspire 7730G
acer-aspire-8930g Acer Aspire 8930G
medion Medion Laptops
medion-md2 Medion MD2
@ -155,10 +152,13 @@ ALC883/888
3stack-hp HP machines with 3stack (Lucknow, Samba boards)
6stack-dell Dell machines with 6stack (Inspiron 530)
mitac Mitac 8252D
clevo-m540r Clevo M540R (6ch + digital)
clevo-m720 Clevo M720 laptop series
fujitsu-pi2515 Fujitsu AMILO Pi2515
fujitsu-xa3530 Fujitsu AMILO XA3530
3stack-6ch-intel Intel DG33* boards
intel-alc889a Intel IbexPeak with ALC889A
intel-x58 Intel DX58 with ALC889
asus-p5q ASUS P5Q-EM boards
mb31 MacBook 3,1
sony-vaio-tt Sony VAIO TT
@ -229,7 +229,7 @@ AD1984
======
basic default configuration
thinkpad Lenovo Thinkpad T61/X61
dell_desktop Dell T3400
AD1986A
=======
@ -258,6 +258,7 @@ Conexant 5045
laptop-micsense Laptop with Mic sense (old model fujitsu)
laptop-hpmicsense Laptop with HP and Mic senses
benq Benq R55E
laptop-hp530 HP 530 laptop
test for testing/debugging purpose, almost all controls
can be adjusted. Appearing only when compiled with
$CONFIG_SND_DEBUG=y
@ -278,9 +279,16 @@ Conexant 5051
hp-dv6736 HP dv6736
lenovo-x200 Lenovo X200 laptop
Conexant 5066
=============
laptop Basic Laptop config (default)
dell-laptop Dell laptops
olpc-xo-1_5 OLPC XO 1.5
STAC9200
========
ref Reference board
oqo OQO Model 2
dell-d21 Dell (unknown)
dell-d22 Dell (unknown)
dell-d23 Dell (unknown)
@ -368,10 +376,12 @@ STAC92HD73*
===========
ref Reference board
no-jd BIOS setup but without jack-detection
intel Intel DG45* mobos
dell-m6-amic Dell desktops/laptops with analog mics
dell-m6-dmic Dell desktops/laptops with digital mics
dell-m6 Dell desktops/laptops with both type of mics
dell-eq Dell desktops/laptops
alienware Alienware M17x
auto BIOS setup (default)
STAC92HD83*
@ -385,3 +395,8 @@ STAC9872
========
vaio VAIO laptop without SPDIF
auto BIOS setup (default)
Cirrus Logic CS4206/4207
========================
mbp55 MacBook Pro 5,5
auto BIOS setup (default)


@ -138,6 +138,10 @@ override the BIOS setup or to provide more comprehensive features.
The driver checks PCI SSID and looks through the static configuration
table until any matching entry is found. If you have a new machine,
you may see a message like below:
------------------------------------------------------------------------
hda_codec: ALC880: BIOS auto-probing.
------------------------------------------------------------------------
Meanwhile, in the earlier versions, you would see a message like:
------------------------------------------------------------------------
hda_codec: Unknown model for ALC880, trying auto-probe from BIOS...
------------------------------------------------------------------------
@ -403,6 +407,66 @@ re-configure based on that state, run like below:
------------------------------------------------------------------------
Early Patching
~~~~~~~~~~~~~~
When CONFIG_SND_HDA_PATCH_LOADER=y is set, you can pass a "patch" as a
firmware file for modifying the HD-audio setup before initializing the
codec. This can work basically like the reconfiguration via sysfs in
the above, but it does it before the first codec configuration.
A patch file is a plain text file which looks like below:
------------------------------------------------------------------------
[codec]
0x12345678 0xabcd1234 2
[model]
auto
[pincfg]
0x12 0x411111f0
[verb]
0x20 0x500 0x03
0x20 0x400 0xff
[hint]
hp_detect = yes
------------------------------------------------------------------------
The file needs to have a line `[codec]`. The next line should contain
three numbers indicating the codec vendor-id (0x12345678 in the
example), the codec subsystem-id (0xabcd1234) and the address (2) of
the codec. The remaining patch entries are applied to the specified
codec until another codec entry is given.
The `[model]` line allows you to change the model name of each codec.
In the example above, it will be changed to model=auto.
Note that this overrides the module option.
After the `[pincfg]` line, the contents are parsed as the initial
default pin-configurations just like `user_pin_configs` sysfs above.
The values can be shown in user_pin_configs sysfs file, too.
Similarly, the lines after `[verb]` are parsed as `init_verbs`
sysfs entries, and the lines after `[hint]` are parsed as `hints`
sysfs entries, respectively.
The hd-audio driver reads the file via request_firmware(). Thus,
a patch file has to be located on the appropriate firmware path,
typically, /lib/firmware. For example, when you pass the option
`patch=hda-init.fw`, the file /lib/firmware/hda-init.fw must be
present.
The patch module option is specific to each card instance, and you
need to give one file name for each instance, separated by commas.
For example, if you have two cards, one for an on-board analog and one
for an HDMI video board, you may pass patch option like below:
------------------------------------------------------------------------
options snd-hda-intel patch=on-board-patch,hdmi-patch
------------------------------------------------------------------------
Power-Saving
~~~~~~~~~~~~
The power-saving is a kind of auto-suspend of the device. When the


@ -19,6 +19,7 @@ Currently, these files might (depending on your configuration)
show up in /proc/sys/kernel:
- acpi_video_flags
- acct
- callhome [ S390 only ]
- auto_msgmni
- core_pattern
- core_uses_pid
@ -91,6 +92,21 @@ valid for 30 seconds.
==============================================================
callhome:
Controls the kernel's callhome behavior in case of a kernel panic.
The s390 hardware allows an operating system to send a notification
to a service organization (callhome) in case of an operating system panic.
When the value in this file is 0 (which is the default behavior)
nothing happens in case of a kernel panic. If this value is set to "1"
the complete kernel oops message is sent to the IBM customer service
organization in case the mainframe the Linux operating system is running
on has a service contract with IBM.
==============================================================
core_pattern:
core_pattern is used to specify a core dumpfile pattern name.


@ -83,6 +83,15 @@ When reading one of these enable files, there are four results:
X - there is a mixture of events enabled and disabled
? - this file does not affect any event
2.3 Boot option
---------------
In order to facilitate early boot debugging, use boot option:
trace_event=[event-list]
The format of this boot option is the same as described in section 2.1.
3. Defining an event-enabled tracepoint
=======================================


@ -85,26 +85,19 @@ of ftrace. Here is a list of some of the key files:
This file holds the output of the trace in a human
readable format (described below).
trace_pipe:
The output is the same as the "trace" file but this
file is meant to be streamed with live tracing.
Reads from this file will block until new data is
retrieved. Unlike the "trace" file, this file is a
consumer. This means reading from this file causes
sequential reads to display more current data. Once
data is read from this file, it is consumed, and
will not be read again with a sequential read. The
"trace" file is static, and if the tracer is not
adding more data, it will display the same
information every time it is read.
trace_options:
@ -117,10 +110,10 @@ of ftrace. Here is a list of some of the key files:
Some of the tracers record the max latency.
For example, the time interrupts are disabled.
This time is saved in this file. The max trace
will also be stored, and displayed by "trace".
A new max trace will only be recorded if the
latency is greater than the value in this
file. (in microseconds)
buffer_size_kb:
@ -210,7 +203,7 @@ Here is the list of current tracers that may be configured.
the trace with the longest max latency.
See tracing_max_latency. When a new max is recorded,
it replaces the old trace. It is best to view this
trace with the latency-format option enabled.
"preemptoff"
@ -307,8 +300,8 @@ the lowest priority thread (pid 0).
Latency trace format
--------------------
When the latency-format option is enabled, the trace file gives
somewhat more information to see why a latency happened.
Here is a typical trace.
# tracer: irqsoff
@ -380,9 +373,10 @@ explains which is which.
The above is mostly meaningful for kernel developers.
time: When the latency-format option is enabled, the trace file
output includes a timestamp relative to the start of the
trace. This differs from the output when latency-format
is disabled, which includes an absolute timestamp.
delay: This is just to help catch your eye a bit better. And
needs to be fixed to be only relative to the same CPU.
@ -440,7 +434,8 @@ Here are the available options:
sym-addr:
bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
verbose - This deals with the trace file when the
latency-format option is enabled.
bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
(+0.000ms): simple_strtoul (strict_strtoul)
@ -472,7 +467,7 @@ Here are the available options:
the app is no longer running
The lookup is performed when you read
trace,trace_pipe. Example:
a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
@ -481,6 +476,11 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
every scheduling event. Will add overhead if
there's a lot of tasks running at once.
latency-format - This option changes the trace. When
it is enabled, the trace displays
additional information about the
latencies, as described in "Latency
trace format".
sched_switch
------------
@ -596,12 +596,13 @@ To reset the maximum, echo 0 into tracing_max_latency. Here is
an example:
# echo irqsoff > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# ls -ltr
[...]
# echo 0 > tracing_enabled
# cat trace
# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26
@ -703,12 +704,13 @@ which preemption was disabled. The control of preemptoff tracer
is much like the irqsoff tracer.
# echo preemptoff > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# ls -ltr
[...]
# echo 0 > tracing_enabled
# cat trace
# tracer: preemptoff
#
preemptoff latency trace v1.1.5 on 2.6.26-rc8
@ -850,12 +852,13 @@ Again, using this trace is much like the irqsoff and preemptoff
tracers.
# echo preemptirqsoff > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# ls -ltr
[...]
# echo 0 > tracing_enabled
# cat trace
# tracer: preemptirqsoff
#
preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
@ -1012,11 +1015,12 @@ Instead of performing an 'ls', we will run 'sleep 1' under
'chrt' which changes the priority of the task.
# echo wakeup > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
# echo 1 > tracing_enabled
# chrt -f 5 sleep 1
# echo 0 > tracing_enabled
# cat trace
# tracer: wakeup
#
wakeup latency trace v1.1.5 on 2.6.26-rc8


@ -0,0 +1,42 @@
" Enable folding for ftrace function_graph traces.
"
" To use, :source this file while viewing a function_graph trace, or use vim's
" -S option to load from the command-line together with a trace. You can then
" use the usual vim fold commands, such as "za", to open and close nested
" functions. While closed, a fold will show the total time taken for a call,
" as would normally appear on the line with the closing brace. Folded
" functions will not include finish_task_switch(), so folding should remain
" relatively sane even through a context switch.
"
" Note that this will almost certainly only work well with a
" single-CPU trace (e.g. trace-cmd report --cpu 1).
function! FunctionGraphFoldExpr(lnum)
let line = getline(a:lnum)
if line[-1:] == '{'
if line =~ 'finish_task_switch() {$'
return '>1'
endif
return 'a1'
elseif line[-1:] == '}'
return 's1'
else
return '='
endif
endfunction
function! FunctionGraphFoldText()
let s = split(getline(v:foldstart), '|', 1)
if getline(v:foldend+1) =~ 'finish_task_switch() {$'
let s[2] = ' task switch '
else
let e = split(getline(v:foldend), '|', 1)
let s[2] = e[2]
endif
return join(s, '|')
endfunction
setlocal foldexpr=FunctionGraphFoldExpr(v:lnum)
setlocal foldtext=FunctionGraphFoldText()
setlocal foldcolumn=12
setlocal foldmethod=expr


@ -0,0 +1,955 @@
Lockless Ring Buffer Design
===========================
Copyright 2009 Red Hat Inc.
Author: Steven Rostedt <srostedt@redhat.com>
License: The GNU Free Documentation License, Version 1.2
(dual licensed under the GPL v2)
Reviewers: Mathieu Desnoyers, Huang Ying, Hidetoshi Seto,
and Frederic Weisbecker.
Written for: 2.6.31
Terminology used in this Document
---------------------------------
tail - where new writes happen in the ring buffer.
head - where new reads happen in the ring buffer.
producer - the task that writes into the ring buffer (same as writer)
writer - same as producer
consumer - the task that reads from the buffer (same as reader)
reader - same as consumer.
reader_page - A page outside the ring buffer used solely (for the most part)
by the reader.
head_page - a pointer to the page that the reader will use next
tail_page - a pointer to the page that will be written to next
commit_page - a pointer to the page with the last finished non nested write.
cmpxchg - hardware assisted atomic transaction that performs the following:
A = B iff previous A == C
R = cmpxchg(A, C, B) is saying that we replace A with B if and only if
current A is equal to C, and we put the old (current) A into R
R gets the previous A regardless if A is updated with B or not.
To see if the update was successful a compare of R == C may be used.
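In kernel C this corresponds to the cmpxchg() primitive; a minimal
sketch of the semantics just described (the demo function is
hypothetical):

	static void cmpxchg_demo(void)
	{
		unsigned long A = 5;
		unsigned long B = 7, C = 5, R;

		R = cmpxchg(&A, C, B);	/* A becomes B only because A == C */
		if (R == C)
			pr_info("swap succeeded; A is now %lu\n", A);	/* 7 */
		else
			pr_info("swap failed; A is still %lu\n", A);
	}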
The Generic Ring Buffer
-----------------------
The ring buffer can be used in either an overwrite mode or in
producer/consumer mode.
Producer/consumer mode is where, if the producer were to fill up the
buffer before the consumer could free up anything, the producer
will stop writing to the buffer. This will lose the most recent events.
Overwrite mode is where, if the producer were to fill up the buffer
before the consumer could free up anything, the producer will
overwrite the older data. This will lose the oldest events.
No two writers can write at the same time (on the same per-cpu buffer),
but a writer may interrupt another writer; the interrupting writer
must finish writing before the interrupted writer may continue. This is
very important to the algorithm. The writers act like a "stack". The
way interrupts work enforces this behavior.
writer1 start
<preempted> writer2 start
<preempted> writer3 start
writer3 finishes
writer2 finishes
writer1 finishes
This is very much like a writer being preempted by an interrupt and
the interrupt doing a write as well.
Readers can happen at any time. But no two readers may run at the
same time, nor can a reader preempt/interrupt another reader. A reader
can not preempt/interrupt a writer, but it may read/consume from the
buffer at the same time as a writer is writing, but the reader must be
on another processor to do so. A reader may read on its own processor
and can be preempted by a writer.
A writer can preempt a reader, but a reader can not preempt a writer.
But a reader can read the buffer at the same time (on another processor)
as a writer.
The ring buffer is made up of a list of pages held together by a linked list.
At initialization a reader page is allocated for the reader that is not
part of the ring buffer.
The head_page, tail_page and commit_page are all initialized to point
to the same page.
The reader page is initialized to have its next pointer pointing to
the head page, and its previous pointer pointing to a page before
the head page.
The reader has its own page to use. At start up time, this page is
allocated but is not attached to the list. When the reader wants
to read from the buffer, if its page is empty (like it is on start up)
it will swap its page with the head_page. The old reader page will
become part of the ring buffer and the head_page will be removed.
The page after the inserted page (old reader_page) will become the
new head page.
Once the new page is given to the reader, the reader could do what
it wants with it, as long as a writer has left that page.
A sample of how the reader page is swapped: Note this does not
show the head page in the buffer, it is for demonstrating a swap
only.
+------+
|reader| RING BUFFER
|page |
+------+
+---+ +---+ +---+
| |-->| |-->| |
| |<--| |<--| |
+---+ +---+ +---+
^ | ^ |
| +-------------+ |
+-----------------+
+------+
|reader| RING BUFFER
|page |-------------------+
+------+ v
| +---+ +---+ +---+
| | |-->| |-->| |
| | |<--| |<--| |<-+
| +---+ +---+ +---+ |
| ^ | ^ | |
| | +-------------+ | |
| +-----------------+ |
+------------------------------------+
+------+
|reader| RING BUFFER
|page |-------------------+
+------+ <---------------+ v
| ^ +---+ +---+ +---+
| | | |-->| |-->| |
| | | | | |<--| |<-+
| | +---+ +---+ +---+ |
| | | ^ | |
| | +-------------+ | |
| +-----------------------------+ |
+------------------------------------+
+------+
|buffer| RING BUFFER
|page |-------------------+
+------+ <---------------+ v
| ^ +---+ +---+ +---+
| | | | | |-->| |
| | New | | | |<--| |<-+
| | Reader +---+ +---+ +---+ |
| | page ----^ | |
| | | |
| +-----------------------------+ |
+------------------------------------+
It is possible that the page swapped is the commit page and the tail page,
if what is in the ring buffer is less than what is held in a buffer page.
reader page commit page tail page
| | |
v | |
+---+ | |
| |<----------+ |
| |<------------------------+
| |------+
+---+ |
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
This case is still valid for this algorithm.
When the writer leaves the page, it simply goes into the ring buffer
since the reader page still points to the next location in the ring
buffer.
The main pointers:
reader page - The page used solely by the reader and is not part
of the ring buffer (may be swapped in)
head page - the next page in the ring buffer that will be swapped
with the reader page.
tail page - the page where the next write will take place.
commit page - the page that last finished a write.
The commit page is only updated by the outermost writer in the
writer stack. A writer that preempts another writer will not move the
commit page.
When data is written into the ring buffer, a position is reserved
in the ring buffer and passed back to the writer. When the writer
is finished writing data into that position, it commits the write.
Another write (or a read) may take place at anytime during this
transaction. If another write happens it must finish before continuing
with the previous write.
Write reserve:
Buffer page
+---------+
|written |
+---------+ <--- given back to writer (current commit)
|reserved |
+---------+ <--- tail pointer
| empty |
+---------+
Write commit:
Buffer page
+---------+
|written |
+---------+
|written |
+---------+ <--- next position for write (current commit)
| empty |
+---------+
If a write happens after the first reserve:
Buffer page
+---------+
|written |
+---------+ <-- current commit
|reserved |
+---------+ <--- given back to second writer
|reserved |
+---------+ <--- tail pointer
After second writer commits:
Buffer page
+---------+
|written |
+---------+ <--(last full commit)
|reserved |
+---------+
|pending |
|commit |
+---------+ <--- tail pointer
When the first writer commits:
Buffer page
+---------+
|written |
+---------+
|written |
+---------+
|written |
+---------+ <--(last full commit and tail pointer)
The commit pointer points to the last write location that was
committed without preempting another write. When a write that
preempted another write is committed, it only becomes a pending commit
and will not be a full commit till all writes have been committed.
The commit page points to the page that has the last full commit.
The tail page points to the page with the last write (before
committing).
The tail page is always equal to or after the commit page. It may
be several pages ahead. If the tail page catches up to the commit
page then no more writes may take place (regardless of the mode
of the ring buffer: overwrite or producer/consumer).
The order of pages is:
head page
commit page
tail page
Possible scenario:
tail page
head page commit page |
| | |
v v v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
There is a special case where the head page is after the commit page
and possibly the tail page. That is when the commit (and tail) page has been
swapped with the reader page. This is because the head page is always
part of the ring buffer, but the reader page is not. Whenever less
than a full page has been committed inside the ring buffer,
and a reader swaps out a page, it will be swapping out the commit page.
reader page commit page tail page
| | |
v | |
+---+ | |
| |<----------+ |
| |<------------------------+
| |------+
+---+ |
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
^
|
head page
In this case, the head page will not move when the tail and commit
move back into the ring buffer.
The reader can not swap a page into the ring buffer if the commit page
is still on that page. If the read meets the last commit (real commit
not pending or reserved), then there is nothing more to read.
The buffer is considered empty until another full commit finishes.
When the tail meets the head page, if the buffer is in overwrite mode,
the head page will be pushed ahead one. If the buffer is in producer/consumer
mode, the write will fail.
Overwrite mode:
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
^
|
head page
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
^
|
head page
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
^
|
head page
Note, the reader page will still point to the previous head page.
But when a swap takes place, it will use the most recent head page.
Making the Ring Buffer Lockless:
--------------------------------
The main idea behind the lockless algorithm is to combine the moving
of the head_page pointer with the swapping of pages with the reader.
State flags are placed inside the pointer to the page. To do this,
each page must be aligned in memory by 4 bytes. This will allow the 2
least significant bits of the address to be used as flags, since
they will always be zero for the address. To get the address,
simply mask out the flags.
MASK = ~3
address & MASK
Two flags will be kept by these two bits:
HEADER - the page being pointed to is a head page
UPDATE - the page being pointed to is being updated by a writer
and was or is about to be a head page.
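A hedged sketch of that tagging scheme (type and helper names are
hypothetical, not the actual ring-buffer code):

	#define RB_FLAG_MASK	3UL	/* two low bits hold HEADER/UPDATE */

	struct buffer_page;		/* 4-byte aligned, so both bits are free */

	static struct buffer_page *rb_page_ptr(unsigned long tagged)
	{
		return (struct buffer_page *)(tagged & ~RB_FLAG_MASK);
	}

	static unsigned long rb_page_flags(unsigned long tagged)
	{
		return tagged & RB_FLAG_MASK;
	}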
reader page
|
v
+---+
| |------+
+---+ |
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-H->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The above pointer "-H->" would have the HEADER flag set. That is
the next page is the next page to be swapped out by the reader.
This pointer means the next page is the head page.
When the tail page meets the head pointer, it will use cmpxchg to
change the pointer to the UPDATE state:
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-H->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
"-U->" represents a pointer in the UPDATE state.
Any access to the reader will need to take some sort of lock to serialize
the readers. But the writers will never take a lock to write to the
ring buffer. This means we only need to worry about a single reader,
and writes only preempt in "stack" formation.
When the reader tries to swap the page with the ring buffer, it
will also use cmpxchg. If the flag bit in the pointer to the
head page does not have the HEADER flag set, the compare will fail
and the reader will need to look for the new head page and try again.
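A hedged sketch of that reader-side retry loop, reusing the
hypothetical helpers above (prev, next_tagged, RB_PAGE_HEAD,
rb_find_head_prev() and reader_page are likewise invented; the real
code lives in kernel/trace/ring_buffer.c):

	for (;;) {
		unsigned long old = READ_ONCE(prev->next_tagged);

		/* HEADER flag gone: a writer moved the head page. */
		if (!(rb_page_flags(old) & RB_PAGE_HEAD)) {
			prev = rb_find_head_prev(buffer);	/* look again */
			continue;
		}

		/* Atomically splice the reader page in at the head. */
		if (cmpxchg(&prev->next_tagged, old,
			    (unsigned long)reader_page) == old)
			break;
	}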
Note, the UPDATE and HEADER flags are never set at the same time.
The reader swaps the reader page as follows:
+------+
|reader| RING BUFFER
|page |
+------+
+---+ +---+ +---+
| |--->| |--->| |
| |<---| |<---| |
+---+ +---+ +---+
^ | ^ |
| +---------------+ |
+-----H-------------+
The reader sets the reader page next pointer as HEADER to the page after
the head page.
+------+
|reader| RING BUFFER
|page |-------H-----------+
+------+ v
| +---+ +---+ +---+
| | |--->| |--->| |
| | |<---| |<---| |<-+
| +---+ +---+ +---+ |
| ^ | ^ | |
| | +---------------+ | |
| +-----H-------------+ |
+--------------------------------------+
It does a cmpxchg with the pointer to the previous head page to make it
point to the reader page. Note that the new pointer does not have the HEADER
flag set. This action atomically moves the head page forward.
+------+
|reader| RING BUFFER
|page |-------H-----------+
+------+ v
| ^ +---+ +---+ +---+
| | | |-->| |-->| |
| | | |<--| |<--| |<-+
| | +---+ +---+ +---+ |
| | | ^ | |
| | +-------------+ | |
| +-----------------------------+ |
+------------------------------------+
After the new head page is set, the previous pointer of the head page is
updated to the reader page.
+------+
|reader| RING BUFFER
|page |-------H-----------+
+------+ <---------------+ v
| ^ +---+ +---+ +---+
| | | |-->| |-->| |
| | | | | |<--| |<-+
| | +---+ +---+ +---+ |
| | | ^ | |
| | +-------------+ | |
| +-----------------------------+ |
+------------------------------------+
+------+
|buffer| RING BUFFER
|page |-------H-----------+ <--- New head page
+------+ <---------------+ v
| ^ +---+ +---+ +---+
| | | | | |-->| |
| | New | | | |<--| |<-+
| | Reader +---+ +---+ +---+ |
| | page ----^ | |
| | | |
| +-----------------------------+ |
+------------------------------------+
Another important point: the page that the reader page points back to
by its previous pointer (the one that now points to the new head page)
never points back to the reader page. That is because the reader page is
not part of the ring buffer. Traversing the ring buffer via the next pointers
will always stay in the ring buffer. Traversing the ring buffer via the
prev pointers may not.
Note that the way to determine a reader page is simply by examining the previous
pointer of the page. If the next pointer of the previous page does not
point back to the original page, then the original page is a reader page:
+--------+
| reader | next +----+
| page |-------->| |<====== (buffer page)
+--------+ +----+
| | ^
| v | next
prev | +----+
+------------->| |
+----+
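In code, that check could look like the following sketch (again using
the hypothetical helpers from above, not the kernel's actual function):

/* a page is the reader page iff its previous neighbor's next
   pointer does not point back to it */
static int rb_is_reader_page(struct buffer_page *page)
{
        struct buffer_page *prev = rb_page(page->prev);

        return rb_page(prev->next) != page;
}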
The way the head page moves forward:
When the tail page meets the head page and the buffer is in overwrite mode
and more writes take place, the head page must be moved forward before the
writer may move the tail page. The way this is done is that the writer
performs a cmpxchg to convert the pointer to the head page from the
HEADER state to the UPDATE state. Once this is done, the reader will
not be able to swap the head page from the buffer, nor will it be able to
move the head page, until the writer is finished with the move.
This eliminates any races between the reader and the writer over this
pointer. The reader must spin, and this is why the reader cannot preempt
the writer.
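Sketched with the same hypothetical helpers, the writer's transition is
a single flag flip on the pointer word; the page being pointed to does
not change.

unsigned long old, new;

old = rb_make_ptr(head_page, RB_PAGE_HEAD);
new = rb_make_ptr(head_page, RB_PAGE_UPDATE);

if (__sync_val_compare_and_swap(&prev_page->next, old, new) == old) {
        /* this writer now owns the head move; the reader will
           spin until the pointer leaves the UPDATE state */
}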
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-H->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The following page will be made into the new head page.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
After the new head page has been set, we can set the old head page
pointer back to NORMAL.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
After the head page has been moved, the tail page may now move forward.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The above are the trivial updates. Now for the more complex scenarios.
As stated before, if enough writes preempt the first write, the
tail page may make it all the way around the buffer and meet the commit
page. At this time, we must start dropping writes (usually with some kind
of warning to the user). But what happens if the commit was still on the
reader page? The commit page is not part of the ring buffer. The tail page
must account for this.
reader page commit page
| |
v |
+---+ |
| |<----------+
| |
| |------+
+---+ |
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-H->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
^
|
tail page
If the tail page were simply to push the head page forward, then the
commit, when leaving the reader page, would not be pointing to the
correct page.
The solution to this is to test if the commit page is on the reader page
before pushing the head page. If it is, then it can be assumed that the
tail page wrapped the buffer, and we must drop new writes.
This is not a race condition, because the commit page can only be moved
by the outermost writer (the writer that was preempted).
This means that the commit will not move while a writer is moving the
tail page. The reader can not swap the reader page if it is also being
used as the commit page. The reader can simply check that the commit
is off the reader page. Once the commit page leaves the reader page
it will never go back on it unless a reader does another swap with the
buffer page that is also the commit page.
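As a sketch (the struct and field names are invented for illustration),
the test the tail-moving writer performs is a plain pointer comparison
done before pushing the head page:

struct ring_buffer_cpu {                /* hypothetical */
        struct buffer_page *commit_page;
        struct buffer_page *reader_page;
};

/* the commit page only moves under the outermost writer, so this
   test is stable while we are trying to move the tail page */
static int rb_tail_wrapped(struct ring_buffer_cpu *cpu_buffer)
{
        return cpu_buffer->commit_page == cpu_buffer->reader_page;
}

If this returns true, the tail page has wrapped the buffer and the new
write must be dropped.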
Nested writes
-------------
When pushing the tail page forward, we must first push the head page
forward if the head page is the next page. If the head page is not
the next page, the tail page is simply updated with a cmpxchg.
Only writers move the tail page, and this must be done atomically to
protect against nested writers.
temp_page = tail_page;
next_page = temp_page->next;
cmpxchg(&tail_page, temp_page, next_page);
The above will update the tail page if it is still pointing to the expected
page. If this fails, a nested write pushed it forward, and the current write
does not need to push it.
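Fleshed out slightly (still only a sketch, with GCC's
__sync_val_compare_and_swap() standing in for the kernel's cmpxchg()),
the tail move with its failure case handled:

/* tail_page: the tail pointer variable, declared elsewhere */
struct buffer_page *temp_page, *next_page;

temp_page = tail_page;
next_page = rb_page(temp_page->next);

if (__sync_val_compare_and_swap(&tail_page, temp_page, next_page)
    != temp_page) {
        /* a nested writer already pushed the tail forward; do not
           push it again, just re-read tail_page and try to reserve
           space on the new tail page */
}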
temp page
|
v
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
Nested write comes in and moves the tail page forward:
tail page (moved by nested writer)
temp page |
| |
v v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The above would fail the cmpxchg, but since the tail page has already
been moved forward, the writer will just try again to reserve storage
on the new tail page.
But the moving of the head page is a bit more complex.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-H->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The writer converts the head page pointer to UPDATE.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
But suppose a nested writer preempts here. It will see that the
next page is a head page, and it will also detect that it is
nested and save that information. The detection is the fact that
it sees the UPDATE flag instead of a HEADER or NORMAL pointer.
The nested writer will set the new head page pointer.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
But it will not reset the UPDATE pointer back to NORMAL. Only the
writer that converted a pointer from HEADER to UPDATE will convert
it back to NORMAL.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
After the nested writer finishes, the outermost writer will convert
the UPDATE pointer to NORMAL.
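A sketch of that final transition, with the same invented helpers:

unsigned long old, new;

/* only the writer that set UPDATE may clear it; nested writers
   leave the flag alone */
old = rb_make_ptr(old_head, RB_PAGE_UPDATE);
new = rb_make_ptr(old_head, RB_PAGE_NORMAL);
__sync_val_compare_and_swap(&prev_page->next, old, new);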
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
It can be even more complex if several nested writes came in and moved
the tail page ahead several pages:
(first writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-H->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The writer converts the head page pointer to UPDATE.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The next writer comes in, sees the UPDATE flag, and sets up the new
head page.
(second writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The nested writer moves the tail page forward, but does not set the old
UPDATE pointer to NORMAL because it is not the outermost writer.
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
Another writer preempts and sees that the page after the tail page is a
head page. It changes the pointer from HEADER to UPDATE.
(third writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-U->| |--->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The writer will move the head page forward:
(third writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-U->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
Since the third writer is the one that changed the HEADER flag to UPDATE,
it will convert it back to NORMAL:
(third writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
Then it will move the tail page and return to the second writer.
(second writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The second writer will fail to move the tail page because it was already
moved, so it will try again and add its data to the new tail page.
It will return to the first writer.
(first writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
The first writer cannot atomically test whether the tail page moved
while it updates the head page. It will simply update the head page
to what it thinks is the new head page.
(first writer)
tail page
|
v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
Since the cmpxchg returns the old value of the pointer, the first writer
will see that it succeeded in updating the pointer from NORMAL to HEADER.
But as we can see, this is not good enough. It must also check whether
the tail page is either where it used to be or on the next page:
(first writer)
A B tail page
| | |
v v v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |-H->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
If the tail page is neither A nor B, then the writer must reset the
pointer back to NORMAL. Because it only needs to worry about nested
writers, it only needs to perform this check after setting the HEADER
flag.
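In sketch form (variable names invented), the first writer's validation
after its cmpxchg succeeds:

unsigned long old, new;

/* a nested writer may have raced the tail past this point; if the
   tail page is neither where we read it (A) nor one page beyond
   (B), undo the HEADER flag we just set */
if (tail_page != page_A && tail_page != page_B) {
        old = rb_make_ptr(new_head, RB_PAGE_HEAD);
        new = rb_make_ptr(new_head, RB_PAGE_NORMAL);
        __sync_val_compare_and_swap(&prev_page->next, old, new);
}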
(first writer)
A B tail page
| | |
v v v
+---+ +---+ +---+ +---+
<---| |--->| |-U->| |--->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+
Now the writer can update the head page. This is also why the old head
page pointer must remain in the UPDATE state and be reset only by the
outermost writer; it prevents the reader from seeing an incorrect head
page.
(first writer)
A B tail page
| | |
v v v
+---+ +---+ +---+ +---+
<---| |--->| |--->| |--->| |-H->
--->| |<---| |<---| |<---| |<---
+---+ +---+ +---+ +---+

@ -439,7 +439,7 @@ F: drivers/hwmon/ams/
AMSO1100 RNIC DRIVER
M: Tom Tucker <tom@opengridcomputing.com>
M: Steve Wise <swise@opengridcomputing.com>
L: general@lists.openfabrics.org
L: linux-rdma@vger.kernel.org
S: Maintained
F: drivers/infiniband/hw/amso1100/
@ -1494,7 +1494,7 @@ F: drivers/net/cxgb3/
CXGB3 IWARP RNIC DRIVER (IW_CXGB3)
M: Steve Wise <swise@chelsio.com>
L: general@lists.openfabrics.org
L: linux-rdma@vger.kernel.org
W: http://www.openfabrics.org
S: Supported
F: drivers/infiniband/hw/cxgb3/
@ -1868,7 +1868,7 @@ F: fs/efs/
EHCA (IBM GX bus InfiniBand adapter) DRIVER
M: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
M: Christoph Raisch <raisch@de.ibm.com>
L: general@lists.openfabrics.org
L: linux-rdma@vger.kernel.org
S: Supported
F: drivers/infiniband/hw/ehca/
@ -2552,7 +2552,7 @@ INFINIBAND SUBSYSTEM
M: Roland Dreier <rolandd@cisco.com>
M: Sean Hefty <sean.hefty@intel.com>
M: Hal Rosenstock <hal.rosenstock@gmail.com>
L: general@lists.openfabrics.org (moderated for non-subscribers)
L: linux-rdma@vger.kernel.org
W: http://www.openib.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git
S: Supported
@ -2729,7 +2729,7 @@ F: drivers/net/ipg.c
IPATH DRIVER
M: Ralph Campbell <infinipath@qlogic.com>
L: general@lists.openfabrics.org
L: linux-rdma@vger.kernel.org
T: git git://git.qlogic.com/ipath-linux-2.6
S: Supported
F: drivers/infiniband/hw/ipath/
@ -3485,7 +3485,7 @@ F: drivers/scsi/NCR_D700.*
NETEFFECT IWARP RNIC DRIVER (IW_NES)
M: Faisal Latif <faisal.latif@intel.com>
M: Chien Tung <chien.tin.tung@intel.com>
L: general@lists.openfabrics.org
L: linux-rdma@vger.kernel.org
W: http://www.neteffect.com
S: Supported
F: drivers/infiniband/hw/nes/

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 31
EXTRAVERSION = -rc8
EXTRAVERSION =
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*

@ -30,6 +30,18 @@ config OPROFILE_IBS
If unsure, say N.
config OPROFILE_EVENT_MULTIPLEX
bool "OProfile multiplexing support (EXPERIMENTAL)"
default n
depends on OPROFILE && X86
help
The number of hardware counters is limited. The multiplexing
feature enables OProfile to gather more events than the hardware
provides counters for. This is realized by switching between
events at a user-specified time interval.
If unsure, say N.
config HAVE_OPROFILE
bool

@ -75,6 +75,7 @@ register struct thread_info *__current_thread_info __asm__("$8");
#define TIF_UAC_SIGBUS 7
#define TIF_MEMDIE 8
#define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal */
#define TIF_NOTIFY_RESUME 10 /* callback before returning to user */
#define TIF_FREEZE 16 /* is freezing for suspend */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
@ -82,10 +83,12 @@ register struct thread_info *__current_thread_info __asm__("$8");
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_FREEZE (1<<TIF_FREEZE)
/* Work to do on interrupt/exception return. */
#define _TIF_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED)
#define _TIF_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
_TIF_NOTIFY_RESUME)
/* Work to do on any return to userspace. */
#define _TIF_ALLWORK_MASK (_TIF_WORK_MASK \

@ -20,6 +20,7 @@
#include <linux/binfmts.h>
#include <linux/bitops.h>
#include <linux/syscalls.h>
#include <linux/tracehook.h>
#include <asm/uaccess.h>
#include <asm/sigcontext.h>
@ -683,4 +684,11 @@ do_notify_resume(struct pt_regs *regs, struct switch_stack *sw,
{
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs, sw, r0, r19);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -130,11 +130,13 @@ extern void vfp_sync_state(struct thread_info *thread);
* TIF_SYSCALL_TRACE - syscall trace active
* TIF_SIGPENDING - signal pending
* TIF_NEED_RESCHED - rescheduling necessary
* TIF_NOTIFY_RESUME - callback before returning to user
* TIF_USEDFPU - FPU was used by this task this quantum (SMP)
* TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED
*/
#define TIF_SIGPENDING 0
#define TIF_NEED_RESCHED 1
#define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
#define TIF_SYSCALL_TRACE 8
#define TIF_POLLING_NRFLAG 16
#define TIF_USING_IWMMXT 17
@ -143,6 +145,7 @@ extern void vfp_sync_state(struct thread_info *thread);
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_USING_IWMMXT (1 << TIF_USING_IWMMXT)

@ -51,7 +51,7 @@ fast_work_pending:
work_pending:
tst r1, #_TIF_NEED_RESCHED
bne work_resched
tst r1, #_TIF_SIGPENDING
tst r1, #_TIF_SIGPENDING|_TIF_NOTIFY_RESUME
beq no_work_pending
mov r0, sp @ 'regs'
mov r2, why @ 'syscall'

@ -12,6 +12,7 @@
#include <linux/personality.h>
#include <linux/freezer.h>
#include <linux/uaccess.h>
#include <linux/tracehook.h>
#include <asm/elf.h>
#include <asm/cacheflush.h>
@ -707,4 +708,11 @@ do_notify_resume(struct pt_regs *regs, unsigned int thread_flags, int syscall)
{
if (thread_flags & _TIF_SIGPENDING)
do_signal(&current->blocked, regs, syscall);
if (thread_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -128,6 +128,7 @@ static struct omap_mcbsp_platform_data omap34xx_mcbsp_pdata[] = {
.rx_irq = INT_24XX_MCBSP1_IRQ_RX,
.tx_irq = INT_24XX_MCBSP1_IRQ_TX,
.ops = &omap2_mcbsp_ops,
.buffer_size = 0x6F,
},
{
.phys_base = OMAP34XX_MCBSP2_BASE,
@ -136,6 +137,7 @@ static struct omap_mcbsp_platform_data omap34xx_mcbsp_pdata[] = {
.rx_irq = INT_24XX_MCBSP2_IRQ_RX,
.tx_irq = INT_24XX_MCBSP2_IRQ_TX,
.ops = &omap2_mcbsp_ops,
.buffer_size = 0x3FF,
},
{
.phys_base = OMAP34XX_MCBSP3_BASE,
@ -144,6 +146,7 @@ static struct omap_mcbsp_platform_data omap34xx_mcbsp_pdata[] = {
.rx_irq = INT_24XX_MCBSP3_IRQ_RX,
.tx_irq = INT_24XX_MCBSP3_IRQ_TX,
.ops = &omap2_mcbsp_ops,
.buffer_size = 0x6F,
},
{
.phys_base = OMAP34XX_MCBSP4_BASE,
@ -152,6 +155,7 @@ static struct omap_mcbsp_platform_data omap34xx_mcbsp_pdata[] = {
.rx_irq = INT_24XX_MCBSP4_IRQ_RX,
.tx_irq = INT_24XX_MCBSP4_IRQ_TX,
.ops = &omap2_mcbsp_ops,
.buffer_size = 0x6F,
},
{
.phys_base = OMAP34XX_MCBSP5_BASE,
@ -160,6 +164,7 @@ static struct omap_mcbsp_platform_data omap34xx_mcbsp_pdata[] = {
.rx_irq = INT_24XX_MCBSP5_IRQ_RX,
.tx_irq = INT_24XX_MCBSP5_IRQ_TX,
.ops = &omap2_mcbsp_ops,
.buffer_size = 0x6F,
},
};
#define OMAP34XX_MCBSP_PDATA_SZ ARRAY_SIZE(omap34xx_mcbsp_pdata)

@ -3,10 +3,12 @@
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/ac97_codec.h>
/*
* @reset_gpio: AC97 reset gpio (normally gpio113 or gpio95)
* a -1 value means no gpio will be used for reset
* @codec_pdata: AC97 codec platform_data
* reset_gpio should only be specified for pxa27x CPUs where a silicon
* bug prevents correct operation of the reset line. If not specified,
@ -20,6 +22,7 @@ typedef struct {
void (*resume)(void *);
void *priv;
int reset_gpio;
void *codec_pdata[AC97_BUS_MAX_DEVICES];
} pxa2xx_audio_ops_t;
extern void pxa_set_ac97_info(pxa2xx_audio_ops_t *ops);

@ -1127,6 +1127,11 @@ int omap_dma_running(void)
void omap_dma_link_lch(int lch_head, int lch_queue)
{
if (omap_dma_in_1510_mode()) {
if (lch_head == lch_queue) {
dma_write(dma_read(CCR(lch_head)) | (3 << 8),
CCR(lch_head));
return;
}
printk(KERN_ERR "DMA linking is not supported in 1510 mode\n");
BUG();
return;
@ -1149,6 +1154,11 @@ EXPORT_SYMBOL(omap_dma_link_lch);
void omap_dma_unlink_lch(int lch_head, int lch_queue)
{
if (omap_dma_in_1510_mode()) {
if (lch_head == lch_queue) {
dma_write(dma_read(CCR(lch_head)) & ~(3 << 8),
CCR(lch_head));
return;
}
printk(KERN_ERR "DMA linking is not supported in 1510 mode\n");
BUG();
return;

@ -134,6 +134,11 @@
#define OMAP_MCBSP_REG_XCERG 0x74
#define OMAP_MCBSP_REG_XCERH 0x78
#define OMAP_MCBSP_REG_SYSCON 0x8C
#define OMAP_MCBSP_REG_THRSH2 0x90
#define OMAP_MCBSP_REG_THRSH1 0x94
#define OMAP_MCBSP_REG_IRQST 0xA0
#define OMAP_MCBSP_REG_IRQEN 0xA4
#define OMAP_MCBSP_REG_WAKEUPEN 0xA8
#define OMAP_MCBSP_REG_XCCR 0xAC
#define OMAP_MCBSP_REG_RCCR 0xB0
@ -249,8 +254,27 @@
#define RDISABLE 0x0001
/********************** McBSP SYSCONFIG bit definitions ********************/
#define CLOCKACTIVITY(value) ((value)<<8)
#define SIDLEMODE(value) ((value)<<3)
#define ENAWAKEUP 0x0004
#define SOFTRST 0x0002
/********************** McBSP DMA operating modes **************************/
#define MCBSP_DMA_MODE_ELEMENT 0
#define MCBSP_DMA_MODE_THRESHOLD 1
#define MCBSP_DMA_MODE_FRAME 2
/********************** McBSP WAKEUPEN bit definitions *********************/
#define XEMPTYEOFEN 0x4000
#define XRDYEN 0x0400
#define XEOFEN 0x0200
#define XFSXEN 0x0100
#define XSYNCERREN 0x0080
#define RRDYEN 0x0008
#define REOFEN 0x0004
#define RFSREN 0x0002
#define RSYNCERREN 0x0001
/* we don't do multichannel for now */
struct omap_mcbsp_reg_cfg {
u16 spcr2;
@ -344,6 +368,9 @@ struct omap_mcbsp_platform_data {
u8 dma_rx_sync, dma_tx_sync;
u16 rx_irq, tx_irq;
struct omap_mcbsp_ops *ops;
#ifdef CONFIG_ARCH_OMAP34XX
u16 buffer_size;
#endif
};
struct omap_mcbsp {
@ -377,6 +404,11 @@ struct omap_mcbsp {
struct omap_mcbsp_platform_data *pdata;
struct clk *iclk;
struct clk *fclk;
#ifdef CONFIG_ARCH_OMAP34XX
int dma_op_mode;
u16 max_tx_thres;
u16 max_rx_thres;
#endif
};
extern struct omap_mcbsp **mcbsp_ptr;
extern int omap_mcbsp_count;
@ -385,10 +417,25 @@ int omap_mcbsp_init(void);
void omap_mcbsp_register_board_cfg(struct omap_mcbsp_platform_data *config,
int size);
void omap_mcbsp_config(unsigned int id, const struct omap_mcbsp_reg_cfg * config);
#ifdef CONFIG_ARCH_OMAP34XX
void omap_mcbsp_set_tx_threshold(unsigned int id, u16 threshold);
void omap_mcbsp_set_rx_threshold(unsigned int id, u16 threshold);
u16 omap_mcbsp_get_max_tx_threshold(unsigned int id);
u16 omap_mcbsp_get_max_rx_threshold(unsigned int id);
int omap_mcbsp_get_dma_op_mode(unsigned int id);
#else
static inline void omap_mcbsp_set_tx_threshold(unsigned int id, u16 threshold)
{ }
static inline void omap_mcbsp_set_rx_threshold(unsigned int id, u16 threshold)
{ }
static inline u16 omap_mcbsp_get_max_tx_threshold(unsigned int id) { return 0; }
static inline u16 omap_mcbsp_get_max_rx_threshold(unsigned int id) { return 0; }
static inline int omap_mcbsp_get_dma_op_mode(unsigned int id) { return 0; }
#endif
int omap_mcbsp_request(unsigned int id);
void omap_mcbsp_free(unsigned int id);
void omap_mcbsp_start(unsigned int id);
void omap_mcbsp_stop(unsigned int id);
void omap_mcbsp_start(unsigned int id, int tx, int rx);
void omap_mcbsp_stop(unsigned int id, int tx, int rx);
void omap_mcbsp_xmit_word(unsigned int id, u32 word);
u32 omap_mcbsp_recv_word(unsigned int id);

@ -198,6 +198,170 @@ void omap_mcbsp_config(unsigned int id, const struct omap_mcbsp_reg_cfg *config)
}
EXPORT_SYMBOL(omap_mcbsp_config);
#ifdef CONFIG_ARCH_OMAP34XX
/*
* omap_mcbsp_set_tx_threshold configures the transmit threshold;
* the threshold value and handler can be configured here.
*/
void omap_mcbsp_set_tx_threshold(unsigned int id, u16 threshold)
{
struct omap_mcbsp *mcbsp;
void __iomem *io_base;
if (!cpu_is_omap34xx())
return;
if (!omap_mcbsp_check_valid_id(id)) {
printk(KERN_ERR "%s: Invalid id (%d)\n", __func__, id + 1);
return;
}
mcbsp = id_to_mcbsp_ptr(id);
io_base = mcbsp->io_base;
OMAP_MCBSP_WRITE(io_base, THRSH2, threshold);
}
EXPORT_SYMBOL(omap_mcbsp_set_tx_threshold);
/*
* omap_mcbsp_set_rx_threshold configures the receive threshold;
* the threshold value and handler can be configured here.
*/
void omap_mcbsp_set_rx_threshold(unsigned int id, u16 threshold)
{
struct omap_mcbsp *mcbsp;
void __iomem *io_base;
if (!cpu_is_omap34xx())
return;
if (!omap_mcbsp_check_valid_id(id)) {
printk(KERN_ERR "%s: Invalid id (%d)\n", __func__, id + 1);
return;
}
mcbsp = id_to_mcbsp_ptr(id);
io_base = mcbsp->io_base;
OMAP_MCBSP_WRITE(io_base, THRSH1, threshold);
}
EXPORT_SYMBOL(omap_mcbsp_set_rx_threshold);
/*
* omap_mcbsp_get_max_tx_thres just returns the currently configured
* maximum threshold for transmission
*/
u16 omap_mcbsp_get_max_tx_threshold(unsigned int id)
{
struct omap_mcbsp *mcbsp;
if (!omap_mcbsp_check_valid_id(id)) {
printk(KERN_ERR "%s: Invalid id (%d)\n", __func__, id + 1);
return -ENODEV;
}
mcbsp = id_to_mcbsp_ptr(id);
return mcbsp->max_tx_thres;
}
EXPORT_SYMBOL(omap_mcbsp_get_max_tx_threshold);
/*
* omap_mcbsp_get_max_rx_thres just returns the currently configured
* maximum threshold for reception
*/
u16 omap_mcbsp_get_max_rx_threshold(unsigned int id)
{
struct omap_mcbsp *mcbsp;
if (!omap_mcbsp_check_valid_id(id)) {
printk(KERN_ERR "%s: Invalid id (%d)\n", __func__, id + 1);
return -ENODEV;
}
mcbsp = id_to_mcbsp_ptr(id);
return mcbsp->max_rx_thres;
}
EXPORT_SYMBOL(omap_mcbsp_get_max_rx_threshold);
/*
* omap_mcbsp_get_dma_op_mode just returns the currently configured
* operating mode for the mcbsp channel
*/
int omap_mcbsp_get_dma_op_mode(unsigned int id)
{
struct omap_mcbsp *mcbsp;
int dma_op_mode;
if (!omap_mcbsp_check_valid_id(id)) {
printk(KERN_ERR "%s: Invalid id (%u)\n", __func__, id + 1);
return -ENODEV;
}
mcbsp = id_to_mcbsp_ptr(id);
spin_lock_irq(&mcbsp->lock);
dma_op_mode = mcbsp->dma_op_mode;
spin_unlock_irq(&mcbsp->lock);
return dma_op_mode;
}
EXPORT_SYMBOL(omap_mcbsp_get_dma_op_mode);
static inline void omap34xx_mcbsp_request(struct omap_mcbsp *mcbsp)
{
/*
* Enable wakeup behavior, smart idle and all wakeups
* REVISIT: some wakeups may be unnecessary
*/
if (cpu_is_omap34xx()) {
u16 syscon;
syscon = OMAP_MCBSP_READ(mcbsp->io_base, SYSCON);
syscon &= ~(ENAWAKEUP | SIDLEMODE(0x03) | CLOCKACTIVITY(0x03));
spin_lock_irq(&mcbsp->lock);
if (mcbsp->dma_op_mode == MCBSP_DMA_MODE_THRESHOLD) {
syscon |= (ENAWAKEUP | SIDLEMODE(0x02) |
CLOCKACTIVITY(0x02));
OMAP_MCBSP_WRITE(mcbsp->io_base, WAKEUPEN,
XRDYEN | RRDYEN);
} else {
syscon |= SIDLEMODE(0x01);
}
spin_unlock_irq(&mcbsp->lock);
OMAP_MCBSP_WRITE(mcbsp->io_base, SYSCON, syscon);
}
}
static inline void omap34xx_mcbsp_free(struct omap_mcbsp *mcbsp)
{
/*
* Disable wakeup behavior, smart idle and all wakeups
*/
if (cpu_is_omap34xx()) {
u16 syscon;
syscon = OMAP_MCBSP_READ(mcbsp->io_base, SYSCON);
syscon &= ~(ENAWAKEUP | SIDLEMODE(0x03) | CLOCKACTIVITY(0x03));
/*
* HW bug workaround - If no_idle mode is taken, we need to
* go to smart_idle before going to always_idle, or the
* device will not hit retention anymore.
*/
syscon |= SIDLEMODE(0x02);
OMAP_MCBSP_WRITE(mcbsp->io_base, SYSCON, syscon);
syscon &= ~(SIDLEMODE(0x03));
OMAP_MCBSP_WRITE(mcbsp->io_base, SYSCON, syscon);
OMAP_MCBSP_WRITE(mcbsp->io_base, WAKEUPEN, 0);
}
}
#else
static inline void omap34xx_mcbsp_request(struct omap_mcbsp *mcbsp) {}
static inline void omap34xx_mcbsp_free(struct omap_mcbsp *mcbsp) {}
#endif
/*
* We can choose between IRQ based or polled IO.
* This needs to be called before omap_mcbsp_request().
@ -257,6 +421,9 @@ int omap_mcbsp_request(unsigned int id)
clk_enable(mcbsp->iclk);
clk_enable(mcbsp->fclk);
/* Do procedure specific to omap34xx arch, if applicable */
omap34xx_mcbsp_request(mcbsp);
/*
* Make sure that transmitter, receiver and sample-rate generator are
* not running before activating IRQs.
@ -305,6 +472,9 @@ void omap_mcbsp_free(unsigned int id)
if (mcbsp->pdata && mcbsp->pdata->ops && mcbsp->pdata->ops->free)
mcbsp->pdata->ops->free(id);
/* Do procedure specific to omap34xx arch, if applicable */
omap34xx_mcbsp_free(mcbsp);
clk_disable(mcbsp->fclk);
clk_disable(mcbsp->iclk);
@ -328,14 +498,15 @@ void omap_mcbsp_free(unsigned int id)
EXPORT_SYMBOL(omap_mcbsp_free);
/*
* Here we start the McBSP, by enabling the sample
* generator, both transmitter and receivers,
* and the frame sync.
* Here we start the McBSP, by enabling transmitter, receiver or both.
* If no transmitter or receiver is active prior calling, then sample-rate
* generator and frame sync are started.
*/
void omap_mcbsp_start(unsigned int id)
void omap_mcbsp_start(unsigned int id, int tx, int rx)
{
struct omap_mcbsp *mcbsp;
void __iomem *io_base;
int idle;
u16 w;
if (!omap_mcbsp_check_valid_id(id)) {
@ -348,32 +519,58 @@ void omap_mcbsp_start(unsigned int id)
mcbsp->rx_word_length = (OMAP_MCBSP_READ(io_base, RCR1) >> 5) & 0x7;
mcbsp->tx_word_length = (OMAP_MCBSP_READ(io_base, XCR1) >> 5) & 0x7;
/* Start the sample generator */
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w | (1 << 6));
idle = !((OMAP_MCBSP_READ(io_base, SPCR2) |
OMAP_MCBSP_READ(io_base, SPCR1)) & 1);
if (idle) {
/* Start the sample generator */
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w | (1 << 6));
}
/* Enable transmitter and receiver */
tx &= 1;
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w | 1);
OMAP_MCBSP_WRITE(io_base, SPCR2, w | tx);
rx &= 1;
w = OMAP_MCBSP_READ(io_base, SPCR1);
OMAP_MCBSP_WRITE(io_base, SPCR1, w | 1);
OMAP_MCBSP_WRITE(io_base, SPCR1, w | rx);
udelay(100);
/*
* Worst case: CLKSRG*2 = 8000khz: (1/8000) * 2 * 2 usec
* REVISIT: 100us may give enough time for two CLKSRG, however
* due to some unknown PM-related, clock-gating etc. reason it
* is now at 500us.
*/
udelay(500);
/* Start frame sync */
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w | (1 << 7));
if (idle) {
/* Start frame sync */
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w | (1 << 7));
}
if (cpu_is_omap2430() || cpu_is_omap34xx()) {
/* Release the transmitter and receiver */
w = OMAP_MCBSP_READ(io_base, XCCR);
w &= ~(tx ? XDISABLE : 0);
OMAP_MCBSP_WRITE(io_base, XCCR, w);
w = OMAP_MCBSP_READ(io_base, RCCR);
w &= ~(rx ? RDISABLE : 0);
OMAP_MCBSP_WRITE(io_base, RCCR, w);
}
/* Dump McBSP Regs */
omap_mcbsp_dump_reg(id);
}
EXPORT_SYMBOL(omap_mcbsp_start);
void omap_mcbsp_stop(unsigned int id)
void omap_mcbsp_stop(unsigned int id, int tx, int rx)
{
struct omap_mcbsp *mcbsp;
void __iomem *io_base;
int idle;
u16 w;
if (!omap_mcbsp_check_valid_id(id)) {
@ -385,16 +582,33 @@ void omap_mcbsp_stop(unsigned int id)
io_base = mcbsp->io_base;
/* Reset transmitter */
tx &= 1;
if (cpu_is_omap2430() || cpu_is_omap34xx()) {
w = OMAP_MCBSP_READ(io_base, XCCR);
w |= (tx ? XDISABLE : 0);
OMAP_MCBSP_WRITE(io_base, XCCR, w);
}
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w & ~(1));
OMAP_MCBSP_WRITE(io_base, SPCR2, w & ~tx);
/* Reset receiver */
rx &= 1;
if (cpu_is_omap2430() || cpu_is_omap34xx()) {
w = OMAP_MCBSP_READ(io_base, RCCR);
w |= (tx ? RDISABLE : 0);
OMAP_MCBSP_WRITE(io_base, RCCR, w);
}
w = OMAP_MCBSP_READ(io_base, SPCR1);
OMAP_MCBSP_WRITE(io_base, SPCR1, w & ~(1));
OMAP_MCBSP_WRITE(io_base, SPCR1, w & ~rx);
/* Reset the sample rate generator */
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w & ~(1 << 6));
idle = !((OMAP_MCBSP_READ(io_base, SPCR2) |
OMAP_MCBSP_READ(io_base, SPCR1)) & 1);
if (idle) {
/* Reset the sample rate generator */
w = OMAP_MCBSP_READ(io_base, SPCR2);
OMAP_MCBSP_WRITE(io_base, SPCR2, w & ~(1 << 6));
}
}
EXPORT_SYMBOL(omap_mcbsp_stop);
@ -883,6 +1097,149 @@ void omap_mcbsp_set_spi_mode(unsigned int id,
}
EXPORT_SYMBOL(omap_mcbsp_set_spi_mode);
#ifdef CONFIG_ARCH_OMAP34XX
#define max_thres(m) (mcbsp->pdata->buffer_size)
#define valid_threshold(m, val) ((val) <= max_thres(m))
#define THRESHOLD_PROP_BUILDER(prop) \
static ssize_t prop##_show(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
struct omap_mcbsp *mcbsp = dev_get_drvdata(dev); \
\
return sprintf(buf, "%u\n", mcbsp->prop); \
} \
\
static ssize_t prop##_store(struct device *dev, \
struct device_attribute *attr, \
const char *buf, size_t size) \
{ \
struct omap_mcbsp *mcbsp = dev_get_drvdata(dev); \
unsigned long val; \
int status; \
\
status = strict_strtoul(buf, 0, &val); \
if (status) \
return status; \
\
if (!valid_threshold(mcbsp, val)) \
return -EDOM; \
\
mcbsp->prop = val; \
return size; \
} \
\
static DEVICE_ATTR(prop, 0644, prop##_show, prop##_store);
THRESHOLD_PROP_BUILDER(max_tx_thres);
THRESHOLD_PROP_BUILDER(max_rx_thres);
static const char *dma_op_modes[] = {
"element", "threshold", "frame",
};
static ssize_t dma_op_mode_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct omap_mcbsp *mcbsp = dev_get_drvdata(dev);
int dma_op_mode, i = 0;
ssize_t len = 0;
const char * const *s;
spin_lock_irq(&mcbsp->lock);
dma_op_mode = mcbsp->dma_op_mode;
spin_unlock_irq(&mcbsp->lock);
for (s = &dma_op_modes[i]; i < ARRAY_SIZE(dma_op_modes); s++, i++) {
if (dma_op_mode == i)
len += sprintf(buf + len, "[%s] ", *s);
else
len += sprintf(buf + len, "%s ", *s);
}
len += sprintf(buf + len, "\n");
return len;
}
static ssize_t dma_op_mode_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t size)
{
struct omap_mcbsp *mcbsp = dev_get_drvdata(dev);
const char * const *s;
int i = 0;
for (s = &dma_op_modes[i]; i < ARRAY_SIZE(dma_op_modes); s++, i++)
if (sysfs_streq(buf, *s))
break;
if (i == ARRAY_SIZE(dma_op_modes))
return -EINVAL;
spin_lock_irq(&mcbsp->lock);
if (!mcbsp->free) {
size = -EBUSY;
goto unlock;
}
mcbsp->dma_op_mode = i;
unlock:
spin_unlock_irq(&mcbsp->lock);
return size;
}
static DEVICE_ATTR(dma_op_mode, 0644, dma_op_mode_show, dma_op_mode_store);
static const struct attribute *additional_attrs[] = {
&dev_attr_max_tx_thres.attr,
&dev_attr_max_rx_thres.attr,
&dev_attr_dma_op_mode.attr,
NULL,
};
static const struct attribute_group additional_attr_group = {
.attrs = (struct attribute **)additional_attrs,
};
static inline int __devinit omap_additional_add(struct device *dev)
{
return sysfs_create_group(&dev->kobj, &additional_attr_group);
}
static inline void __devexit omap_additional_remove(struct device *dev)
{
sysfs_remove_group(&dev->kobj, &additional_attr_group);
}
static inline void __devinit omap34xx_device_init(struct omap_mcbsp *mcbsp)
{
mcbsp->dma_op_mode = MCBSP_DMA_MODE_ELEMENT;
if (cpu_is_omap34xx()) {
mcbsp->max_tx_thres = max_thres(mcbsp);
mcbsp->max_rx_thres = max_thres(mcbsp);
/*
* REVISIT: Set dmap_op_mode to THRESHOLD as default
* for mcbsp2 instances.
*/
if (omap_additional_add(mcbsp->dev))
dev_warn(mcbsp->dev,
"Unable to create additional controls\n");
} else {
mcbsp->max_tx_thres = -EINVAL;
mcbsp->max_rx_thres = -EINVAL;
}
}
static inline void __devexit omap34xx_device_exit(struct omap_mcbsp *mcbsp)
{
if (cpu_is_omap34xx())
omap_additional_remove(mcbsp->dev);
}
#else
static inline void __devinit omap34xx_device_init(struct omap_mcbsp *mcbsp) {}
static inline void __devexit omap34xx_device_exit(struct omap_mcbsp *mcbsp) {}
#endif /* CONFIG_ARCH_OMAP34XX */
/*
* McBSP1 and McBSP3 are directly mapped on 1610 and 1510.
* 730 has only 2 McBSP, and both of them are MPU peripherals.
@ -953,6 +1310,10 @@ static int __devinit omap_mcbsp_probe(struct platform_device *pdev)
mcbsp->dev = &pdev->dev;
mcbsp_ptr[id] = mcbsp;
platform_set_drvdata(pdev, mcbsp);
/* Initialize mcbsp properties for OMAP34XX if needed / applicable */
omap34xx_device_init(mcbsp);
return 0;
err_fclk:
@ -976,6 +1337,8 @@ static int __devexit omap_mcbsp_remove(struct platform_device *pdev)
mcbsp->pdata->ops->free)
mcbsp->pdata->ops->free(mcbsp->id);
omap34xx_device_exit(mcbsp);
clk_disable(mcbsp->fclk);
clk_disable(mcbsp->iclk);
clk_put(mcbsp->fclk);

@ -0,0 +1,37 @@
/* arch/arm/plat-s3c/include/plat/audio-simtec.h
*
* Copyright 2008 Simtec Electronics
* http://armlinux.simtec.co.uk/
* Ben Dooks <ben@simtec.co.uk>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Simtec Audio support.
*/
/**
* struct s3c24xx_audio_simtec_pdata - platform data for simtec audio
* @use_mpllin: Select codec clock from MPLLin
* @output_cdclk: Need to output CDCLK to the codec
* @have_mic: Set if we have a MIC socket
* @have_lout: Set if we have a LineOut socket
* @amp_gpio: GPIO pin to enable the AMP
* @amp_gain: Optional GPIO to control AMP gain
*/
struct s3c24xx_audio_simtec_pdata {
unsigned int use_mpllin:1;
unsigned int output_cdclk:1;
unsigned int have_mic:1;
unsigned int have_lout:1;
int amp_gpio;
int amp_gain[2];
void (*startup)(void);
};
extern int simtec_audio_add(const char *codec_name,
struct s3c24xx_audio_simtec_pdata *pdata);

@ -33,6 +33,11 @@
#define S3C2412_IISCON_RXDMA_ACTIVE (1 << 1)
#define S3C2412_IISCON_IIS_ACTIVE (1 << 0)
#define S3C64XX_IISMOD_BLC_16BIT (0 << 13)
#define S3C64XX_IISMOD_BLC_8BIT (1 << 13)
#define S3C64XX_IISMOD_BLC_24BIT (2 << 13)
#define S3C64XX_IISMOD_BLC_MASK (3 << 13)
#define S3C64XX_IISMOD_IMS_PCLK (0 << 10)
#define S3C64XX_IISMOD_IMS_SYSMUX (1 << 10)

@ -84,6 +84,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_MEMDIE 6
#define TIF_RESTORE_SIGMASK 7 /* restore signal mask in do_signal */
#define TIF_CPU_GOING_TO_SLEEP 8 /* CPU is entering sleep 0 mode */
#define TIF_NOTIFY_RESUME 9 /* callback before returning to user */
#define TIF_FREEZE 29
#define TIF_DEBUG 30 /* debugging enabled */
#define TIF_USERSPACE 31 /* true if FS sets userspace */
@ -96,6 +97,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_MEMDIE (1 << TIF_MEMDIE)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
#define _TIF_CPU_GOING_TO_SLEEP (1 << TIF_CPU_GOING_TO_SLEEP)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FREEZE (1 << TIF_FREEZE)
/* Note: The masks below must never span more than 16 bits! */
@ -103,13 +105,15 @@ static inline struct thread_info *current_thread_info(void)
/* work to do on interrupt/exception return */
#define _TIF_WORK_MASK \
((1 << TIF_SIGPENDING) \
| _TIF_NOTIFY_RESUME \
| (1 << TIF_NEED_RESCHED) \
| (1 << TIF_POLLING_NRFLAG) \
| (1 << TIF_BREAKPOINT) \
| (1 << TIF_RESTORE_SIGMASK))
/* work to do on any return to userspace */
#define _TIF_ALLWORK_MASK (_TIF_WORK_MASK | (1 << TIF_SYSCALL_TRACE))
#define _TIF_ALLWORK_MASK (_TIF_WORK_MASK | (1 << TIF_SYSCALL_TRACE) | \
_TIF_NOTIFY_RESUME)
/* work to do on return from debug mode */
#define _TIF_DBGWORK_MASK (_TIF_WORK_MASK & ~(1 << TIF_BREAKPOINT))

Просмотреть файл

@ -281,7 +281,7 @@ syscall_exit_work:
ld.w r1, r0[TI_flags]
rjmp 1b
2: mov r2, _TIF_SIGPENDING | _TIF_RESTORE_SIGMASK
2: mov r2, _TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NOTIFY_RESUME
tst r1, r2
breq 3f
unmask_interrupts

@ -16,6 +16,7 @@
#include <linux/ptrace.h>
#include <linux/unistd.h>
#include <linux/freezer.h>
#include <linux/tracehook.h>
#include <asm/uaccess.h>
#include <asm/ucontext.h>
@ -322,4 +323,11 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, struct thread_info *ti)
if (ti->flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs, &current->blocked, syscall);
if (ti->flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -16,6 +16,7 @@
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/user.h>
#include <linux/tracehook.h>
#include <asm/uaccess.h>
#include <asm/page.h>
@ -36,4 +37,11 @@ void do_notify_resume(int canrestart, struct pt_regs *regs,
/* deal with pending signal delivery */
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(canrestart,regs);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -572,6 +572,8 @@ asmlinkage void do_notify_resume(__u32 thread_info_flags)
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(__frame);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
} /* end do_notify_resume() */

@ -89,6 +89,7 @@ static inline struct thread_info *current_thread_info(void)
TIF_NEED_RESCHED */
#define TIF_MEMDIE 4
#define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */
#define TIF_NOTIFY_RESUME 6 /* callback before returning to user */
#define TIF_FREEZE 16 /* is freezing for suspend */
/* as above, but as bit values */
@ -97,6 +98,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FREEZE (1<<TIF_FREEZE)
#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */

@ -39,6 +39,7 @@
#include <linux/tty.h>
#include <linux/binfmts.h>
#include <linux/freezer.h>
#include <linux/tracehook.h>
#include <asm/setup.h>
#include <asm/uaccess.h>
@ -552,4 +553,11 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, u32 thread_info_flags)
{
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs, NULL);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -44,7 +44,6 @@ static inline void dma_free_coherent(struct device *dev, size_t size,
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
#define get_dma_ops(dev) platform_dma_get_ops(dev)
#define flush_write_buffers()
#include <asm-generic/dma-mapping-common.h>
@ -69,6 +68,24 @@ dma_set_mask (struct device *dev, u64 mask)
return 0;
}
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{
if (!dev->dma_mask)
return 0;
return addr + size <= *dev->dma_mask;
}
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
return paddr;
}
static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
{
return daddr;
}
extern int dma_get_cache_alignment(void);
static inline void

@ -10,7 +10,9 @@ EXPORT_SYMBOL(dma_ops);
static int __init dma_init(void)
{
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_init);

@ -192,6 +192,8 @@ do_notify_resume_user(sigset_t *unused, struct sigscratch *scr, long in_syscall)
if (test_thread_flag(TIF_NOTIFY_RESUME)) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(&scr->pt);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
/* copy user rbs to kernel rbs */

@ -96,20 +96,22 @@ END(ip_fast_csum)
GLOBAL_ENTRY(csum_ipv6_magic)
ld4 r20=[in0],4
ld4 r21=[in1],4
dep r15=in3,in2,32,16
zxt4 in2=in2
;;
ld4 r22=[in0],4
ld4 r23=[in1],4
mux1 r15=r15,@rev
dep r15=in3,in2,32,16
;;
ld4 r24=[in0],4
ld4 r25=[in1],4
shr.u r15=r15,16
mux1 r15=r15,@rev
add r16=r20,r21
add r17=r22,r23
zxt4 in4=in4
;;
ld4 r26=[in0],4
ld4 r27=[in1],4
shr.u r15=r15,16
add r18=r24,r25
add r8=r16,r17
;;

@ -133,8 +133,7 @@ consider_steal_time(unsigned long new_itm)
account_idle_ticks(blocked);
run_local_timers();
if (rcu_pending(cpu))
rcu_check_callbacks(cpu, user_mode(get_irq_regs()));
rcu_check_callbacks(cpu, user_mode(get_irq_regs()));
scheduler_tick();
run_posix_cpu_timers(p);

@ -149,6 +149,7 @@ static inline unsigned int get_thread_fault_code(void)
#define TIF_NEED_RESCHED 2 /* rescheduling necessary */
#define TIF_SINGLESTEP 3 /* restore singlestep on return to user mode */
#define TIF_IRET 4 /* return with iret */
#define TIF_NOTIFY_RESUME 5 /* callback before returning to user */
#define TIF_RESTORE_SIGMASK 8 /* restore signal mask in do_signal() */
#define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */
#define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
@ -160,6 +161,7 @@ static inline unsigned int get_thread_fault_code(void)
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
#define _TIF_IRET (1<<TIF_IRET)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_USEDFPU (1<<TIF_USEDFPU)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)

@ -21,6 +21,7 @@
#include <linux/stddef.h>
#include <linux/personality.h>
#include <linux/freezer.h>
#include <linux/tracehook.h>
#include <asm/cacheflush.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
@ -408,5 +409,12 @@ void do_notify_resume(struct pt_regs *regs, sigset_t *oldset,
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(regs,oldset);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
clear_thread_flag(TIF_IRET);
}

@ -46,7 +46,6 @@
#define curptr a2
LFLUSH_I_AND_D = 0x00000808
LSIGTRAP = 5
/* process bits for task_struct.ptrace */
PT_TRACESYS_OFF = 3
@ -118,9 +117,6 @@ PT_DTRACE_BIT = 2
#define STR(X) STR1(X)
#define STR1(X) #X
#define PT_OFF_ORIG_D0 0x24
#define PT_OFF_FORMATVEC 0x32
#define PT_OFF_SR 0x2C
#define SAVE_ALL_INT \
"clrl %%sp@-;" /* stk_adj */ \
"pea -1:w;" /* orig d0 = -1 */ \

@ -72,8 +72,8 @@ LENOSYS = 38
lea %sp@(-32),%sp /* space for 8 regs */
moveml %d1-%d5/%a0-%a2,%sp@
movel sw_usp,%a0 /* get usp */
movel %a0@-,%sp@(PT_PC) /* copy exception program counter */
movel %a0@-,%sp@(PT_FORMATVEC)/* copy exception format/vector/sr */
movel %a0@-,%sp@(PT_OFF_PC) /* copy exception program counter */
movel %a0@-,%sp@(PT_OFF_FORMATVEC)/*copy exception format/vector/sr */
bra 7f
6:
clrl %sp@- /* stkadj */
@ -89,8 +89,8 @@ LENOSYS = 38
bnes 8f /* no, skip */
move #0x2700,%sr /* disable intrs */
movel sw_usp,%a0 /* get usp */
movel %sp@(PT_PC),%a0@- /* copy exception program counter */
movel %sp@(PT_FORMATVEC),%a0@-/* copy exception format/vector/sr */
movel %sp@(PT_OFF_PC),%a0@- /* copy exception program counter */
movel %sp@(PT_OFF_FORMATVEC),%a0@-/*copy exception format/vector/sr */
moveml %sp@,%d1-%d5/%a0-%a2
lea %sp@(32),%sp /* space for 8 regs */
movel %sp@+,%d0

@ -145,16 +145,16 @@ extern unsigned int fp_debugprint;
* these are only used during instruction decoding
* where we always know how deep we're on the stack.
*/
#define FPS_DO (PT_D0)
#define FPS_D1 (PT_D1)
#define FPS_D2 (PT_D2)
#define FPS_A0 (PT_A0)
#define FPS_A1 (PT_A1)
#define FPS_A2 (PT_A2)
#define FPS_SR (PT_SR)
#define FPS_PC (PT_PC)
#define FPS_EA (PT_PC+6)
#define FPS_PC2 (PT_PC+10)
#define FPS_DO (PT_OFF_D0)
#define FPS_D1 (PT_OFF_D1)
#define FPS_D2 (PT_OFF_D2)
#define FPS_A0 (PT_OFF_A0)
#define FPS_A1 (PT_OFF_A1)
#define FPS_A2 (PT_OFF_A2)
#define FPS_SR (PT_OFF_SR)
#define FPS_PC (PT_OFF_PC)
#define FPS_EA (PT_OFF_PC+6)
#define FPS_PC2 (PT_OFF_PC+10)
.macro fp_get_fp_reg
lea (FPD_FPREG,FPDATA,%d0.w*4),%a0

@ -1,6 +1,10 @@
#ifndef _ASM_M68K_THREAD_INFO_H
#define _ASM_M68K_THREAD_INFO_H
#ifndef ASM_OFFSETS_C
#include <asm/asm-offsets.h>
#endif
#include <asm/current.h>
#include <asm/types.h>
#include <asm/page.h>
@ -31,7 +35,12 @@ struct thread_info {
#define init_thread_info (init_task.thread.info)
#define init_stack (init_thread_union.stack)
#define task_thread_info(tsk) (&(tsk)->thread.info)
#ifdef ASM_OFFSETS_C
#define task_thread_info(tsk) ((struct thread_info *) NULL)
#else
#define task_thread_info(tsk) ((struct thread_info *)((char *)tsk+TASK_TINFO))
#endif
#define task_stack_page(tsk) ((tsk)->stack)
#define current_thread_info() task_thread_info(current)

@ -8,6 +8,8 @@
* #defines from the assembly-language output.
*/
#define ASM_OFFSETS_C
#include <linux/stddef.h>
#include <linux/sched.h>
#include <linux/kernel_stat.h>
@ -27,6 +29,9 @@ int main(void)
DEFINE(TASK_INFO, offsetof(struct task_struct, thread.info));
DEFINE(TASK_MM, offsetof(struct task_struct, mm));
DEFINE(TASK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
#ifdef CONFIG_MMU
DEFINE(TASK_TINFO, offsetof(struct task_struct, thread.info));
#endif
/* offsets into the thread struct */
DEFINE(THREAD_KSP, offsetof(struct thread_struct, ksp));
@ -44,20 +49,20 @@ int main(void)
DEFINE(TINFO_FLAGS, offsetof(struct thread_info, flags));
/* offsets into the pt_regs */
DEFINE(PT_D0, offsetof(struct pt_regs, d0));
DEFINE(PT_ORIG_D0, offsetof(struct pt_regs, orig_d0));
DEFINE(PT_D1, offsetof(struct pt_regs, d1));
DEFINE(PT_D2, offsetof(struct pt_regs, d2));
DEFINE(PT_D3, offsetof(struct pt_regs, d3));
DEFINE(PT_D4, offsetof(struct pt_regs, d4));
DEFINE(PT_D5, offsetof(struct pt_regs, d5));
DEFINE(PT_A0, offsetof(struct pt_regs, a0));
DEFINE(PT_A1, offsetof(struct pt_regs, a1));
DEFINE(PT_A2, offsetof(struct pt_regs, a2));
DEFINE(PT_PC, offsetof(struct pt_regs, pc));
DEFINE(PT_SR, offsetof(struct pt_regs, sr));
DEFINE(PT_OFF_D0, offsetof(struct pt_regs, d0));
DEFINE(PT_OFF_ORIG_D0, offsetof(struct pt_regs, orig_d0));
DEFINE(PT_OFF_D1, offsetof(struct pt_regs, d1));
DEFINE(PT_OFF_D2, offsetof(struct pt_regs, d2));
DEFINE(PT_OFF_D3, offsetof(struct pt_regs, d3));
DEFINE(PT_OFF_D4, offsetof(struct pt_regs, d4));
DEFINE(PT_OFF_D5, offsetof(struct pt_regs, d5));
DEFINE(PT_OFF_A0, offsetof(struct pt_regs, a0));
DEFINE(PT_OFF_A1, offsetof(struct pt_regs, a1));
DEFINE(PT_OFF_A2, offsetof(struct pt_regs, a2));
DEFINE(PT_OFF_PC, offsetof(struct pt_regs, pc));
DEFINE(PT_OFF_SR, offsetof(struct pt_regs, sr));
/* bitfields are a bit difficult */
DEFINE(PT_VECTOR, offsetof(struct pt_regs, pc) + 4);
DEFINE(PT_OFF_FORMATVEC, offsetof(struct pt_regs, pc) + 4);
/* offsets into the irq_handler struct */
DEFINE(IRQ_HANDLER, offsetof(struct irq_node, handler));
@ -84,10 +89,10 @@ int main(void)
DEFINE(FONT_DESC_PREF, offsetof(struct font_desc, pref));
/* signal defines */
DEFINE(SIGSEGV, SIGSEGV);
DEFINE(SEGV_MAPERR, SEGV_MAPERR);
DEFINE(SIGTRAP, SIGTRAP);
DEFINE(TRAP_TRACE, TRAP_TRACE);
DEFINE(LSIGSEGV, SIGSEGV);
DEFINE(LSEGV_MAPERR, SEGV_MAPERR);
DEFINE(LSIGTRAP, SIGTRAP);
DEFINE(LTRAP_TRACE, TRAP_TRACE);
/* offsets into the custom struct */
DEFINE(CUSTOMBASE, &amiga_custom);

@ -77,17 +77,17 @@ ENTRY(ret_from_fork)
jra .Lret_from_exception
do_trace_entry:
movel #-ENOSYS,%sp@(PT_D0) | needed for strace
movel #-ENOSYS,%sp@(PT_OFF_D0)| needed for strace
subql #4,%sp
SAVE_SWITCH_STACK
jbsr syscall_trace
RESTORE_SWITCH_STACK
addql #4,%sp
movel %sp@(PT_ORIG_D0),%d0
movel %sp@(PT_OFF_ORIG_D0),%d0
cmpl #NR_syscalls,%d0
jcs syscall
badsys:
movel #-ENOSYS,%sp@(PT_D0)
movel #-ENOSYS,%sp@(PT_OFF_D0)
jra ret_from_syscall
do_trace_exit:
@ -103,7 +103,7 @@ ENTRY(ret_from_signal)
addql #4,%sp
/* on 68040 complete pending writebacks if any */
#ifdef CONFIG_M68040
bfextu %sp@(PT_VECTOR){#0,#4},%d0
bfextu %sp@(PT_OFF_FORMATVEC){#0,#4},%d0
subql #7,%d0 | bus error frame ?
jbne 1f
movel %sp,%sp@-
@ -127,7 +127,7 @@ ENTRY(system_call)
jcc badsys
syscall:
jbsr @(sys_call_table,%d0:l:4)@(0)
movel %d0,%sp@(PT_D0) | save the return value
movel %d0,%sp@(PT_OFF_D0) | save the return value
ret_from_syscall:
|oriw #0x0700,%sr
movew %curptr@(TASK_INFO+TINFO_FLAGS+2),%d0
@ -135,7 +135,7 @@ ret_from_syscall:
1: RESTORE_ALL
syscall_exit_work:
btst #5,%sp@(PT_SR) | check if returning to kernel
btst #5,%sp@(PT_OFF_SR) | check if returning to kernel
bnes 1b | if so, skip resched, signals
lslw #1,%d0
jcs do_trace_exit
@ -148,7 +148,7 @@ syscall_exit_work:
ENTRY(ret_from_exception)
.Lret_from_exception:
btst #5,%sp@(PT_SR) | check if returning to kernel
btst #5,%sp@(PT_OFF_SR) | check if returning to kernel
bnes 1f | if so, skip resched, signals
| only allow interrupts when we are really the last one on the
| kernel stack, otherwise stack overflow can occur during
@ -182,7 +182,7 @@ do_signal_return:
jbra resume_userspace
do_delayed_trace:
bclr #7,%sp@(PT_SR) | clear trace bit in SR
bclr #7,%sp@(PT_OFF_SR) | clear trace bit in SR
pea 1 | send SIGTRAP
movel %curptr,%sp@-
pea LSIGTRAP
@ -199,7 +199,7 @@ ENTRY(auto_inthandler)
GET_CURRENT(%d0)
addqb #1,%curptr@(TASK_INFO+TINFO_PREEMPT+1)
| put exception # in d0
bfextu %sp@(PT_VECTOR){#4,#10},%d0
bfextu %sp@(PT_OFF_FORMATVEC){#4,#10},%d0
subw #VEC_SPUR,%d0
movel %sp,%sp@-
@ -216,7 +216,7 @@ ret_from_interrupt:
ALIGN
ret_from_last_interrupt:
moveq #(~ALLOWINT>>8)&0xff,%d0
andb %sp@(PT_SR),%d0
andb %sp@(PT_OFF_SR),%d0
jne 2b
/* check if we need to do software interrupts */
@ -232,7 +232,7 @@ ENTRY(user_inthandler)
GET_CURRENT(%d0)
addqb #1,%curptr@(TASK_INFO+TINFO_PREEMPT+1)
| put exception # in d0
bfextu %sp@(PT_VECTOR){#4,#10},%d0
bfextu %sp@(PT_OFF_FORMATVEC){#4,#10},%d0
user_irqvec_fixup = . + 2
subw #VEC_USER,%d0

@ -85,8 +85,8 @@ fp_err_ua2:
fp_err_ua1:
addq.l #4,%sp
move.l %a0,-(%sp)
pea SEGV_MAPERR
pea SIGSEGV
pea LSEGV_MAPERR
pea LSIGSEGV
jsr fpemu_signal
add.w #12,%sp
jra ret_from_exception
@ -96,8 +96,8 @@ fp_err_ua1:
| it does not really belong here, but...
fp_sendtrace060:
move.l (FPS_PC,%sp),-(%sp)
pea TRAP_TRACE
pea SIGTRAP
pea LTRAP_TRACE
pea LSIGTRAP
jsr fpemu_signal
add.w #12,%sp
jra ret_from_exception
@ -122,17 +122,17 @@ fp_get_data_reg:
.long fp_get_d6, fp_get_d7
fp_get_d0:
move.l (PT_D0+8,%sp),%d0
move.l (PT_OFF_D0+8,%sp),%d0
printf PREGISTER,"{d0->%08x}",1,%d0
rts
fp_get_d1:
move.l (PT_D1+8,%sp),%d0
move.l (PT_OFF_D1+8,%sp),%d0
printf PREGISTER,"{d1->%08x}",1,%d0
rts
fp_get_d2:
move.l (PT_D2+8,%sp),%d0
move.l (PT_OFF_D2+8,%sp),%d0
printf PREGISTER,"{d2->%08x}",1,%d0
rts
@ -173,35 +173,35 @@ fp_put_data_reg:
fp_put_d0:
printf PREGISTER,"{d0<-%08x}",1,%d0
move.l %d0,(PT_D0+8,%sp)
move.l %d0,(PT_OFF_D0+8,%sp)
rts
fp_put_d1:
printf PREGISTER,"{d1<-%08x}",1,%d0
move.l %d0,(PT_D1+8,%sp)
move.l %d0,(PT_OFF_D1+8,%sp)
rts
fp_put_d2:
printf PREGISTER,"{d2<-%08x}",1,%d0
move.l %d0,(PT_D2+8,%sp)
move.l %d0,(PT_OFF_D2+8,%sp)
rts
fp_put_d3:
printf PREGISTER,"{d3<-%08x}",1,%d0
| move.l %d0,%d3
move.l %d0,(PT_D3+8,%sp)
move.l %d0,(PT_OFF_D3+8,%sp)
rts
fp_put_d4:
printf PREGISTER,"{d4<-%08x}",1,%d0
| move.l %d0,%d4
move.l %d0,(PT_D4+8,%sp)
move.l %d0,(PT_OFF_D4+8,%sp)
rts
fp_put_d5:
printf PREGISTER,"{d5<-%08x}",1,%d0
| move.l %d0,%d5
move.l %d0,(PT_D5+8,%sp)
move.l %d0,(PT_OFF_D5+8,%sp)
rts
fp_put_d6:
@ -225,17 +225,17 @@ fp_get_addr_reg:
.long fp_get_a6, fp_get_a7
fp_get_a0:
move.l (PT_A0+8,%sp),%a0
move.l (PT_OFF_A0+8,%sp),%a0
printf PREGISTER,"{a0->%08x}",1,%a0
rts
fp_get_a1:
move.l (PT_A1+8,%sp),%a0
move.l (PT_OFF_A1+8,%sp),%a0
printf PREGISTER,"{a1->%08x}",1,%a0
rts
fp_get_a2:
move.l (PT_A2+8,%sp),%a0
move.l (PT_OFF_A2+8,%sp),%a0
printf PREGISTER,"{a2->%08x}",1,%a0
rts
@ -276,17 +276,17 @@ fp_put_addr_reg:
fp_put_a0:
printf PREGISTER,"{a0<-%08x}",1,%a0
move.l %a0,(PT_A0+8,%sp)
move.l %a0,(PT_OFF_A0+8,%sp)
rts
fp_put_a1:
printf PREGISTER,"{a1<-%08x}",1,%a0
move.l %a0,(PT_A1+8,%sp)
move.l %a0,(PT_OFF_A1+8,%sp)
rts
fp_put_a2:
printf PREGISTER,"{a2<-%08x}",1,%a0
move.l %a0,(PT_A2+8,%sp)
move.l %a0,(PT_OFF_A2+8,%sp)
rts
fp_put_a3:

@ -115,6 +115,7 @@ register struct thread_info *__current_thread_info __asm__("$28");
#define TIF_NEED_RESCHED 2 /* rescheduling necessary */
#define TIF_SYSCALL_AUDIT 3 /* syscall auditing active */
#define TIF_SECCOMP 4 /* secure computing */
#define TIF_NOTIFY_RESUME 5 /* callback before returning to user */
#define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */
#define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */
#define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
@ -139,6 +140,7 @@ register struct thread_info *__current_thread_info __asm__("$28");
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1<<TIF_SECCOMP)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK)
#define _TIF_USEDFPU (1<<TIF_USEDFPU)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)

@ -21,6 +21,7 @@
#include <linux/compiler.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <linux/tracehook.h>
#include <asm/abi.h>
#include <asm/asm.h>
@ -700,4 +701,11 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused,
/* deal with pending signal delivery */
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -568,5 +568,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, u32 thread_info_flags)
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(__frame);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -59,6 +59,7 @@ struct thread_info {
#define TIF_MEMDIE 5
#define TIF_RESTORE_SIGMASK 6 /* restore saved signal mask */
#define TIF_FREEZE 7 /* is freezing for suspend */
#define TIF_NOTIFY_RESUME 8 /* callback before returning to user */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
@ -67,8 +68,9 @@ struct thread_info {
#define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
#define _TIF_FREEZE (1 << TIF_FREEZE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | \
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED | _TIF_RESTORE_SIGMASK)
#endif /* __KERNEL__ */

@ -948,7 +948,7 @@ intr_check_sig:
/* As above */
mfctl %cr30,%r1
LDREG TI_FLAGS(%r1),%r19
ldi (_TIF_SIGPENDING|_TIF_RESTORE_SIGMASK), %r20
ldi (_TIF_SIGPENDING|_TIF_RESTORE_SIGMASK|_TIF_NOTIFY_RESUME), %r20
and,COND(<>) %r19, %r20, %r0
b,n intr_restore /* skip past if we've nothing to do */

@ -25,6 +25,7 @@
#include <linux/stddef.h>
#include <linux/compat.h>
#include <linux/elf.h>
#include <linux/tracehook.h>
#include <asm/ucontext.h>
#include <asm/rt_sigframe.h>
#include <asm/uaccess.h>
@ -645,4 +646,11 @@ void do_notify_resume(struct pt_regs *regs, long in_syscall)
if (test_thread_flag(TIF_SIGPENDING) ||
test_thread_flag(TIF_RESTORE_SIGMASK))
do_signal(regs, in_syscall);
if (test_thread_flag(TIF_NOTIFY_RESUME)) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
if (current->replacement_session_keyring)
key_replace_session_keyring();
}
}

@ -424,6 +424,29 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
#endif
}
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{
struct dma_mapping_ops *ops = get_dma_ops(dev);
if (ops->addr_needs_map && ops->addr_needs_map(dev, addr, size))
return 0;
if (!dev->dma_mask)
return 0;
return addr + size <= *dev->dma_mask;
}
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
return paddr + get_dma_direct_offset(dev);
}
static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
{
return daddr - get_dma_direct_offset(dev);
}
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
#ifdef CONFIG_NOT_COHERENT_CACHE

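The three helpers above give powerpc a single reachability test (the DMA mask plus the optional addr_needs_map hook) and a pair of phys/bus converters. A hedged sketch of how an arch mapping path might use them; example_map_direct and its zero-return fallback convention are illustrative, not part of the patch:

#include <linux/dma-mapping.h>

/* Illustrative only: translate a physical buffer and check that the
 * device can reach it; a zero return tells the (hypothetical) caller
 * to fall back to a swiotlb bounce buffer instead. */
static dma_addr_t example_map_direct(struct device *dev, phys_addr_t paddr,
				     size_t size)
{
	dma_addr_t bus = phys_to_dma(dev, paddr);

	if (!dma_capable(dev, bus, size))
		return 0;
	return bus;
}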
View file

@ -104,8 +104,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
else
pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte));
#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT) && defined(CONFIG_SMP)
/* Second case is 32-bit with 64-bit PTE in SMP mode. In this case, we
#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
/* Second case is 32-bit with 64-bit PTE. In this case, we
* can just store as long as we do the two halves in the right order
* with a barrier in between. This is possible because we take care,
* in the hash code, to pre-invalidate if the PTE was already hashed,
@ -140,7 +140,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
#else
/* Anything else just stores the PTE normally. That covers all 64-bit
* cases, and 32-bit non-hash with 64-bit PTEs in UP mode
* cases, and 32-bit non-hash with 32-bit PTEs.
*/
*ptep = pte;
#endif

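The comment above relies on ordering the two 32-bit halves of a 64-bit PTE so that a concurrent walker never sees a half-written but apparently valid entry. A minimal sketch of that ordering, assuming the valid bit lives in the word written last; the function and word layout are hypothetical, not the kernel's __set_pte_at():

#include <linux/types.h>
#include <asm/system.h>	/* smp_wmb() on kernels of this vintage */

/* Hypothetical: publish a 64-bit PTE with two 32-bit stores. */
static void store_pte_halves(volatile u32 *dest, u32 half_without_valid,
			     u32 half_with_valid)
{
	dest[1] = half_without_valid;	/* entry still looks invalid */
	smp_wmb();			/* order the two stores */
	dest[0] = half_with_valid;	/* PTE becomes visible last */
}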
View file

@ -54,7 +54,7 @@
* This returns the old value in the lock, so we succeeded
* in getting the lock if the return value is 0.
*/
static inline unsigned long __spin_trylock(raw_spinlock_t *lock)
static inline unsigned long arch_spin_trylock(raw_spinlock_t *lock)
{
unsigned long tmp, token;
@ -76,7 +76,7 @@ static inline unsigned long __spin_trylock(raw_spinlock_t *lock)
static inline int __raw_spin_trylock(raw_spinlock_t *lock)
{
CLEAR_IO_SYNC;
return __spin_trylock(lock) == 0;
return arch_spin_trylock(lock) == 0;
}
/*
@ -108,7 +108,7 @@ static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
CLEAR_IO_SYNC;
while (1) {
if (likely(__spin_trylock(lock) == 0))
if (likely(arch_spin_trylock(lock) == 0))
break;
do {
HMT_low();
@ -126,7 +126,7 @@ void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
CLEAR_IO_SYNC;
while (1) {
if (likely(__spin_trylock(lock) == 0))
if (likely(arch_spin_trylock(lock) == 0))
break;
local_save_flags(flags_dis);
local_irq_restore(flags);
@ -181,7 +181,7 @@ extern void __raw_spin_unlock_wait(raw_spinlock_t *lock);
* This returns the old value in the lock + 1,
* so we got a read lock if the return value is > 0.
*/
static inline long __read_trylock(raw_rwlock_t *rw)
static inline long arch_read_trylock(raw_rwlock_t *rw)
{
long tmp;
@ -205,7 +205,7 @@ static inline long __read_trylock(raw_rwlock_t *rw)
* This returns the old value in the lock,
* so we got the write lock if the return value is 0.
*/
static inline long __write_trylock(raw_rwlock_t *rw)
static inline long arch_write_trylock(raw_rwlock_t *rw)
{
long tmp, token;
@ -228,7 +228,7 @@ static inline long __write_trylock(raw_rwlock_t *rw)
static inline void __raw_read_lock(raw_rwlock_t *rw)
{
while (1) {
if (likely(__read_trylock(rw) > 0))
if (likely(arch_read_trylock(rw) > 0))
break;
do {
HMT_low();
@ -242,7 +242,7 @@ static inline void __raw_read_lock(raw_rwlock_t *rw)
static inline void __raw_write_lock(raw_rwlock_t *rw)
{
while (1) {
if (likely(__write_trylock(rw) == 0))
if (likely(arch_write_trylock(rw) == 0))
break;
do {
HMT_low();
@ -255,12 +255,12 @@ static inline void __raw_write_lock(raw_rwlock_t *rw)
static inline int __raw_read_trylock(raw_rwlock_t *rw)
{
return __read_trylock(rw) > 0;
return arch_read_trylock(rw) > 0;
}
static inline int __raw_write_trylock(raw_rwlock_t *rw)
{
return __write_trylock(rw) == 0;
return arch_write_trylock(rw) == 0;
}
static inline void __raw_read_unlock(raw_rwlock_t *rw)

View file

@ -97,7 +97,7 @@ obj64-$(CONFIG_AUDIT) += compat_audit.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
obj-$(CONFIG_PPC_PERF_CTRS) += perf_counter.o
obj-$(CONFIG_PPC_PERF_CTRS) += perf_counter.o perf_callchain.o
obj64-$(CONFIG_PPC_PERF_CTRS) += power4-pmu.o ppc970-pmu.o power5-pmu.o \
power5+-pmu.o power6-pmu.o power7-pmu.o
obj32-$(CONFIG_PPC_PERF_CTRS) += mpc7450-pmu.o

View file

@ -67,6 +67,8 @@ int main(void)
DEFINE(MMCONTEXTID, offsetof(struct mm_struct, context.id));
#ifdef CONFIG_PPC64
DEFINE(AUDITCONTEXT, offsetof(struct task_struct, audit_context));
DEFINE(SIGSEGV, SIGSEGV);
DEFINE(NMI_MASK, NMI_MASK);
#else
DEFINE(THREAD_INFO, offsetof(struct task_struct, stack));
#endif /* CONFIG_PPC64 */

View file

@ -24,50 +24,12 @@
int swiotlb __read_mostly;
unsigned int ppc_swiotlb_enable;
void *swiotlb_bus_to_virt(struct device *hwdev, dma_addr_t addr)
{
unsigned long pfn = PFN_DOWN(swiotlb_bus_to_phys(hwdev, addr));
void *pageaddr = page_address(pfn_to_page(pfn));
if (pageaddr != NULL)
return pageaddr + (addr % PAGE_SIZE);
return NULL;
}
dma_addr_t swiotlb_phys_to_bus(struct device *hwdev, phys_addr_t paddr)
{
return paddr + get_dma_direct_offset(hwdev);
}
phys_addr_t swiotlb_bus_to_phys(struct device *hwdev, dma_addr_t baddr)
{
return baddr - get_dma_direct_offset(hwdev);
}
/*
* Determine if an address needs bounce buffering via swiotlb.
* Going forward I expect the swiotlb code to generalize on using
* a dma_ops->addr_needs_map, and this function will move from here to the
* generic swiotlb code.
*/
int
swiotlb_arch_address_needs_mapping(struct device *hwdev, dma_addr_t addr,
size_t size)
{
struct dma_mapping_ops *dma_ops = get_dma_ops(hwdev);
BUG_ON(!dma_ops);
return dma_ops->addr_needs_map(hwdev, addr, size);
}
/*
* Determine if an address is reachable by a pci device, or if we must bounce.
*/
static int
swiotlb_pci_addr_needs_map(struct device *hwdev, dma_addr_t addr, size_t size)
{
u64 mask = dma_get_mask(hwdev);
dma_addr_t max;
struct pci_controller *hose;
struct pci_dev *pdev = to_pci_dev(hwdev);
@ -79,16 +41,9 @@ swiotlb_pci_addr_needs_map(struct device *hwdev, dma_addr_t addr, size_t size)
if ((addr + size > max) | (addr < hose->dma_window_base_cur))
return 1;
return !is_buffer_dma_capable(mask, addr, size);
return 0;
}
static int
swiotlb_addr_needs_map(struct device *hwdev, dma_addr_t addr, size_t size)
{
return !is_buffer_dma_capable(dma_get_mask(hwdev), addr, size);
}
/*
* At the moment, all platforms that use this code only require
* swiotlb to be used if we're operating on HIGHMEM. Since
@ -104,7 +59,6 @@ struct dma_mapping_ops swiotlb_dma_ops = {
.dma_supported = swiotlb_dma_supported,
.map_page = swiotlb_map_page,
.unmap_page = swiotlb_unmap_page,
.addr_needs_map = swiotlb_addr_needs_map,
.sync_single_range_for_cpu = swiotlb_sync_single_range_for_cpu,
.sync_single_range_for_device = swiotlb_sync_single_range_for_device,
.sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,

View file

@ -729,6 +729,11 @@ BEGIN_FTR_SECTION
bne- do_ste_alloc /* If so handle it */
END_FTR_SECTION_IFCLR(CPU_FTR_SLB)
clrrdi r11,r1,THREAD_SHIFT
lwz r0,TI_PREEMPT(r11) /* If we're in an "NMI" */
andis. r0,r0,NMI_MASK@h /* (i.e. an irq when soft-disabled) */
bne 77f /* then don't call hash_page now */
/*
* On iSeries, we soft-disable interrupts here, then
* hard-enable interrupts so that the hash_page code can spin on
@ -833,6 +838,20 @@ handle_page_fault:
bl .low_hash_fault
b .ret_from_except
/*
* We come here as a result of a DSI at a point where we don't want
* to call hash_page, such as when we are accessing memory (possibly
* user memory) inside a PMU interrupt that occurred while interrupts
* were soft-disabled. We want to invoke the exception handler for
* the access, or panic if there isn't a handler.
*/
77: bl .save_nvgprs
mr r4,r3
addi r3,r1,STACK_FRAME_OVERHEAD
li r5,SIGSEGV
bl .bad_page_fault
b .ret_from_except
/* here we have a segment miss */
do_ste_alloc:
bl .ste_allocate /* try to insert stab entry */

View file

@ -0,0 +1,527 @@
/*
* Performance counter callchain support - powerpc architecture code
*
* Copyright © 2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/perf_counter.h>
#include <linux/percpu.h>
#include <linux/uaccess.h>
#include <linux/mm.h>
#include <asm/ptrace.h>
#include <asm/pgtable.h>
#include <asm/sigcontext.h>
#include <asm/ucontext.h>
#include <asm/vdso.h>
#ifdef CONFIG_PPC64
#include "ppc32.h"
#endif
/*
* Store another value in a callchain_entry.
*/
static inline void callchain_store(struct perf_callchain_entry *entry, u64 ip)
{
unsigned int nr = entry->nr;
if (nr < PERF_MAX_STACK_DEPTH) {
entry->ip[nr] = ip;
entry->nr = nr + 1;
}
}
/*
* Is sp valid as the address of the next kernel stack frame after prev_sp?
* The next frame may be in a different stack area but should not go
* back down in the same stack area.
*/
static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
{
if (sp & 0xf)
return 0; /* must be 16-byte aligned */
if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD))
return 0;
if (sp >= prev_sp + STACK_FRAME_OVERHEAD)
return 1;
/*
* sp could decrease when we jump off an interrupt stack
* back to the regular process stack.
*/
if ((sp & ~(THREAD_SIZE - 1)) != (prev_sp & ~(THREAD_SIZE - 1)))
return 1;
return 0;
}
static void perf_callchain_kernel(struct pt_regs *regs,
struct perf_callchain_entry *entry)
{
unsigned long sp, next_sp;
unsigned long next_ip;
unsigned long lr;
long level = 0;
unsigned long *fp;
lr = regs->link;
sp = regs->gpr[1];
callchain_store(entry, PERF_CONTEXT_KERNEL);
callchain_store(entry, regs->nip);
if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD))
return;
for (;;) {
fp = (unsigned long *) sp;
next_sp = fp[0];
if (next_sp == sp + STACK_INT_FRAME_SIZE &&
fp[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
/*
* This looks like an interrupt frame for an
* interrupt that occurred in the kernel
*/
regs = (struct pt_regs *)(sp + STACK_FRAME_OVERHEAD);
next_ip = regs->nip;
lr = regs->link;
level = 0;
callchain_store(entry, PERF_CONTEXT_KERNEL);
} else {
if (level == 0)
next_ip = lr;
else
next_ip = fp[STACK_FRAME_LR_SAVE];
/*
* We can't tell which of the first two addresses
* we get are valid, but we can filter out the
* obviously bogus ones here. We replace them
* with 0 rather than removing them entirely so
* that userspace can tell which is which.
*/
if ((level == 1 && next_ip == lr) ||
(level <= 1 && !kernel_text_address(next_ip)))
next_ip = 0;
++level;
}
callchain_store(entry, next_ip);
if (!valid_next_sp(next_sp, sp))
return;
sp = next_sp;
}
}
#ifdef CONFIG_PPC64
#ifdef CONFIG_HUGETLB_PAGE
#define is_huge_psize(pagesize) (HPAGE_SHIFT && mmu_huge_psizes[pagesize])
#else
#define is_huge_psize(pagesize) 0
#endif
/*
* On 64-bit we don't want to invoke hash_page on user addresses from
* interrupt context, so if the access faults, we read the page tables
* to find which page (if any) is mapped and access it directly.
*/
static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
{
pgd_t *pgdir;
pte_t *ptep, pte;
int pagesize;
unsigned long addr = (unsigned long) ptr;
unsigned long offset;
unsigned long pfn;
void *kaddr;
pgdir = current->mm->pgd;
if (!pgdir)
return -EFAULT;
pagesize = get_slice_psize(current->mm, addr);
/* align address to page boundary */
offset = addr & ((1ul << mmu_psize_defs[pagesize].shift) - 1);
addr -= offset;
if (is_huge_psize(pagesize))
ptep = huge_pte_offset(current->mm, addr);
else
ptep = find_linux_pte(pgdir, addr);
if (ptep == NULL)
return -EFAULT;
pte = *ptep;
if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
return -EFAULT;
pfn = pte_pfn(pte);
if (!page_is_ram(pfn))
return -EFAULT;
/* no highmem to worry about here */
kaddr = pfn_to_kaddr(pfn);
memcpy(ret, kaddr + offset, nb);
return 0;
}
static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
{
if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
((unsigned long)ptr & 7))
return -EFAULT;
if (!__get_user_inatomic(*ret, ptr))
return 0;
return read_user_stack_slow(ptr, ret, 8);
}
static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
{
if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
((unsigned long)ptr & 3))
return -EFAULT;
if (!__get_user_inatomic(*ret, ptr))
return 0;
return read_user_stack_slow(ptr, ret, 4);
}
static inline int valid_user_sp(unsigned long sp, int is_64)
{
if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
return 0;
return 1;
}
/*
* 64-bit user processes use the same stack frame for RT and non-RT signals.
*/
struct signal_frame_64 {
char dummy[__SIGNAL_FRAMESIZE];
struct ucontext uc;
unsigned long unused[2];
unsigned int tramp[6];
struct siginfo *pinfo;
void *puc;
struct siginfo info;
char abigap[288];
};
static int is_sigreturn_64_address(unsigned long nip, unsigned long fp)
{
if (nip == fp + offsetof(struct signal_frame_64, tramp))
return 1;
if (vdso64_rt_sigtramp && current->mm->context.vdso_base &&
nip == current->mm->context.vdso_base + vdso64_rt_sigtramp)
return 1;
return 0;
}
/*
* Do some sanity checking on the signal frame pointed to by sp.
* We check the pinfo and puc pointers in the frame.
*/
static int sane_signal_64_frame(unsigned long sp)
{
struct signal_frame_64 __user *sf;
unsigned long pinfo, puc;
sf = (struct signal_frame_64 __user *) sp;
if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) ||
read_user_stack_64((unsigned long __user *) &sf->puc, &puc))
return 0;
return pinfo == (unsigned long) &sf->info &&
puc == (unsigned long) &sf->uc;
}
static void perf_callchain_user_64(struct pt_regs *regs,
struct perf_callchain_entry *entry)
{
unsigned long sp, next_sp;
unsigned long next_ip;
unsigned long lr;
long level = 0;
struct signal_frame_64 __user *sigframe;
unsigned long __user *fp, *uregs;
next_ip = regs->nip;
lr = regs->link;
sp = regs->gpr[1];
callchain_store(entry, PERF_CONTEXT_USER);
callchain_store(entry, next_ip);
for (;;) {
fp = (unsigned long __user *) sp;
if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
return;
if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
return;
/*
* Note: the next_sp - sp >= signal frame size check
* is true when next_sp < sp, which can happen when
* transitioning from an alternate signal stack to the
* normal stack.
*/
if (next_sp - sp >= sizeof(struct signal_frame_64) &&
(is_sigreturn_64_address(next_ip, sp) ||
(level <= 1 && is_sigreturn_64_address(lr, sp))) &&
sane_signal_64_frame(sp)) {
/*
* This looks like a signal frame
*/
sigframe = (struct signal_frame_64 __user *) sp;
uregs = sigframe->uc.uc_mcontext.gp_regs;
if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
read_user_stack_64(&uregs[PT_LNK], &lr) ||
read_user_stack_64(&uregs[PT_R1], &sp))
return;
level = 0;
callchain_store(entry, PERF_CONTEXT_USER);
callchain_store(entry, next_ip);
continue;
}
if (level == 0)
next_ip = lr;
callchain_store(entry, next_ip);
++level;
sp = next_sp;
}
}
static inline int current_is_64bit(void)
{
/*
* We can't use test_thread_flag() here because we may be on an
* interrupt stack, and the thread flags don't get copied over
* from the thread_info on the main stack to the interrupt stack.
*/
return !test_ti_thread_flag(task_thread_info(current), TIF_32BIT);
}
#else /* CONFIG_PPC64 */
/*
* On 32-bit we just access the address and let hash_page create a
* HPTE if necessary, so there is no need to fall back to reading
* the page tables. Since this is called at interrupt level,
* do_page_fault() won't treat a DSI as a page fault.
*/
static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
{
if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
((unsigned long)ptr & 3))
return -EFAULT;
return __get_user_inatomic(*ret, ptr);
}
static inline void perf_callchain_user_64(struct pt_regs *regs,
struct perf_callchain_entry *entry)
{
}
static inline int current_is_64bit(void)
{
return 0;
}
static inline int valid_user_sp(unsigned long sp, int is_64)
{
if (!sp || (sp & 7) || sp > TASK_SIZE - 32)
return 0;
return 1;
}
#define __SIGNAL_FRAMESIZE32 __SIGNAL_FRAMESIZE
#define sigcontext32 sigcontext
#define mcontext32 mcontext
#define ucontext32 ucontext
#define compat_siginfo_t struct siginfo
#endif /* CONFIG_PPC64 */
/*
* Layout for non-RT signal frames
*/
struct signal_frame_32 {
char dummy[__SIGNAL_FRAMESIZE32];
struct sigcontext32 sctx;
struct mcontext32 mctx;
int abigap[56];
};
/*
* Layout for RT signal frames
*/
struct rt_signal_frame_32 {
char dummy[__SIGNAL_FRAMESIZE32 + 16];
compat_siginfo_t info;
struct ucontext32 uc;
int abigap[56];
};
static int is_sigreturn_32_address(unsigned int nip, unsigned int fp)
{
if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad))
return 1;
if (vdso32_sigtramp && current->mm->context.vdso_base &&
nip == current->mm->context.vdso_base + vdso32_sigtramp)
return 1;
return 0;
}
static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp)
{
if (nip == fp + offsetof(struct rt_signal_frame_32,
uc.uc_mcontext.mc_pad))
return 1;
if (vdso32_rt_sigtramp && current->mm->context.vdso_base &&
nip == current->mm->context.vdso_base + vdso32_rt_sigtramp)
return 1;
return 0;
}
static int sane_signal_32_frame(unsigned int sp)
{
struct signal_frame_32 __user *sf;
unsigned int regs;
sf = (struct signal_frame_32 __user *) (unsigned long) sp;
if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, &regs))
return 0;
return regs == (unsigned long) &sf->mctx;
}
static int sane_rt_signal_32_frame(unsigned int sp)
{
struct rt_signal_frame_32 __user *sf;
unsigned int regs;
sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, &regs))
return 0;
return regs == (unsigned long) &sf->uc.uc_mcontext;
}
static unsigned int __user *signal_frame_32_regs(unsigned int sp,
unsigned int next_sp, unsigned int next_ip)
{
struct mcontext32 __user *mctx = NULL;
struct signal_frame_32 __user *sf;
struct rt_signal_frame_32 __user *rt_sf;
/*
* Note: the next_sp - sp >= signal frame size check
* is true when next_sp < sp, for example, when
* transitioning from an alternate signal stack to the
* normal stack.
*/
if (next_sp - sp >= sizeof(struct signal_frame_32) &&
is_sigreturn_32_address(next_ip, sp) &&
sane_signal_32_frame(sp)) {
sf = (struct signal_frame_32 __user *) (unsigned long) sp;
mctx = &sf->mctx;
}
if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) &&
is_rt_sigreturn_32_address(next_ip, sp) &&
sane_rt_signal_32_frame(sp)) {
rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
mctx = &rt_sf->uc.uc_mcontext;
}
if (!mctx)
return NULL;
return mctx->mc_gregs;
}
static void perf_callchain_user_32(struct pt_regs *regs,
struct perf_callchain_entry *entry)
{
unsigned int sp, next_sp;
unsigned int next_ip;
unsigned int lr;
long level = 0;
unsigned int __user *fp, *uregs;
next_ip = regs->nip;
lr = regs->link;
sp = regs->gpr[1];
callchain_store(entry, PERF_CONTEXT_USER);
callchain_store(entry, next_ip);
while (entry->nr < PERF_MAX_STACK_DEPTH) {
fp = (unsigned int __user *) (unsigned long) sp;
if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp))
return;
if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
return;
uregs = signal_frame_32_regs(sp, next_sp, next_ip);
if (!uregs && level <= 1)
uregs = signal_frame_32_regs(sp, next_sp, lr);
if (uregs) {
/*
* This looks like a signal frame, so restart
* the stack trace with the values in it.
*/
if (read_user_stack_32(&uregs[PT_NIP], &next_ip) ||
read_user_stack_32(&uregs[PT_LNK], &lr) ||
read_user_stack_32(&uregs[PT_R1], &sp))
return;
level = 0;
callchain_store(entry, PERF_CONTEXT_USER);
callchain_store(entry, next_ip);
continue;
}
if (level == 0)
next_ip = lr;
callchain_store(entry, next_ip);
++level;
sp = next_sp;
}
}
/*
* Since we can't get PMU interrupts inside a PMU interrupt handler,
* we don't need separate irq and nmi entries here.
*/
static DEFINE_PER_CPU(struct perf_callchain_entry, callchain);
struct perf_callchain_entry *perf_callchain(struct pt_regs *regs)
{
struct perf_callchain_entry *entry = &__get_cpu_var(callchain);
entry->nr = 0;
if (current->pid == 0) /* idle task? */
return entry;
if (!user_mode(regs)) {
perf_callchain_kernel(regs, entry);
if (current->mm)
regs = task_pt_regs(current);
else
regs = NULL;
}
if (regs) {
if (current_is_64bit())
perf_callchain_user_64(regs, entry);
else
perf_callchain_user_32(regs, entry);
}
return entry;
}

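perf_callchain() hands back a flat ip[] array in which PERF_CONTEXT_KERNEL and PERF_CONTEXT_USER act as section markers rather than instruction pointers. A hedged sketch of a consumer; dump_callchain is an illustrative name and not part of the perf code:

#include <linux/kernel.h>
#include <linux/perf_counter.h>

static void dump_callchain(struct perf_callchain_entry *entry)
{
	u64 i;

	for (i = 0; i < entry->nr; i++) {
		u64 ip = entry->ip[i];

		if (ip == PERF_CONTEXT_KERNEL)
			printk(KERN_DEBUG "kernel:\n");
		else if (ip == PERF_CONTEXT_USER)
			printk(KERN_DEBUG "user:\n");
		else
			printk(KERN_DEBUG "  %016llx\n",
			       (unsigned long long)ip);
	}
}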
View file

@ -317,7 +317,7 @@ static int power7_generic_events[] = {
*/
static int power7_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x400f0, 0xc880 },
[C(OP_READ)] = { 0xc880, 0x400f0 },
[C(OP_WRITE)] = { 0, 0x300f0 },
[C(OP_PREFETCH)] = { 0xd8b8, 0 },
},
@ -327,8 +327,8 @@ static int power7_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(OP_PREFETCH)] = { 0x408a, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x6080, 0x6084 },
[C(OP_WRITE)] = { 0x6082, 0x6086 },
[C(OP_READ)] = { 0x16080, 0x26080 },
[C(OP_WRITE)] = { 0x16082, 0x26082 },
[C(OP_PREFETCH)] = { 0, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */

View file

@ -92,15 +92,13 @@ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
: "memory" );
}
void slb_flush_and_rebolt(void)
static void __slb_flush_and_rebolt(void)
{
/* If you change this make sure you change SLB_NUM_BOLTED
* appropriately too. */
unsigned long linear_llp, vmalloc_llp, lflags, vflags;
unsigned long ksp_esid_data, ksp_vsid_data;
WARN_ON(!irqs_disabled());
linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
vmalloc_llp = mmu_psize_defs[mmu_vmalloc_psize].sllp;
lflags = SLB_VSID_KERNEL | linear_llp;
@ -117,12 +115,6 @@ void slb_flush_and_rebolt(void)
ksp_vsid_data = get_slb_shadow()->save_area[2].vsid;
}
/*
* We can't take a PMU exception in the following code, so hard
* disable interrupts.
*/
hard_irq_disable();
/* We need to do this all in asm, so we're sure we don't touch
* the stack between the slbia and rebolting it. */
asm volatile("isync\n"
@ -139,6 +131,21 @@ void slb_flush_and_rebolt(void)
: "memory");
}
void slb_flush_and_rebolt(void)
{
WARN_ON(!irqs_disabled());
/*
* We can't take a PMU exception in the following code, so hard
* disable interrupts.
*/
hard_irq_disable();
__slb_flush_and_rebolt();
get_paca()->slb_cache_ptr = 0;
}
void slb_vmalloc_update(void)
{
unsigned long vflags;
@ -180,12 +187,20 @@ static inline int esids_match(unsigned long addr1, unsigned long addr2)
/* Flush all user entries from the segment table of the current processor. */
void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
{
unsigned long offset = get_paca()->slb_cache_ptr;
unsigned long offset;
unsigned long slbie_data = 0;
unsigned long pc = KSTK_EIP(tsk);
unsigned long stack = KSTK_ESP(tsk);
unsigned long unmapped_base;
/*
* We need interrupts hard-disabled here, not just soft-disabled,
* so that a PMU interrupt can't occur, which might try to access
* user memory (to get a stack trace) and possibly cause an SLB miss
* which would update the slb_cache/slb_cache_ptr fields in the PACA.
*/
hard_irq_disable();
offset = get_paca()->slb_cache_ptr;
if (!cpu_has_feature(CPU_FTR_NO_SLBIE_B) &&
offset <= SLB_CACHE_ENTRIES) {
int i;
@ -200,7 +215,7 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
}
asm volatile("isync" : : : "memory");
} else {
slb_flush_and_rebolt();
__slb_flush_and_rebolt();
}
/* Workaround POWER5 < DD2.1 issue */

View file

@ -164,7 +164,7 @@ void switch_stab(struct task_struct *tsk, struct mm_struct *mm)
{
struct stab_entry *stab = (struct stab_entry *) get_paca()->stab_addr;
struct stab_entry *ste;
unsigned long offset = __get_cpu_var(stab_cache_ptr);
unsigned long offset;
unsigned long pc = KSTK_EIP(tsk);
unsigned long stack = KSTK_ESP(tsk);
unsigned long unmapped_base;
@ -172,6 +172,15 @@ void switch_stab(struct task_struct *tsk, struct mm_struct *mm)
/* Force previous translations to complete. DRENG */
asm volatile("isync" : : : "memory");
/*
* We need interrupts hard-disabled here, not just soft-disabled,
* so that a PMU interrupt can't occur, which might try to access
* user memory (to get a stack trace) and possibly cause an STAB miss
* which would update the stab_cache/stab_cache_ptr per-cpu variables.
*/
hard_irq_disable();
offset = __get_cpu_var(stab_cache_ptr);
if (offset <= NR_STAB_CACHE_ENTRIES) {
int i;

View file

@ -234,7 +234,6 @@ static void xilinx_i8259_cascade(unsigned int irq, struct irq_desc *desc)
generic_handle_irq(cascade_irq);
/* Let xilinx_intc end the interrupt */
desc->chip->ack(irq);
desc->chip->unmask(irq);
}

View file

@ -84,7 +84,7 @@ config S390
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_TRACE_MCOUNT_TEST
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FTRACE_SYSCALLS
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_DYNAMIC_FTRACE
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_DEFAULT_NO_SPIN_MUTEXES
@ -95,7 +95,6 @@ config S390
select HAVE_ARCH_TRACEHOOK
select INIT_ALL_POSSIBLE
select HAVE_PERF_COUNTERS
select GENERIC_ATOMIC64 if !64BIT
config SCHED_OMIT_FRAME_POINTER
bool
@ -481,13 +480,6 @@ config CMM_IUCV
Select this option to enable the special message interface to
the cooperative memory management.
config PAGE_STATES
bool "Unused page notification"
help
This enables the notification of unused pages to the
hypervisor. The ESSA instruction is used to perform the state
changes between a page that has content and the unused state.
config APPLDATA_BASE
bool "Linux - VM Monitor Stream, base infrastructure"
depends on PROC_FS

View file

@ -88,8 +88,7 @@ LDFLAGS_vmlinux := -e start
head-y := arch/s390/kernel/head.o arch/s390/kernel/init_task.o
core-y += arch/s390/mm/ arch/s390/kernel/ arch/s390/crypto/ \
arch/s390/appldata/ arch/s390/hypfs/ arch/s390/kvm/ \
arch/s390/power/
arch/s390/appldata/ arch/s390/hypfs/ arch/s390/kvm/
libs-y += arch/s390/lib/
drivers-y += drivers/s390/

View file

@ -250,8 +250,9 @@ static int des3_128_setkey(struct crypto_tfm *tfm, const u8 *key,
const u8 *temp_key = key;
u32 *flags = &tfm->crt_flags;
if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE))) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED;
if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE)) &&
(*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
*flags |= CRYPTO_TFM_RES_WEAK_KEY;
return -EINVAL;
}
for (i = 0; i < 2; i++, temp_key += DES_KEY_SIZE) {
@ -411,9 +412,9 @@ static int des3_192_setkey(struct crypto_tfm *tfm, const u8 *key,
if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE) &&
memcmp(&key[DES_KEY_SIZE], &key[DES_KEY_SIZE * 2],
DES_KEY_SIZE))) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED;
DES_KEY_SIZE)) &&
(*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
*flags |= CRYPTO_TFM_RES_WEAK_KEY;
return -EINVAL;
}
for (i = 0; i < 3; i++, temp_key += DES_KEY_SIZE) {

View file

@ -46,12 +46,38 @@ static int sha1_init(struct shash_desc *desc)
return 0;
}
static int sha1_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha1_state *octx = out;
octx->count = sctx->count;
memcpy(octx->state, sctx->state, sizeof(octx->state));
memcpy(octx->buffer, sctx->buf, sizeof(octx->buffer));
return 0;
}
static int sha1_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha1_state *ictx = in;
sctx->count = ictx->count;
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
memcpy(sctx->buf, ictx->buffer, sizeof(ictx->buffer));
sctx->func = KIMD_SHA_1;
return 0;
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha1_export,
.import = sha1_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha1_state),
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-s390",

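The new export/import hooks let a caller checkpoint a partial SHA-1 into a struct sha1_state and resume it later, which is what .statesize advertises. A hedged sketch of the round trip through the generic shash API; hash_with_checkpoint and its minimal error handling are illustrative:

#include <crypto/hash.h>
#include <crypto/sha.h>
#include <linux/slab.h>

static int hash_with_checkpoint(const u8 *a, unsigned int alen,
				const u8 *b, unsigned int blen, u8 *digest)
{
	struct crypto_shash *tfm = crypto_alloc_shash("sha1", 0, 0);
	struct shash_desc *desc;
	struct sha1_state saved;
	int err;

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
	if (!desc) {
		crypto_free_shash(tfm);
		return -ENOMEM;
	}
	desc->tfm = tfm;

	err = crypto_shash_init(desc) ?:
	      crypto_shash_update(desc, a, alen) ?:
	      crypto_shash_export(desc, &saved) ?:	/* checkpoint */
	      crypto_shash_import(desc, &saved) ?:	/* resume */
	      crypto_shash_update(desc, b, blen) ?:
	      crypto_shash_final(desc, digest);

	kfree(desc);
	crypto_free_shash(tfm);
	return err;
}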
View file

@ -42,12 +42,38 @@ static int sha256_init(struct shash_desc *desc)
return 0;
}
static int sha256_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha256_state *octx = out;
octx->count = sctx->count;
memcpy(octx->state, sctx->state, sizeof(octx->state));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
return 0;
}
static int sha256_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha256_state *ictx = in;
sctx->count = ictx->count;
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = KIMD_SHA_256;
return 0;
}
static struct shash_alg alg = {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha256_export,
.import = sha256_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
.cra_driver_name= "sha256-s390",

View file

@ -13,7 +13,10 @@
*
*/
#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "sha.h"
@ -37,12 +40,42 @@ static int sha512_init(struct shash_desc *desc)
return 0;
}
static int sha512_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha512_state *octx = out;
octx->count[0] = sctx->count;
octx->count[1] = 0;
memcpy(octx->state, sctx->state, sizeof(octx->state));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
return 0;
}
static int sha512_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha512_state *ictx = in;
if (unlikely(ictx->count[1]))
return -ERANGE;
sctx->count = ictx->count[0];
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = KIMD_SHA_512;
return 0;
}
static struct shash_alg sha512_alg = {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha512_export,
.import = sha512_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha512",
.cra_driver_name= "sha512-s390",
@ -78,7 +111,10 @@ static struct shash_alg sha384_alg = {
.init = sha384_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha512_export,
.import = sha512_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha384",
.cra_driver_name= "sha384-s390",

View file

@ -900,7 +900,7 @@ CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_FTRACE_SYSCALLS=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set

View file

@ -355,11 +355,7 @@ static struct dentry *hypfs_create_file(struct super_block *sb,
{
struct dentry *dentry;
struct inode *inode;
struct qstr qname;
qname.name = name;
qname.len = strlen(name);
qname.hash = full_name_hash(name, qname.len);
mutex_lock(&parent->d_inode->i_mutex);
dentry = lookup_one_len(name, parent, strlen(name));
if (IS_ERR(dentry)) {
@ -426,7 +422,7 @@ struct dentry *hypfs_create_u64(struct super_block *sb, struct dentry *dir,
char tmp[TMP_SIZE];
struct dentry *dentry;
snprintf(tmp, TMP_SIZE, "%lld\n", (unsigned long long int)value);
snprintf(tmp, TMP_SIZE, "%llu\n", (unsigned long long int)value);
buffer = kstrdup(tmp, GFP_KERNEL);
if (!buffer)
return ERR_PTR(-ENOMEM);

View file

@ -1,33 +1,23 @@
#ifndef __ARCH_S390_ATOMIC__
#define __ARCH_S390_ATOMIC__
/*
* Copyright 1999,2009 IBM Corp.
* Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>,
* Denis Joseph Barrow,
* Arnd Bergmann <arndb@de.ibm.com>,
*
* Atomic operations that C can't guarantee us.
* Useful for resource counting etc.
* s390 uses 'Compare And Swap' for atomicity in SMP environment.
*
*/
#include <linux/compiler.h>
#include <linux/types.h>
/*
* include/asm-s390/atomic.h
*
* S390 version
* Copyright (C) 1999-2005 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
* Denis Joseph Barrow,
* Arnd Bergmann (arndb@de.ibm.com)
*
* Derived from "include/asm-i386/bitops.h"
* Copyright (C) 1992, Linus Torvalds
*
*/
/*
* Atomic operations that C can't guarantee us. Useful for
* resource counting etc.
* S390 uses 'Compare And Swap' for atomicity in SMP environment
*/
#define ATOMIC_INIT(i) { (i) }
#ifdef __KERNEL__
#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ > 2)
#define __CS_LOOP(ptr, op_val, op_string) ({ \
@ -77,7 +67,7 @@ static inline void atomic_set(atomic_t *v, int i)
barrier();
}
static __inline__ int atomic_add_return(int i, atomic_t * v)
static inline int atomic_add_return(int i, atomic_t *v)
{
return __CS_LOOP(v, i, "ar");
}
@ -87,7 +77,7 @@ static __inline__ int atomic_add_return(int i, atomic_t * v)
#define atomic_inc_return(_v) atomic_add_return(1, _v)
#define atomic_inc_and_test(_v) (atomic_add_return(1, _v) == 0)
static __inline__ int atomic_sub_return(int i, atomic_t * v)
static inline int atomic_sub_return(int i, atomic_t *v)
{
return __CS_LOOP(v, i, "sr");
}
@ -97,19 +87,19 @@ static __inline__ int atomic_sub_return(int i, atomic_t * v)
#define atomic_dec_return(_v) atomic_sub_return(1, _v)
#define atomic_dec_and_test(_v) (atomic_sub_return(1, _v) == 0)
static __inline__ void atomic_clear_mask(unsigned long mask, atomic_t * v)
static inline void atomic_clear_mask(unsigned long mask, atomic_t *v)
{
__CS_LOOP(v, ~mask, "nr");
__CS_LOOP(v, ~mask, "nr");
}
static __inline__ void atomic_set_mask(unsigned long mask, atomic_t * v)
static inline void atomic_set_mask(unsigned long mask, atomic_t *v)
{
__CS_LOOP(v, mask, "or");
__CS_LOOP(v, mask, "or");
}
#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
static __inline__ int atomic_cmpxchg(atomic_t *v, int old, int new)
static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ > 2)
asm volatile(
@ -127,7 +117,7 @@ static __inline__ int atomic_cmpxchg(atomic_t *v, int old, int new)
return old;
}
static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
int c, old;
c = atomic_read(v);
@ -146,9 +136,10 @@ static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
#undef __CS_LOOP
#ifdef __s390x__
#define ATOMIC64_INIT(i) { (i) }
#ifdef CONFIG_64BIT
#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ > 2)
#define __CSG_LOOP(ptr, op_val, op_string) ({ \
@ -162,7 +153,7 @@ static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
: "=&d" (old_val), "=&d" (new_val), \
"=Q" (((atomic_t *)(ptr))->counter) \
: "d" (op_val), "Q" (((atomic_t *)(ptr))->counter) \
: "cc", "memory" ); \
: "cc", "memory"); \
new_val; \
})
@ -180,7 +171,7 @@ static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
"=m" (((atomic_t *)(ptr))->counter) \
: "a" (ptr), "d" (op_val), \
"m" (((atomic_t *)(ptr))->counter) \
: "cc", "memory" ); \
: "cc", "memory"); \
new_val; \
})
@ -198,39 +189,29 @@ static inline void atomic64_set(atomic64_t *v, long long i)
barrier();
}
static __inline__ long long atomic64_add_return(long long i, atomic64_t * v)
static inline long long atomic64_add_return(long long i, atomic64_t *v)
{
return __CSG_LOOP(v, i, "agr");
}
#define atomic64_add(_i, _v) atomic64_add_return(_i, _v)
#define atomic64_add_negative(_i, _v) (atomic64_add_return(_i, _v) < 0)
#define atomic64_inc(_v) atomic64_add_return(1, _v)
#define atomic64_inc_return(_v) atomic64_add_return(1, _v)
#define atomic64_inc_and_test(_v) (atomic64_add_return(1, _v) == 0)
static __inline__ long long atomic64_sub_return(long long i, atomic64_t * v)
static inline long long atomic64_sub_return(long long i, atomic64_t *v)
{
return __CSG_LOOP(v, i, "sgr");
}
#define atomic64_sub(_i, _v) atomic64_sub_return(_i, _v)
#define atomic64_sub_and_test(_i, _v) (atomic64_sub_return(_i, _v) == 0)
#define atomic64_dec(_v) atomic64_sub_return(1, _v)
#define atomic64_dec_return(_v) atomic64_sub_return(1, _v)
#define atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0)
static __inline__ void atomic64_clear_mask(unsigned long mask, atomic64_t * v)
static inline void atomic64_clear_mask(unsigned long mask, atomic64_t *v)
{
__CSG_LOOP(v, ~mask, "ngr");
__CSG_LOOP(v, ~mask, "ngr");
}
static __inline__ void atomic64_set_mask(unsigned long mask, atomic64_t * v)
static inline void atomic64_set_mask(unsigned long mask, atomic64_t *v)
{
__CSG_LOOP(v, mask, "ogr");
__CSG_LOOP(v, mask, "ogr");
}
#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
static __inline__ long long atomic64_cmpxchg(atomic64_t *v,
static inline long long atomic64_cmpxchg(atomic64_t *v,
long long old, long long new)
{
#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ > 2)
@ -249,8 +230,112 @@ static __inline__ long long atomic64_cmpxchg(atomic64_t *v,
return old;
}
static __inline__ int atomic64_add_unless(atomic64_t *v,
long long a, long long u)
#undef __CSG_LOOP
#else /* CONFIG_64BIT */
typedef struct {
long long counter;
} atomic64_t;
static inline long long atomic64_read(const atomic64_t *v)
{
register_pair rp;
asm volatile(
" lm %0,%N0,0(%1)"
: "=&d" (rp)
: "a" (&v->counter), "m" (v->counter)
);
return rp.pair;
}
static inline void atomic64_set(atomic64_t *v, long long i)
{
register_pair rp = {.pair = i};
asm volatile(
" stm %1,%N1,0(%2)"
: "=m" (v->counter)
: "d" (rp), "a" (&v->counter)
);
}
static inline long long atomic64_xchg(atomic64_t *v, long long new)
{
register_pair rp_new = {.pair = new};
register_pair rp_old;
asm volatile(
" lm %0,%N0,0(%2)\n"
"0: cds %0,%3,0(%2)\n"
" jl 0b\n"
: "=&d" (rp_old), "+m" (v->counter)
: "a" (&v->counter), "d" (rp_new)
: "cc");
return rp_old.pair;
}
static inline long long atomic64_cmpxchg(atomic64_t *v,
long long old, long long new)
{
register_pair rp_old = {.pair = old};
register_pair rp_new = {.pair = new};
asm volatile(
" cds %0,%3,0(%2)"
: "+&d" (rp_old), "+m" (v->counter)
: "a" (&v->counter), "d" (rp_new)
: "cc");
return rp_old.pair;
}
static inline long long atomic64_add_return(long long i, atomic64_t *v)
{
long long old, new;
do {
old = atomic64_read(v);
new = old + i;
} while (atomic64_cmpxchg(v, old, new) != old);
return new;
}
static inline long long atomic64_sub_return(long long i, atomic64_t *v)
{
long long old, new;
do {
old = atomic64_read(v);
new = old - i;
} while (atomic64_cmpxchg(v, old, new) != old);
return new;
}
static inline void atomic64_set_mask(unsigned long long mask, atomic64_t *v)
{
long long old, new;
do {
old = atomic64_read(v);
new = old | mask;
} while (atomic64_cmpxchg(v, old, new) != old);
}
static inline void atomic64_clear_mask(unsigned long long mask, atomic64_t *v)
{
long long old, new;
do {
old = atomic64_read(v);
new = old & mask;
} while (atomic64_cmpxchg(v, old, new) != old);
}
#endif /* CONFIG_64BIT */
static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
{
long long c, old;
c = atomic64_read(v);
@ -265,15 +350,17 @@ static __inline__ int atomic64_add_unless(atomic64_t *v,
return c != u;
}
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
#undef __CSG_LOOP
#else /* __s390x__ */
#include <asm-generic/atomic64.h>
#endif /* __s390x__ */
#define atomic64_add(_i, _v) atomic64_add_return(_i, _v)
#define atomic64_add_negative(_i, _v) (atomic64_add_return(_i, _v) < 0)
#define atomic64_inc(_v) atomic64_add_return(1, _v)
#define atomic64_inc_return(_v) atomic64_add_return(1, _v)
#define atomic64_inc_and_test(_v) (atomic64_add_return(1, _v) == 0)
#define atomic64_sub(_i, _v) atomic64_sub_return(_i, _v)
#define atomic64_sub_and_test(_i, _v) (atomic64_sub_return(_i, _v) == 0)
#define atomic64_dec(_v) atomic64_sub_return(1, _v)
#define atomic64_dec_return(_v) atomic64_sub_return(1, _v)
#define atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
@ -281,5 +368,5 @@ static __inline__ int atomic64_add_unless(atomic64_t *v,
#define smp_mb__after_atomic_inc() smp_mb()
#include <asm-generic/atomic-long.h>
#endif /* __KERNEL__ */
#endif /* __ARCH_S390_ATOMIC__ */

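On 31-bit the patch builds every atomic64 operation from atomic64_cmpxchg() (backed by the CDS compare-double-and-swap instruction) in a read/modify/compare-and-swap retry loop. The same pattern in portable C11, as a user-space sketch rather than kernel code:

#include <stdatomic.h>
#include <stdint.h>

/* Retry until no other CPU modified the counter between the read and
 * the compare-and-swap, mirroring atomic64_add_return() above. */
static int64_t add_return64(_Atomic int64_t *v, int64_t i)
{
	int64_t old = atomic_load(v);
	int64_t new;

	do {
		new = old + i;
		/* on failure, 'old' is reloaded with the current value */
	} while (!atomic_compare_exchange_weak(v, &old, new));
	return new;
}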
View file

@ -78,28 +78,11 @@ csum_partial_copy_nocheck (const void *src, void *dst, int len, __wsum sum)
*/
static inline __sum16 csum_fold(__wsum sum)
{
#ifndef __s390x__
register_pair rp;
u32 csum = (__force u32) sum;
asm volatile(
" slr %N1,%N1\n" /* %0 = H L */
" lr %1,%0\n" /* %0 = H L, %1 = H L 0 0 */
" srdl %1,16\n" /* %0 = H L, %1 = 0 H L 0 */
" alr %1,%N1\n" /* %0 = H L, %1 = L H L 0 */
" alr %0,%1\n" /* %0 = H+L+C L+H */
" srl %0,16\n" /* %0 = H+L+C */
: "+&d" (sum), "=d" (rp) : : "cc");
#else /* __s390x__ */
asm volatile(
" sr 3,3\n" /* %0 = H*65536 + L */
" lr 2,%0\n" /* %0 = H L, 2/3 = H L / 0 0 */
" srdl 2,16\n" /* %0 = H L, 2/3 = 0 H / L 0 */
" alr 2,3\n" /* %0 = H L, 2/3 = L H / L 0 */
" alr %0,2\n" /* %0 = H+L+C L+H */
" srl %0,16\n" /* %0 = H+L+C */
: "+&d" (sum) : : "cc", "2", "3");
#endif /* __s390x__ */
return (__force __sum16) ~sum;
csum += (csum >> 16) + (csum << 16);
csum >>= 16;
return (__force __sum16) ~csum;
}
/*

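The rewritten csum_fold() drops both inline-assembly variants: adding (csum >> 16) + (csum << 16) to csum sums the two 16-bit halves, with the low-half carry wrapping naturally into the high half, which then holds the folded value. A stand-alone illustration (user-space C, not kernel code):

#include <stdint.h>
#include <stdio.h>

static uint16_t fold(uint32_t csum)
{
	csum += (csum >> 16) + (csum << 16);	/* halves + end-around carry */
	return (uint16_t)~(csum >> 16);		/* complemented high half */
}

int main(void)
{
	/* 0x1234 + 0x5678 = 0x68ac, so the folded sum is ~0x68ac = 0x9753 */
	printf("0x%04x\n", fold(0x12345678));
	return 0;
}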
View file

@ -125,4 +125,32 @@ struct chsc_cpd_info {
#define CHSC_INFO_CPD _IOWR(CHSC_IOCTL_MAGIC, 0x87, struct chsc_cpd_info)
#define CHSC_INFO_DCAL _IOWR(CHSC_IOCTL_MAGIC, 0x88, struct chsc_dcal)
#ifdef __KERNEL__
struct css_general_char {
u64 : 12;
u32 dynio : 1; /* bit 12 */
u32 : 28;
u32 aif : 1; /* bit 41 */
u32 : 3;
u32 mcss : 1; /* bit 45 */
u32 fcs : 1; /* bit 46 */
u32 : 1;
u32 ext_mb : 1; /* bit 48 */
u32 : 7;
u32 aif_tdd : 1; /* bit 56 */
u32 : 1;
u32 qebsm : 1; /* bit 58 */
u32 : 8;
u32 aif_osa : 1; /* bit 67 */
u32 : 14;
u32 cib : 1; /* bit 82 */
u32 : 5;
u32 fcx : 1; /* bit 88 */
u32 : 7;
} __attribute__((packed));
extern struct css_general_char css_general_characteristics;
#endif /* __KERNEL__ */
#endif

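The characteristics block is consumed as plain bitfield reads against the exported css_general_characteristics. A hedged one-line example; fcx_available is an illustrative name, and the header path is assumed from the surrounding diff:

#include <asm/chsc.h>

/* Illustrative: does the channel subsystem support transport mode? */
static int fcx_available(void)
{
	return css_general_characteristics.fcx;	/* bit 88 */
}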
View file

@ -15,228 +15,7 @@
#define LPM_ANYPATH 0xff
#define __MAX_CSSID 0
/**
* struct cmd_scsw - command-mode subchannel status word
* @key: subchannel key
* @sctl: suspend control
* @eswf: esw format
* @cc: deferred condition code
* @fmt: format
* @pfch: prefetch
* @isic: initial-status interruption control
* @alcc: address-limit checking control
* @ssi: suppress-suspended interruption
* @zcc: zero condition code
* @ectl: extended control
* @pno: path not operational
* @res: reserved
* @fctl: function control
* @actl: activity control
* @stctl: status control
* @cpa: channel program address
* @dstat: device status
* @cstat: subchannel status
* @count: residual count
*/
struct cmd_scsw {
__u32 key : 4;
__u32 sctl : 1;
__u32 eswf : 1;
__u32 cc : 2;
__u32 fmt : 1;
__u32 pfch : 1;
__u32 isic : 1;
__u32 alcc : 1;
__u32 ssi : 1;
__u32 zcc : 1;
__u32 ectl : 1;
__u32 pno : 1;
__u32 res : 1;
__u32 fctl : 3;
__u32 actl : 7;
__u32 stctl : 5;
__u32 cpa;
__u32 dstat : 8;
__u32 cstat : 8;
__u32 count : 16;
} __attribute__ ((packed));
/**
* struct tm_scsw - transport-mode subchannel status word
* @key: subchannel key
* @eswf: esw format
* @cc: deferred condition code
* @fmt: format
* @x: IRB-format control
* @q: interrogate-complete
* @ectl: extended control
* @pno: path not operational
* @fctl: function control
* @actl: activity control
* @stctl: status control
* @tcw: TCW address
* @dstat: device status
* @cstat: subchannel status
* @fcxs: FCX status
* @schxs: subchannel-extended status
*/
struct tm_scsw {
u32 key:4;
u32 :1;
u32 eswf:1;
u32 cc:2;
u32 fmt:3;
u32 x:1;
u32 q:1;
u32 :1;
u32 ectl:1;
u32 pno:1;
u32 :1;
u32 fctl:3;
u32 actl:7;
u32 stctl:5;
u32 tcw;
u32 dstat:8;
u32 cstat:8;
u32 fcxs:8;
u32 schxs:8;
} __attribute__ ((packed));
/**
* union scsw - subchannel status word
* @cmd: command-mode SCSW
* @tm: transport-mode SCSW
*/
union scsw {
struct cmd_scsw cmd;
struct tm_scsw tm;
} __attribute__ ((packed));
int scsw_is_tm(union scsw *scsw);
u32 scsw_key(union scsw *scsw);
u32 scsw_eswf(union scsw *scsw);
u32 scsw_cc(union scsw *scsw);
u32 scsw_ectl(union scsw *scsw);
u32 scsw_pno(union scsw *scsw);
u32 scsw_fctl(union scsw *scsw);
u32 scsw_actl(union scsw *scsw);
u32 scsw_stctl(union scsw *scsw);
u32 scsw_dstat(union scsw *scsw);
u32 scsw_cstat(union scsw *scsw);
int scsw_is_solicited(union scsw *scsw);
int scsw_is_valid_key(union scsw *scsw);
int scsw_is_valid_eswf(union scsw *scsw);
int scsw_is_valid_cc(union scsw *scsw);
int scsw_is_valid_ectl(union scsw *scsw);
int scsw_is_valid_pno(union scsw *scsw);
int scsw_is_valid_fctl(union scsw *scsw);
int scsw_is_valid_actl(union scsw *scsw);
int scsw_is_valid_stctl(union scsw *scsw);
int scsw_is_valid_dstat(union scsw *scsw);
int scsw_is_valid_cstat(union scsw *scsw);
int scsw_cmd_is_valid_key(union scsw *scsw);
int scsw_cmd_is_valid_sctl(union scsw *scsw);
int scsw_cmd_is_valid_eswf(union scsw *scsw);
int scsw_cmd_is_valid_cc(union scsw *scsw);
int scsw_cmd_is_valid_fmt(union scsw *scsw);
int scsw_cmd_is_valid_pfch(union scsw *scsw);
int scsw_cmd_is_valid_isic(union scsw *scsw);
int scsw_cmd_is_valid_alcc(union scsw *scsw);
int scsw_cmd_is_valid_ssi(union scsw *scsw);
int scsw_cmd_is_valid_zcc(union scsw *scsw);
int scsw_cmd_is_valid_ectl(union scsw *scsw);
int scsw_cmd_is_valid_pno(union scsw *scsw);
int scsw_cmd_is_valid_fctl(union scsw *scsw);
int scsw_cmd_is_valid_actl(union scsw *scsw);
int scsw_cmd_is_valid_stctl(union scsw *scsw);
int scsw_cmd_is_valid_dstat(union scsw *scsw);
int scsw_cmd_is_valid_cstat(union scsw *scsw);
int scsw_cmd_is_solicited(union scsw *scsw);
int scsw_tm_is_valid_key(union scsw *scsw);
int scsw_tm_is_valid_eswf(union scsw *scsw);
int scsw_tm_is_valid_cc(union scsw *scsw);
int scsw_tm_is_valid_fmt(union scsw *scsw);
int scsw_tm_is_valid_x(union scsw *scsw);
int scsw_tm_is_valid_q(union scsw *scsw);
int scsw_tm_is_valid_ectl(union scsw *scsw);
int scsw_tm_is_valid_pno(union scsw *scsw);
int scsw_tm_is_valid_fctl(union scsw *scsw);
int scsw_tm_is_valid_actl(union scsw *scsw);
int scsw_tm_is_valid_stctl(union scsw *scsw);
int scsw_tm_is_valid_dstat(union scsw *scsw);
int scsw_tm_is_valid_cstat(union scsw *scsw);
int scsw_tm_is_valid_fcxs(union scsw *scsw);
int scsw_tm_is_valid_schxs(union scsw *scsw);
int scsw_tm_is_solicited(union scsw *scsw);
#define SCSW_FCTL_CLEAR_FUNC 0x1
#define SCSW_FCTL_HALT_FUNC 0x2
#define SCSW_FCTL_START_FUNC 0x4
#define SCSW_ACTL_SUSPENDED 0x1
#define SCSW_ACTL_DEVACT 0x2
#define SCSW_ACTL_SCHACT 0x4
#define SCSW_ACTL_CLEAR_PEND 0x8
#define SCSW_ACTL_HALT_PEND 0x10
#define SCSW_ACTL_START_PEND 0x20
#define SCSW_ACTL_RESUME_PEND 0x40
#define SCSW_STCTL_STATUS_PEND 0x1
#define SCSW_STCTL_SEC_STATUS 0x2
#define SCSW_STCTL_PRIM_STATUS 0x4
#define SCSW_STCTL_INTER_STATUS 0x8
#define SCSW_STCTL_ALERT_STATUS 0x10
#define DEV_STAT_ATTENTION 0x80
#define DEV_STAT_STAT_MOD 0x40
#define DEV_STAT_CU_END 0x20
#define DEV_STAT_BUSY 0x10
#define DEV_STAT_CHN_END 0x08
#define DEV_STAT_DEV_END 0x04
#define DEV_STAT_UNIT_CHECK 0x02
#define DEV_STAT_UNIT_EXCEP 0x01
#define SCHN_STAT_PCI 0x80
#define SCHN_STAT_INCORR_LEN 0x40
#define SCHN_STAT_PROG_CHECK 0x20
#define SCHN_STAT_PROT_CHECK 0x10
#define SCHN_STAT_CHN_DATA_CHK 0x08
#define SCHN_STAT_CHN_CTRL_CHK 0x04
#define SCHN_STAT_INTF_CTRL_CHK 0x02
#define SCHN_STAT_CHAIN_CHECK 0x01
/*
* architectured values for first sense byte
*/
#define SNS0_CMD_REJECT 0x80
#define SNS_CMD_REJECT SNS0_CMD_REJECT
#define SNS0_INTERVENTION_REQ 0x40
#define SNS0_BUS_OUT_CHECK 0x20
#define SNS0_EQUIPMENT_CHECK 0x10
#define SNS0_DATA_CHECK 0x08
#define SNS0_OVERRUN 0x04
#define SNS0_INCOMPL_DOMAIN 0x01
/*
* architectured values for second sense byte
*/
#define SNS1_PERM_ERR 0x80
#define SNS1_INV_TRACK_FORMAT 0x40
#define SNS1_EOC 0x20
#define SNS1_MESSAGE_TO_OPER 0x10
#define SNS1_NO_REC_FOUND 0x08
#define SNS1_FILE_PROTECTED 0x04
#define SNS1_WRITE_INHIBITED 0x02
#define SNS1_INPRECISE_END 0x01
/*
* architectured values for third sense byte
*/
#define SNS2_REQ_INH_WRITE 0x80
#define SNS2_CORRECTABLE 0x40
#define SNS2_FIRST_LOG_ERR 0x20
#define SNS2_ENV_DATA_PRESENT 0x10
#define SNS2_INPRECISE_END 0x04
#include <asm/scsw.h>
/**
* struct ccw1 - channel command word

View file

@ -0,0 +1,26 @@
/*
* Copyright IBM Corp. 2000,2009
* Author(s): Hartmut Penner <hp@de.ibm.com>,
* Martin Schwidefsky <schwidefsky@de.ibm.com>,
* Christian Ehrhardt <ehrhardt@de.ibm.com>,
*/
#ifndef _ASM_S390_CPU_H
#define _ASM_S390_CPU_H
#define MAX_CPU_ADDRESS 255
#ifndef __ASSEMBLY__
#include <linux/types.h>
struct cpuid
{
unsigned int version : 8;
unsigned int ident : 24;
unsigned int machine : 16;
unsigned int unused : 16;
} __packed;
#endif /* __ASSEMBLY__ */
#endif /* _ASM_S390_CPU_H */

View file

@ -1,25 +0,0 @@
/*
* Copyright IBM Corp. 2000,2009
* Author(s): Hartmut Penner <hp@de.ibm.com>,
* Martin Schwidefsky <schwidefsky@de.ibm.com>
* Christian Ehrhardt <ehrhardt@de.ibm.com>
*/
#ifndef _ASM_S390_CPUID_H_
#define _ASM_S390_CPUID_H_
/*
* CPU type and hardware bug flags. Kept separately for each CPU.
* Members of this structure are referenced in head.S, so think twice
* before touching them. [mj]
*/
typedef struct
{
unsigned int version : 8;
unsigned int ident : 24;
unsigned int machine : 16;
unsigned int unused : 16;
} __attribute__ ((packed)) cpuid_t;
#endif /* _ASM_S390_CPUID_H_ */

View file

@ -167,6 +167,10 @@ debug_text_event(debug_info_t* id, int level, const char* txt)
return debug_event_common(id,level,txt,strlen(txt));
}
/*
* IMPORTANT: Use "%s" in sprintf format strings with care! Only pointers are
* stored in the s390dbf. See Documentation/s390/s390dbf.txt for more details!
*/
extern debug_entry_t *
debug_sprintf_event(debug_info_t* id,int level,char *string,...)
__attribute__ ((format(printf, 3, 4)));
@ -206,7 +210,10 @@ debug_text_exception(debug_info_t* id, int level, const char* txt)
return debug_exception_common(id,level,txt,strlen(txt));
}
/*
* IMPORTANT: Use "%s" in sprintf format strings with care! Only pointers are
* stored in the s390dbf. See Documentation/s390/s390dbf.txt for more details!
*/
extern debug_entry_t *
debug_sprintf_exception(debug_info_t* id,int level,char *string,...)
__attribute__ ((format(printf, 3, 4)));

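The warning exists because debug_sprintf_event()/debug_sprintf_exception() store their arguments raw and format them only when the debug feature is read back, so a "%s" records just the pointer. A hedged sketch of the pitfall; my_dbf and log_state are illustrative names:

#include <linux/kernel.h>
#include <asm/debug.h>

static debug_info_t *my_dbf;	/* assumed registered via debug_register() */

static void log_state(int state)
{
	char buf[16];

	snprintf(buf, sizeof(buf), "state %d", state);
	/* WRONG: only the pointer to the stack buffer is stored. */
	debug_sprintf_event(my_dbf, 2, "dev: %s", buf);
	/* OK: the integer itself is captured in the trace entry. */
	debug_sprintf_event(my_dbf, 2, "dev: state %d", state);
}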
View file

@ -18,13 +18,6 @@
#include <linux/interrupt.h>
#include <asm/lowcore.h>
/* irq_cpustat_t is unused currently, but could be converted
* into a percpu variable instead of storing softirq_pending
* on the lowcore */
typedef struct {
unsigned int __softirq_pending;
} irq_cpustat_t;
#define local_softirq_pending() (S390_lowcore.softirq_pending)
#define __ARCH_IRQ_STAT

View file

@ -57,6 +57,8 @@ struct ipl_block_fcp {
} __attribute__((packed));
#define DIAG308_VMPARM_SIZE 64
#define DIAG308_SCPDATA_SIZE (PAGE_SIZE - (sizeof(struct ipl_list_hdr) + \
offsetof(struct ipl_block_fcp, scp_data)))
struct ipl_block_ccw {
u8 load_parm[8];
@ -91,7 +93,8 @@ extern void do_halt(void);
extern void do_poff(void);
extern void ipl_save_parameters(void);
extern void ipl_update_parameters(void);
extern void get_ipl_vmparm(char *);
extern size_t append_ipl_vmparm(char *, size_t);
extern size_t append_ipl_scpdata(char *, size_t);
enum {
IPL_DEVNO_VALID = 1,

View file

@ -17,7 +17,7 @@
#include <linux/interrupt.h>
#include <linux/kvm_host.h>
#include <asm/debug.h>
#include <asm/cpuid.h>
#include <asm/cpu.h>
#define KVM_MAX_VCPUS 64
#define KVM_MEMORY_SLOTS 32
@ -217,8 +217,8 @@ struct kvm_vcpu_arch {
struct hrtimer ckc_timer;
struct tasklet_struct tasklet;
union {
cpuid_t cpu_id;
u64 stidp_data;
struct cpuid cpu_id;
u64 stidp_data;
};
};

View file

@ -54,14 +54,4 @@ struct kvm_vqconfig {
* This is pagesize for historical reasons. */
#define KVM_S390_VIRTIO_RING_ALIGN 4096
#ifdef __KERNEL__
/* early virtio console setup */
#ifdef CONFIG_S390_GUEST
extern void s390_virtio_console_init(void);
#else
static inline void s390_virtio_console_init(void)
{
}
#endif /* CONFIG_VIRTIO_CONSOLE */
#endif /* __KERNEL__ */
#endif

View file

@ -132,7 +132,7 @@
#ifndef __ASSEMBLY__
#include <asm/cpuid.h>
#include <asm/cpu.h>
#include <asm/ptrace.h>
#include <linux/types.h>
@ -275,7 +275,7 @@ struct _lowcore
__u32 user_exec_asce; /* 0x02ac */
/* SMP info area */
cpuid_t cpu_id; /* 0x02b0 */
struct cpuid cpu_id; /* 0x02b0 */
__u32 cpu_nr; /* 0x02b8 */
__u32 softirq_pending; /* 0x02bc */
__u32 percpu_offset; /* 0x02c0 */
@ -380,7 +380,7 @@ struct _lowcore
__u64 user_exec_asce; /* 0x0318 */
/* SMP info area */
cpuid_t cpu_id; /* 0x0320 */
struct cpuid cpu_id; /* 0x0320 */
__u32 cpu_nr; /* 0x0328 */
__u32 softirq_pending; /* 0x032c */
__u64 percpu_offset; /* 0x0330 */

View file

@ -2,6 +2,7 @@
#define __MMU_H
typedef struct {
spinlock_t list_lock;
struct list_head crst_list;
struct list_head pgtable_list;
unsigned long asce_bits;

View file

@ -125,8 +125,6 @@ page_get_storage_key(unsigned long addr)
return skey;
}
#ifdef CONFIG_PAGE_STATES
struct page;
void arch_free_page(struct page *page, int order);
void arch_alloc_page(struct page *page, int order);
@ -134,8 +132,6 @@ void arch_alloc_page(struct page *page, int order);
#define HAVE_ARCH_FREE_PAGE
#define HAVE_ARCH_ALLOC_PAGE
#endif
#endif /* !__ASSEMBLY__ */
#define __PAGE_OFFSET 0x0UL

View file

@ -140,6 +140,7 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
spin_lock_init(&mm->context.list_lock);
INIT_LIST_HEAD(&mm->context.crst_list);
INIT_LIST_HEAD(&mm->context.pgtable_list);
return (pgd_t *) crst_table_alloc(mm, s390_noexec);

View file

@ -14,7 +14,7 @@
#define __ASM_S390_PROCESSOR_H
#include <linux/linkage.h>
#include <asm/cpuid.h>
#include <asm/cpu.h>
#include <asm/page.h>
#include <asm/ptrace.h>
#include <asm/setup.h>
@ -26,7 +26,7 @@
*/
#define current_text_addr() ({ void *pc; asm("basr %0,0" : "=a" (pc)); pc; })
static inline void get_cpu_id(cpuid_t *ptr)
static inline void get_cpu_id(struct cpuid *ptr)
{
asm volatile("stidp 0(%1)" : "=m" (*ptr) : "a" (ptr));
}

Some files were not shown because too many files have changed in this diff