Merge branch 'linus' into x86/mce3

Conflicts:
	arch/x86/kernel/cpu/mcheck/mce_64.c
	arch/x86/kernel/irq.c

Merge reason: Resolve the conflicts above.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
This commit is contained in:
Ingo Molnar 2009-06-11 23:31:52 +02:00
Parents: 62fdac5913 512626a04e
Commit: 0d5959723e
1275 changed files with 87001 additions and 27373 deletions


@ -60,3 +60,62 @@ Description:
Indicates whether the block layer should automatically
generate checksums for write requests bound for
devices that support receiving integrity metadata.
What: /sys/block/<disk>/alignment_offset
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report a physical block size that is
bigger than the logical block size (for instance a drive
with 4KB physical sectors exposing 512-byte logical
blocks to the operating system). This parameter
indicates how many bytes the beginning of the device is
offset from the disk's natural alignment.
What: /sys/block/<disk>/<partition>/alignment_offset
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report a physical block size that is
bigger than the logical block size (for instance a drive
with 4KB physical sectors exposing 512-byte logical
blocks to the operating system). This parameter
indicates how many bytes the beginning of the partition
is offset from the disk's natural alignment.
What: /sys/block/<disk>/queue/logical_block_size
Date: May 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
This is the smallest unit the storage device can
address. It is typically 512 bytes.
What: /sys/block/<disk>/queue/physical_block_size
Date: May 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
This is the smallest unit the storage device can write
without resorting to a read-modify-write operation. It is
usually the same as the logical block size but may be
bigger. One example is SATA drives with 4KB sectors
that expose a 512-byte logical block size to the
operating system.
What: /sys/block/<disk>/queue/minimum_io_size
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report a preferred minimum I/O size,
which is the smallest request the device can perform
without incurring a read-modify-write penalty. For disk
drives this is often the physical block size. For RAID
arrays it is often the stripe chunk size.
What: /sys/block/<disk>/queue/optimal_io_size
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report an optimal I/O size, which is
the device's preferred unit of receiving I/O. This is
rarely reported for disk drives. For RAID devices it is
usually the stripe width or the internal block size.
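
As a rough illustration of how these attributes might be consumed from user
space (a sketch, not part of the ABI description; the disk name "sda" is just
an example):

        #include <stdio.h>

        /* Read one integer attribute from /sys/block/sda/queue/. */
        static long read_queue_attr(const char *attr)
        {
                char path[128];
                long value = -1;
                FILE *f;

                snprintf(path, sizeof(path), "/sys/block/sda/queue/%s", attr);
                f = fopen(path, "r");
                if (f) {
                        if (fscanf(f, "%ld", &value) != 1)
                                value = -1;
                        fclose(f);
                }
                return value;
        }

        int main(void)
        {
                printf("logical_block_size:  %ld\n", read_queue_attr("logical_block_size"));
                printf("physical_block_size: %ld\n", read_queue_attr("physical_block_size"));
                printf("minimum_io_size:     %ld\n", read_queue_attr("minimum_io_size"));
                printf("optimal_io_size:     %ld\n", read_queue_attr("optimal_io_size"));
                return 0;
        }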


@ -0,0 +1,33 @@
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/model
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 0 model for logical drive
Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/rev
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 0 revision for logical
drive Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/unique_id
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 83 serial number for logical
drive Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/vendor
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 0 vendor for logical drive
Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/block:cciss!cXdY
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: A symbolic link to /sys/block/cciss!cXdY


@ -0,0 +1,18 @@
What: /sys/devices/system/cpu/cpu*/cache/index*/cache_disable_X
Date: August 2008
KernelVersion: 2.6.27
Contact: mark.langsdorf@amd.com
Description: These files exist in every cpu's cache index directories.
There are currently 2 cache_disable_# files in each
directory. Reading from these files on a supported
processor will return that cache disable index value
for that processor and node. Writing to one of these
files will cause the specified cache index to be disabled.
Currently, only AMD Family 10h Processors support cache index
disable, and only for their L3 caches. See the BIOS and
Kernel Developer's Guide at
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/31116-Public-GH-BKDG_3.20_2-4-09.pdf
for formatting information and other details on the
cache index disable.
Users: joachim.deguara@amd.com
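
For illustration only (not part of the ABI text), reading the current value of
one of these files might look like the sketch below; cpu0 and cache index3
(the L3 on Family 10h parts) are assumptions.

        #include <stdio.h>

        int main(void)
        {
                char buf[64];
                FILE *f;

                /* The L3 cache is typically index3; cpu0 is just an example. */
                f = fopen("/sys/devices/system/cpu/cpu0/cache/index3/cache_disable_0", "r");
                if (!f)
                        return 1;
                if (fgets(buf, sizeof(buf), f))
                        printf("cache_disable_0: %s", buf);
                fclose(f);
                return 0;
        }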


@ -704,12 +704,24 @@ this directory the following files can currently be found:
The current number of free dma_debug_entries
in the allocator.
dma-api/driver-filter
You can write a name of a driver into this file
to limit the debug output to requests from that
particular driver. Write an empty string to
that file to disable the filter and see
all errors again.
If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.
If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.
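
A minimal user-space sketch of driving this filter (assuming debugfs is
mounted at /sys/kernel/debug and using the file name given above; the driver
name "e1000e" is only an example):

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/sys/kernel/debug/dma-api/driver-filter", "w");

                if (!f)
                        return 1;
                fputs("e1000e\n", f);  /* show DMA-API errors for this driver only;
                                          write an empty string to clear the filter */
                fclose(f);
                return 0;
        }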
When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for you


@ -13,7 +13,8 @@ DOCBOOKS := z8530book.xml mcabook.xml device-drivers.xml \
gadget.xml libata.xml mtdnand.xml librs.xml rapidio.xml \
genericirq.xml s390-drivers.xml uio-howto.xml scsi.xml \
mac80211.xml debugobjects.xml sh.xml regulator.xml \
alsa-driver-api.xml writing-an-alsa-driver.xml
alsa-driver-api.xml writing-an-alsa-driver.xml \
tracepoint.xml
###
# The build process is as follows (targets):


@ -0,0 +1,89 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
<book id="Tracepoints">
<bookinfo>
<title>The Linux Kernel Tracepoint API</title>
<authorgroup>
<author>
<firstname>Jason</firstname>
<surname>Baron</surname>
<affiliation>
<address>
<email>jbaron@redhat.com</email>
</address>
</affiliation>
</author>
</authorgroup>
<legalnotice>
<para>
This documentation is free software; you can redistribute
it and/or modify it under the terms of the GNU General Public
License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later
version.
</para>
<para>
This program is distributed in the hope that it will be
useful, but WITHOUT ANY WARRANTY; without even the implied
warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
</para>
<para>
You should have received a copy of the GNU General Public
License along with this program; if not, write to the Free
Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
MA 02111-1307 USA
</para>
<para>
For more details see the file COPYING in the source
distribution of Linux.
</para>
</legalnotice>
</bookinfo>
<toc></toc>
<chapter id="intro">
<title>Introduction</title>
<para>
Tracepoints are static probe points that are located in strategic points
throughout the kernel. 'Probes' register/unregister with tracepoints
via a callback mechanism. The 'probes' are strictly typed functions that
are passed a unique set of parameters defined by each tracepoint.
</para>
<para>
From this simple callback mechanism, 'probes' can be used to profile, debug,
and understand kernel behavior. There are a number of tools that provide a
framework for using 'probes'. These tools include Systemtap, ftrace, and
LTTng.
</para>
<para>
Tracepoints are defined in a number of header files via various macros. Thus,
the purpose of this document is to provide a clear accounting of the available
tracepoints. The intention is to understand not only what tracepoints are
available but also to understand where future tracepoints might be added.
</para>
<para>
The API presented has functions of the form:
<function>trace_tracepointname(function parameters)</function>. These are the
tracepoint callbacks found throughout the code. Registering and
unregistering probes with these callback sites is covered in the
<filename>Documentation/trace/*</filename> directory.
</para>
</chapter>
<chapter id="irq">
<title>IRQ</title>
!Iinclude/trace/events/irq.h
</chapter>
</book>
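
Outside the DocBook text itself, a minimal sketch of what a probe module might
look like, assuming the irq_handler_entry tracepoint from
include/trace/events/irq.h and the register_trace_<name>() helpers of this
kernel generation (the probe signature must match the tracepoint's TP_PROTO):

        #include <linux/kernel.h>
        #include <linux/module.h>
        #include <linux/interrupt.h>
        #include <trace/events/irq.h>

        /* Probe: called with the parameters declared by the tracepoint. */
        static void probe_irq_entry(int irq, struct irqaction *action)
        {
                printk(KERN_INFO "irq %d (%s) entered\n", irq, action->name);
        }

        static int __init irq_probe_init(void)
        {
                return register_trace_irq_handler_entry(probe_irq_entry);
        }

        static void __exit irq_probe_exit(void)
        {
                unregister_trace_irq_handler_entry(probe_irq_entry);
                tracepoint_synchronize_unregister();
        }

        module_init(irq_probe_init);
        module_exit(irq_probe_exit);
        MODULE_LICENSE("GPL");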


@ -192,23 +192,24 @@ rcu/rcuhier (which displays the struct rcu_node hierarchy).
The output of "cat rcu/rcudata" looks as follows:
rcu:
0 c=4011 g=4012 pq=1 pqc=4011 qp=0 rpfq=1 rp=3c2a dt=23301/73 dn=2 df=1882 of=0 ri=2126 ql=2 b=10
1 c=4011 g=4012 pq=1 pqc=4011 qp=0 rpfq=3 rp=39a6 dt=78073/1 dn=2 df=1402 of=0 ri=1875 ql=46 b=10
2 c=4010 g=4010 pq=1 pqc=4010 qp=0 rpfq=-5 rp=1d12 dt=16646/0 dn=2 df=3140 of=0 ri=2080 ql=0 b=10
3 c=4012 g=4013 pq=1 pqc=4012 qp=1 rpfq=3 rp=2b50 dt=21159/1 dn=2 df=2230 of=0 ri=1923 ql=72 b=10
4 c=4012 g=4013 pq=1 pqc=4012 qp=1 rpfq=3 rp=1644 dt=5783/1 dn=2 df=3348 of=0 ri=2805 ql=7 b=10
5 c=4012 g=4013 pq=0 pqc=4011 qp=1 rpfq=3 rp=1aac dt=5879/1 dn=2 df=3140 of=0 ri=2066 ql=10 b=10
6 c=4012 g=4013 pq=1 pqc=4012 qp=1 rpfq=3 rp=ed8 dt=5847/1 dn=2 df=3797 of=0 ri=1266 ql=10 b=10
7 c=4012 g=4013 pq=1 pqc=4012 qp=1 rpfq=3 rp=1fa2 dt=6199/1 dn=2 df=2795 of=0 ri=2162 ql=28 b=10
rcu:
0 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=10951/1 dn=0 df=1101 of=0 ri=36 ql=0 b=10
1 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=16117/1 dn=0 df=1015 of=0 ri=0 ql=0 b=10
2 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=1445/1 dn=0 df=1839 of=0 ri=0 ql=0 b=10
3 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=6681/1 dn=0 df=1545 of=0 ri=0 ql=0 b=10
4 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=1003/1 dn=0 df=1992 of=0 ri=0 ql=0 b=10
5 c=17829 g=17830 pq=1 pqc=17829 qp=1 dt=3887/1 dn=0 df=3331 of=0 ri=4 ql=2 b=10
6 c=17829 g=17829 pq=1 pqc=17829 qp=0 dt=859/1 dn=0 df=3224 of=0 ri=0 ql=0 b=10
7 c=17829 g=17830 pq=0 pqc=17829 qp=1 dt=3761/1 dn=0 df=1818 of=0 ri=0 ql=2 b=10
rcu_bh:
0 c=-268 g=-268 pq=1 pqc=-268 qp=0 rpfq=-145 rp=21d6 dt=23301/73 dn=2 df=0 of=0 ri=0 ql=0 b=10
1 c=-268 g=-268 pq=1 pqc=-268 qp=1 rpfq=-170 rp=20ce dt=78073/1 dn=2 df=26 of=0 ri=5 ql=0 b=10
2 c=-268 g=-268 pq=1 pqc=-268 qp=1 rpfq=-83 rp=fbd dt=16646/0 dn=2 df=28 of=0 ri=4 ql=0 b=10
3 c=-268 g=-268 pq=1 pqc=-268 qp=0 rpfq=-105 rp=178c dt=21159/1 dn=2 df=28 of=0 ri=2 ql=0 b=10
4 c=-268 g=-268 pq=1 pqc=-268 qp=1 rpfq=-30 rp=b54 dt=5783/1 dn=2 df=32 of=0 ri=0 ql=0 b=10
5 c=-268 g=-268 pq=1 pqc=-268 qp=1 rpfq=-29 rp=df5 dt=5879/1 dn=2 df=30 of=0 ri=3 ql=0 b=10
6 c=-268 g=-268 pq=1 pqc=-268 qp=1 rpfq=-28 rp=788 dt=5847/1 dn=2 df=32 of=0 ri=0 ql=0 b=10
7 c=-268 g=-268 pq=1 pqc=-268 qp=1 rpfq=-53 rp=1098 dt=6199/1 dn=2 df=30 of=0 ri=3 ql=0 b=10
0 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=10951/1 dn=0 df=0 of=0 ri=0 ql=0 b=10
1 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=16117/1 dn=0 df=13 of=0 ri=0 ql=0 b=10
2 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=1445/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
3 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=6681/1 dn=0 df=9 of=0 ri=0 ql=0 b=10
4 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=1003/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
5 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=3887/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
6 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=859/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
7 c=-275 g=-275 pq=1 pqc=-275 qp=0 dt=3761/1 dn=0 df=15 of=0 ri=0 ql=0 b=10
The first section lists the rcu_data structures for rcu, the second for
rcu_bh. Each section has one line per CPU, or eight for this 8-CPU system.
@ -253,12 +254,6 @@ o "pqc" indicates which grace period the last-observed quiescent
o "qp" indicates that RCU still expects a quiescent state from
this CPU.
o "rpfq" is the number of rcu_pending() calls on this CPU required
to induce this CPU to invoke force_quiescent_state().
o "rp" is low-order four hex digits of the count of how many times
rcu_pending() has been invoked on this CPU.
o "dt" is the current value of the dyntick counter that is incremented
when entering or leaving dynticks idle state, either by the
scheduler or by irq. The number after the "/" is the interrupt
@ -305,6 +300,9 @@ o "b" is the batch limit for this CPU. If more than this number
of RCU callbacks is ready to invoke, then the remainder will
be deferred.
There is also an rcu/rcudata.csv file with the same information in
comma-separated-variable spreadsheet format.
The output of "cat rcu/rcugp" looks as follows:
@ -411,3 +409,63 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
For example, the first entry at the lowest level shows
"^0", indicating that it corresponds to bit zero in
the first entry at the middle level.
The output of "cat rcu/rcu_pending" looks as follows:
rcu:
0 np=255892 qsp=53936 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
1 np=261224 qsp=54638 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
2 np=237496 qsp=49664 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
3 np=236249 qsp=48766 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
4 np=221310 qsp=46850 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
5 np=237332 qsp=48449 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
6 np=219995 qsp=46718 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
7 np=249893 qsp=49390 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
rcu_bh:
0 np=146741 qsp=1419 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
1 np=155792 qsp=12597 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
2 np=136629 qsp=18680 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
3 np=137723 qsp=2843 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
4 np=123110 qsp=12433 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
5 np=137456 qsp=4210 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
6 np=120834 qsp=9902 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
7 np=144888 qsp=26336 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
As always, this is once again split into "rcu" and "rcu_bh" portions.
The fields are as follows:
o "np" is the number of times that __rcu_pending() has been invoked
for the corresponding flavor of RCU.
o "qsp" is the number of times that the RCU was waiting for a
quiescent state from this CPU.
o "cbr" is the number of times that this CPU had RCU callbacks
that had passed through a grace period, and were thus ready
to be invoked.
o "cng" is the number of times that this CPU needed another
grace period while RCU was idle.
o "gpc" is the number of times that an old grace period had
completed, but this CPU was not yet aware of it.
o "gps" is the number of times that a new grace period had started,
but this CPU was not yet aware of it.
o "nf" is the number of times that this CPU suspected that the
current grace period had run for too long, and thus needed to
be forced.
Please note that "forcing" consists of sending resched IPIs
to holdout CPUs. If that CPU really still is in an old RCU
read-side critical section, then we really do have to wait for it.
The assumption behind "forcing" is that the CPU is not still in
an old RCU read-side critical section, but has not yet responded
for some other reason.
o "nn" is the number of times that this CPU needed nothing. Alert
readers will note that the rcu "nn" number for a given CPU very
closely matches the rcu_bh "np" number for that same CPU. This
is due to short-circuit evaluation in rcu_pending().


@ -184,8 +184,9 @@ length. Single character labels using special characters, that being anything
other than a letter or digit, are reserved for use by the Smack development
team. Smack labels are unstructured, case sensitive, and the only operation
ever performed on them is comparison for equality. Smack labels cannot
contain unprintable characters or the "/" (slash) character. Smack labels
cannot begin with a '-', which is reserved for special options.
contain unprintable characters, the "/" (slash), the "\" (backslash), the "'"
(quote) and '"' (double-quote) characters.
Smack labels cannot begin with a '-', which is reserved for special options.
There are some predefined labels:
@ -523,3 +524,18 @@ Smack supports some mount options:
These mount options apply to all file system types.
Smack auditing
If you want Smack auditing of security events, you need to set CONFIG_AUDIT
in your kernel configuration.
By default, all denied events will be audited. You can change this behavior by
writing a single character to the /smack/logging file :
0 : no logging
1 : log denied (default)
2 : log accepted
3 : log denied & accepted
Events are logged as 'key=value' pairs; for each event you will at least get
the subject, the object, the rights requested, the action, the kernel function
that triggered the event, plus other pairs depending on the type of event
audited.
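
For illustration only (not part of the Smack documentation proper), a root
process could switch the logging mode like this sketch:

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/smack/logging", "w");

                if (!f)
                        return 1;
                fputc('3', f);  /* log both denied and accepted events */
                fclose(f);
                return 0;
        }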


@ -186,7 +186,7 @@ a virtual address mapping (unlike the earlier scheme of virtual address
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.
Note: Please refer to Documentation/PCI/PCI-DMA-mapping.txt for a discussion
Note: Please refer to Documentation/DMA-mapping.txt for a discussion
on PCI high mem DMA aspects and mapping of scatter gather lists, and support
for 64 bit PCI.


@ -60,7 +60,7 @@ go_lock | Called for the first local holder of a lock
go_unlock | Called on the final local unlock of a lock
go_dump | Called to print content of object for debugfs file, or on
| error to dump glock to the log.
go_type; | The type of the glock, LM_TYPE_.....
go_type | The type of the glock, LM_TYPE_.....
go_min_hold_time | The minimum hold time
The minimum hold time for each lock is the time after a remote lock


@ -11,18 +11,15 @@ their I/O so file system consistency is maintained. One of the nifty
features of GFS is perfect consistency -- changes made to the file system
on one machine show up immediately on all other machines in the cluster.
GFS uses interchangeable inter-node locking mechanisms. Different lock
modules can plug into GFS and each file system selects the appropriate
lock module at mount time. Lock modules include:
GFS uses interchangeable inter-node locking mechanisms; the currently
supported mechanisms are:
lock_nolock -- allows gfs to be used as a local file system
lock_dlm -- uses a distributed lock manager (dlm) for inter-node locking
The dlm is found at linux/fs/dlm/
In addition to interfacing with an external locking manager, a gfs lock
module is responsible for interacting with external cluster management
systems. Lock_dlm depends on user space cluster management systems found
Lock_dlm depends on user space cluster management systems found
at the URL above.
To use gfs as a local file system, no external clustering systems are
@ -31,13 +28,19 @@ needed, simply:
$ mkfs -t gfs2 -p lock_nolock -j 1 /dev/block_device
$ mount -t gfs2 /dev/block_device /dir
GFS2 is not on-disk compatible with previous versions of GFS.
If you are using Fedora, you need to install the gfs2-utils package
and, for lock_dlm, you will also need to install the cman package
and write a cluster.conf as per the documentation.
GFS2 is not on-disk compatible with previous versions of GFS, but it
is pretty close.
The following man pages can be found at the URL above:
gfs2_fsck to repair a filesystem
fsck.gfs2 to repair a filesystem
gfs2_grow to expand a filesystem online
gfs2_jadd to add journals to a filesystem online
gfs2_tool to manipulate, examine and tune a filesystem
gfs2_quota to examine and change quota values in a filesystem
gfs2_convert to convert a gfs filesystem to gfs2 in-place
mount.gfs2 to help mount(8) mount a filesystem
mkfs.gfs2 to make a filesystem


@ -0,0 +1,131 @@
Futex Requeue PI
----------------
Requeueing of tasks from a non-PI futex to a PI futex requires
special handling in order to ensure the underlying rt_mutex is never
left without an owner if it has waiters; doing so would break the PI
boosting logic [see rt-mutex-design.txt]. For the purposes of
brevity, this action will be referred to as "requeue_pi" throughout
this document. Priority inheritance is abbreviated throughout as
"PI".
Motivation
----------
Without requeue_pi, the glibc implementation of
pthread_cond_broadcast() must resort to waking all the tasks waiting
on a pthread_condvar and letting them try to sort out which task
gets to run first in classic thundering-herd formation. An ideal
implementation would wake the highest-priority waiter, and leave the
rest to the natural wakeup inherent in unlocking the mutex
associated with the condvar.
Consider the simplified glibc calls:
/* caller must lock mutex */
pthread_cond_wait(cond, mutex)
{
lock(cond->__data.__lock);
unlock(mutex);
do {
unlock(cond->__data.__lock);
futex_wait(cond->__data.__futex);
lock(cond->__data.__lock);
} while(...)
unlock(cond->__data.__lock);
lock(mutex);
}
pthread_cond_broadcast(cond)
{
lock(cond->__data.__lock);
unlock(cond->__data.__lock);
futex_requeue(cond->data.__futex, cond->mutex);
}
Once pthread_cond_broadcast() requeues the tasks, the cond->mutex
has waiters. Note that pthread_cond_wait() attempts to lock the
mutex only after it has returned to user space. This will leave the
underlying rt_mutex with waiters, and no owner, breaking the
previously mentioned PI-boosting algorithms.
In order to support PI-aware pthread_condvar's, the kernel needs to
be able to requeue tasks to PI futexes. This support implies that
upon a successful futex_wait system call, the caller would return to
user space already holding the PI futex. The glibc implementation
would be modified as follows:
/* caller must lock mutex */
pthread_cond_wait_pi(cond, mutex)
{
lock(cond->__data.__lock);
unlock(mutex);
do {
unlock(cond->__data.__lock);
futex_wait_requeue_pi(cond->__data.__futex);
lock(cond->__data.__lock);
} while(...)
unlock(cond->__data.__lock);
/* the kernel acquired the mutex for us */
}
pthread_cond_broadcast_pi(cond)
{
lock(cond->__data.__lock);
unlock(cond->__data.__lock);
futex_requeue_pi(cond->data.__futex, cond->mutex);
}
The actual glibc implementation will likely test for PI and make the
necessary changes inside the existing calls rather than creating new
calls for the PI cases. Similar changes are needed for
pthread_cond_timedwait() and pthread_cond_signal().
Implementation
--------------
In order to ensure the rt_mutex has an owner if it has waiters, it
is necessary for both the requeue code, as well as the waiting code,
to be able to acquire the rt_mutex before returning to user space.
The requeue code cannot simply wake the waiter and leave it to
acquire the rt_mutex as it would open a race window between the
requeue call returning to user space and the waiter waking and
starting to run. This is especially true in the uncontended case.
The solution involves two new rt_mutex helper routines,
rt_mutex_start_proxy_lock() and rt_mutex_finish_proxy_lock(), which
allow the requeue code to acquire an uncontended rt_mutex on behalf
of the waiter and to enqueue the waiter on a contended rt_mutex.
Two new system calls provide the kernel<->user interface to
requeue_pi: FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI.
FUTEX_WAIT_REQUEUE_PI is called by the waiter (pthread_cond_wait()
and pthread_cond_timedwait()) to block on the initial futex and wait
to be requeued to a PI-aware futex. The implementation is the
result of a high-speed collision between futex_wait() and
futex_lock_pi(), with some extra logic to check for the additional
wake-up scenarios.
FUTEX_CMP_REQUEUE_PI is called by the waker
(pthread_cond_broadcast() and pthread_cond_signal()) to requeue and
possibly wake the waiting tasks. Internally, this system call is
still handled by futex_requeue (by passing requeue_pi=1). Before
requeueing, futex_requeue() attempts to acquire the requeue target
PI futex on behalf of the top waiter. If it can, this waiter is
woken. futex_requeue() then proceeds to requeue the remaining
nr_wake+nr_requeue tasks to the PI futex, calling
rt_mutex_start_proxy_lock() prior to each requeue to prepare the
task as a waiter on the underlying rt_mutex. It is possible that
the lock can be acquired at this stage as well, if so, the next
waiter is woken to finish the acquisition of the lock.
FUTEX_CMP_REQUEUE_PI accepts nr_wake and nr_requeue as arguments, but
their sum is all that really matters. futex_requeue() will wake or
requeue up to nr_wake + nr_requeue tasks. It will wake only as many
tasks as it can acquire the lock for, which in the majority of cases
should be 0 as good programming practice dictates that the caller of
either pthread_cond_broadcast() or pthread_cond_signal() acquire the
mutex prior to making the call. FUTEX_CMP_REQUEUE_PI requires that
nr_wake=1. nr_requeue should be INT_MAX for broadcast and 0 for
signal.
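
As a rough user-space sketch (not the glibc implementation; the wrappers below
go through the raw futex syscall, with constants from <linux/futex.h>), the
two operations could be issued as follows:

        #include <linux/futex.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <limits.h>
        #include <stdint.h>

        /* Wait on the non-PI futex at uaddr (expected value val) and ask the
         * kernel to requeue us onto the PI futex at uaddr2 when the waker runs. */
        static long wait_requeue_pi(uint32_t *uaddr, uint32_t val, uint32_t *uaddr2)
        {
                return syscall(SYS_futex, uaddr, FUTEX_WAIT_REQUEUE_PI, val,
                               NULL /* no timeout */, uaddr2, 0);
        }

        /* Wake at most one waiter on uaddr and requeue the rest onto the PI
         * futex at uaddr2; val3 is the expected value of *uaddr (the "CMP" part).
         * nr_requeue is INT_MAX for broadcast, 0 for signal, as noted above. */
        static long cmp_requeue_pi(uint32_t *uaddr, uint32_t *uaddr2, uint32_t val3)
        {
                return syscall(SYS_futex, uaddr, FUTEX_CMP_REQUEUE_PI, 1 /* nr_wake */,
                               (void *)(unsigned long)INT_MAX /* nr_requeue */,
                               uaddr2, val3);
        }

In practice glibc hides these calls behind the PI-aware condvar paths sketched
earlier.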


@ -56,7 +56,6 @@ parameter is applicable:
ISAPNP ISA PnP code is enabled.
ISDN Appropriate ISDN support is enabled.
JOY Appropriate joystick support is enabled.
KMEMTRACE kmemtrace is enabled.
LIBATA Libata driver is enabled
LP Printer support is enabled.
LOOP Loopback device support is enabled.
@ -329,11 +328,6 @@ and is between 256 and 4096 characters. It is defined in the file
flushed before they will be reused, which
is a lot faster
amd_iommu_size= [HW,X86-64]
Define the size of the aperture for the AMD IOMMU
driver. Possible values are:
'32M', '64M' (default), '128M', '256M', '512M', '1G'
amijoy.map= [HW,JOY] Amiga joystick support
Map of devices attached to JOY0DAT and JOY1DAT
Format: <a>,<b>
@ -646,6 +640,13 @@ and is between 256 and 4096 characters. It is defined in the file
DMA-API debugging code disables itself because the
architectural default is too low.
dma_debug_driver=<driver_name>
With this option the DMA-API debugging driver
filter feature can be enabled at boot time. Just
pass the driver to filter for as the parameter.
The filter can be disabled or changed to another
driver later using sysfs.
dscc4.setup= [NET]
dtc3181e= [HW,SCSI]
@ -752,12 +753,25 @@ and is between 256 and 4096 characters. It is defined in the file
ia64_pal_cache_flush instead of SAL_CACHE_FLUSH.
ftrace=[tracer]
[ftrace] will set and start the specified tracer
[FTRACE] will set and start the specified tracer
as early as possible in order to facilitate early
boot debugging.
ftrace_dump_on_oops
[ftrace] will dump the trace buffers on oops.
[FTRACE] will dump the trace buffers on oops.
ftrace_filter=[function-list]
[FTRACE] Limit the functions traced by the function
tracer at boot up. function-list is a comma separated
list of functions. This list can be changed at run
time by the set_ftrace_filter file in the debugfs
tracing directory.
ftrace_notrace=[function-list]
[FTRACE] Do not trace the functions specified in
function-list. This list can be changed at run time
by the set_ftrace_notrace file in the debugfs
tracing directory.
gamecon.map[2|3]=
[HW,JOY] Multisystem joystick and NES/SNES/PSX pad
@ -914,6 +928,12 @@ and is between 256 and 4096 characters. It is defined in the file
Format: { "sha1" | "md5" }
default: "sha1"
ima_tcb [IMA]
Load a policy which meets the needs of the Trusted
Computing Base. This means IMA will measure all
programs exec'd, files mmap'd for exec, and all files
opened for read by uid=0.
in2000= [HW,SCSI]
See header of drivers/scsi/in2000.c.
@ -1054,15 +1074,6 @@ and is between 256 and 4096 characters. It is defined in the file
use the HighMem zone if it exists, and the Normal
zone if it does not.
kmemtrace.enable= [KNL,KMEMTRACE] Format: { yes | no }
Controls whether kmemtrace is enabled
at boot-time.
kmemtrace.subbufs=n [KNL,KMEMTRACE] Overrides the number of
subbufs kmemtrace's relay channel has. Set this
higher than default (KMEMTRACE_N_SUBBUFS in code) if
you experience buffer overruns.
kgdboc= [HW] kgdb over consoles.
Requires a tty driver that supports console polling.
(only serial supported for now)
@ -1072,6 +1083,10 @@ and is between 256 and 4096 characters. It is defined in the file
Configure the RouterBoard 532 series on-chip
Ethernet adapter MAC address.
kmemleak= [KNL] Boot-time kmemleak enable/disable
Valid arguments: on, off
Default: on
kstack=N [X86] Print N words from the kernel stack
in oops dumps.
@ -1663,6 +1678,14 @@ and is between 256 and 4096 characters. It is defined in the file
oprofile.timer= [HW]
Use timer interrupt instead of performance counters
oprofile.cpu_type= Force an oprofile cpu type
This might be useful if you have an older oprofile
userland or if you want common events.
Format: { archperfmon }
archperfmon: [X86] Force use of architectural
perfmon on Intel CPUs instead of the
CPU specific event set.
osst= [HW,SCSI] SCSI Tape Driver
Format: <buffer_size>,<write_threshold>
See also Documentation/scsi/st.txt.

Documentation/kmemleak.txt (new file, 142 lines)

@ -0,0 +1,142 @@
Kernel Memory Leak Detector
===========================
Introduction
------------
Kmemleak provides a way of detecting possible kernel memory leaks in a
way similar to a tracing garbage collector
(http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Tracing_garbage_collectors),
with the difference that the orphan objects are not freed but only
reported via /sys/kernel/debug/kmemleak. A similar method is used by the
Valgrind tool (memcheck --leak-check) to detect the memory leaks in
user-space applications.
Usage
-----
CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel
thread scans the memory every 10 minutes (by default) and prints any new
unreferenced objects found. To trigger an intermediate scan and display
all the possible memory leaks:
# mount -t debugfs nodev /sys/kernel/debug/
# cat /sys/kernel/debug/kmemleak
Note that the orphan objects are listed in the order they were allocated
and one object at the beginning of the list may cause other subsequent
objects to be reported as orphan.
Memory scanning parameters can be modified at run-time by writing to the
/sys/kernel/debug/kmemleak file. The following parameters are supported:
off - disable kmemleak (irreversible)
stack=on - enable the task stacks scanning
stack=off - disable the tasks stacks scanning
scan=on - start the automatic memory scanning thread
scan=off - stop the automatic memory scanning thread
scan=<secs> - set the automatic memory scanning period in seconds (0
to disable it)
Kmemleak can also be disabled at boot-time by passing "kmemleak=off" on
the kernel command line.
Basic Algorithm
---------------
The memory allocations via kmalloc, vmalloc, kmem_cache_alloc and
friends are traced and the pointers, together with additional
information like size and stack trace, are stored in a prio search tree.
The corresponding freeing function calls are tracked and the pointers
removed from the kmemleak data structures.
An allocated block of memory is considered orphan if no pointer to its
start address or to any location inside the block can be found by
scanning the memory (including saved registers). This means that there
might be no way for the kernel to pass the address of the allocated
block to a freeing function and therefore the block is considered a
memory leak.
The scanning algorithm steps:
1. mark all objects as white (remaining white objects will later be
considered orphan)
2. scan the memory starting with the data section and stacks, checking
the values against the addresses stored in the prio search tree. If
a pointer to a white object is found, the object is added to the
gray list
3. scan the gray objects for matching addresses (some white objects
can become gray and added at the end of the gray list) until the
gray set is finished
4. the remaining white objects are considered orphan and reported via
/sys/kernel/debug/kmemleak
Some allocated memory blocks have pointers stored in the kernel's
internal data structures and they cannot be detected as orphans. To
avoid this, kmemleak can also store the number of values pointing to an
address inside the block address range that need to be found so that the
block is not considered a leak. One example is __vmalloc().
Kmemleak API
------------
See the include/linux/kmemleak.h header for the function prototypes. A short
usage sketch follows the list below.
kmemleak_init - initialize kmemleak
kmemleak_alloc - notify of a memory block allocation
kmemleak_free - notify of a memory block freeing
kmemleak_not_leak - mark an object as not a leak
kmemleak_ignore - do not scan or report an object as leak
kmemleak_scan_area - add scan areas inside a memory block
kmemleak_no_scan - do not scan a memory block
kmemleak_erase - erase an old value in a pointer variable
kmemleak_alloc_recursive - as kmemleak_alloc but checks the recursiveness
kmemleak_free_recursive - as kmemleak_free but checks the recursiveness
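
A minimal in-kernel sketch of the annotation API (hypothetical driver code,
not taken from kmemleak): the buffer below is reachable only through state
kmemleak does not scan, so it would otherwise be reported as a false positive.

        #include <linux/slab.h>
        #include <linux/kmemleak.h>

        static void *example_private_buffer(size_t size)
        {
                void *buf = kmalloc(size, GFP_KERNEL);

                if (buf)
                        kmemleak_not_leak(buf); /* referenced only from unscanned state */
                return buf;
        }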
Dealing with false positives/negatives
--------------------------------------
The false negatives are real memory leaks (orphan objects) but not
reported by kmemleak because values found during the memory scanning
point to such objects. To reduce the number of false negatives, kmemleak
provides the kmemleak_ignore, kmemleak_scan_area, kmemleak_no_scan and
kmemleak_erase functions (see above). The task stacks also increase the
amount of false negatives and their scanning is not enabled by default.
The false positives are objects wrongly reported as being memory leaks
(orphan). For objects known not to be leaks, kmemleak provides the
kmemleak_not_leak function. The kmemleak_ignore could also be used if
the memory block is known not to contain other pointers and it will no
longer be scanned.
Some of the reported leaks are only transient, especially on SMP
systems, because of pointers temporarily stored in CPU registers or
stacks. Kmemleak defines MSECS_MIN_AGE (defaulting to 1000) representing
the minimum age of an object to be reported as a memory leak.
Limitations and Drawbacks
-------------------------
The main drawback is the reduced performance of memory allocation and
freeing. To avoid other penalties, the memory scanning is only performed
when the /sys/kernel/debug/kmemleak file is read. Anyway, this tool is
intended for debugging purposes where the performance might not be the
most important requirement.
To keep the algorithm simple, kmemleak scans for values pointing to any
address inside a block's address range. This may lead to an increased
number of false negatives. However, it is likely that a real memory leak
will eventually become visible.
Another source of false negatives is the data stored in non-pointer
values. In a future version, kmemleak could only scan the pointer
members in the allocated structures. This feature would solve many of
the false negative cases described above.
The tool can report false positives. These are cases where an allocated
block doesn't need to be freed (some cases in the init_call functions),
the pointer is calculated by other methods than the usual container_of
macro or the pointer is stored in a location not scanned by kmemleak.
Page allocations and ioremap are not tracked. Only the ARM and x86
architectures are currently supported.


@ -31,6 +31,7 @@ Contents:
- Locking functions.
- Interrupt disabling functions.
- Sleep and wake-up functions.
- Miscellaneous functions.
(*) Inter-CPU locking barrier effects.
@ -1217,6 +1218,132 @@ barriers are required in such a situation, they must be provided from some
other means.
SLEEP AND WAKE-UP FUNCTIONS
---------------------------
Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event. To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.
Firstly, the sleeper normally follows something like this sequence of events:
for (;;) {
set_current_state(TASK_UNINTERRUPTIBLE);
if (event_indicated)
break;
schedule();
}
A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:
CPU 1
===============================
set_current_state();
set_mb();
STORE current->state
<general barrier>
LOAD event_indicated
set_current_state() may be wrapped by:
prepare_to_wait();
prepare_to_wait_exclusive();
which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:
wait_event();
wait_event_interruptible();
wait_event_interruptible_exclusive();
wait_event_interruptible_timeout();
wait_event_killable();
wait_event_timeout();
wait_on_bit();
wait_on_bit_lock();
Secondly, code that performs a wake up normally follows something like this:
event_indicated = 1;
wake_up(&event_wait_queue);
or:
event_indicated = 1;
wake_up_process(event_daemon);
A write memory barrier is implied by wake_up() and co. if and only if they wake
something up. The barrier occurs before the task state is cleared, and so sits
between the STORE to indicate the event and the STORE to set TASK_RUNNING:
CPU 1 CPU 2
=============================== ===============================
set_current_state(); STORE event_indicated
set_mb(); wake_up();
STORE current->state <write barrier>
<general barrier> STORE current->state
LOAD event_indicated
The available waker functions include:
complete();
wake_up();
wake_up_all();
wake_up_bit();
wake_up_interruptible();
wake_up_interruptible_all();
wake_up_interruptible_nr();
wake_up_interruptible_poll();
wake_up_interruptible_sync();
wake_up_interruptible_sync_poll();
wake_up_locked();
wake_up_locked_poll();
wake_up_nr();
wake_up_poll();
wake_up_process();
[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state(). For instance, if the
sleeper does:
set_current_state(TASK_INTERRUPTIBLE);
if (event_indicated)
break;
__set_current_state(TASK_RUNNING);
do_something(my_data);
and the waker does:
my_data = value;
event_indicated = 1;
wake_up(&event_wait_queue);
there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data. In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses. Thus the above sleeper ought to do:
set_current_state(TASK_INTERRUPTIBLE);
if (event_indicated) {
smp_rmb();
do_something(my_data);
}
and the waker should do:
my_data = value;
smp_wmb();
event_indicated = 1;
wake_up(&event_wait_queue);
MISCELLANEOUS FUNCTIONS
-----------------------
@ -1366,7 +1493,7 @@ WHERE ARE MEMORY BARRIERS NEEDED?
Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel. There are, however, three
work correctly, even if it's in an SMP kernel. There are, however, four
circumstances in which reordering definitely _could_ be a problem:
(*) Interprocessor interaction.


@ -4,6 +4,7 @@
CONTENTS
========
0. WARNING
1. Overview
1.1 The problem
1.2 The solution
@ -14,6 +15,23 @@ CONTENTS
3. Future plans
0. WARNING
==========
Fiddling with these settings can result in an unstable system; the knobs are
root-only and assume that root knows what he is doing.
Most notable:
* very small values in sched_rt_period_us can result in an unstable
system when the period is smaller than either the available hrtimer
resolution, or the time it takes to handle the budget refresh itself.
* very small values in sched_rt_runtime_us can result in an unstable
system when the runtime is so small the system has difficulty making
forward progress (NOTE: the migration thread and kstopmachine both
are real-time processes).
1. Overview
===========
@ -169,7 +187,7 @@ get their allocated time.
Implementing SCHED_EDF might take a while to complete. Priority Inheritance is
the biggest challenge as the current linux PI infrastructure is geared towards
the limited static priority levels 0-139. With deadline scheduling you need to
the limited static priority levels 0-99. With deadline scheduling you need to
do deadline inheritance (since priority is inversely proportional to the
deadline delta (deadline - now)).


@ -32,6 +32,7 @@ show up in /proc/sys/kernel:
- kstack_depth_to_print [ X86 only ]
- l2cr [ PPC only ]
- modprobe ==> Documentation/debugging-modules.txt
- modules_disabled
- msgmax
- msgmnb
- msgmni
@ -184,6 +185,16 @@ kernel stack.
==============================================================
modules_disabled:
A toggle value indicating if modules are allowed to be loaded
in an otherwise modular kernel. This toggle defaults to off
(0), but can be set true (1). Once true, modules can be
neither loaded nor unloaded, and the toggle cannot be set back
to false.
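
For illustration (a sketch, not part of the sysctl text), flipping the toggle
from user space, after which it cannot be cleared:

        #include <stdio.h>

        int main(void)
        {
                FILE *f = fopen("/proc/sys/kernel/modules_disabled", "w");

                if (!f)
                        return 1;
                fputc('1', f);  /* one-way switch: module (un)loading stays off until reboot */
                fclose(f);
                return 0;
        }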
==============================================================
osrelease, ostype & version:
# cat osrelease


@ -0,0 +1,90 @@
Event Tracing
Documentation written by Theodore Ts'o
Updated by Li Zefan
1. Introduction
===============
Tracepoints (see Documentation/trace/tracepoints.txt) can be used
without creating custom kernel modules to register probe functions
using the event tracing infrastructure.
Not all tracepoints can be traced using the event tracing system;
the kernel developer must provide code snippets which define how the
tracing information is saved into the tracing buffer, and how the
tracing information should be printed.
2. Using Event Tracing
======================
2.1 Via the 'set_event' interface
---------------------------------
The events which are available for tracing can be found in the file
/debug/tracing/available_events.
To enable a particular event, such as 'sched_wakeup', simply echo it
to /debug/tracing/set_event. For example:
# echo sched_wakeup >> /debug/tracing/set_event
[ Note: '>>' is necessary, otherwise it will first disable
all the events. ]
To disable an event, echo the event name to the set_event file prefixed
with an exclamation point:
# echo '!sched_wakeup' >> /debug/tracing/set_event
To disable all events, echo an empty line to the set_event file:
# echo > /debug/tracing/set_event
To enable all events, echo '*:*' or '*:' to the set_event file:
# echo *:* > /debug/tracing/set_event
The events are organized into subsystems, such as ext4, irq, sched,
etc., and a full event name looks like this: <subsystem>:<event>. The
subsystem name is optional, but it is displayed in the available_events
file. All of the events in a subsystem can be specified via the syntax
"<subsystem>:*"; for example, to enable all irq events, you can use the
command:
# echo 'irq:*' > /debug/tracing/set_event
2.2 Via the 'enable' toggle
---------------------------
The events available are also listed in /debug/tracing/events/ hierarchy
of directories.
To enable event 'sched_wakeup':
# echo 1 > /debug/tracing/events/sched/sched_wakeup/enable
To disable it:
# echo 0 > /debug/tracing/events/sched/sched_wakeup/enable
To enable all events in sched subsystem:
# echo 1 > /debug/tracing/events/sched/enable
To enable all events:
# echo 1 > /debug/tracing/events/enable
When reading one of these enable files, there are four results:
0 - all events this file affects are disabled
1 - all events this file affects are enabled
X - there is a mixture of events enabled and disabled
? - this file does not affect any event
3. Defining an event-enabled tracepoint
=======================================
See the example provided in samples/trace_events


@ -179,7 +179,7 @@ Here is the list of current tracers that may be configured.
Function call tracer to trace all kernel functions.
"function_graph_tracer"
"function_graph"
Similar to the function tracer except that the
function tracer probes the functions on their entry
@ -518,9 +518,18 @@ priority with zero (0) being the highest priority and the nice
values starting at 100 (nice -20). Below is a quick chart to map
the kernel priority to user land priorities.
Kernel priority: 0 to 99 ==> user RT priority 99 to 0
Kernel priority: 100 to 139 ==> user nice -20 to 19
Kernel priority: 140 ==> idle task priority
Kernel Space User Space
===============================================================
0(high) to 98(low) user RT priority 99(high) to 1(low)
with SCHED_RR or SCHED_FIFO
---------------------------------------------------------------
99 sched_priority is not used in scheduling
decisions (it must be specified as 0)
---------------------------------------------------------------
100(high) to 139(low) user nice -20(high) to 19(low)
---------------------------------------------------------------
140 idle task priority
---------------------------------------------------------------
The task states are:


@ -0,0 +1,17 @@
The power tracer collects detailed information about C-state and P-state
transitions, instead of just looking at the high-level "average"
information.
There is a helper script found in scripts/tracing/power.pl in the kernel
sources which can be used to parse this information and create a
Scalable Vector Graphics (SVG) picture from the trace data.
To use this tracer:
echo 0 > /sys/kernel/debug/tracing/tracing_enabled
echo power > /sys/kernel/debug/tracing/current_tracer
echo 1 > /sys/kernel/debug/tracing/tracing_enabled
sleep 1
echo 0 > /sys/kernel/debug/tracing/tracing_enabled
cat /sys/kernel/debug/tracing/trace | \
perl scripts/tracing/power.pl > out.svg


@ -50,6 +50,10 @@ Protocol 2.08: (Kernel 2.6.26) Added crc32 checksum and ELF format
Protocol 2.09: (Kernel 2.6.26) Added a field of 64-bit physical
pointer to single linked list of struct setup_data.
Protocol 2.10: (Kernel 2.6.31) Added a protocol for relaxed alignment
beyond kernel_alignment, new init_size and
pref_address fields. Added extended boot loader IDs.
**** MEMORY LAYOUT
The traditional memory map for the kernel loader, used for Image or
@ -168,12 +172,13 @@ Offset Proto Name Meaning
021C/4 2.00+ ramdisk_size initrd size (set by boot loader)
0220/4 2.00+ bootsect_kludge DO NOT USE - for bootsect.S use only
0224/2 2.01+ heap_end_ptr Free memory after setup end
0226/2 N/A pad1 Unused
0226/1 2.02+(3 ext_loader_ver Extended boot loader version
0227/1 2.02+(3 ext_loader_type Extended boot loader ID
0228/4 2.02+ cmd_line_ptr 32-bit pointer to the kernel command line
022C/4 2.03+ ramdisk_max Highest legal initrd address
0230/4 2.05+ kernel_alignment Physical addr alignment required for kernel
0234/1 2.05+ relocatable_kernel Whether kernel is relocatable or not
0235/1 N/A pad2 Unused
0235/1 2.10+ min_alignment Minimum alignment, as a power of two
0236/2 N/A pad3 Unused
0238/4 2.06+ cmdline_size Maximum size of the kernel command line
023C/4 2.07+ hardware_subarch Hardware subarchitecture
@ -182,6 +187,8 @@ Offset Proto Name Meaning
024C/4 2.08+ payload_length Length of kernel payload
0250/8 2.09+ setup_data 64-bit physical pointer to linked list
of struct setup_data
0258/8 2.10+ pref_address Preferred loading address
0260/4 2.10+ init_size Linear memory required during initialization
(1) For backwards compatibility, if the setup_sects field contains 0, the
real value is 4.
@ -190,6 +197,8 @@ Offset Proto Name Meaning
field are unusable, which means the size of a bzImage kernel
cannot be determined.
(3) Ignored, but safe to set, for boot protocols 2.02-2.09.
If the "HdrS" (0x53726448) magic number is not found at offset 0x202,
the boot protocol version is "old". Loading an old kernel, the
following parameters should be assumed:
@ -343,18 +352,32 @@ Protocol: 2.00+
0xTV here, where T is an identifier for the boot loader and V is
a version number. Otherwise, enter 0xFF here.
For boot loader IDs above T = 0xD, write T = 0xE to this field and
write the extended ID minus 0x10 to the ext_loader_type field.
Similarly, the ext_loader_ver field can be used to provide more than
four bits for the bootloader version.
For example, for T = 0x15, V = 0x234, write:
type_of_loader <- 0xE4
ext_loader_type <- 0x05
ext_loader_ver <- 0x23
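
The same encoding as a small C sketch (the structure below is a hypothetical
subset of the setup header, defined here only to show the arithmetic):

        #include <stdint.h>

        struct loader_id_fields {               /* hypothetical subset of the setup header */
                uint8_t type_of_loader;         /* offset 0x210 */
                uint8_t ext_loader_ver;         /* offset 0x226 */
                uint8_t ext_loader_type;        /* offset 0x227 */
        };

        static void set_extended_loader_id(struct loader_id_fields *h,
                                           uint8_t T, uint16_t V)
        {
                h->type_of_loader  = 0xE0 | (V & 0x0f); /* T = 0xE marks "extended" */
                h->ext_loader_type = T - 0x10;          /* extended ID minus 0x10 */
                h->ext_loader_ver  = V >> 4;            /* remaining version bits */
        }

For T = 0x15, V = 0x234 this yields 0xE4, 0x05 and 0x23, matching the example
above.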
Assigned boot loader ids:
0 LILO (0x00 reserved for pre-2.00 bootloader)
1 Loadlin
2 bootsect-loader (0x20, all other values reserved)
3 SYSLINUX
4 EtherBoot
3 Syslinux
4 Etherboot/gPXE
5 ELILO
7 GRUB
8 U-BOOT
8 U-Boot
9 Xen
A Gujin
B Qemu
C Arcturus Networks uCbootloader
E Extended (see ext_loader_type)
F Special (0xFF = undefined)
Please contact <hpa@zytor.com> if you need a bootloader ID
value assigned.
@ -453,6 +476,35 @@ Protocol: 2.01+
Set this field to the offset (from the beginning of the real-mode
code) of the end of the setup stack/heap, minus 0x0200.
Field name: ext_loader_ver
Type: write (optional)
Offset/size: 0x226/1
Protocol: 2.02+
This field is used as an extension of the version number in the
type_of_loader field. The total version number is considered to be
(type_of_loader & 0x0f) + (ext_loader_ver << 4).
The use of this field is boot loader specific. If not written, it
is zero.
Kernels prior to 2.6.31 did not recognize this field, but it is safe
to write for protocol version 2.02 or higher.
Field name: ext_loader_type
Type: write (obligatory if (type_of_loader & 0xf0) == 0xe0)
Offset/size: 0x227/1
Protocol: 2.02+
This field is used as an extension of the type number in
type_of_loader field. If the type in type_of_loader is 0xE, then
the actual type is (ext_loader_type + 0x10).
This field is ignored if the type in type_of_loader is not 0xE.
Kernels prior to 2.6.31 did not recognize this field, but it is safe
to write for protocol version 2.02 or higher.
Field name: cmd_line_ptr
Type: write (obligatory)
Offset/size: 0x228/4
@ -482,11 +534,19 @@ Protocol: 2.03+
0x37FFFFFF, you can start your ramdisk at 0x37FE0000.)
Field name: kernel_alignment
Type: read (reloc)
Type: read/modify (reloc)
Offset/size: 0x230/4
Protocol: 2.05+
Protocol: 2.05+ (read), 2.10+ (modify)
Alignment unit required by the kernel (if relocatable_kernel is true.)
Alignment unit required by the kernel (if relocatable_kernel is
true.) A relocatable kernel that is loaded at an alignment
incompatible with the value in this field will be realigned during
kernel initialization.
Starting with protocol version 2.10, this reflects the kernel
alignment preferred for optimal performance; it is possible for the
loader to modify this field to permit a lesser alignment. See the
min_alignment and pref_address field below.
Field name: relocatable_kernel
Type: read (reloc)
@ -498,6 +558,22 @@ Protocol: 2.05+
After loading, the boot loader must set the code32_start field to
point to the loaded code, or to a boot loader hook.
Field name: min_alignment
Type: read (reloc)
Offset/size: 0x235/1
Protocol: 2.10+
This field, if nonzero, indicates as a power of two the minimum
alignment required, as opposed to preferred, by the kernel to boot.
If a boot loader makes use of this field, it should update the
kernel_alignment field with the alignment unit desired; typically:
kernel_alignment = 1 << min_alignment
There may be a considerable performance cost with an excessively
misaligned kernel. Therefore, a loader should typically try each
power-of-two alignment from kernel_alignment down to this alignment.
Field name: cmdline_size
Type: read
Offset/size: 0x238/4
@ -582,6 +658,36 @@ Protocol: 2.09+
sure to consider the case where the linked list already contains
entries.
Field name: pref_address
Type: read (reloc)
Offset/size: 0x258/8
Protocol: 2.10+
This field, if nonzero, represents a preferred load address for the
kernel. A relocating bootloader should attempt to load at this
address if possible.
A non-relocatable kernel will unconditionally move itself and run
at this address.
Field name: init_size
Type: read
Offset/size: 0x25c/4
This field indicates the amount of linear contiguous memory starting
at the kernel runtime start address that the kernel needs before it
is capable of examining its memory map. This is not the same thing
as the total amount of memory the kernel needs to boot, but it can
be used by a relocating boot loader to help select a safe load
address for the kernel.
The kernel runtime start address is determined by the following algorithm:
if (relocatable_kernel)
runtime_start = align_up(load_address, kernel_alignment)
else
runtime_start = pref_address
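
A direct C rendering of that algorithm (a sketch; the field names follow the
header fields described earlier):

        #include <stdint.h>

        static uint64_t align_up(uint64_t addr, uint64_t align)
        {
                return (addr + align - 1) & ~(align - 1);
        }

        static uint64_t runtime_start(int relocatable_kernel, uint64_t load_address,
                                      uint64_t kernel_alignment, uint64_t pref_address)
        {
                if (relocatable_kernel)
                        return align_up(load_address, kernel_alignment);
                return pref_address;
        }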
**** THE IMAGE CHECKSUM


@ -180,11 +180,6 @@ NUMA
Otherwise, the remaining system RAM is allocated to an
additional node.
numa=hotadd=percent
Only allow hotadd memory to preallocate page structures upto
percent of already available memory.
numa=hotadd=0 will disable hotadd memory.
ACPI
acpi=off Don't enable ACPI


@ -6,10 +6,11 @@ Virtual memory map with 4 level page tables:
0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
hole caused by [48:63] sign extension
ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
ffff880000000000 - ffffc0ffffffffff (=57 TB) direct mapping of all phys. memory
ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
ffffe20000000000 - ffffe2ffffffffff (=40 bits) virtual memory map (1TB)
ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
... unused hole ...
ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
ffffffffa0000000 - fffffffffff00000 (=1536 MB) module mapping space


@ -71,7 +71,7 @@ P: Person
M: Mail patches to
L: Mailing list that is relevant to this area
W: Web-page with status/info
T: SCM tree type and location. Type is one of: git, hg, quilt.
T: SCM tree type and location. Type is one of: git, hg, quilt, stgit.
S: Status, one of the following:
Supported: Someone is actually paid to look after this.
@ -159,7 +159,8 @@ F: drivers/net/r8169.c
8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
L: linux-serial@vger.kernel.org
W: http://serial.sourceforge.net
S: Orphan
M: alan@lxorguk.ukuu.org.uk
S: Odd Fixes
F: drivers/serial/8250*
F: include/linux/serial_8250.h
@ -1978,6 +1979,16 @@ F: Documentation/edac.txt
F: drivers/edac/edac_*
F: include/linux/edac.h
EDAC-AMD64
P: Doug Thompson
M: dougthompson@xmission.com
P: Borislav Petkov
M: borislav.petkov@amd.com
L: bluesmoke-devel@lists.sourceforge.net (moderated for non-subscribers)
W: bluesmoke.sourceforge.net
S: Supported
F: drivers/edac/amd64_edac*
EDAC-E752X
P: Mark Gross
M: mark.gross@intel.com
@ -3359,6 +3370,12 @@ F: Documentation/trace/kmemtrace.txt
F: include/trace/kmemtrace.h
F: kernel/trace/kmemtrace.c
KMEMLEAK
P: Catalin Marinas
M: catalin.marinas@arm.com
L: linux-kernel@vger.kernel.org
S: Maintained
KPROBES
P: Ananth N Mavinakayanahalli
M: ananth@in.ibm.com
@ -4392,6 +4409,16 @@ S: Maintained
F: include/linux/delayacct.h
F: kernel/delayacct.c
PERFORMANCE COUNTER SUBSYSTEM
P: Peter Zijlstra
M: a.p.zijlstra@chello.nl
P: Paul Mackerras
M: paulus@samba.org
P: Ingo Molnar
M: mingo@elte.hu
L: linux-kernel@vger.kernel.org
S: Supported
PERSONALITY HANDLING
P: Christoph Hellwig
M: hch@infradead.org
@ -5629,6 +5656,7 @@ P: Alan Cox
M: alan@lxorguk.ukuu.org.uk
L: linux-kernel@vger.kernel.org
S: Maintained
T: stgit http://zeniv.linux.org.uk/~alan/ttydev/
TULIP NETWORK DRIVERS
P: Grant Grundler


@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 30
EXTRAVERSION = -rc7
EXTRAVERSION =
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*
@ -533,7 +533,7 @@ endif
include $(srctree)/arch/$(SRCARCH)/Makefile
ifneq (CONFIG_FRAME_WARN,0)
ifneq ($(CONFIG_FRAME_WARN),0)
KBUILD_CFLAGS += $(call cc-option,-Wframe-larger-than=${CONFIG_FRAME_WARN})
endif


@ -7,4 +7,20 @@
#define L1_CACHE_SHIFT 5
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
/*
* Memory returned by kmalloc() may be used for DMA, so we must make
* sure that all such allocations are cache aligned. Otherwise,
* unrelated code may cause parts of the buffer to be read into the
* cache before the transfer is done, causing old data to be seen by
* the CPU.
*/
#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
/*
* With EABI on ARMv5 and above we must have 64-bit aligned slab pointers.
*/
#if defined(CONFIG_AEABI) && (__LINUX_ARM_ARCH__ >= 5)
#define ARCH_SLAB_MINALIGN 8
#endif
#endif
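To illustrate why ARCH_KMALLOC_MINALIGN is set to a full cache line: a driver that lets a device DMA into a kmalloc()ed buffer now gets an allocation that never shares a cache line with unrelated data, so a read of a neighbouring object cannot pull stale bytes of the buffer into the cache mid-transfer. The sketch below is illustrative driver-style code, not part of this patch; the device, the buffer size and the omitted error handling are assumptions.

#include <linux/slab.h>
#include <linux/dma-mapping.h>

#define RX_BUF_SIZE 512	/* hypothetical receive buffer size */

/* kmalloc() now returns memory aligned to L1_CACHE_BYTES, so the whole
 * buffer occupies cache lines of its own while the device writes to it. */
static void *alloc_rx_buffer(struct device *dev, dma_addr_t *handle)
{
	void *buf = kmalloc(RX_BUF_SIZE, GFP_KERNEL);

	if (!buf)
		return NULL;
	*handle = dma_map_single(dev, buf, RX_BUF_SIZE, DMA_FROM_DEVICE);
	return buf;
}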


@ -202,13 +202,6 @@ typedef struct page *pgtable_t;
(((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0) | \
VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
/*
* With EABI on ARMv5 and above we must have 64-bit aligned slab pointers.
*/
#if defined(CONFIG_AEABI) && (__LINUX_ARM_ARCH__ >= 5)
#define ARCH_SLAB_MINALIGN 8
#endif
#include <asm-generic/page.h>
#endif


@ -890,7 +890,7 @@ static struct clk clko_clk = {
.con_id = n, \
.clk = &c, \
},
static struct clk_lookup lookups[] __initdata = {
static struct clk_lookup lookups[] = {
/* It's unlikely that any driver wants one of them directly:
_REGISTER_CLOCK(NULL, "ckih", ckih_clk)
_REGISTER_CLOCK(NULL, "ckil", ckil_clk)


@ -621,7 +621,7 @@ DEFINE_CLOCK1(csi_clk, 0, 0, 0, parent, &csi_clk1, &per4_clk);
.clk = &c, \
},
static struct clk_lookup lookups[] __initdata = {
static struct clk_lookup lookups[] = {
_REGISTER_CLOCK("imx-uart.0", NULL, uart1_clk)
_REGISTER_CLOCK("imx-uart.1", NULL, uart2_clk)
_REGISTER_CLOCK("imx-uart.2", NULL, uart3_clk)


@ -404,7 +404,7 @@ DEFINE_CLOCK(gpu2d_clk, 0, CCM_CGR3, 4, NULL, NULL);
.clk = &c, \
},
static struct clk_lookup lookups[] __initdata = {
static struct clk_lookup lookups[] = {
_REGISTER_CLOCK(NULL, "asrc", asrc_clk)
_REGISTER_CLOCK(NULL, "ata", ata_clk)
_REGISTER_CLOCK(NULL, "audmux", audmux_clk)


@ -516,7 +516,7 @@ DEFINE_CLOCK(ipg_clk, 0, NULL, 0, ipg_get_rate, NULL, &ahb_clk);
.clk = &c, \
},
static struct clk_lookup lookups[] __initdata = {
static struct clk_lookup lookups[] = {
_REGISTER_CLOCK(NULL, "emi", emi_clk)
_REGISTER_CLOCK(NULL, "cspi", cspi1_clk)
_REGISTER_CLOCK(NULL, "cspi", cspi2_clk)


@ -72,7 +72,10 @@ void __init pxa_set_mci_info(struct pxamci_platform_data *info)
}
static struct pxa2xx_udc_mach_info pxa_udc_info;
static struct pxa2xx_udc_mach_info pxa_udc_info = {
.gpio_pullup = -1,
.gpio_vbus = -1,
};
void __init pxa_set_udc_info(struct pxa2xx_udc_mach_info *info)
{


@ -412,7 +412,7 @@ static struct platform_device imote2_flash_device = {
*/
static struct i2c_board_info __initdata imote2_i2c_board_info[] = {
{ /* UCAM sensor board */
.type = "max1238",
.type = "max1239",
.addr = 0x35,
}, { /* ITS400 Sensor board only */
.type = "max1363",


@ -184,23 +184,37 @@ __v7_setup:
stmia r12, {r0-r5, r7, r9, r11, lr}
bl v7_flush_dcache_all
ldmia r12, {r0-r5, r7, r9, r11, lr}
mrc p15, 0, r0, c0, c0, 0 @ read main ID register
and r10, r0, #0xff000000 @ ARM?
teq r10, #0x41000000
bne 2f
and r5, r0, #0x00f00000 @ variant
and r6, r0, #0x0000000f @ revision
orr r0, r6, r5, lsr #20-4 @ combine variant and revision
#ifdef CONFIG_ARM_ERRATA_430973
mrc p15, 0, r10, c1, c0, 1 @ read aux control register
orr r10, r10, #(1 << 6) @ set IBE to 1
mcr p15, 0, r10, c1, c0, 1 @ write aux control register
teq r5, #0x00100000 @ only present in r1p*
mrceq p15, 0, r10, c1, c0, 1 @ read aux control register
orreq r10, r10, #(1 << 6) @ set IBE to 1
mcreq p15, 0, r10, c1, c0, 1 @ write aux control register
#endif
#ifdef CONFIG_ARM_ERRATA_458693
mrc p15, 0, r10, c1, c0, 1 @ read aux control register
orr r10, r10, #(1 << 5) @ set L1NEON to 1
orr r10, r10, #(1 << 9) @ set PLDNOP to 1
mcr p15, 0, r10, c1, c0, 1 @ write aux control register
teq r0, #0x20 @ only present in r2p0
mrceq p15, 0, r10, c1, c0, 1 @ read aux control register
orreq r10, r10, #(1 << 5) @ set L1NEON to 1
orreq r10, r10, #(1 << 9) @ set PLDNOP to 1
mcreq p15, 0, r10, c1, c0, 1 @ write aux control register
#endif
#ifdef CONFIG_ARM_ERRATA_460075
mrc p15, 1, r10, c9, c0, 2 @ read L2 cache aux ctrl register
orr r10, r10, #(1 << 22) @ set the Write Allocate disable bit
mcr p15, 1, r10, c9, c0, 2 @ write the L2 cache aux ctrl register
teq r0, #0x20 @ only present in r2p0
mrceq p15, 1, r10, c9, c0, 2 @ read L2 cache aux ctrl register
tsteq r10, #1 << 22
orreq r10, r10, #(1 << 22) @ set the Write Allocate disable bit
mcreq p15, 1, r10, c9, c0, 2 @ write the L2 cache aux ctrl register
#endif
mov r10, #0
2: mov r10, #0
#ifdef HARVARD_CACHE
mcr p15, 0, r10, c7, c5, 0 @ I+BTB cache invalidate
#endif
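The reworked errata sequence above first reads the main ID register and folds the variant and revision fields into a single value, so each workaround can be limited to the affected silicon (0x1n means r1p*, 0x20 means r2p0). A small C sketch of that decoding, purely for illustration (the kernel does this in the assembly above; the MIDR value below is a made-up Cortex-A8 r1p2 example):

#include <stdio.h>

/* Combine MIDR bits the same way __v7_setup does: bits [23:20] are the
 * variant ('rN'), bits [3:0] the revision ('pM'). */
static unsigned int midr_to_rev(unsigned int midr)
{
	unsigned int variant = (midr >> 20) & 0xf;
	unsigned int revision = midr & 0xf;

	return (variant << 4) | revision;	/* 0x20 == r2p0, 0x12 == r1p2 */
}

int main(void)
{
	unsigned int midr = 0x411fc082;	/* hypothetical Cortex-A8 r1p2 */

	printf("rev code %#x (r%up%u)\n", midr_to_rev(midr),
	       (midr >> 20) & 0xf, midr & 0xf);
	return 0;
}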


@ -20,11 +20,16 @@
#define ASMARM_ARCH_UART_H
#define IMXUART_HAVE_RTSCTS (1<<0)
#define IMXUART_IRDA (1<<1)
struct imxuart_platform_data {
int (*init)(struct platform_device *pdev);
int (*exit)(struct platform_device *pdev);
unsigned int flags;
void (*irda_enable)(int enable);
unsigned int irda_inv_rx:1;
unsigned int irda_inv_tx:1;
unsigned short transceiver_delay;
};
#endif


@ -147,24 +147,40 @@ static int __mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg, void *arg)
return ret;
}
struct omap_msg_tx_data {
mbox_msg_t msg;
void *arg;
};
static void omap_msg_tx_end_io(struct request *rq, int error)
{
kfree(rq->special);
__blk_put_request(rq->q, rq);
}
int omap_mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg, void* arg)
{
struct omap_msg_tx_data *tx_data;
struct request *rq;
struct request_queue *q = mbox->txq->queue;
int ret = 0;
tx_data = kmalloc(sizeof(*tx_data), GFP_ATOMIC);
if (unlikely(!tx_data))
return -ENOMEM;
rq = blk_get_request(q, WRITE, GFP_ATOMIC);
if (unlikely(!rq)) {
ret = -ENOMEM;
goto fail;
kfree(tx_data);
return -ENOMEM;
}
rq->data = (void *)msg;
blk_insert_request(q, rq, 0, arg);
tx_data->msg = msg;
tx_data->arg = arg;
rq->end_io = omap_msg_tx_end_io;
blk_insert_request(q, rq, 0, tx_data);
schedule_work(&mbox->txq->work);
fail:
return ret;
return 0;
}
EXPORT_SYMBOL(omap_mbox_msg_send);
@ -178,22 +194,28 @@ static void mbox_tx_work(struct work_struct *work)
struct request_queue *q = mbox->txq->queue;
while (1) {
struct omap_msg_tx_data *tx_data;
spin_lock(q->queue_lock);
rq = elv_next_request(q);
rq = blk_fetch_request(q);
spin_unlock(q->queue_lock);
if (!rq)
break;
ret = __mbox_msg_send(mbox, (mbox_msg_t) rq->data, rq->special);
tx_data = rq->special;
ret = __mbox_msg_send(mbox, tx_data->msg, tx_data->arg);
if (ret) {
enable_mbox_irq(mbox, IRQ_TX);
spin_lock(q->queue_lock);
blk_requeue_request(q, rq);
spin_unlock(q->queue_lock);
return;
}
spin_lock(q->queue_lock);
if (__blk_end_request(rq, 0, 0))
BUG();
__blk_end_request_all(rq, 0);
spin_unlock(q->queue_lock);
}
}
@ -218,16 +240,13 @@ static void mbox_rx_work(struct work_struct *work)
while (1) {
spin_lock_irqsave(q->queue_lock, flags);
rq = elv_next_request(q);
rq = blk_fetch_request(q);
spin_unlock_irqrestore(q->queue_lock, flags);
if (!rq)
break;
msg = (mbox_msg_t) rq->data;
if (blk_end_request(rq, 0, 0))
BUG();
msg = (mbox_msg_t)rq->special;
blk_end_request_all(rq, 0);
mbox->rxq->callback((void *)msg);
}
}
@ -264,7 +283,6 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox)
goto nomem;
msg = mbox_fifo_read(mbox);
rq->data = (void *)msg;
if (unlikely(mbox_seq_test(mbox, msg))) {
pr_info("mbox: Illegal seq bit!(%08x)\n", msg);
@ -272,7 +290,7 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox)
mbox->err_notify();
}
blk_insert_request(q, rq, 0, NULL);
blk_insert_request(q, rq, 0, (void *)msg);
if (mbox->ops->type == OMAP_MBOX_TYPE1)
break;
}
@ -329,16 +347,15 @@ omap_mbox_read(struct device *dev, struct device_attribute *attr, char *buf)
while (1) {
spin_lock_irqsave(q->queue_lock, flags);
rq = elv_next_request(q);
rq = blk_fetch_request(q);
spin_unlock_irqrestore(q->queue_lock, flags);
if (!rq)
break;
*p = (mbox_msg_t) rq->data;
*p = (mbox_msg_t)rq->special;
if (blk_end_request(rq, 0, 0))
BUG();
blk_end_request_all(rq, 0);
if (unlikely(mbox_seq_test(mbox, *p))) {
pr_info("mbox: Illegal seq bit!(%08x) ignored\n", *p);
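The mailbox rework above follows one pattern throughout: the sender stashes its per-message payload in rq->special (via the data argument of blk_insert_request()) and frees it from the request's end_io hook, instead of using the removed rq->data field. A minimal sketch of that pattern with hypothetical names, assuming the same 2.6.30-era block API the patch uses:

#include <linux/blkdev.h>
#include <linux/slab.h>

struct my_tx_data {			/* hypothetical per-request payload */
	unsigned long msg;
	void *arg;
};

static void my_tx_end_io(struct request *rq, int error)
{
	kfree(rq->special);		/* payload travelled in rq->special */
	__blk_put_request(rq->q, rq);
}

/* Queue one message; the consumer later reads it back from rq->special. */
static int my_send(struct request_queue *q, unsigned long msg, void *arg)
{
	struct my_tx_data *tx = kmalloc(sizeof(*tx), GFP_ATOMIC);
	struct request *rq;

	if (!tx)
		return -ENOMEM;
	rq = blk_get_request(q, WRITE, GFP_ATOMIC);
	if (!rq) {
		kfree(tx);
		return -ENOMEM;
	}
	tx->msg = msg;
	tx->arg = arg;
	rq->end_io = my_tx_end_io;
	blk_insert_request(q, rq, 0, tx);	/* 4th argument lands in rq->special */
	return 0;
}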


@ -6,6 +6,7 @@ config FRV
bool
default y
select HAVE_IDE
select HAVE_ARCH_TRACEHOOK
config ZONE_DMA
bool


@ -112,7 +112,7 @@ extern unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsig
#define atomic_clear_mask(mask, v) atomic_test_and_ANDNOT_mask((mask), (v))
#define atomic_set_mask(mask, v) atomic_test_and_OR_mask((mask), (v))
static inline int test_and_clear_bit(int nr, volatile void *addr)
static inline int test_and_clear_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *ptr = addr;
unsigned long mask = 1UL << (nr & 31);
@ -120,7 +120,7 @@ static inline int test_and_clear_bit(int nr, volatile void *addr)
return (atomic_test_and_ANDNOT_mask(mask, ptr) & mask) != 0;
}
static inline int test_and_set_bit(int nr, volatile void *addr)
static inline int test_and_set_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *ptr = addr;
unsigned long mask = 1UL << (nr & 31);
@ -128,7 +128,7 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
return (atomic_test_and_OR_mask(mask, ptr) & mask) != 0;
}
static inline int test_and_change_bit(int nr, volatile void *addr)
static inline int test_and_change_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *ptr = addr;
unsigned long mask = 1UL << (nr & 31);
@ -136,22 +136,22 @@ static inline int test_and_change_bit(int nr, volatile void *addr)
return (atomic_test_and_XOR_mask(mask, ptr) & mask) != 0;
}
static inline void clear_bit(int nr, volatile void *addr)
static inline void clear_bit(unsigned long nr, volatile void *addr)
{
test_and_clear_bit(nr, addr);
}
static inline void set_bit(int nr, volatile void *addr)
static inline void set_bit(unsigned long nr, volatile void *addr)
{
test_and_set_bit(nr, addr);
}
static inline void change_bit(int nr, volatile void * addr)
static inline void change_bit(unsigned long nr, volatile void *addr)
{
test_and_change_bit(nr, addr);
}
static inline void __clear_bit(int nr, volatile void * addr)
static inline void __clear_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *a = addr;
int mask;
@ -161,7 +161,7 @@ static inline void __clear_bit(int nr, volatile void * addr)
*a &= ~mask;
}
static inline void __set_bit(int nr, volatile void * addr)
static inline void __set_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *a = addr;
int mask;
@ -171,7 +171,7 @@ static inline void __set_bit(int nr, volatile void * addr)
*a |= mask;
}
static inline void __change_bit(int nr, volatile void *addr)
static inline void __change_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *a = addr;
int mask;
@ -181,7 +181,7 @@ static inline void __change_bit(int nr, volatile void *addr)
*a ^= mask;
}
static inline int __test_and_clear_bit(int nr, volatile void * addr)
static inline int __test_and_clear_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *a = addr;
int mask, retval;
@ -193,7 +193,7 @@ static inline int __test_and_clear_bit(int nr, volatile void * addr)
return retval;
}
static inline int __test_and_set_bit(int nr, volatile void * addr)
static inline int __test_and_set_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *a = addr;
int mask, retval;
@ -205,7 +205,7 @@ static inline int __test_and_set_bit(int nr, volatile void * addr)
return retval;
}
static inline int __test_and_change_bit(int nr, volatile void * addr)
static inline int __test_and_change_bit(unsigned long nr, volatile void *addr)
{
volatile unsigned long *a = addr;
int mask, retval;
@ -220,12 +220,13 @@ static inline int __test_and_change_bit(int nr, volatile void * addr)
/*
* This routine doesn't need to be atomic.
*/
static inline int __constant_test_bit(int nr, const volatile void * addr)
static inline int
__constant_test_bit(unsigned long nr, const volatile void *addr)
{
return ((1UL << (nr & 31)) & (((const volatile unsigned int *) addr)[nr >> 5])) != 0;
}
static inline int __test_bit(int nr, const volatile void * addr)
static inline int __test_bit(unsigned long nr, const volatile void *addr)
{
int * a = (int *) addr;
int mask;


@ -116,6 +116,7 @@ do { \
} while(0)
#define USE_ELF_CORE_DUMP
#define CORE_DUMP_USE_REGSET
#define ELF_FDPIC_CORE_EFLAGS EF_FRV_FDPIC
#define ELF_EXEC_PAGESIZE 16384


@ -87,8 +87,7 @@ static inline void pci_dma_sync_single(struct pci_dev *hwdev,
dma_addr_t dma_handle,
size_t size, int direction)
{
if (direction == PCI_DMA_NONE)
BUG();
BUG_ON(direction == PCI_DMA_NONE);
frv_cache_wback_inv((unsigned long)bus_to_virt(dma_handle),
(unsigned long)bus_to_virt(dma_handle) + size);
@ -105,9 +104,7 @@ static inline void pci_dma_sync_sg(struct pci_dev *hwdev,
int nelems, int direction)
{
int i;
if (direction == PCI_DMA_NONE)
BUG();
BUG_ON(direction == PCI_DMA_NONE);
for (i = 0; i < nelems; i++)
frv_cache_wback_inv(sg_dma_address(&sg[i]),


@ -65,6 +65,8 @@
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
struct task_struct;
/*
* we dedicate GR28 to keeping a pointer to the current exception frame
* - gr28 is destroyed on entry to the kernel from userspace
@ -73,11 +75,18 @@ register struct pt_regs *__frame asm("gr28");
#define user_mode(regs) (!((regs)->psr & PSR_S))
#define instruction_pointer(regs) ((regs)->pc)
#define user_stack_pointer(regs) ((regs)->sp)
extern unsigned long user_stack(const struct pt_regs *);
extern void show_regs(struct pt_regs *);
#define profile_pc(regs) ((regs)->pc)
#endif
#define task_pt_regs(task) ((task)->thread.frame0)
#define arch_has_single_step() (1)
extern void user_enable_single_step(struct task_struct *);
extern void user_disable_single_step(struct task_struct *);
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_PTRACE_H */


@ -0,0 +1,123 @@
/* syscall parameter access functions
*
* Copyright (C) 2009 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#ifndef _ASM_SYSCALL_H
#define _ASM_SYSCALL_H
#include <linux/err.h>
#include <asm/ptrace.h>
/*
* Get the system call number or -1
*/
static inline long syscall_get_nr(struct task_struct *task,
struct pt_regs *regs)
{
return regs->syscallno;
}
/*
* Restore the clobbered GR8 register
* (1st syscall arg was overwritten with syscall return or error)
*/
static inline void syscall_rollback(struct task_struct *task,
struct pt_regs *regs)
{
regs->gr8 = regs->orig_gr8;
}
/*
* See if the syscall return value is an error, returning it if it is and 0 if
* not
*/
static inline long syscall_get_error(struct task_struct *task,
struct pt_regs *regs)
{
return IS_ERR_VALUE(regs->gr8) ? regs->gr8 : 0;
}
/*
* Get the syscall return value
*/
static inline long syscall_get_return_value(struct task_struct *task,
struct pt_regs *regs)
{
return regs->gr8;
}
/*
* Set the syscall return value
*/
static inline void syscall_set_return_value(struct task_struct *task,
struct pt_regs *regs,
int error, long val)
{
if (error)
regs->gr8 = -error;
else
regs->gr8 = val;
}
/*
* Retrieve the system call arguments
*/
static inline void syscall_get_arguments(struct task_struct *task,
struct pt_regs *regs,
unsigned int i, unsigned int n,
unsigned long *args)
{
/*
* Do this simply for now. If we need to start supporting
* fetching arguments from arbitrary indices, this will need some
* extra logic. Presently there are no in-tree users that depend
* on this behaviour.
*/
BUG_ON(i);
/* Argument pattern is: GR8, GR9, GR10, GR11, GR12, GR13 */
switch (n) {
case 6: args[5] = regs->gr13;
case 5: args[4] = regs->gr12;
case 4: args[3] = regs->gr11;
case 3: args[2] = regs->gr10;
case 2: args[1] = regs->gr9;
case 1: args[0] = regs->gr8;
break;
default:
BUG();
}
}
/*
* Alter the system call arguments
*/
static inline void syscall_set_arguments(struct task_struct *task,
struct pt_regs *regs,
unsigned int i, unsigned int n,
const unsigned long *args)
{
/* Same note as above applies */
BUG_ON(i);
switch (n) {
case 6: regs->gr13 = args[5];
case 5: regs->gr12 = args[4];
case 4: regs->gr11 = args[3];
case 3: regs->gr10 = args[2];
case 2: regs->gr9 = args[1];
case 1: regs->gr8 = args[0];
break;
default:
BUG();
}
}
#endif /* _ASM_SYSCALL_H */
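These accessors give generic code a uniform, architecture-neutral view of a stopped task's system call state. As a hedged illustration of how a caller such as the tracehook layer might use them (the filter, the syscall number 42 and the function name are hypothetical; only the accessors above are real):

#include <linux/sched.h>
#include <linux/errno.h>
#include <asm/syscall.h>

/* Deny one hypothetical syscall at entry by forcing an error return;
 * everything here goes through the helpers defined above. */
static void example_syscall_filter(struct task_struct *task,
				   struct pt_regs *regs)
{
	unsigned long args[6];
	long nr = syscall_get_nr(task, regs);

	syscall_get_arguments(task, regs, 0, 6, args);	/* GR8..GR13 */
	if (nr == 42)
		syscall_set_return_value(task, regs, EPERM, 0);
}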


@ -109,20 +109,20 @@ register struct thread_info *__current_thread_info asm("gr15");
* - other flags in MSW
*/
#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
#define TIF_SIGPENDING 1 /* signal pending */
#define TIF_NEED_RESCHED 2 /* rescheduling necessary */
#define TIF_SINGLESTEP 3 /* restore singlestep on return to user mode */
#define TIF_IRET 4 /* return with iret */
#define TIF_NOTIFY_RESUME 1 /* callback before returning to user */
#define TIF_SIGPENDING 2 /* signal pending */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_SINGLESTEP 4 /* restore singlestep on return to user mode */
#define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 17 /* OOM killer killed process */
#define TIF_FREEZE 18 /* freezing for suspend */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_IRET (1 << TIF_IRET)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_FREEZE (1 << TIF_FREEZE)


@ -886,7 +886,6 @@ system_call:
bnc icc0,#0,__syscall_badsys
ldi @(gr15,#TI_FLAGS),gr4
ori gr4,#_TIF_SYSCALL_TRACE,gr4
andicc gr4,#_TIF_SYSCALL_TRACE,gr0,icc0
bne icc0,#0,__syscall_trace_entry
@ -1150,11 +1149,10 @@ __entry_work_notifysig:
# perform syscall entry tracing
__syscall_trace_entry:
LEDS 0x6320
setlos.p #0,gr8
call do_syscall_trace
call syscall_trace_entry
ldi @(gr28,#REG_SYSCALLNO),gr7
lddi @(gr28,#REG_GR(8)) ,gr8
lddi.p @(gr28,#REG_GR(8)) ,gr8
ori gr8,#0,gr7 ; syscall_trace_entry() returned new syscallno
lddi @(gr28,#REG_GR(10)),gr10
lddi.p @(gr28,#REG_GR(12)),gr12
@ -1169,11 +1167,10 @@ __syscall_exit_work:
beq icc0,#1,__entry_work_pending
movsg psr,gr23
andi gr23,#~PSR_PIL,gr23 ; could let do_syscall_trace() call schedule()
andi gr23,#~PSR_PIL,gr23 ; could let syscall_trace_exit() call schedule()
movgs gr23,psr
setlos.p #1,gr8
call do_syscall_trace
call syscall_trace_exit
bra __entry_resume_userspace
__syscall_badsys:


@ -19,6 +19,9 @@
#include <linux/user.h>
#include <linux/security.h>
#include <linux/signal.h>
#include <linux/regset.h>
#include <linux/elf.h>
#include <linux/tracehook.h>
#include <asm/uaccess.h>
#include <asm/page.h>
@ -32,6 +35,169 @@
* in exit.c or in signal.c.
*/
/*
* retrieve the contents of FRV userspace general registers
*/
static int genregs_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
const struct user_int_regs *iregs = &target->thread.user->i;
int ret;
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
iregs, 0, sizeof(*iregs));
if (ret < 0)
return ret;
return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
sizeof(*iregs), -1);
}
/*
* update the contents of the FRV userspace general registers
*/
static int genregs_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
struct user_int_regs *iregs = &target->thread.user->i;
unsigned int offs_gr0, offs_gr1;
int ret;
/* not allowed to set PSR or __status */
if (pos < offsetof(struct user_int_regs, psr) + sizeof(long) &&
pos + count > offsetof(struct user_int_regs, psr))
return -EIO;
if (pos < offsetof(struct user_int_regs, __status) + sizeof(long) &&
pos + count > offsetof(struct user_int_regs, __status))
return -EIO;
/* set the control regs */
offs_gr0 = offsetof(struct user_int_regs, gr[0]);
offs_gr1 = offsetof(struct user_int_regs, gr[1]);
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
iregs, 0, offs_gr0);
if (ret < 0)
return ret;
/* skip GR0/TBR */
ret = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
offs_gr0, offs_gr1);
if (ret < 0)
return ret;
/* set the general regs */
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&iregs->gr[1], offs_gr1, sizeof(*iregs));
if (ret < 0)
return ret;
return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
sizeof(*iregs), -1);
}
/*
* retrieve the contents of FRV userspace FP/Media registers
*/
static int fpmregs_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
const struct user_fpmedia_regs *fpregs = &target->thread.user->f;
int ret;
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
fpregs, 0, sizeof(*fpregs));
if (ret < 0)
return ret;
return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
sizeof(*fpregs), -1);
}
/*
* update the contents of the FRV userspace FP/Media registers
*/
static int fpmregs_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
struct user_fpmedia_regs *fpregs = &target->thread.user->f;
int ret;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
fpregs, 0, sizeof(*fpregs));
if (ret < 0)
return ret;
return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
sizeof(*fpregs), -1);
}
/*
* determine if the FP/Media registers have actually been used
*/
static int fpmregs_active(struct task_struct *target,
const struct user_regset *regset)
{
return tsk_used_math(target) ? regset->n : 0;
}
/*
* Define the register sets available on the FRV under Linux
*/
enum frv_regset {
REGSET_GENERAL,
REGSET_FPMEDIA,
};
static const struct user_regset frv_regsets[] = {
/*
* General register format is:
* PSR, ISR, CCR, CCCR, LR, LCR, PC, (STATUS), SYSCALLNO, ORIG_G8
* GNER0-1, IACC0, TBR, GR1-63
*/
[REGSET_GENERAL] = {
.core_note_type = NT_PRSTATUS,
.n = ELF_NGREG,
.size = sizeof(long),
.align = sizeof(long),
.get = genregs_get,
.set = genregs_set,
},
/*
* FPU/Media register format is:
* FR0-63, FNER0-1, MSR0-1, ACC0-7, ACCG0-8, FSR
*/
[REGSET_FPMEDIA] = {
.core_note_type = NT_PRFPREG,
.n = sizeof(struct user_fpmedia_regs) / sizeof(long),
.size = sizeof(long),
.align = sizeof(long),
.get = fpmregs_get,
.set = fpmregs_set,
.active = fpmregs_active,
},
};
static const struct user_regset_view user_frv_native_view = {
.name = "frv",
.e_machine = EM_FRV,
.regsets = frv_regsets,
.n = ARRAY_SIZE(frv_regsets),
};
const struct user_regset_view *task_user_regset_view(struct task_struct *task)
{
return &user_frv_native_view;
}
/*
* Get contents of register REGNO in task TASK.
*/
@ -68,41 +234,24 @@ static inline int put_reg(struct task_struct *task, int regno,
}
}
/*
* check that an address falls within the bounds of the target process's memory
* mappings
*/
static inline int is_user_addr_valid(struct task_struct *child,
unsigned long start, unsigned long len)
{
#ifdef CONFIG_MMU
if (start >= PAGE_OFFSET || len > PAGE_OFFSET - start)
return -EIO;
return 0;
#else
struct vm_area_struct *vma;
vma = find_vma(child->mm, start);
if (vma && start >= vma->vm_start && start + len <= vma->vm_end)
return 0;
return -EIO;
#endif
}
/*
* Called by kernel/ptrace.c when detaching..
*
* Control h/w single stepping
*/
void ptrace_disable(struct task_struct *child)
void user_enable_single_step(struct task_struct *child)
{
child->thread.frame0->__status |= REG__STATUS_STEP;
}
void user_disable_single_step(struct task_struct *child)
{
child->thread.frame0->__status &= ~REG__STATUS_STEP;
}
void ptrace_enable(struct task_struct *child)
void ptrace_disable(struct task_struct *child)
{
child->thread.frame0->__status |= REG__STATUS_STEP;
user_disable_single_step(child);
}
long arch_ptrace(struct task_struct *child, long request, long addr, long data)
@ -111,15 +260,6 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
int ret;
switch (request) {
/* when I and D space are separate, these will need to be fixed. */
case PTRACE_PEEKTEXT: /* read word at location addr. */
case PTRACE_PEEKDATA:
ret = -EIO;
if (is_user_addr_valid(child, addr, sizeof(tmp)) < 0)
break;
ret = generic_ptrace_peekdata(child, addr, data);
break;
/* read the word at location addr in the USER area. */
case PTRACE_PEEKUSR: {
tmp = 0;
@ -163,15 +303,6 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
break;
}
/* when I and D space are separate, this will have to be fixed. */
case PTRACE_POKETEXT: /* write the word at location addr. */
case PTRACE_POKEDATA:
ret = -EIO;
if (is_user_addr_valid(child, addr, sizeof(tmp)) < 0)
break;
ret = generic_ptrace_pokedata(child, addr, data);
break;
case PTRACE_POKEUSR: /* write the word at location addr in the USER area */
ret = -EIO;
if ((addr & 3) || addr < 0)
@ -179,7 +310,7 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
ret = 0;
switch (addr >> 2) {
case 0 ... PT__END-1:
case 0 ... PT__END - 1:
ret = put_reg(child, addr >> 2, data);
break;
@ -189,95 +320,29 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
}
break;
case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
case PTRACE_CONT: /* restart after signal. */
ret = -EIO;
if (!valid_signal(data))
break;
if (request == PTRACE_SYSCALL)
set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
else
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
child->exit_code = data;
ptrace_disable(child);
wake_up_process(child);
ret = 0;
break;
case PTRACE_GETREGS: /* Get all integer regs from the child. */
return copy_regset_to_user(child, &user_frv_native_view,
REGSET_GENERAL,
0, sizeof(child->thread.user->i),
(void __user *)data);
/* make the child exit. Best I can do is send it a sigkill.
* perhaps it should be put in the status that it wants to
* exit.
*/
case PTRACE_KILL:
ret = 0;
if (child->exit_state == EXIT_ZOMBIE) /* already dead */
break;
child->exit_code = SIGKILL;
clear_tsk_thread_flag(child, TIF_SINGLESTEP);
ptrace_disable(child);
wake_up_process(child);
break;
case PTRACE_SETREGS: /* Set all integer regs in the child. */
return copy_regset_from_user(child, &user_frv_native_view,
REGSET_GENERAL,
0, sizeof(child->thread.user->i),
(const void __user *)data);
case PTRACE_SINGLESTEP: /* set the trap flag. */
ret = -EIO;
if (!valid_signal(data))
break;
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
ptrace_enable(child);
child->exit_code = data;
wake_up_process(child);
ret = 0;
break;
case PTRACE_GETFPREGS: /* Get the child FP/Media state. */
return copy_regset_to_user(child, &user_frv_native_view,
REGSET_FPMEDIA,
0, sizeof(child->thread.user->f),
(void __user *)data);
case PTRACE_DETACH: /* detach a process that was attached. */
ret = ptrace_detach(child, data);
break;
case PTRACE_GETREGS: { /* Get all integer regs from the child. */
int i;
for (i = 0; i < PT__GPEND; i++) {
tmp = get_reg(child, i);
if (put_user(tmp, (unsigned long *) data)) {
ret = -EFAULT;
break;
}
data += sizeof(long);
}
ret = 0;
break;
}
case PTRACE_SETREGS: { /* Set all integer regs in the child. */
int i;
for (i = 0; i < PT__GPEND; i++) {
if (get_user(tmp, (unsigned long *) data)) {
ret = -EFAULT;
break;
}
put_reg(child, i, tmp);
data += sizeof(long);
}
ret = 0;
break;
}
case PTRACE_GETFPREGS: { /* Get the child FP/Media state. */
ret = 0;
if (copy_to_user((void *) data,
&child->thread.user->f,
sizeof(child->thread.user->f)))
ret = -EFAULT;
break;
}
case PTRACE_SETFPREGS: { /* Set the child FP/Media state. */
ret = 0;
if (copy_from_user(&child->thread.user->f,
(void *) data,
sizeof(child->thread.user->f)))
ret = -EFAULT;
break;
}
case PTRACE_SETFPREGS: /* Set the child FP/Media state. */
return copy_regset_from_user(child, &user_frv_native_view,
REGSET_FPMEDIA,
0, sizeof(child->thread.user->f),
(const void __user *)data);
case PTRACE_GETFDPIC:
tmp = 0;
@ -300,414 +365,36 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
break;
default:
ret = -EIO;
ret = ptrace_request(child, request, addr, data);
break;
}
return ret;
}
int __nongprelbss kstrace;
static const struct {
const char *name;
unsigned argmask;
} __syscall_name_table[NR_syscalls] = {
[0] = { "restart_syscall" },
[1] = { "exit", 0x000001 },
[2] = { "fork", 0xffffff },
[3] = { "read", 0x000141 },
[4] = { "write", 0x000141 },
[5] = { "open", 0x000235 },
[6] = { "close", 0x000001 },
[7] = { "waitpid", 0x000141 },
[8] = { "creat", 0x000025 },
[9] = { "link", 0x000055 },
[10] = { "unlink", 0x000005 },
[11] = { "execve", 0x000445 },
[12] = { "chdir", 0x000005 },
[13] = { "time", 0x000004 },
[14] = { "mknod", 0x000325 },
[15] = { "chmod", 0x000025 },
[16] = { "lchown", 0x000025 },
[17] = { "break" },
[18] = { "oldstat", 0x000045 },
[19] = { "lseek", 0x000131 },
[20] = { "getpid", 0xffffff },
[21] = { "mount", 0x043555 },
[22] = { "umount", 0x000005 },
[23] = { "setuid", 0x000001 },
[24] = { "getuid", 0xffffff },
[25] = { "stime", 0x000004 },
[26] = { "ptrace", 0x004413 },
[27] = { "alarm", 0x000001 },
[28] = { "oldfstat", 0x000041 },
[29] = { "pause", 0xffffff },
[30] = { "utime", 0x000045 },
[31] = { "stty" },
[32] = { "gtty" },
[33] = { "access", 0x000025 },
[34] = { "nice", 0x000001 },
[35] = { "ftime" },
[36] = { "sync", 0xffffff },
[37] = { "kill", 0x000011 },
[38] = { "rename", 0x000055 },
[39] = { "mkdir", 0x000025 },
[40] = { "rmdir", 0x000005 },
[41] = { "dup", 0x000001 },
[42] = { "pipe", 0x000004 },
[43] = { "times", 0x000004 },
[44] = { "prof" },
[45] = { "brk", 0x000004 },
[46] = { "setgid", 0x000001 },
[47] = { "getgid", 0xffffff },
[48] = { "signal", 0x000041 },
[49] = { "geteuid", 0xffffff },
[50] = { "getegid", 0xffffff },
[51] = { "acct", 0x000005 },
[52] = { "umount2", 0x000035 },
[53] = { "lock" },
[54] = { "ioctl", 0x000331 },
[55] = { "fcntl", 0x000331 },
[56] = { "mpx" },
[57] = { "setpgid", 0x000011 },
[58] = { "ulimit" },
[60] = { "umask", 0x000002 },
[61] = { "chroot", 0x000005 },
[62] = { "ustat", 0x000043 },
[63] = { "dup2", 0x000011 },
[64] = { "getppid", 0xffffff },
[65] = { "getpgrp", 0xffffff },
[66] = { "setsid", 0xffffff },
[67] = { "sigaction" },
[68] = { "sgetmask" },
[69] = { "ssetmask" },
[70] = { "setreuid" },
[71] = { "setregid" },
[72] = { "sigsuspend" },
[73] = { "sigpending" },
[74] = { "sethostname" },
[75] = { "setrlimit" },
[76] = { "getrlimit" },
[77] = { "getrusage" },
[78] = { "gettimeofday" },
[79] = { "settimeofday" },
[80] = { "getgroups" },
[81] = { "setgroups" },
[82] = { "select" },
[83] = { "symlink" },
[84] = { "oldlstat" },
[85] = { "readlink" },
[86] = { "uselib" },
[87] = { "swapon" },
[88] = { "reboot" },
[89] = { "readdir" },
[91] = { "munmap", 0x000034 },
[92] = { "truncate" },
[93] = { "ftruncate" },
[94] = { "fchmod" },
[95] = { "fchown" },
[96] = { "getpriority" },
[97] = { "setpriority" },
[99] = { "statfs" },
[100] = { "fstatfs" },
[102] = { "socketcall" },
[103] = { "syslog" },
[104] = { "setitimer" },
[105] = { "getitimer" },
[106] = { "stat" },
[107] = { "lstat" },
[108] = { "fstat" },
[111] = { "vhangup" },
[114] = { "wait4" },
[115] = { "swapoff" },
[116] = { "sysinfo" },
[117] = { "ipc" },
[118] = { "fsync" },
[119] = { "sigreturn" },
[120] = { "clone" },
[121] = { "setdomainname" },
[122] = { "uname" },
[123] = { "modify_ldt" },
[123] = { "cacheflush" },
[124] = { "adjtimex" },
[125] = { "mprotect" },
[126] = { "sigprocmask" },
[127] = { "create_module" },
[128] = { "init_module" },
[129] = { "delete_module" },
[130] = { "get_kernel_syms" },
[131] = { "quotactl" },
[132] = { "getpgid" },
[133] = { "fchdir" },
[134] = { "bdflush" },
[135] = { "sysfs" },
[136] = { "personality" },
[137] = { "afs_syscall" },
[138] = { "setfsuid" },
[139] = { "setfsgid" },
[140] = { "_llseek", 0x014331 },
[141] = { "getdents" },
[142] = { "_newselect", 0x000141 },
[143] = { "flock" },
[144] = { "msync" },
[145] = { "readv" },
[146] = { "writev" },
[147] = { "getsid", 0x000001 },
[148] = { "fdatasync", 0x000001 },
[149] = { "_sysctl", 0x000004 },
[150] = { "mlock" },
[151] = { "munlock" },
[152] = { "mlockall" },
[153] = { "munlockall" },
[154] = { "sched_setparam" },
[155] = { "sched_getparam" },
[156] = { "sched_setscheduler" },
[157] = { "sched_getscheduler" },
[158] = { "sched_yield" },
[159] = { "sched_get_priority_max" },
[160] = { "sched_get_priority_min" },
[161] = { "sched_rr_get_interval" },
[162] = { "nanosleep", 0x000044 },
[163] = { "mremap" },
[164] = { "setresuid" },
[165] = { "getresuid" },
[166] = { "vm86" },
[167] = { "query_module" },
[168] = { "poll" },
[169] = { "nfsservctl" },
[170] = { "setresgid" },
[171] = { "getresgid" },
[172] = { "prctl", 0x333331 },
[173] = { "rt_sigreturn", 0xffffff },
[174] = { "rt_sigaction", 0x001441 },
[175] = { "rt_sigprocmask", 0x001441 },
[176] = { "rt_sigpending", 0x000014 },
[177] = { "rt_sigtimedwait", 0x001444 },
[178] = { "rt_sigqueueinfo", 0x000411 },
[179] = { "rt_sigsuspend", 0x000014 },
[180] = { "pread", 0x003341 },
[181] = { "pwrite", 0x003341 },
[182] = { "chown", 0x000115 },
[183] = { "getcwd" },
[184] = { "capget" },
[185] = { "capset" },
[186] = { "sigaltstack" },
[187] = { "sendfile" },
[188] = { "getpmsg" },
[189] = { "putpmsg" },
[190] = { "vfork", 0xffffff },
[191] = { "ugetrlimit" },
[192] = { "mmap2", 0x313314 },
[193] = { "truncate64" },
[194] = { "ftruncate64" },
[195] = { "stat64", 0x000045 },
[196] = { "lstat64", 0x000045 },
[197] = { "fstat64", 0x000041 },
[198] = { "lchown32" },
[199] = { "getuid32", 0xffffff },
[200] = { "getgid32", 0xffffff },
[201] = { "geteuid32", 0xffffff },
[202] = { "getegid32", 0xffffff },
[203] = { "setreuid32" },
[204] = { "setregid32" },
[205] = { "getgroups32" },
[206] = { "setgroups32" },
[207] = { "fchown32" },
[208] = { "setresuid32" },
[209] = { "getresuid32" },
[210] = { "setresgid32" },
[211] = { "getresgid32" },
[212] = { "chown32" },
[213] = { "setuid32" },
[214] = { "setgid32" },
[215] = { "setfsuid32" },
[216] = { "setfsgid32" },
[217] = { "pivot_root" },
[218] = { "mincore" },
[219] = { "madvise" },
[220] = { "getdents64" },
[221] = { "fcntl64" },
[223] = { "security" },
[224] = { "gettid" },
[225] = { "readahead" },
[226] = { "setxattr" },
[227] = { "lsetxattr" },
[228] = { "fsetxattr" },
[229] = { "getxattr" },
[230] = { "lgetxattr" },
[231] = { "fgetxattr" },
[232] = { "listxattr" },
[233] = { "llistxattr" },
[234] = { "flistxattr" },
[235] = { "removexattr" },
[236] = { "lremovexattr" },
[237] = { "fremovexattr" },
[238] = { "tkill" },
[239] = { "sendfile64" },
[240] = { "futex" },
[241] = { "sched_setaffinity" },
[242] = { "sched_getaffinity" },
[243] = { "set_thread_area" },
[244] = { "get_thread_area" },
[245] = { "io_setup" },
[246] = { "io_destroy" },
[247] = { "io_getevents" },
[248] = { "io_submit" },
[249] = { "io_cancel" },
[250] = { "fadvise64" },
[252] = { "exit_group", 0x000001 },
[253] = { "lookup_dcookie" },
[254] = { "epoll_create" },
[255] = { "epoll_ctl" },
[256] = { "epoll_wait" },
[257] = { "remap_file_pages" },
[258] = { "set_tid_address" },
[259] = { "timer_create" },
[260] = { "timer_settime" },
[261] = { "timer_gettime" },
[262] = { "timer_getoverrun" },
[263] = { "timer_delete" },
[264] = { "clock_settime" },
[265] = { "clock_gettime" },
[266] = { "clock_getres" },
[267] = { "clock_nanosleep" },
[268] = { "statfs64" },
[269] = { "fstatfs64" },
[270] = { "tgkill" },
[271] = { "utimes" },
[272] = { "fadvise64_64" },
[273] = { "vserver" },
[274] = { "mbind" },
[275] = { "get_mempolicy" },
[276] = { "set_mempolicy" },
[277] = { "mq_open" },
[278] = { "mq_unlink" },
[279] = { "mq_timedsend" },
[280] = { "mq_timedreceive" },
[281] = { "mq_notify" },
[282] = { "mq_getsetattr" },
[283] = { "sys_kexec_load" },
};
asmlinkage void do_syscall_trace(int leaving)
/*
* handle tracing of system call entry
* - return the revised system call number or ULONG_MAX to cause ENOSYS
*/
asmlinkage unsigned long syscall_trace_entry(void)
{
#if 0
unsigned long *argp;
const char *name;
unsigned argmask;
char buffer[16];
if (!kstrace)
return;
if (!current->mm)
return;
if (__frame->gr7 == __NR_close)
return;
#if 0
if (__frame->gr7 != __NR_mmap2 &&
__frame->gr7 != __NR_vfork &&
__frame->gr7 != __NR_execve &&
__frame->gr7 != __NR_exit)
return;
#endif
argmask = 0;
name = NULL;
if (__frame->gr7 < NR_syscalls) {
name = __syscall_name_table[__frame->gr7].name;
argmask = __syscall_name_table[__frame->gr7].argmask;
}
if (!name) {
sprintf(buffer, "sys_%lx", __frame->gr7);
name = buffer;
__frame->__status |= REG__STATUS_SYSC_ENTRY;
if (tracehook_report_syscall_entry(__frame)) {
/* tracing decided this syscall should not happen, so
* we'll return a bogus call number to get an ENOSYS
* error, but leave the original number in
* __frame->syscallno
*/
return ULONG_MAX;
}
if (!leaving) {
if (!argmask) {
printk(KERN_CRIT "[%d] %s(%lx,%lx,%lx,%lx,%lx,%lx)\n",
current->pid,
name,
__frame->gr8,
__frame->gr9,
__frame->gr10,
__frame->gr11,
__frame->gr12,
__frame->gr13);
}
else if (argmask == 0xffffff) {
printk(KERN_CRIT "[%d] %s()\n",
current->pid,
name);
}
else {
printk(KERN_CRIT "[%d] %s(",
current->pid,
name);
argp = &__frame->gr8;
do {
switch (argmask & 0xf) {
case 1:
printk("%ld", (long) *argp);
break;
case 2:
printk("%lo", *argp);
break;
case 3:
printk("%lx", *argp);
break;
case 4:
printk("%p", (void *) *argp);
break;
case 5:
printk("\"%s\"", (char *) *argp);
break;
}
argp++;
argmask >>= 4;
if (argmask)
printk(",");
} while (argmask);
printk(")\n");
}
}
else {
if ((int)__frame->gr8 > -4096 && (int)__frame->gr8 < 4096)
printk(KERN_CRIT "[%d] %s() = %ld\n", current->pid, name, __frame->gr8);
else
printk(KERN_CRIT "[%d] %s() = %lx\n", current->pid, name, __frame->gr8);
}
return;
#endif
if (!test_thread_flag(TIF_SYSCALL_TRACE))
return;
if (!(current->ptrace & PT_PTRACED))
return;
/* we need to indicate entry or exit to strace */
if (leaving)
__frame->__status |= REG__STATUS_SYSC_EXIT;
else
__frame->__status |= REG__STATUS_SYSC_ENTRY;
ptrace_notify(SIGTRAP);
/*
* this isn't the same as continuing with a signal, but it will do
* for normal use. strace only continues with a signal if the
* stopping signal is not SIGTRAP. -brl
*/
if (current->exit_code) {
send_sig(current->exit_code, current, 1);
current->exit_code = 0;
}
return __frame->syscallno;
}
/*
* handle tracing of system call exit
*/
asmlinkage void syscall_trace_exit(void)
{
__frame->__status |= REG__STATUS_SYSC_EXIT;
tracehook_report_syscall_exit(__frame, 0);
}


@ -21,6 +21,7 @@
#include <linux/unistd.h>
#include <linux/personality.h>
#include <linux/freezer.h>
#include <linux/tracehook.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
#include <asm/cacheflush.h>
@ -516,6 +517,9 @@ static void do_signal(void)
* clear the TIF_RESTORE_SIGMASK flag */
if (test_thread_flag(TIF_RESTORE_SIGMASK))
clear_thread_flag(TIF_RESTORE_SIGMASK);
tracehook_signal_handler(signr, &info, &ka, __frame,
test_thread_flag(TIF_SINGLESTEP));
}
return;
@ -564,4 +568,10 @@ asmlinkage void do_notify_resume(__u32 thread_info_flags)
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal();
/* deal with notification on about to resume userspace execution */
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(__frame);
}
} /* end do_notify_resume() */


@ -23,8 +23,7 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
char *p, ch;
long err = -EFAULT;
if (count < 0)
BUG();
BUG_ON(count < 0);
p = dst;
@ -76,8 +75,7 @@ long strnlen_user(const char __user *src, long count)
long err = 0;
char ch;
if (count < 0)
BUG();
BUG_ON(count < 0);
#ifndef CONFIG_MMU
if ((unsigned long) src < memory_start)


@ -116,8 +116,7 @@ EXPORT_SYMBOL(dma_free_coherent);
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
if (direction == DMA_NONE)
BUG();
BUG_ON(direction == DMA_NONE);
frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
@ -151,8 +150,7 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
frv_cache_wback_inv(sg_dma_address(&sg[i]),
sg_dma_address(&sg[i]) + sg_dma_len(&sg[i]));
if (direction == DMA_NONE)
BUG();
BUG_ON(direction == DMA_NONE);
return nents;
}


@ -48,8 +48,7 @@ EXPORT_SYMBOL(dma_free_coherent);
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
if (direction == DMA_NONE)
BUG();
BUG_ON(direction == DMA_NONE);
frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
@ -81,8 +80,7 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
void *vaddr;
int i;
if (direction == DMA_NONE)
BUG();
BUG_ON(direction == DMA_NONE);
dampr2 = __get_DAMPR(2);


@ -371,6 +371,7 @@ struct kvm_vcpu_arch {
int last_run_cpu;
int vmm_tr_slot;
int vm_tr_slot;
int sn_rtc_tr_slot;
#define KVM_MP_STATE_RUNNABLE 0
#define KVM_MP_STATE_UNINITIALIZED 1
@ -465,6 +466,7 @@ struct kvm_arch {
unsigned long vmm_init_rr;
int online_vcpus;
int is_sn2;
struct kvm_ioapic *vioapic;
struct kvm_vm_stat stat;
@ -472,6 +474,7 @@ struct kvm_arch {
struct list_head assigned_dev_head;
struct iommu_domain *iommu_domain;
int iommu_flags;
struct hlist_head irq_ack_notifier_list;
unsigned long irq_sources_bitmap;
@ -578,6 +581,8 @@ struct kvm_vmm_info{
kvm_vmm_entry *vmm_entry;
kvm_tramp_entry *tramp_entry;
unsigned long vmm_ivt;
unsigned long patch_mov_ar;
unsigned long patch_mov_ar_sn2;
};
int kvm_highest_pending_irq(struct kvm_vcpu *vcpu);
@ -585,7 +590,6 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu);
int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
void kvm_sal_emul(struct kvm_vcpu *vcpu);
static inline void kvm_inject_nmi(struct kvm_vcpu *vcpu) {}
#endif /* __ASSEMBLY__*/
#endif


@ -146,6 +146,8 @@
#define PAGE_GATE __pgprot(__ACCESS_BITS | _PAGE_PL_0 | _PAGE_AR_X_RX)
#define PAGE_KERNEL __pgprot(__DIRTY_BITS | _PAGE_PL_0 | _PAGE_AR_RWX)
#define PAGE_KERNELRX __pgprot(__ACCESS_BITS | _PAGE_PL_0 | _PAGE_AR_RX)
#define PAGE_KERNEL_UC __pgprot(__DIRTY_BITS | _PAGE_PL_0 | _PAGE_AR_RWX | \
_PAGE_MA_UC)
# ifndef __ASSEMBLY__


@ -610,6 +610,9 @@ static struct irqaction ipi_irqaction = {
.name = "IPI"
};
/*
* KVM uses this interrupt to force a cpu out of guest mode
*/
static struct irqaction resched_irqaction = {
.handler = dummy_handler,
.flags = IRQF_DISABLED,


@ -23,7 +23,7 @@ if VIRTUALIZATION
config KVM
tristate "Kernel-based Virtual Machine (KVM) support"
depends on HAVE_KVM && EXPERIMENTAL
depends on HAVE_KVM && MODULES && EXPERIMENTAL
# for device assignment:
depends on PCI
select PREEMPT_NOTIFIERS


@ -41,6 +41,9 @@
#include <asm/div64.h>
#include <asm/tlb.h>
#include <asm/elf.h>
#include <asm/sn/addrs.h>
#include <asm/sn/clksupport.h>
#include <asm/sn/shub_mmr.h>
#include "misc.h"
#include "vti.h"
@ -65,6 +68,16 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ NULL }
};
static unsigned long kvm_get_itc(struct kvm_vcpu *vcpu)
{
#if defined(CONFIG_IA64_SGI_SN2) || defined(CONFIG_IA64_GENERIC)
if (vcpu->kvm->arch.is_sn2)
return rtc_time();
else
#endif
return ia64_getreg(_IA64_REG_AR_ITC);
}
static void kvm_flush_icache(unsigned long start, unsigned long len)
{
int l;
@ -119,8 +132,7 @@ void kvm_arch_hardware_enable(void *garbage)
unsigned long saved_psr;
int slot;
pte = pte_val(mk_pte_phys(__pa(kvm_vmm_base),
PAGE_KERNEL));
pte = pte_val(mk_pte_phys(__pa(kvm_vmm_base), PAGE_KERNEL));
local_irq_save(saved_psr);
slot = ia64_itr_entry(0x3, KVM_VMM_BASE, pte, KVM_VMM_SHIFT);
local_irq_restore(saved_psr);
@ -283,6 +295,18 @@ static int handle_sal_call(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
}
static int __apic_accept_irq(struct kvm_vcpu *vcpu, uint64_t vector)
{
struct vpd *vpd = to_host(vcpu->kvm, vcpu->arch.vpd);
if (!test_and_set_bit(vector, &vpd->irr[0])) {
vcpu->arch.irq_new_pending = 1;
kvm_vcpu_kick(vcpu);
return 1;
}
return 0;
}
/*
* offset: address offset to IPI space.
* value: deliver value.
@ -292,20 +316,20 @@ static void vcpu_deliver_ipi(struct kvm_vcpu *vcpu, uint64_t dm,
{
switch (dm) {
case SAPIC_FIXED:
kvm_apic_set_irq(vcpu, vector, 0);
break;
case SAPIC_NMI:
kvm_apic_set_irq(vcpu, 2, 0);
vector = 2;
break;
case SAPIC_EXTINT:
kvm_apic_set_irq(vcpu, 0, 0);
vector = 0;
break;
case SAPIC_INIT:
case SAPIC_PMI:
default:
printk(KERN_ERR"kvm: Unimplemented Deliver reserved IPI!\n");
break;
return;
}
__apic_accept_irq(vcpu, vector);
}
static struct kvm_vcpu *lid_to_vcpu(struct kvm *kvm, unsigned long id,
@ -413,6 +437,23 @@ static int handle_switch_rr6(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return 1;
}
static int kvm_sn2_setup_mappings(struct kvm_vcpu *vcpu)
{
unsigned long pte, rtc_phys_addr, map_addr;
int slot;
map_addr = KVM_VMM_BASE + (1UL << KVM_VMM_SHIFT);
rtc_phys_addr = LOCAL_MMR_OFFSET | SH_RTC;
pte = pte_val(mk_pte_phys(rtc_phys_addr, PAGE_KERNEL_UC));
slot = ia64_itr_entry(0x3, map_addr, pte, PAGE_SHIFT);
vcpu->arch.sn_rtc_tr_slot = slot;
if (slot < 0) {
printk(KERN_ERR "Mayday mayday! RTC mapping failed!\n");
slot = 0;
}
return slot;
}
int kvm_emulate_halt(struct kvm_vcpu *vcpu)
{
@ -426,7 +467,7 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
if (irqchip_in_kernel(vcpu->kvm)) {
vcpu_now_itc = ia64_getreg(_IA64_REG_AR_ITC) + vcpu->arch.itc_offset;
vcpu_now_itc = kvm_get_itc(vcpu) + vcpu->arch.itc_offset;
if (time_after(vcpu_now_itc, vpd->itm)) {
vcpu->arch.timer_check = 1;
@ -447,10 +488,10 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
hrtimer_cancel(p_ht);
vcpu->arch.ht_active = 0;
if (test_and_clear_bit(KVM_REQ_UNHALT, &vcpu->requests))
if (test_and_clear_bit(KVM_REQ_UNHALT, &vcpu->requests) ||
kvm_cpu_has_pending_timer(vcpu))
if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
vcpu->arch.mp_state =
KVM_MP_STATE_RUNNABLE;
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
if (vcpu->arch.mp_state != KVM_MP_STATE_RUNNABLE)
return -EINTR;
@ -551,22 +592,35 @@ static int kvm_insert_vmm_mapping(struct kvm_vcpu *vcpu)
if (r < 0)
goto out;
vcpu->arch.vm_tr_slot = r;
#if defined(CONFIG_IA64_SGI_SN2) || defined(CONFIG_IA64_GENERIC)
if (kvm->arch.is_sn2) {
r = kvm_sn2_setup_mappings(vcpu);
if (r < 0)
goto out;
}
#endif
r = 0;
out:
return r;
}
static void kvm_purge_vmm_mapping(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
ia64_ptr_entry(0x3, vcpu->arch.vmm_tr_slot);
ia64_ptr_entry(0x3, vcpu->arch.vm_tr_slot);
#if defined(CONFIG_IA64_SGI_SN2) || defined(CONFIG_IA64_GENERIC)
if (kvm->arch.is_sn2)
ia64_ptr_entry(0x3, vcpu->arch.sn_rtc_tr_slot);
#endif
}
static int kvm_vcpu_pre_transition(struct kvm_vcpu *vcpu)
{
unsigned long psr;
int r;
int cpu = smp_processor_id();
if (vcpu->arch.last_run_cpu != cpu ||
@ -578,36 +632,27 @@ static int kvm_vcpu_pre_transition(struct kvm_vcpu *vcpu)
vcpu->arch.host_rr6 = ia64_get_rr(RR6);
vti_set_rr6(vcpu->arch.vmm_rr);
return kvm_insert_vmm_mapping(vcpu);
local_irq_save(psr);
r = kvm_insert_vmm_mapping(vcpu);
local_irq_restore(psr);
return r;
}
static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu)
{
kvm_purge_vmm_mapping(vcpu);
vti_set_rr6(vcpu->arch.host_rr6);
}
static int vti_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
union context *host_ctx, *guest_ctx;
int r;
/*Get host and guest context with guest address space.*/
host_ctx = kvm_get_host_context(vcpu);
guest_ctx = kvm_get_guest_context(vcpu);
r = kvm_vcpu_pre_transition(vcpu);
if (r < 0)
goto out;
kvm_vmm_info->tramp_entry(host_ctx, guest_ctx);
kvm_vcpu_post_transition(vcpu);
r = 0;
out:
return r;
}
static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
int r;
/*
* down_read() may sleep and return with interrupts enabled
*/
down_read(&vcpu->kvm->slots_lock);
again:
if (signal_pending(current)) {
@ -616,26 +661,31 @@ again:
goto out;
}
/*
* down_read() may sleep and return with interrupts enabled
*/
down_read(&vcpu->kvm->slots_lock);
preempt_disable();
local_irq_disable();
vcpu->guest_mode = 1;
/*Get host and guest context with guest address space.*/
host_ctx = kvm_get_host_context(vcpu);
guest_ctx = kvm_get_guest_context(vcpu);
clear_bit(KVM_REQ_KICK, &vcpu->requests);
r = kvm_vcpu_pre_transition(vcpu);
if (r < 0)
goto vcpu_run_fail;
up_read(&vcpu->kvm->slots_lock);
kvm_guest_enter();
r = vti_vcpu_run(vcpu, kvm_run);
if (r < 0) {
local_irq_enable();
preempt_enable();
kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
goto out;
}
/*
* Transition to the guest
*/
kvm_vmm_info->tramp_entry(host_ctx, guest_ctx);
kvm_vcpu_post_transition(vcpu);
vcpu->arch.launched = 1;
vcpu->guest_mode = 0;
set_bit(KVM_REQ_KICK, &vcpu->requests);
local_irq_enable();
/*
@ -646,9 +696,10 @@ again:
*/
barrier();
kvm_guest_exit();
up_read(&vcpu->kvm->slots_lock);
preempt_enable();
down_read(&vcpu->kvm->slots_lock);
r = kvm_handle_exit(kvm_run, vcpu);
if (r > 0) {
@ -657,12 +708,20 @@ again:
}
out:
up_read(&vcpu->kvm->slots_lock);
if (r > 0) {
kvm_resched(vcpu);
down_read(&vcpu->kvm->slots_lock);
goto again;
}
return r;
vcpu_run_fail:
local_irq_enable();
preempt_enable();
kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
goto out;
}
static void kvm_set_mmio_data(struct kvm_vcpu *vcpu)
@ -788,6 +847,9 @@ struct kvm *kvm_arch_create_vm(void)
if (IS_ERR(kvm))
return ERR_PTR(-ENOMEM);
kvm->arch.is_sn2 = ia64_platform_is("sn2");
kvm_init_vm(kvm);
kvm->arch.online_vcpus = 0;
@ -884,7 +946,7 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
RESTORE_REGS(saved_gp);
vcpu->arch.irq_new_pending = 1;
vcpu->arch.itc_offset = regs->saved_itc - ia64_getreg(_IA64_REG_AR_ITC);
vcpu->arch.itc_offset = regs->saved_itc - kvm_get_itc(vcpu);
set_bit(KVM_REQ_RESUME, &vcpu->requests);
vcpu_put(vcpu);
@ -1043,10 +1105,6 @@ static void kvm_free_vmm_area(void)
}
}
static void vti_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
}
static int vti_init_vpd(struct kvm_vcpu *vcpu)
{
int i;
@ -1165,7 +1223,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
regs->cr_iip = PALE_RESET_ENTRY;
/*Initialize itc offset for vcpus*/
itc_offset = 0UL - ia64_getreg(_IA64_REG_AR_ITC);
itc_offset = 0UL - kvm_get_itc(vcpu);
for (i = 0; i < kvm->arch.online_vcpus; i++) {
v = (struct kvm_vcpu *)((char *)vcpu +
sizeof(struct kvm_vcpu_data) * i);
@ -1237,6 +1295,7 @@ static int vti_vcpu_setup(struct kvm_vcpu *vcpu, int id)
local_irq_save(psr);
r = kvm_insert_vmm_mapping(vcpu);
local_irq_restore(psr);
if (r)
goto fail;
r = kvm_vcpu_init(vcpu, vcpu->kvm, id);
@ -1254,13 +1313,11 @@ static int vti_vcpu_setup(struct kvm_vcpu *vcpu, int id)
goto uninit;
kvm_purge_vmm_mapping(vcpu);
local_irq_restore(psr);
return 0;
uninit:
kvm_vcpu_uninit(vcpu);
fail:
local_irq_restore(psr);
return r;
}
@ -1291,7 +1348,6 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
vcpu->kvm = kvm;
cpu = get_cpu();
vti_vcpu_load(vcpu, cpu);
r = vti_vcpu_setup(vcpu, id);
put_cpu();
@ -1427,7 +1483,7 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
}
for (i = 0; i < 4; i++)
regs->insvc[i] = vcpu->arch.insvc[i];
regs->saved_itc = vcpu->arch.itc_offset + ia64_getreg(_IA64_REG_AR_ITC);
regs->saved_itc = vcpu->arch.itc_offset + kvm_get_itc(vcpu);
SAVE_REGS(xtp);
SAVE_REGS(metaphysical_rr0);
SAVE_REGS(metaphysical_rr4);
@ -1574,6 +1630,7 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
void kvm_arch_flush_shadow(struct kvm *kvm)
{
kvm_flush_remote_tlbs(kvm);
}
long kvm_arch_dev_ioctl(struct file *filp,
@ -1616,8 +1673,37 @@ out:
return 0;
}
/*
* On SN2, the ITC isn't stable, so copy in fast path code to use the
* SN2 RTC, replacing the ITC-based default version.
*/
static void kvm_patch_vmm(struct kvm_vmm_info *vmm_info,
struct module *module)
{
unsigned long new_ar, new_ar_sn2;
unsigned long module_base;
if (!ia64_platform_is("sn2"))
return;
module_base = (unsigned long)module->module_core;
new_ar = kvm_vmm_base + vmm_info->patch_mov_ar - module_base;
new_ar_sn2 = kvm_vmm_base + vmm_info->patch_mov_ar_sn2 - module_base;
printk(KERN_INFO "kvm: Patching ITC emulation to use SGI SN2 RTC "
"as source\n");
/*
* Copy the SN2 version of mov_ar into place. They are both
* the same size, so 6 bundles is sufficient (6 * 0x10).
*/
memcpy((void *)new_ar, (void *)new_ar_sn2, 0x60);
}
static int kvm_relocate_vmm(struct kvm_vmm_info *vmm_info,
struct module *module)
struct module *module)
{
unsigned long module_base;
unsigned long vmm_size;
@ -1639,6 +1725,7 @@ static int kvm_relocate_vmm(struct kvm_vmm_info *vmm_info,
return -EFAULT;
memcpy((void *)kvm_vmm_base, (void *)module_base, vmm_size);
kvm_patch_vmm(vmm_info, module);
kvm_flush_icache(kvm_vmm_base, vmm_size);
/*Recalculate kvm_vmm_info based on new VMM*/
@ -1792,38 +1879,24 @@ void kvm_arch_hardware_unsetup(void)
{
}
static void vcpu_kick_intr(void *info)
{
#ifdef DEBUG
struct kvm_vcpu *vcpu = (struct kvm_vcpu *)info;
printk(KERN_DEBUG"vcpu_kick_intr %p \n", vcpu);
#endif
}
void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
{
int ipi_pcpu = vcpu->cpu;
int cpu = get_cpu();
int me;
int cpu = vcpu->cpu;
if (waitqueue_active(&vcpu->wq))
wake_up_interruptible(&vcpu->wq);
if (vcpu->guest_mode && cpu != ipi_pcpu)
smp_call_function_single(ipi_pcpu, vcpu_kick_intr, vcpu, 0);
me = get_cpu();
if (cpu != me && (unsigned) cpu < nr_cpu_ids && cpu_online(cpu))
if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
smp_send_reschedule(cpu);
put_cpu();
}
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, u8 vec, u8 trig)
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq)
{
struct vpd *vpd = to_host(vcpu->kvm, vcpu->arch.vpd);
if (!test_and_set_bit(vec, &vpd->irr[0])) {
vcpu->arch.irq_new_pending = 1;
kvm_vcpu_kick(vcpu);
return 1;
}
return 0;
return __apic_accept_irq(vcpu, irq->vector);
}
int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest)
@ -1836,20 +1909,18 @@ int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda)
return 0;
}
struct kvm_vcpu *kvm_get_lowest_prio_vcpu(struct kvm *kvm, u8 vector,
unsigned long bitmap)
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2)
{
struct kvm_vcpu *lvcpu = kvm->vcpus[0];
int i;
return vcpu1->arch.xtp - vcpu2->arch.xtp;
}
for (i = 1; i < kvm->arch.online_vcpus; i++) {
if (!kvm->vcpus[i])
continue;
if (lvcpu->arch.xtp > kvm->vcpus[i]->arch.xtp)
lvcpu = kvm->vcpus[i];
}
return lvcpu;
int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
int short_hand, int dest, int dest_mode)
{
struct kvm_lapic *target = vcpu->arch.apic;
return (dest_mode == 0) ?
kvm_apic_match_physical_addr(target, dest) :
kvm_apic_match_logical_addr(target, dest);
}
static int find_highest_bits(int *dat)
@ -1888,6 +1959,12 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu)
return 0;
}
int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
{
/* do real check here */
return 1;
}
int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
{
return vcpu->arch.timer_fired;
@ -1918,6 +1995,7 @@ static int vcpu_reset(struct kvm_vcpu *vcpu)
long psr;
local_irq_save(psr);
r = kvm_insert_vmm_mapping(vcpu);
local_irq_restore(psr);
if (r)
goto fail;
@ -1930,7 +2008,6 @@ static int vcpu_reset(struct kvm_vcpu *vcpu)
kvm_purge_vmm_mapping(vcpu);
r = 0;
fail:
local_irq_restore(psr);
return r;
}


@ -21,6 +21,9 @@
#include <linux/kvm_host.h>
#include <linux/smp.h>
#include <asm/sn/addrs.h>
#include <asm/sn/clksupport.h>
#include <asm/sn/shub_mmr.h>
#include "vti.h"
#include "misc.h"
@ -188,12 +191,35 @@ static struct ia64_pal_retval pal_freq_base(struct kvm_vcpu *vcpu)
return result;
}
/*
* On the SGI SN2, the ITC isn't stable. Emulation backed by the SN2
* RTC is used instead. This function patches the ratios from SAL
* to match the RTC before providing them to the guest.
*/
static void sn2_patch_itc_freq_ratios(struct ia64_pal_retval *result)
{
struct pal_freq_ratio *ratio;
unsigned long sal_freq, sal_drift, factor;
result->status = ia64_sal_freq_base(SAL_FREQ_BASE_PLATFORM,
&sal_freq, &sal_drift);
ratio = (struct pal_freq_ratio *)&result->v2;
factor = ((sal_freq * 3) + (sn_rtc_cycles_per_second / 2)) /
sn_rtc_cycles_per_second;
ratio->num = 3;
ratio->den = factor;
}
static struct ia64_pal_retval pal_freq_ratios(struct kvm_vcpu *vcpu)
{
struct ia64_pal_retval result;
PAL_CALL(result, PAL_FREQ_RATIOS, 0, 0, 0);
if (vcpu->kvm->arch.is_sn2)
sn2_patch_itc_freq_ratios(&result);
return result;
}
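To see what the factor computation above achieves: SAL reports the platform base frequency f_sal, the SN2 RTC ticks at f_rtc (sn_rtc_cycles_per_second), and the patched ratio becomes num/den = 3 / round(3 * f_sal / f_rtc). The guest then derives an ITC frequency of f_sal * 3 / factor, which is approximately f_rtc, so the emulated ITC appears to tick at the RTC rate. A worked example with made-up frequencies (the real values come from SAL and sn_rtc_cycles_per_second):

#include <stdio.h>

int main(void)
{
	unsigned long sal_freq = 200000000UL;	/* illustrative 200 MHz base */
	unsigned long rtc_freq = 50000000UL;	/* illustrative 50 MHz SN2 RTC */

	/* same round-to-nearest computation as sn2_patch_itc_freq_ratios() */
	unsigned long factor = ((sal_freq * 3) + (rtc_freq / 2)) / rtc_freq;

	printf("ratio = 3/%lu, guest ITC ~= %lu Hz\n",
	       factor, sal_freq * 3 / factor);
	return 0;
}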


@ -20,6 +20,10 @@ void kvm_free_lapic(struct kvm_vcpu *vcpu);
int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, u8 vec, u8 trig);
int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
int short_hand, int dest, int dest_mode);
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq);
#define kvm_apic_present(x) (true)
#endif

Просмотреть файл

@ -11,6 +11,7 @@
#include <asm/asmmacro.h>
#include <asm/processor.h>
#include <asm/kvm_host.h>
#include "vti.h"
#include "asm-offsets.h"
@ -140,6 +141,35 @@ GLOBAL_ENTRY(kvm_asm_mov_from_ar)
;;
END(kvm_asm_mov_from_ar)
/*
* Special SGI SN2 optimized version of mov_from_ar using the SN2 RTC
* clock as its source for emulating the ITC. This version will be
* copied on top of the original version if the host is determined to
* be an SN2.
*/
GLOBAL_ENTRY(kvm_asm_mov_from_ar_sn2)
add r18=VMM_VCPU_ITC_OFS_OFFSET, r21
movl r19 = (KVM_VMM_BASE+(1<<KVM_VMM_SHIFT))
add r16=VMM_VCPU_LAST_ITC_OFFSET,r21
extr.u r17=r25,6,7
mov r24=b0
;;
ld8 r18=[r18]
ld8 r19=[r19]
addl r20=@gprel(asm_mov_to_reg),gp
;;
add r19=r19,r18
shladd r17=r17,4,r20
;;
adds r30=kvm_resume_to_guest-asm_mov_to_reg,r20
st8 [r16] = r19
mov b0=r17
br.sptk.few b0
;;
END(kvm_asm_mov_from_ar_sn2)
// mov r1=rr[r3]
GLOBAL_ENTRY(kvm_asm_mov_from_rr)

Просмотреть файл

@ -652,20 +652,25 @@ void kvm_ia64_handle_break(unsigned long ifa, struct kvm_pt_regs *regs,
unsigned long isr, unsigned long iim)
{
struct kvm_vcpu *v = current_vcpu;
long psr;
if (ia64_psr(regs)->cpl == 0) {
/* Allow hypercalls only when cpl = 0. */
if (iim == DOMN_PAL_REQUEST) {
local_irq_save(psr);
set_pal_call_data(v);
vmm_transition(v);
get_pal_call_result(v);
vcpu_increment_iip(v);
local_irq_restore(psr);
return;
} else if (iim == DOMN_SAL_REQUEST) {
local_irq_save(psr);
set_sal_call_data(v);
vmm_transition(v);
get_sal_call_result(v);
vcpu_increment_iip(v);
local_irq_restore(psr);
return;
}
}

Просмотреть файл

@ -788,13 +788,29 @@ void vcpu_set_fpreg(struct kvm_vcpu *vcpu, unsigned long reg,
setfpreg(reg, val, regs); /* FIXME: handle NATs later*/
}
/*
* The Altix RTC is mapped specially here for the vmm module
*/
#define SN_RTC_BASE (u64 *)(KVM_VMM_BASE+(1UL<<KVM_VMM_SHIFT))
static long kvm_get_itc(struct kvm_vcpu *vcpu)
{
#if defined(CONFIG_IA64_SGI_SN2) || defined(CONFIG_IA64_GENERIC)
struct kvm *kvm = (struct kvm *)KVM_VM_BASE;
if (kvm->arch.is_sn2)
return (*SN_RTC_BASE);
else
#endif
return ia64_getreg(_IA64_REG_AR_ITC);
}
/************************************************************************
* lsapic timer
***********************************************************************/
u64 vcpu_get_itc(struct kvm_vcpu *vcpu)
{
unsigned long guest_itc;
guest_itc = VMX(vcpu, itc_offset) + ia64_getreg(_IA64_REG_AR_ITC);
guest_itc = VMX(vcpu, itc_offset) + kvm_get_itc(vcpu);
if (guest_itc >= VMX(vcpu, last_itc)) {
VMX(vcpu, last_itc) = guest_itc;
@ -809,7 +825,7 @@ static void vcpu_set_itc(struct kvm_vcpu *vcpu, u64 val)
struct kvm_vcpu *v;
struct kvm *kvm;
int i;
long itc_offset = val - ia64_getreg(_IA64_REG_AR_ITC);
long itc_offset = val - kvm_get_itc(vcpu);
unsigned long vitv = VCPU(vcpu, itv);
kvm = (struct kvm *)KVM_VM_BASE;

Просмотреть файл

@ -30,15 +30,19 @@ MODULE_AUTHOR("Intel");
MODULE_LICENSE("GPL");
extern char kvm_ia64_ivt;
extern char kvm_asm_mov_from_ar;
extern char kvm_asm_mov_from_ar_sn2;
extern fpswa_interface_t *vmm_fpswa_interface;
long vmm_sanity = 1;
struct kvm_vmm_info vmm_info = {
.module = THIS_MODULE,
.vmm_entry = vmm_entry,
.tramp_entry = vmm_trampoline,
.vmm_ivt = (unsigned long)&kvm_ia64_ivt,
.module = THIS_MODULE,
.vmm_entry = vmm_entry,
.tramp_entry = vmm_trampoline,
.vmm_ivt = (unsigned long)&kvm_ia64_ivt,
.patch_mov_ar = (unsigned long)&kvm_asm_mov_from_ar,
.patch_mov_ar_sn2 = (unsigned long)&kvm_asm_mov_from_ar_sn2,
};
static int __init kvm_vmm_init(void)

Просмотреть файл

@ -95,7 +95,7 @@ GLOBAL_ENTRY(kvm_vmm_panic)
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15) ssm psr.i // restore psr.i
(p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@ -249,7 +249,7 @@ ENTRY(kvm_break_fault)
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15)ssm psr.i // restore psr.i
(p15)ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@ -439,7 +439,7 @@ kvm_dispatch_vexirq:
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15) ssm psr.i // restore psr.i
(p15) ssm psr.i // restore psr.i
adds r3=8,r2 // set up second base pointer
;;
KVM_SAVE_REST
@ -819,7 +819,7 @@ ENTRY(kvm_dtlb_miss_dispatch)
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15) ssm psr.i // restore psr.i
(p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor_prepare),gp
;;
KVM_SAVE_REST
@ -842,7 +842,7 @@ ENTRY(kvm_itlb_miss_dispatch)
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15) ssm psr.i // restore psr.i
(p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@ -871,7 +871,7 @@ ENTRY(kvm_dispatch_reflection)
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15) ssm psr.i // restore psr.i
(p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@ -898,7 +898,7 @@ ENTRY(kvm_dispatch_virtualization_fault)
;;
srlz.i // guarantee that interruption collection is on
;;
//(p15) ssm psr.i // restore psr.i
(p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor_prepare),gp
;;
KVM_SAVE_REST
@ -920,7 +920,7 @@ ENTRY(kvm_dispatch_interrupt)
;;
srlz.i
;;
//(p15) ssm psr.i
(p15) ssm psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@ -1333,7 +1333,7 @@ hostret = r24
;;
(p7) srlz.i
;;
//(p6) ssm psr.i
(p6) ssm psr.i
;;
mov rp=rpsave
mov ar.pfs=pfssave

Просмотреть файл

@ -254,7 +254,8 @@ u64 guest_vhpt_lookup(u64 iha, u64 *pte)
"(p7) st8 [%2]=r9;;"
"ssm psr.ic;;"
"srlz.d;;"
/* "ssm psr.i;;" Once interrupts in vmm open, need fix*/
"ssm psr.i;;"
"srlz.d;;"
: "=r"(ret) : "r"(iha), "r"(pte):"memory");
return ret;

Просмотреть файл

@ -72,6 +72,7 @@ config MIPS_COBALT
select IRQ_CPU
select IRQ_GT641XX
select PCI_GT64XXX_PCI0
select PCI
select SYS_HAS_CPU_NEVADA
select SYS_HAS_EARLY_PRINTK
select SYS_SUPPORTS_32BIT_KERNEL
@ -593,7 +594,7 @@ config WR_PPMC
board, which is based on GT64120 bridge chip.
config CAVIUM_OCTEON_SIMULATOR
bool "Support for the Cavium Networks Octeon Simulator"
bool "Cavium Networks Octeon Simulator"
select CEVT_R4K
select 64BIT_PHYS_ADDR
select DMA_COHERENT
@ -607,7 +608,7 @@ config CAVIUM_OCTEON_SIMULATOR
hardware.
config CAVIUM_OCTEON_REFERENCE_BOARD
bool "Support for the Cavium Networks Octeon reference board"
bool "Cavium Networks Octeon reference board"
select CEVT_R4K
select 64BIT_PHYS_ADDR
select DMA_COHERENT

Просмотреть файл

@ -39,8 +39,8 @@ struct cache_desc {
#define MIPS_CACHE_PINDEX 0x00000020 /* Physically indexed cache */
struct cpuinfo_mips {
unsigned long udelay_val;
unsigned long asid_cache;
unsigned int udelay_val;
unsigned int asid_cache;
/*
* Capability and feature descriptor structure for MIPS CPU

Просмотреть файл

@ -11,94 +11,12 @@
#ifndef _ASM_DELAY_H
#define _ASM_DELAY_H
#include <linux/param.h>
#include <linux/smp.h>
extern void __delay(unsigned int loops);
extern void __ndelay(unsigned int ns);
extern void __udelay(unsigned int us);
#include <asm/compiler.h>
#include <asm/war.h>
static inline void __delay(unsigned long loops)
{
if (sizeof(long) == 4)
__asm__ __volatile__ (
" .set noreorder \n"
" .align 3 \n"
"1: bnez %0, 1b \n"
" subu %0, 1 \n"
" .set reorder \n"
: "=r" (loops)
: "0" (loops));
else if (sizeof(long) == 8 && !DADDI_WAR)
__asm__ __volatile__ (
" .set noreorder \n"
" .align 3 \n"
"1: bnez %0, 1b \n"
" dsubu %0, 1 \n"
" .set reorder \n"
: "=r" (loops)
: "0" (loops));
else if (sizeof(long) == 8 && DADDI_WAR)
__asm__ __volatile__ (
" .set noreorder \n"
" .align 3 \n"
"1: bnez %0, 1b \n"
" dsubu %0, %2 \n"
" .set reorder \n"
: "=r" (loops)
: "0" (loops), "r" (1));
}
/*
* Division by multiplication: you don't have to worry about
* loss of precision.
*
* Use only for very small delays ( < 1 msec). Should probably use a
* lookup table, really, as the multiplications take much too long with
* short delays. This is a "reasonable" implementation, though (and the
* first constant multiplication gets optimized away if the delay is
* a constant)
*/
static inline void __udelay(unsigned long usecs, unsigned long lpj)
{
unsigned long hi, lo;
/*
* The HZ=128 rate is rounded wrongly by the catchall case
* for 64-bit. Excessive precision? Probably ...
*/
#if defined(CONFIG_64BIT) && (HZ == 128)
usecs *= 0x0008637bd05af6c7UL; /* 2**64 / (1000000 / HZ) */
#elif defined(CONFIG_64BIT)
usecs *= (0x8000000000000000UL / (500000 / HZ));
#else /* 32-bit junk follows here */
usecs *= (unsigned long) (((0x8000000000000000ULL / (500000 / HZ)) +
0x80000000ULL) >> 32);
#endif
if (sizeof(long) == 4)
__asm__("multu\t%2, %3"
: "=h" (usecs), "=l" (lo)
: "r" (usecs), "r" (lpj)
: GCC_REG_ACCUM);
else if (sizeof(long) == 8 && !R4000_WAR)
__asm__("dmultu\t%2, %3"
: "=h" (usecs), "=l" (lo)
: "r" (usecs), "r" (lpj)
: GCC_REG_ACCUM);
else if (sizeof(long) == 8 && R4000_WAR)
__asm__("dmultu\t%3, %4\n\tmfhi\t%0"
: "=r" (usecs), "=h" (hi), "=l" (lo)
: "r" (usecs), "r" (lpj)
: GCC_REG_ACCUM);
__delay(usecs);
}
#define __udelay_val cpu_data[raw_smp_processor_id()].udelay_val
#define udelay(usecs) __udelay((usecs), __udelay_val)
#define ndelay(ns) __ndelay(ns)
#define udelay(us) __udelay(us)
/* make sure "usecs *= ..." in udelay do not overflow. */
#if HZ >= 1000

Просмотреть файл

@ -60,12 +60,16 @@
((nr) << _IOC_NRSHIFT) | \
((size) << _IOC_SIZESHIFT))
#ifdef __KERNEL__
/* provoke compile error for invalid uses of size argument */
extern unsigned int __invalid_size_argument_for_IOC;
#define _IOC_TYPECHECK(t) \
((sizeof(t) == sizeof(t[1]) && \
sizeof(t) < (1 << _IOC_SIZEBITS)) ? \
sizeof(t) : __invalid_size_argument_for_IOC)
#else
#define _IOC_TYPECHECK(t) (sizeof(t))
#endif
/* used to create numbers */
#define _IO(type, nr) _IOC(_IOC_NONE, (type), (nr), 0)
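As a usage note, the guarded _IOC_TYPECHECK() above is what the _IOR/_IOW family routes the size argument through. A hedged userspace sketch follows; the device name, magic letter and structure are invented, and the link-time guard itself only exists in kernel builds (the userspace copy of the macro, as in the #else branch above, degrades to a plain sizeof).
#include <stdio.h>
#include <linux/ioctl.h>	/* userspace copy of the _IO* encoding macros */
/* Hypothetical driver ABI, for illustration only. */
struct foo_config {
	int rate;
	int flags;
};
#define FOO_IOC_MAGIC	'F'
#define FOO_SET_CONFIG	_IOW(FOO_IOC_MAGIC, 1, struct foo_config)
int main(void)
{
	/* In a kernel build, _IOW() passes struct foo_config through
	 * _IOC_TYPECHECK(); a size that does not fit in _IOC_SIZEBITS ends
	 * up referencing the undefined __invalid_size_argument_for_IOC
	 * symbol, so the build fails instead of encoding a truncated size.
	 */
	printf("FOO_SET_CONFIG = %#lx (encodes size %zu)\n",
	       (unsigned long)FOO_SET_CONFIG, sizeof(struct foo_config));
	return 0;
}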

Просмотреть файл

@ -42,7 +42,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
seq_printf(m, fmt, __cpu_name[n],
(version >> 4) & 0x0f, version & 0x0f,
(fp_vers >> 4) & 0x0f, fp_vers & 0x0f);
seq_printf(m, "BogoMIPS\t\t: %lu.%02lu\n",
seq_printf(m, "BogoMIPS\t\t: %u.%02u\n",
cpu_data[n].udelay_val / (500000/HZ),
(cpu_data[n].udelay_val / (5000/HZ)) % 100);
seq_printf(m, "wait instruction\t: %s\n", cpu_wait ? "yes" : "no");

Просмотреть файл

@ -2,8 +2,8 @@
# Makefile for MIPS-specific library files..
#
lib-y += csum_partial.o memcpy.o memcpy-inatomic.o memset.o strlen_user.o \
strncpy_user.o strnlen_user.o uncached.o
lib-y += csum_partial.o delay.o memcpy.o memcpy-inatomic.o memset.o \
strlen_user.o strncpy_user.o strnlen_user.o uncached.o
obj-y += iomap.o
obj-$(CONFIG_PCI) += iomap-pci.o

56
arch/mips/lib/delay.c Normal file
Просмотреть файл

@ -0,0 +1,56 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1994 by Waldorf Electronics
* Copyright (C) 1995 - 2000, 01, 03 by Ralf Baechle
* Copyright (C) 1999, 2000 Silicon Graphics, Inc.
* Copyright (C) 2007 Maciej W. Rozycki
*/
#include <linux/module.h>
#include <linux/param.h>
#include <linux/smp.h>
#include <asm/compiler.h>
#include <asm/war.h>
inline void __delay(unsigned int loops)
{
__asm__ __volatile__ (
" .set noreorder \n"
" .align 3 \n"
"1: bnez %0, 1b \n"
" subu %0, 1 \n"
" .set reorder \n"
: "=r" (loops)
: "0" (loops));
}
EXPORT_SYMBOL(__delay);
/*
* Division by multiplication: you don't have to worry about
* loss of precision.
*
* Use only for very small delays ( < 1 msec). Should probably use a
* lookup table, really, as the multiplications take much too long with
* short delays. This is a "reasonable" implementation, though (and the
* first constant multiplication gets optimized away if the delay is
* a constant)
*/
void __udelay(unsigned long us)
{
unsigned int lpj = current_cpu_data.udelay_val;
__delay((us * 0x000010c7ull * HZ * lpj) >> 32);
}
EXPORT_SYMBOL(__udelay);
void __ndelay(unsigned long ns)
{
unsigned int lpj = current_cpu_data.udelay_val;
__delay((ns * 0x00000005ull * HZ * lpj) >> 32);
}
EXPORT_SYMBOL(__ndelay);
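The two magic constants are 2^32-scaled reciprocals: 0x000010c7 is roughly 2^32 / 1000000 and 0x00000005 is 2^32 / 1000000000 rounded up, so the final shift by 32 stands in for a division. A quick userspace sketch of the microsecond case, with a made-up HZ and loops-per-jiffy value:
#include <stdio.h>
#include <stdint.h>
#define EXAMPLE_HZ	100		/* assumed tick rate */
#define EXAMPLE_LPJ	4961280u	/* assumed udelay_val (~992 BogoMIPS) */
int main(void)
{
	uint64_t us = 100;
	/* us * HZ * lpj / 10^6, computed as in __udelay() without dividing */
	uint64_t fixed = (us * 0x000010c7ull * EXAMPLE_HZ * EXAMPLE_LPJ) >> 32;
	uint64_t exact = us * EXAMPLE_HZ * (uint64_t)EXAMPLE_LPJ / 1000000;
	/* Rounding the reciprocal up makes the fixed-point form err on the
	 * long side, which is the safe direction for a delay loop.
	 */
	printf("fixed-point loops: %llu, exact: %llu\n",
	       (unsigned long long)fixed, (unsigned long long)exact);
	return 0;
}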

Просмотреть файл

@ -288,13 +288,7 @@ void __init prom_init(void)
*/
cfe_cons_handle = cfe_getstdhandle(CFE_STDHANDLE_CONSOLE);
if (cfe_getenv("LINUX_CMDLINE", arcs_cmdline, CL_SIZE) < 0) {
if (argc < 0) {
/*
* It's OK for direct boot to not provide a
* command line
*/
strcpy(arcs_cmdline, "root=/dev/ram0 ");
} else {
if (argc >= 0) {
/* The loader should have set the command line */
/* too early for panic to do any good */
printk("LINUX_CMDLINE not defined in cfe.");

Просмотреть файл

@ -8,6 +8,7 @@ mainmenu "Linux Kernel Configuration"
config MN10300
def_bool y
select HAVE_OPROFILE
select HAVE_ARCH_TRACEHOOK
config AM33
def_bool y

Просмотреть файл

@ -34,7 +34,7 @@
*/
typedef unsigned long elf_greg_t;
#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
#define ELF_NGREG ((sizeof(struct pt_regs) / sizeof(elf_greg_t)) - 1)
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
#define ELF_NFPREG 32
@ -76,6 +76,7 @@ do { \
} while (0)
#define USE_ELF_CORE_DUMP
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE 4096
/*

Просмотреть файл

@ -143,13 +143,7 @@ extern unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
#define task_pt_regs(task) \
({ \
struct pt_regs *__regs__; \
__regs__ = (struct pt_regs *) (KSTK_TOP(task_stack_page(task)) - 8); \
__regs__ - 1; \
})
#define task_pt_regs(task) ((task)->thread.uregs)
#define KSTK_EIP(task) (task_pt_regs(task)->pc)
#define KSTK_ESP(task) (task_pt_regs(task)->sp)

Просмотреть файл

@ -91,9 +91,17 @@ extern struct pt_regs *__frame; /* current frame pointer */
#if defined(__KERNEL__)
#if !defined(__ASSEMBLY__)
struct task_struct;
#define user_mode(regs) (((regs)->epsw & EPSW_nSL) == EPSW_nSL)
#define instruction_pointer(regs) ((regs)->pc)
#define user_stack_pointer(regs) ((regs)->sp)
extern void show_regs(struct pt_regs *);
#define arch_has_single_step() (1)
extern void user_enable_single_step(struct task_struct *);
extern void user_disable_single_step(struct task_struct *);
#endif /* !__ASSEMBLY__ */
#define profile_pc(regs) ((regs)->pc)

Просмотреть файл

@ -76,7 +76,7 @@ ENTRY(system_call)
cmp nr_syscalls,d0
bcc syscall_badsys
btst _TIF_SYSCALL_TRACE,(TI_flags,a2)
bne syscall_trace_entry
bne syscall_entry_trace
syscall_call:
add d0,d0,a1
add a1,a1
@ -104,11 +104,10 @@ restore_all:
syscall_exit_work:
btst _TIF_SYSCALL_TRACE,d2
beq work_pending
__sti # could let do_syscall_trace() call
__sti # could let syscall_trace_exit() call
# schedule() instead
mov fp,d0
mov 1,d1
call do_syscall_trace[],0 # do_syscall_trace(regs,entryexit)
call syscall_trace_exit[],0 # syscall_trace_exit(regs)
jmp resume_userspace
ALIGN
@ -138,13 +137,11 @@ work_notifysig:
jmp resume_userspace
# perform syscall entry tracing
syscall_trace_entry:
syscall_entry_trace:
mov -ENOSYS,d0
mov d0,(REG_D0,fp)
mov fp,d0
clr d1
call do_syscall_trace[],0
mov (REG_ORIG_D0,fp),d0
call syscall_trace_entry[],0 # returns the syscall number to actually use
mov (REG_D1,fp),d1
cmp nr_syscalls,d0
bcs syscall_call

Просмотреть файл

@ -17,6 +17,9 @@
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/user.h>
#include <linux/regset.h>
#include <linux/elf.h>
#include <linux/tracehook.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/system.h>
@ -64,12 +67,6 @@ static inline int get_stack_long(struct task_struct *task, int offset)
((unsigned long) task->thread.uregs + offset);
}
/*
* this routine will put a word on the processes privileged stack.
* the offset is how far from the base addr as stored in the TSS.
* this routine assumes that all the privileged stacks are in our
* data space.
*/
static inline
int put_stack_long(struct task_struct *task, int offset, unsigned long data)
{
@ -80,44 +77,191 @@ int put_stack_long(struct task_struct *task, int offset, unsigned long data)
return 0;
}
static inline unsigned long get_fpregs(struct fpu_state_struct *buf,
struct task_struct *tsk)
/*
* retrieve the contents of MN10300 userspace general registers
*/
static int genregs_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
return __copy_to_user(buf, &tsk->thread.fpu_state,
sizeof(struct fpu_state_struct));
}
const struct pt_regs *regs = task_pt_regs(target);
int ret;
static inline unsigned long set_fpregs(struct task_struct *tsk,
struct fpu_state_struct *buf)
{
return __copy_from_user(&tsk->thread.fpu_state, buf,
sizeof(struct fpu_state_struct));
}
/* we need to skip regs->next */
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
regs, 0, PT_ORIG_D0 * sizeof(long));
if (ret < 0)
return ret;
static inline void fpsave_init(struct task_struct *task)
{
memset(&task->thread.fpu_state, 0, sizeof(struct fpu_state_struct));
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&regs->orig_d0, PT_ORIG_D0 * sizeof(long),
NR_PTREGS * sizeof(long));
if (ret < 0)
return ret;
return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
NR_PTREGS * sizeof(long), -1);
}
/*
* make sure the single step bit is not set
* update the contents of the MN10300 userspace general registers
*/
void ptrace_disable(struct task_struct *child)
static int genregs_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
#ifndef CONFIG_MN10300_USING_JTAG
struct user *dummy = NULL;
long tmp;
struct pt_regs *regs = task_pt_regs(target);
unsigned long tmp;
int ret;
tmp = get_stack_long(child, (unsigned long) &dummy->regs.epsw);
tmp &= ~EPSW_T;
put_stack_long(child, (unsigned long) &dummy->regs.epsw, tmp);
#endif
/* we need to skip regs->next */
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
regs, 0, PT_ORIG_D0 * sizeof(long));
if (ret < 0)
return ret;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&regs->orig_d0, PT_ORIG_D0 * sizeof(long),
PT_EPSW * sizeof(long));
if (ret < 0)
return ret;
/* we need to mask off changes to EPSW */
tmp = regs->epsw;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&tmp, PT_EPSW * sizeof(long),
PT_PC * sizeof(long));
tmp &= EPSW_FLAG_V | EPSW_FLAG_C | EPSW_FLAG_N | EPSW_FLAG_Z;
tmp |= regs->epsw & ~(EPSW_FLAG_V | EPSW_FLAG_C | EPSW_FLAG_N |
EPSW_FLAG_Z);
regs->epsw = tmp;
if (ret < 0)
return ret;
/* and finally load the PC */
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&regs->pc, PT_PC * sizeof(long),
NR_PTREGS * sizeof(long));
if (ret < 0)
return ret;
return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
NR_PTREGS * sizeof(long), -1);
}
/*
* set the single step bit
* retrieve the contents of MN10300 userspace FPU registers
*/
void ptrace_enable(struct task_struct *child)
static int fpuregs_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
const struct fpu_state_struct *fpregs = &target->thread.fpu_state;
int ret;
unlazy_fpu(target);
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
fpregs, 0, sizeof(*fpregs));
if (ret < 0)
return ret;
return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
sizeof(*fpregs), -1);
}
/*
* update the contents of the MN10300 userspace FPU registers
*/
static int fpuregs_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
struct fpu_state_struct fpu_state = target->thread.fpu_state;
int ret;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&fpu_state, 0, sizeof(fpu_state));
if (ret < 0)
return ret;
fpu_kill_state(target);
target->thread.fpu_state = fpu_state;
set_using_fpu(target);
return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
sizeof(fpu_state), -1);
}
/*
* determine if the FPU registers have actually been used
*/
static int fpuregs_active(struct task_struct *target,
const struct user_regset *regset)
{
return is_using_fpu(target) ? regset->n : 0;
}
/*
* Define the register sets available on the MN10300 under Linux
*/
enum mn10300_regset {
REGSET_GENERAL,
REGSET_FPU,
};
static const struct user_regset mn10300_regsets[] = {
/*
* General register format is:
* A3, A2, D3, D2, MCVF, MCRL, MCRH, MDRQ
* E1, E0, E7...E2, SP, LAR, LIR, MDR
* A1, A0, D1, D0, ORIG_D0, EPSW, PC
*/
[REGSET_GENERAL] = {
.core_note_type = NT_PRSTATUS,
.n = ELF_NGREG,
.size = sizeof(long),
.align = sizeof(long),
.get = genregs_get,
.set = genregs_set,
},
/*
* FPU register format is:
* FS0-31, FPCR
*/
[REGSET_FPU] = {
.core_note_type = NT_PRFPREG,
.n = sizeof(struct fpu_state_struct) / sizeof(long),
.size = sizeof(long),
.align = sizeof(long),
.get = fpuregs_get,
.set = fpuregs_set,
.active = fpuregs_active,
},
};
static const struct user_regset_view user_mn10300_native_view = {
.name = "mn10300",
.e_machine = EM_MN10300,
.regsets = mn10300_regsets,
.n = ARRAY_SIZE(mn10300_regsets),
};
const struct user_regset_view *task_user_regset_view(struct task_struct *task)
{
return &user_mn10300_native_view;
}
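For context, a hedged userspace sketch of how a tracer ends up exercising this view: PTRACE_GETREGS (forwarded to copy_regset_to_user() further down) fills a flat buffer of NR_PTREGS longs in the layout described in the comment above. The target program, buffer size and printed field are illustrative, not MN10300-specific values.
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
	long regs[64] = { 0 };	/* generously sized flat register buffer */
	pid_t child = fork();
	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		execl("/bin/true", "true", (char *)NULL);
		_exit(1);
	}
	waitpid(child, NULL, 0);	/* child stops with SIGTRAP at exec */
	/* Served by the REGSET_GENERAL user_regset defined above. */
	if (ptrace(PTRACE_GETREGS, child, NULL, regs) == -1)
		perror("PTRACE_GETREGS");
	else
		printf("first register word: %#lx\n", regs[0]);
	ptrace(PTRACE_CONT, child, NULL, NULL);
	waitpid(child, NULL, 0);
	return 0;
}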
/*
* set the single-step bit
*/
void user_enable_single_step(struct task_struct *child)
{
#ifndef CONFIG_MN10300_USING_JTAG
struct user *dummy = NULL;
@ -129,45 +273,37 @@ void ptrace_enable(struct task_struct *child)
#endif
}
/*
* make sure the single-step bit is not set
*/
void user_disable_single_step(struct task_struct *child)
{
#ifndef CONFIG_MN10300_USING_JTAG
struct user *dummy = NULL;
long tmp;
tmp = get_stack_long(child, (unsigned long) &dummy->regs.epsw);
tmp &= ~EPSW_T;
put_stack_long(child, (unsigned long) &dummy->regs.epsw, tmp);
#endif
}
void ptrace_disable(struct task_struct *child)
{
user_disable_single_step(child);
}
/*
* handle the arch-specific side of process tracing
*/
long arch_ptrace(struct task_struct *child, long request, long addr, long data)
{
struct fpu_state_struct fpu_state;
int i, ret;
unsigned long tmp;
int ret;
switch (request) {
/* read the word at location addr. */
case PTRACE_PEEKTEXT: {
unsigned long tmp;
int copied;
copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
ret = -EIO;
if (copied != sizeof(tmp))
break;
ret = put_user(tmp, (unsigned long *) data);
break;
}
/* read the word at location addr. */
case PTRACE_PEEKDATA: {
unsigned long tmp;
int copied;
copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
ret = -EIO;
if (copied != sizeof(tmp))
break;
ret = put_user(tmp, (unsigned long *) data);
break;
}
/* read the word at location addr in the USER area. */
case PTRACE_PEEKUSR: {
unsigned long tmp;
case PTRACE_PEEKUSR:
ret = -EIO;
if ((addr & 3) || addr < 0 ||
addr > sizeof(struct user) - 3)
@ -179,17 +315,6 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
ptrace_regid_to_frame[addr]);
ret = put_user(tmp, (unsigned long *) data);
break;
}
/* write the word at location addr. */
case PTRACE_POKETEXT:
case PTRACE_POKEDATA:
if (access_process_vm(child, addr, &data, sizeof(data), 1) ==
sizeof(data))
ret = 0;
else
ret = -EIO;
break;
/* write the word at location addr in the USER area */
case PTRACE_POKEUSR:
@ -204,132 +329,32 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
data);
break;
/* continue and stop at next (return from) syscall */
case PTRACE_SYSCALL:
/* restart after signal. */
case PTRACE_CONT:
ret = -EIO;
if ((unsigned long) data > _NSIG)
break;
if (request == PTRACE_SYSCALL)
set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
else
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
child->exit_code = data;
ptrace_disable(child);
wake_up_process(child);
ret = 0;
break;
case PTRACE_GETREGS: /* Get all integer regs from the child. */
return copy_regset_to_user(child, &user_mn10300_native_view,
REGSET_GENERAL,
0, NR_PTREGS * sizeof(long),
(void __user *)data);
/*
* make the child exit
* - the best I can do is send it a sigkill
* - perhaps it should be put in the status that it wants to
* exit
*/
case PTRACE_KILL:
ret = 0;
if (child->exit_state == EXIT_ZOMBIE) /* already dead */
break;
child->exit_code = SIGKILL;
clear_tsk_thread_flag(child, TIF_SINGLESTEP);
ptrace_disable(child);
wake_up_process(child);
break;
case PTRACE_SETREGS: /* Set all integer regs in the child. */
return copy_regset_from_user(child, &user_mn10300_native_view,
REGSET_GENERAL,
0, NR_PTREGS * sizeof(long),
(const void __user *)data);
case PTRACE_SINGLESTEP: /* set the trap flag. */
#ifndef CONFIG_MN10300_USING_JTAG
ret = -EIO;
if ((unsigned long) data > _NSIG)
break;
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
ptrace_enable(child);
child->exit_code = data;
wake_up_process(child);
ret = 0;
#else
ret = -EINVAL;
#endif
break;
case PTRACE_GETFPREGS: /* Get the child FPU state. */
return copy_regset_to_user(child, &user_mn10300_native_view,
REGSET_FPU,
0, sizeof(struct fpu_state_struct),
(void __user *)data);
case PTRACE_DETACH: /* detach a process that was attached. */
ret = ptrace_detach(child, data);
break;
/* Get all gp regs from the child. */
case PTRACE_GETREGS: {
unsigned long tmp;
if (!access_ok(VERIFY_WRITE, (unsigned *) data, NR_PTREGS << 2)) {
ret = -EIO;
break;
}
for (i = 0; i < NR_PTREGS << 2; i += 4) {
tmp = get_stack_long(child, ptrace_regid_to_frame[i]);
__put_user(tmp, (unsigned long *) data);
data += sizeof(tmp);
}
ret = 0;
break;
}
case PTRACE_SETREGS: { /* Set all gp regs in the child. */
unsigned long tmp;
if (!access_ok(VERIFY_READ, (unsigned long *)data,
sizeof(struct pt_regs))) {
ret = -EIO;
break;
}
for (i = 0; i < NR_PTREGS << 2; i += 4) {
__get_user(tmp, (unsigned long *) data);
put_stack_long(child, ptrace_regid_to_frame[i], tmp);
data += sizeof(tmp);
}
ret = 0;
break;
}
case PTRACE_GETFPREGS: { /* Get the child FPU state. */
if (is_using_fpu(child)) {
unlazy_fpu(child);
fpu_state = child->thread.fpu_state;
} else {
memset(&fpu_state, 0, sizeof(fpu_state));
}
ret = -EIO;
if (copy_to_user((void *) data, &fpu_state,
sizeof(fpu_state)) == 0)
ret = 0;
break;
}
case PTRACE_SETFPREGS: { /* Set the child FPU state. */
ret = -EFAULT;
if (copy_from_user(&fpu_state, (const void *) data,
sizeof(fpu_state)) == 0) {
fpu_kill_state(child);
child->thread.fpu_state = fpu_state;
set_using_fpu(child);
ret = 0;
}
break;
}
case PTRACE_SETOPTIONS: {
if (data & PTRACE_O_TRACESYSGOOD)
child->ptrace |= PT_TRACESYSGOOD;
else
child->ptrace &= ~PT_TRACESYSGOOD;
ret = 0;
break;
}
case PTRACE_SETFPREGS: /* Set the child FPU state. */
return copy_regset_from_user(child, &user_mn10300_native_view,
REGSET_FPU,
0, sizeof(struct fpu_state_struct),
(const void __user *)data);
default:
ret = -EIO;
ret = ptrace_request(child, request, addr, data);
break;
}
@ -337,43 +362,26 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
}
/*
* notification of system call entry/exit
* - triggered by current->work.syscall_trace
* handle tracing of system call entry
* - return the revised system call number or ULONG_MAX to cause ENOSYS
*/
asmlinkage void do_syscall_trace(struct pt_regs *regs, int entryexit)
asmlinkage unsigned long syscall_trace_entry(struct pt_regs *regs)
{
#if 0
/* just in case... */
printk(KERN_DEBUG "[%d] syscall_%lu(%lx,%lx,%lx,%lx) = %lx\n",
current->pid,
regs->orig_d0,
regs->a0,
regs->d1,
regs->a3,
regs->a2,
regs->d0);
return;
#endif
if (tracehook_report_syscall_entry(regs))
/* tracing decided this syscall should not happen, so
* We'll return a bogus call number to get an ENOSYS
* error, but leave the original number in
* regs->orig_d0
*/
return ULONG_MAX;
if (!test_thread_flag(TIF_SYSCALL_TRACE) &&
!test_thread_flag(TIF_SINGLESTEP))
return;
if (!(current->ptrace & PT_PTRACED))
return;
/* the 0x80 provides a way for the tracing parent to distinguish
between a syscall stop and SIGTRAP delivery */
ptrace_notify(SIGTRAP |
((current->ptrace & PT_TRACESYSGOOD) &&
!test_thread_flag(TIF_SINGLESTEP) ? 0x80 : 0));
/*
* this isn't the same as continuing with a signal, but it will do
* for normal use. strace only continues with a signal if the
* stopping signal is not SIGTRAP. -brl
*/
if (current->exit_code) {
send_sig(current->exit_code, current, 1);
current->exit_code = 0;
}
return regs->orig_d0;
}
/*
* handle tracing of system call exit
*/
asmlinkage void syscall_trace_exit(struct pt_regs *regs)
{
tracehook_report_syscall_exit(regs, 0);
}

Просмотреть файл

@ -23,6 +23,7 @@
#include <linux/tty.h>
#include <linux/personality.h>
#include <linux/suspend.h>
#include <linux/tracehook.h>
#include <asm/cacheflush.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
@ -511,6 +512,9 @@ static void do_signal(struct pt_regs *regs)
* clear the TIF_RESTORE_SIGMASK flag */
if (test_thread_flag(TIF_RESTORE_SIGMASK))
clear_thread_flag(TIF_RESTORE_SIGMASK);
tracehook_signal_handler(signr, &info, &ka, regs,
test_thread_flag(TIF_SINGLESTEP));
}
return;
@ -561,4 +565,9 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, u32 thread_info_flags)
/* deal with pending signal delivery */
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs);
if (thread_info_flags & _TIF_NOTIFY_RESUME) {
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(__frame);
}
}

Просмотреть файл

@ -165,24 +165,6 @@ ENTRY(itlb_aerror)
ENTRY(dtlb_aerror)
and ~EPSW_NMID,epsw
add -4,sp
mov d1,(sp)
movhu (MMUFCR_DFC),d1 # is it the initial valid write
# to this page?
and MMUFCR_xFC_INITWR,d1
beq dtlb_pagefault # jump if not
mov (DPTEL),d1 # set the dirty bit
# (don't replace with BSET!)
or _PAGE_DIRTY,d1
mov d1,(DPTEL)
mov (sp),d1
add 4,sp
rti
ALIGN
dtlb_pagefault:
mov (sp),d1
SAVE_ALL
add -4,sp # need to pass three params

Просмотреть файл

@ -1,7 +1,7 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.28-rc3
# Tue Nov 11 19:36:51 2008
# Linux kernel version: 2.6.30-rc7
# Mon May 25 14:53:25 2009
#
# CONFIG_PPC64 is not set
@ -14,6 +14,7 @@ CONFIG_6xx=y
# CONFIG_40x is not set
# CONFIG_44x is not set
# CONFIG_E200 is not set
CONFIG_PPC_BOOK3S=y
CONFIG_PPC_FPU=y
CONFIG_ALTIVEC=y
CONFIG_PPC_STD_MMU=y
@ -43,7 +44,7 @@ CONFIG_GENERIC_FIND_NEXT_BIT=y
CONFIG_PPC=y
CONFIG_EARLY_PRINTK=y
CONFIG_GENERIC_NVRAM=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_PPC_OF=y
CONFIG_OF=y
@ -52,12 +53,14 @@ CONFIG_OF=y
CONFIG_AUDIT_ARCH=y
CONFIG_GENERIC_BUG=y
CONFIG_SYS_SUPPORTS_APM_EMULATION=y
CONFIG_DTC=y
# CONFIG_DEFAULT_UIMAGE is not set
CONFIG_HIBERNATE_32=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
# CONFIG_PPC_DCR_NATIVE is not set
# CONFIG_PPC_DCR_MMIO is not set
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
#
@ -72,14 +75,24 @@ CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
# CONFIG_AUDIT is not set
#
# RCU Subsystem
#
CONFIG_CLASSIC_RCU=y
# CONFIG_TREE_RCU is not set
# CONFIG_PREEMPT_RCU is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_PREEMPT_RCU_TRACE is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_CGROUPS is not set
# CONFIG_GROUP_SCHED is not set
# CONFIG_CGROUPS is not set
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
# CONFIG_RELAY is not set
@ -88,23 +101,27 @@ CONFIG_NAMESPACES=y
# CONFIG_IPC_NS is not set
# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
# CONFIG_NET_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
# CONFIG_EMBEDDED is not set
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
# CONFIG_STRIP_ASM_SYMS is not set
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
# CONFIG_COMPAT_BRK is not set
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_ANON_INODES=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
@ -114,10 +131,12 @@ CONFIG_AIO=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# CONFIG_MARKERS is not set
CONFIG_OPROFILE=y
CONFIG_HAVE_OPROFILE=y
@ -127,10 +146,10 @@ CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
# CONFIG_SLOW_WORK is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
@ -138,11 +157,8 @@ CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_KMOD=y
CONFIG_BLOCK=y
CONFIG_LBD=y
# CONFIG_BLK_DEV_IO_TRACE is not set
CONFIG_LSF=y
CONFIG_BLK_DEV_BSG=y
# CONFIG_BLK_DEV_INTEGRITY is not set
@ -158,14 +174,11 @@ CONFIG_DEFAULT_AS=y
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="anticipatory"
CONFIG_CLASSIC_RCU=y
CONFIG_FREEZER=y
#
# Platform support
#
CONFIG_PPC_MULTIPLATFORM=y
CONFIG_CLASSIC32=y
# CONFIG_PPC_CHRP is not set
# CONFIG_MPC5121_ADS is not set
# CONFIG_MPC5121_GENERIC is not set
@ -178,7 +191,9 @@ CONFIG_PPC_PMAC=y
# CONFIG_PPC_83xx is not set
# CONFIG_PPC_86xx is not set
# CONFIG_EMBEDDED6xx is not set
# CONFIG_AMIGAONE is not set
CONFIG_PPC_NATIVE=y
CONFIG_PPC_OF_BOOT_TRAMPOLINE=y
# CONFIG_IPIC is not set
CONFIG_MPIC=y
# CONFIG_MPIC_WEIRD is not set
@ -212,11 +227,12 @@ CONFIG_CPU_FREQ_PMAC=y
CONFIG_PPC601_SYNC_FIX=y
# CONFIG_TAU is not set
# CONFIG_FSL_ULI1575 is not set
# CONFIG_SIMPLE_GPIO is not set
#
# Kernel options
#
# CONFIG_HIGHMEM is not set
CONFIG_HIGHMEM=y
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
@ -239,6 +255,7 @@ CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_HAS_WALK_MEMORY=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
# CONFIG_KEXEC is not set
# CONFIG_CRASH_DUMP is not set
CONFIG_ARCH_FLATMEM_ENABLE=y
CONFIG_ARCH_POPULATES_NODE_MAP=y
CONFIG_SELECT_MEMORY_MODEL=y
@ -250,12 +267,17 @@ CONFIG_FLAT_NODE_MEM_MAP=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
# CONFIG_MIGRATION is not set
# CONFIG_RESOURCES_64BIT is not set
# CONFIG_PHYS_ADDR_T_64BIT is not set
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_UNEVICTABLE_LRU=y
CONFIG_HAVE_MLOCK=y
CONFIG_HAVE_MLOCKED_PAGE_BIT=y
CONFIG_PPC_4K_PAGES=y
# CONFIG_PPC_16K_PAGES is not set
# CONFIG_PPC_64K_PAGES is not set
# CONFIG_PPC_256K_PAGES is not set
CONFIG_FORCE_MAX_ZONEORDER=11
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
@ -288,6 +310,8 @@ CONFIG_ARCH_SUPPORTS_MSI=y
# CONFIG_PCI_MSI is not set
# CONFIG_PCI_LEGACY is not set
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_IOV is not set
CONFIG_PCCARD=m
# CONFIG_PCMCIA_DEBUG is not set
CONFIG_PCMCIA=m
@ -397,6 +421,8 @@ CONFIG_NETFILTER_XTABLES=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set
# CONFIG_NETFILTER_XT_TARGET_DSCP is not set
CONFIG_NETFILTER_XT_TARGET_HL=m
# CONFIG_NETFILTER_XT_TARGET_LED is not set
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
@ -405,6 +431,7 @@ CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
@ -415,6 +442,7 @@ CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
@ -478,17 +506,15 @@ CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m
CONFIG_IP_DCCP_ACKVEC=y
#
# DCCP CCIDs Configuration (EXPERIMENTAL)
#
CONFIG_IP_DCCP_CCID2=m
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
CONFIG_IP_DCCP_CCID3=m
CONFIG_IP_DCCP_CCID3=y
# CONFIG_IP_DCCP_CCID3_DEBUG is not set
CONFIG_IP_DCCP_CCID3_RTO=100
CONFIG_IP_DCCP_TFRC_LIB=m
CONFIG_IP_DCCP_TFRC_LIB=y
#
# DCCP Kernel Hacking
@ -508,13 +534,16 @@ CONFIG_IP_DCCP_TFRC_LIB=m
# CONFIG_LAPB is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
# CONFIG_NET_SCHED is not set
CONFIG_NET_CLS_ROUTE=y
# CONFIG_DCB is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
CONFIG_IRDA=m
@ -577,8 +606,6 @@ CONFIG_BT_HIDP=m
#
# Bluetooth device drivers
#
CONFIG_BT_HCIUSB=m
# CONFIG_BT_HCIUSB_SCO is not set
# CONFIG_BT_HCIBTUSB is not set
# CONFIG_BT_HCIUART is not set
CONFIG_BT_HCIBCM203X=m
@ -590,31 +617,27 @@ CONFIG_BT_HCIBFUSB=m
# CONFIG_BT_HCIBTUART is not set
# CONFIG_BT_HCIVHCI is not set
# CONFIG_AF_RXRPC is not set
# CONFIG_PHONET is not set
CONFIG_WIRELESS=y
CONFIG_CFG80211=m
CONFIG_NL80211=y
# CONFIG_CFG80211_REG_DEBUG is not set
CONFIG_WIRELESS_OLD_REGULATORY=y
CONFIG_WIRELESS_EXT=y
CONFIG_WIRELESS_EXT_SYSFS=y
# CONFIG_LIB80211 is not set
CONFIG_MAC80211=m
#
# Rate control algorithm selection
#
CONFIG_MAC80211_RC_PID=y
# CONFIG_MAC80211_RC_MINSTREL is not set
CONFIG_MAC80211_RC_DEFAULT_PID=y
# CONFIG_MAC80211_RC_DEFAULT_MINSTREL is not set
CONFIG_MAC80211_RC_DEFAULT="pid"
CONFIG_MAC80211_RC_MINSTREL=y
# CONFIG_MAC80211_RC_DEFAULT_PID is not set
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel"
# CONFIG_MAC80211_MESH is not set
CONFIG_MAC80211_LEDS=y
# CONFIG_MAC80211_DEBUGFS is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_IEEE80211=m
# CONFIG_IEEE80211_DEBUG is not set
CONFIG_IEEE80211_CRYPT_WEP=m
CONFIG_IEEE80211_CRYPT_CCMP=m
CONFIG_IEEE80211_CRYPT_TKIP=m
# CONFIG_WIMAX is not set
# CONFIG_RFKILL is not set
# CONFIG_NET_9P is not set
@ -662,17 +685,27 @@ CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_HD is not set
CONFIG_MISC_DEVICES=y
# CONFIG_PHANTOM is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_ISL29003 is not set
# CONFIG_C2PORT is not set
#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_93CX6 is not set
CONFIG_HAVE_IDE=y
CONFIG_IDE=y
#
# Please see Documentation/ide/ide.txt for help/info on IDE drives
#
CONFIG_IDE_XFER_MODE=y
CONFIG_IDE_TIMINGS=y
CONFIG_IDE_ATAPI=y
# CONFIG_BLK_DEV_IDE_SATA is not set
@ -684,7 +717,6 @@ CONFIG_BLK_DEV_IDECS=m
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y
# CONFIG_BLK_DEV_IDETAPE is not set
CONFIG_BLK_DEV_IDESCSI=y
# CONFIG_IDE_TASK_IOCTL is not set
CONFIG_IDE_PROC_FS=y
@ -714,6 +746,7 @@ CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_JMICRON is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
# CONFIG_BLK_DEV_IT8172 is not set
# CONFIG_BLK_DEV_IT8213 is not set
# CONFIG_BLK_DEV_IT821X is not set
# CONFIG_BLK_DEV_NS87415 is not set
@ -728,7 +761,6 @@ CONFIG_BLK_DEV_SL82C105=y
# CONFIG_BLK_DEV_TC86C001 is not set
CONFIG_BLK_DEV_IDE_PMAC=y
CONFIG_BLK_DEV_IDE_PMAC_ATA100FIRST=y
CONFIG_BLK_DEV_IDEDMA_PMAC=y
CONFIG_BLK_DEV_IDEDMA=y
#
@ -772,6 +804,7 @@ CONFIG_SCSI_FC_ATTRS=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_ACARD is not set
@ -791,8 +824,12 @@ CONFIG_SCSI_AIC7XXX_OLD=m
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_LIBFC is not set
# CONFIG_LIBFCOE is not set
# CONFIG_FCOE is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
@ -822,6 +859,7 @@ CONFIG_SCSI_MAC53C94=y
# CONFIG_SCSI_SRP is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
# CONFIG_ATA is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
@ -881,6 +919,7 @@ CONFIG_THERM_ADT746X=m
# CONFIG_ANSLCD is not set
CONFIG_PMAC_RACKMETER=m
CONFIG_NETDEVICES=y
CONFIG_COMPAT_NET_DEV_OPS=y
CONFIG_DUMMY=m
# CONFIG_BONDING is not set
# CONFIG_MACVLAN is not set
@ -898,6 +937,8 @@ CONFIG_BMAC=y
CONFIG_SUNGEM=y
# CONFIG_CASSINI is not set
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_ETHOC is not set
# CONFIG_DNET is not set
# CONFIG_NET_TULIP is not set
# CONFIG_HP100 is not set
# CONFIG_IBM_NEW_EMAC_ZMII is not set
@ -913,7 +954,6 @@ CONFIG_PCNET32=y
# CONFIG_ADAPTEC_STARFIRE is not set
# CONFIG_B44 is not set
# CONFIG_FORCEDETH is not set
# CONFIG_EEPRO100 is not set
# CONFIG_E100 is not set
# CONFIG_FEALNX is not set
# CONFIG_NATSEMI is not set
@ -923,6 +963,7 @@ CONFIG_PCNET32=y
# CONFIG_R6040 is not set
# CONFIG_SIS900 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC9420 is not set
# CONFIG_SUNDANCE is not set
# CONFIG_TLAN is not set
# CONFIG_VIA_RHINE is not set
@ -935,6 +976,7 @@ CONFIG_NETDEV_1000=y
# CONFIG_E1000E is not set
# CONFIG_IP1000 is not set
# CONFIG_IGB is not set
# CONFIG_IGBVF is not set
# CONFIG_NS83820 is not set
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
@ -945,18 +987,20 @@ CONFIG_NETDEV_1000=y
# CONFIG_VIA_VELOCITY is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2 is not set
# CONFIG_MV643XX_ETH is not set
# CONFIG_QLA3XXX is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
# CONFIG_JME is not set
CONFIG_NETDEV_10000=y
# CONFIG_CHELSIO_T1 is not set
CONFIG_CHELSIO_T3_DEPENDS=y
# CONFIG_CHELSIO_T3 is not set
# CONFIG_ENIC is not set
# CONFIG_IXGBE is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
# CONFIG_MYRI10GE is not set
# CONFIG_NETXEN_NIC is not set
# CONFIG_NIU is not set
@ -966,6 +1010,7 @@ CONFIG_NETDEV_10000=y
# CONFIG_BNX2X is not set
# CONFIG_QLGE is not set
# CONFIG_SFC is not set
# CONFIG_BE2NET is not set
# CONFIG_TR is not set
#
@ -974,20 +1019,11 @@ CONFIG_NETDEV_10000=y
# CONFIG_WLAN_PRE80211 is not set
CONFIG_WLAN_80211=y
# CONFIG_PCMCIA_RAYCS is not set
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
# CONFIG_LIBERTAS is not set
# CONFIG_LIBERTAS_THINFIRM is not set
# CONFIG_AIRO is not set
CONFIG_HERMES=m
CONFIG_APPLE_AIRPORT=m
# CONFIG_PLX_HERMES is not set
# CONFIG_TMD_HERMES is not set
# CONFIG_NORTEL_HERMES is not set
CONFIG_PCI_HERMES=m
CONFIG_PCMCIA_HERMES=m
# CONFIG_PCMCIA_SPECTRUM is not set
# CONFIG_ATMEL is not set
# CONFIG_AT76C50X_USB is not set
# CONFIG_AIRO_CS is not set
# CONFIG_PCMCIA_WL3501 is not set
CONFIG_PRISM54=m
@ -997,15 +1033,17 @@ CONFIG_PRISM54=m
# CONFIG_RTL8187 is not set
# CONFIG_ADM8211 is not set
# CONFIG_MAC80211_HWSIM is not set
# CONFIG_MWL8K is not set
CONFIG_P54_COMMON=m
# CONFIG_P54_USB is not set
# CONFIG_P54_PCI is not set
CONFIG_P54_LEDS=y
# CONFIG_ATH5K is not set
# CONFIG_ATH9K is not set
# CONFIG_IWLCORE is not set
# CONFIG_IWLWIFI_LEDS is not set
# CONFIG_IWLAGN is not set
# CONFIG_IWL3945 is not set
# CONFIG_AR9170_USB is not set
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
# CONFIG_IWLWIFI is not set
# CONFIG_HOSTAP is not set
CONFIG_B43=m
CONFIG_B43_PCI_AUTOSELECT=y
@ -1025,6 +1063,19 @@ CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_PIO_MODE is not set
# CONFIG_ZD1211RW is not set
# CONFIG_RT2X00 is not set
CONFIG_HERMES=m
CONFIG_HERMES_CACHE_FW_ON_INIT=y
CONFIG_APPLE_AIRPORT=m
# CONFIG_PLX_HERMES is not set
# CONFIG_TMD_HERMES is not set
# CONFIG_NORTEL_HERMES is not set
CONFIG_PCI_HERMES=m
CONFIG_PCMCIA_HERMES=m
# CONFIG_PCMCIA_SPECTRUM is not set
#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
#
# USB Network Adapters
@ -1036,6 +1087,7 @@ CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
CONFIG_USB_USBNET=m
CONFIG_USB_NET_AX8817X=m
CONFIG_USB_NET_CDCETHER=m
# CONFIG_USB_NET_CDC_EEM is not set
# CONFIG_USB_NET_DM9601 is not set
# CONFIG_USB_NET_SMSC95XX is not set
# CONFIG_USB_NET_GL620A is not set
@ -1099,7 +1151,7 @@ CONFIG_INPUT_KEYBOARD=y
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
CONFIG_MOUSE_APPLETOUCH=y
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_INPUT_JOYSTICK is not set
@ -1150,10 +1202,13 @@ CONFIG_SERIAL_PMACZILOG_TTYS=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_OF_PLATFORM is not set
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_HVC_UDBG is not set
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=m
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
CONFIG_NVRAM=y
CONFIG_GEN_RTC=y
# CONFIG_GEN_RTC_X is not set
@ -1232,12 +1287,9 @@ CONFIG_I2C_POWERMAC=y
# Miscellaneous I2C Chip support
#
# CONFIG_DS1682 is not set
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_SENSORS_PCF8574 is not set
# CONFIG_PCF8575 is not set
# CONFIG_SENSORS_PCA9539 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_SENSORS_MAX6875 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_I2C_DEBUG_CORE is not set
@ -1259,11 +1311,11 @@ CONFIG_BATTERY_PMU=y
# CONFIG_THERMAL is not set
# CONFIG_THERMAL_HWMON is not set
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
#
# Sonics Silicon Backplane
#
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_SPROM=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
@ -1281,18 +1333,13 @@ CONFIG_SSB_DRIVER_PCICORE=y
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM8350_I2C is not set
#
# Voltage and Current regulators
#
# CONFIG_MFD_PCF50633 is not set
# CONFIG_REGULATOR is not set
# CONFIG_REGULATOR_FIXED_VOLTAGE is not set
# CONFIG_REGULATOR_VIRTUAL_CONSUMER is not set
# CONFIG_REGULATOR_BQ24022 is not set
#
# Multimedia devices
@ -1390,6 +1437,7 @@ CONFIG_FB_ATY_BACKLIGHT=y
# CONFIG_FB_KYRO is not set
CONFIG_FB_3DFX=y
# CONFIG_FB_3DFX_ACCEL is not set
CONFIG_FB_3DFX_I2C=y
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
@ -1399,12 +1447,14 @@ CONFIG_FB_3DFX=y
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=m
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_PLATFORM is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_CORGI is not set
CONFIG_BACKLIGHT_GENERIC=y
#
# Display device support
@ -1444,11 +1494,13 @@ CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
CONFIG_SND_PCM_OSS_PLUGINS=y
CONFIG_SND_SEQUENCER_OSS=y
# CONFIG_SND_HRTIMER is not set
# CONFIG_SND_DYNAMIC_MINORS is not set
CONFIG_SND_SUPPORT_OLD_API=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_DEBUG is not set
CONFIG_SND_VMASTER=y
CONFIG_SND_DRIVERS=y
CONFIG_SND_DUMMY=m
# CONFIG_SND_VIRMIDI is not set
@ -1486,6 +1538,8 @@ CONFIG_SND_PCI=y
# CONFIG_SND_INDIGO is not set
# CONFIG_SND_INDIGOIO is not set
# CONFIG_SND_INDIGODJ is not set
# CONFIG_SND_INDIGOIOX is not set
# CONFIG_SND_INDIGODJX is not set
# CONFIG_SND_EMU10K1 is not set
# CONFIG_SND_EMU10K1X is not set
# CONFIG_SND_ENS1370 is not set
@ -1551,28 +1605,31 @@ CONFIG_USB_HID=y
#
# Special HID drivers
#
CONFIG_HID_COMPAT=y
CONFIG_HID_A4TECH=y
CONFIG_HID_APPLE=y
CONFIG_HID_BELKIN=y
CONFIG_HID_BRIGHT=y
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
CONFIG_HID_CYPRESS=y
CONFIG_HID_DELL=y
# CONFIG_DRAGONRISE_FF is not set
CONFIG_HID_EZKEY=y
CONFIG_HID_KYE=y
CONFIG_HID_GYRATION=y
CONFIG_HID_KENSINGTON=y
CONFIG_HID_LOGITECH=y
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
CONFIG_HID_NTRIG=y
CONFIG_HID_PANTHERLORD=y
# CONFIG_PANTHERLORD_FF is not set
CONFIG_HID_PETALYNX=y
CONFIG_HID_SAMSUNG=y
CONFIG_HID_SONY=y
CONFIG_HID_SUNPLUS=y
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_TOPSEED=y
# CONFIG_THRUSTMASTER_FF is not set
# CONFIG_ZEROPLUS_FF is not set
CONFIG_USB_SUPPORT=y
@ -1603,6 +1660,7 @@ CONFIG_USB_EHCI_HCD=m
CONFIG_USB_EHCI_ROOT_HUB_TT=y
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
# CONFIG_USB_EHCI_HCD_PPC_OF is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
CONFIG_USB_OHCI_HCD=y
@ -1625,24 +1683,23 @@ CONFIG_USB_PRINTER=m
# CONFIG_USB_TMC is not set
#
# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
#
# may also be needed; see USB_STORAGE Help for more information
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_DPCM is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
CONFIG_USB_STORAGE_ONETOUCH=y
CONFIG_USB_STORAGE_ONETOUCH=m
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_LIBUSUAL is not set
@ -1665,7 +1722,7 @@ CONFIG_USB_EZUSB=y
# CONFIG_USB_SERIAL_CH341 is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_CP2101 is not set
# CONFIG_USB_SERIAL_CP210X is not set
# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
# CONFIG_USB_SERIAL_EMPEG is not set
# CONFIG_USB_SERIAL_FTDI_SIO is not set
@ -1701,15 +1758,19 @@ CONFIG_USB_SERIAL_KEYSPAN_USA49WLC=y
# CONFIG_USB_SERIAL_NAVMAN is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_OTI6858 is not set
# CONFIG_USB_SERIAL_QUALCOMM is not set
# CONFIG_USB_SERIAL_SPCP8X5 is not set
# CONFIG_USB_SERIAL_HP4X is not set
# CONFIG_USB_SERIAL_SAFE is not set
# CONFIG_USB_SERIAL_SIEMENS_MPI is not set
# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
# CONFIG_USB_SERIAL_SYMBOL is not set
# CONFIG_USB_SERIAL_TI is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_XIRCOM is not set
# CONFIG_USB_SERIAL_OPTION is not set
# CONFIG_USB_SERIAL_OMNINET is not set
# CONFIG_USB_SERIAL_OPTICON is not set
# CONFIG_USB_SERIAL_DEBUG is not set
#
@ -1726,7 +1787,6 @@ CONFIG_USB_SERIAL_KEYSPAN_USA49WLC=y
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_PHIDGET is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
CONFIG_USB_APPLEDISPLAY=m
@ -1738,6 +1798,11 @@ CONFIG_USB_APPLEDISPLAY=m
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_VST is not set
# CONFIG_USB_GADGET is not set
#
# OTG and related infrastructure
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
@ -1748,7 +1813,9 @@ CONFIG_LEDS_CLASS=y
# LED drivers
#
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_BD2802 is not set
#
# LED Triggers
@ -1759,11 +1826,16 @@ CONFIG_LEDS_TRIGGER_IDE_DISK=y
# CONFIG_LEDS_TRIGGER_HEARTBEAT is not set
# CONFIG_LEDS_TRIGGER_BACKLIGHT is not set
CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
#
# iptables trigger is under Netfilter config (LED target)
#
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
# CONFIG_EDAC is not set
# CONFIG_RTC_CLASS is not set
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_STAGING is not set
@ -1774,6 +1846,7 @@ CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
# CONFIG_EXT2_FS_XIP is not set
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
# CONFIG_EXT3_FS_SECURITY is not set
@ -1783,7 +1856,9 @@ CONFIG_EXT4_FS_XATTR=y
# CONFIG_EXT4_FS_POSIX_ACL is not set
# CONFIG_EXT4_FS_SECURITY is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
@ -1792,6 +1867,7 @@ CONFIG_FILE_LOCKING=y
# CONFIG_XFS_FS is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
CONFIG_DNOTIFY=y
CONFIG_INOTIFY=y
CONFIG_INOTIFY_USER=y
@ -1800,6 +1876,11 @@ CONFIG_INOTIFY_USER=y
CONFIG_AUTOFS4_FS=m
CONFIG_FUSE_FS=m
#
# Caches
#
# CONFIG_FSCACHE is not set
#
# CD-ROM/DVD Filesystems
#
@ -1831,10 +1912,7 @@ CONFIG_TMPFS=y
# CONFIG_TMPFS_POSIX_ACL is not set
# CONFIG_HUGETLB_PAGE is not set
# CONFIG_CONFIGFS_FS is not set
#
# Miscellaneous filesystems
#
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
CONFIG_HFS_FS=m
@ -1843,6 +1921,7 @@ CONFIG_HFSPLUS_FS=m
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
@ -1851,6 +1930,7 @@ CONFIG_HFSPLUS_FS=m
# CONFIG_ROMFS_FS is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
@ -1868,7 +1948,6 @@ CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_REGISTER_V4 is not set
CONFIG_RPCSEC_GSS_KRB5=y
# CONFIG_RPCSEC_GSS_SPKM3 is not set
CONFIG_SMB_FS=m
@ -1940,11 +2019,13 @@ CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_UTF8=m
# CONFIG_DLM is not set
CONFIG_BINARY_PRINTF=y
#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_FIND_LAST_BIT=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
@ -1954,15 +2035,18 @@ CONFIG_CRC32=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_PLIST=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_HAVE_LMB=y
CONFIG_NLATTR=y
#
# Kernel hacking
@ -1973,13 +2057,16 @@ CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=1024
CONFIG_MAGIC_SYSRQ=y
# CONFIG_UNUSED_SYMBOLS is not set
# CONFIG_DEBUG_FS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_SHIRQ is not set
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_DETECT_HUNG_TASK=y
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
CONFIG_SCHED_DEBUG=y
CONFIG_SCHEDSTATS=y
# CONFIG_TIMER_STATS is not set
@ -1994,6 +2081,7 @@ CONFIG_SCHEDSTATS=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_HIGHMEM is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
@ -2001,6 +2089,7 @@ CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_LIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_CPU_STALL_DETECTOR is not set
@ -2009,7 +2098,14 @@ CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_FAULT_INJECTION is not set
CONFIG_LATENCYTOP=y
CONFIG_SYSCTL_SYSCALL_CHECK=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_RING_BUFFER=y
CONFIG_TRACING=y
CONFIG_TRACING_SUPPORT=y
#
# Tracers
@ -2017,12 +2113,19 @@ CONFIG_HAVE_FUNCTION_TRACER=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_CONTEXT_SWITCH_TRACER is not set
# CONFIG_EVENT_TRACER is not set
# CONFIG_BOOT_TRACER is not set
# CONFIG_TRACE_BRANCH_PROFILING is not set
# CONFIG_STACK_TRACER is not set
# CONFIG_DYNAMIC_PRINTK_DEBUG is not set
# CONFIG_KMEMTRACE is not set
# CONFIG_WORKQUEUE_TRACER is not set
# CONFIG_BLK_DEV_IO_TRACE is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_PRINT_STACK_DEPTH=64
# CONFIG_DEBUG_STACKOVERFLOW is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_CODE_PATCHING_SELFTEST is not set
@ -2033,6 +2136,7 @@ CONFIG_XMON_DEFAULT=y
CONFIG_XMON_DISASSEMBLY=y
CONFIG_DEBUGGER=y
CONFIG_IRQSTACKS=y
# CONFIG_VIRQ_DEBUG is not set
# CONFIG_BDI_SWITCH is not set
CONFIG_BOOTX_TEXT=y
# CONFIG_PPC_EARLY_DEBUG is not set
@ -2051,13 +2155,20 @@ CONFIG_CRYPTO=y
#
# CONFIG_CRYPTO_FIPS is not set
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_GF128MUL is not set
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set
@ -2127,6 +2238,7 @@ CONFIG_CRYPTO_TWOFISH_COMMON=m
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_ZLIB is not set
# CONFIG_CRYPTO_LZO is not set
#


@ -131,5 +131,44 @@ static inline int irqs_disabled_flags(unsigned long flags)
*/
struct irq_chip;
#ifdef CONFIG_PERF_COUNTERS
static inline unsigned long test_perf_counter_pending(void)
{
unsigned long x;
asm volatile("lbz %0,%1(13)"
: "=r" (x)
: "i" (offsetof(struct paca_struct, perf_counter_pending)));
return x;
}
static inline void set_perf_counter_pending(void)
{
asm volatile("stb %0,%1(13)" : :
"r" (1),
"i" (offsetof(struct paca_struct, perf_counter_pending)));
}
static inline void clear_perf_counter_pending(void)
{
asm volatile("stb %0,%1(13)" : :
"r" (0),
"i" (offsetof(struct paca_struct, perf_counter_pending)));
}
extern void perf_counter_do_pending(void);
#else
static inline unsigned long test_perf_counter_pending(void)
{
return 0;
}
static inline void set_perf_counter_pending(void) {}
static inline void clear_perf_counter_pending(void) {}
static inline void perf_counter_do_pending(void) {}
#endif /* CONFIG_PERF_COUNTERS */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_HW_IRQ_H */
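The three helpers above are just byte-sized loads and stores of paca->perf_counter_pending through r13, the register that holds the per-CPU paca pointer. A rough C equivalent is sketched below; this is an illustrative sketch only (not part of the commit), it assumes the usual get_paca() accessor from asm/paca.h, and it ignores why the real code wants a single lbz/stb instruction.

/* Illustrative sketch only, not part of the commit. */
#include <asm/paca.h>		/* get_paca(), struct paca_struct */

static inline unsigned long test_perf_counter_pending_sketch(void)
{
	/* what the "lbz %0,offset(13)" above reads */
	return get_paca()->perf_counter_pending;
}

static inline void set_perf_counter_pending_sketch(void)
{
	/* what the "stb" of a register holding 1 stores */
	get_paca()->perf_counter_pending = 1;
}

static inline void clear_perf_counter_pending_sketch(void)
{
	/* what the "stb" of a register holding 0 stores */
	get_paca()->perf_counter_pending = 0;
}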


@ -99,6 +99,7 @@ struct paca_struct {
u8 soft_enabled; /* irq soft-enable flag */
u8 hard_enabled; /* set if irqs are enabled in MSR */
u8 io_sync; /* writel() needs spin_unlock sync */
u8 perf_counter_pending; /* PM interrupt while soft-disabled */
/* Stuff for accurate time accounting */
u64 user_time; /* accumulated usermode TB ticks */


@ -0,0 +1,98 @@
/*
* Performance counter support - PowerPC-specific definitions.
*
* Copyright 2008-2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/types.h>
#define MAX_HWCOUNTERS 8
#define MAX_EVENT_ALTERNATIVES 8
#define MAX_LIMITED_HWCOUNTERS 2
/*
* This struct provides the constants and functions needed to
* describe the PMU on a particular POWER-family CPU.
*/
struct power_pmu {
int n_counter;
int max_alternatives;
u64 add_fields;
u64 test_adder;
int (*compute_mmcr)(u64 events[], int n_ev,
unsigned int hwc[], u64 mmcr[]);
int (*get_constraint)(u64 event, u64 *mskp, u64 *valp);
int (*get_alternatives)(u64 event, unsigned int flags,
u64 alt[]);
void (*disable_pmc)(unsigned int pmc, u64 mmcr[]);
int (*limited_pmc_event)(u64 event);
u32 flags;
int n_generic;
int *generic_events;
int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX];
};
extern struct power_pmu *ppmu;
/*
* Values for power_pmu.flags
*/
#define PPMU_LIMITED_PMC5_6 1 /* PMC5/6 have limited function */
#define PPMU_ALT_SIPR 2 /* uses alternate posn for SIPR/HV */
/*
* Values for flags to get_alternatives()
*/
#define PPMU_LIMITED_PMC_OK 1 /* can put this on a limited PMC */
#define PPMU_LIMITED_PMC_REQD 2 /* have to put this on a limited PMC */
#define PPMU_ONLY_COUNT_RUN 4 /* only counting in run state */
struct pt_regs;
extern unsigned long perf_misc_flags(struct pt_regs *regs);
#define perf_misc_flags(regs) perf_misc_flags(regs)
extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
/*
* The power_pmu.get_constraint function returns a 64-bit value and
* a 64-bit mask that express the constraints between this event and
* other events.
*
* The value and mask are divided up into (non-overlapping) bitfields
* of three different types:
*
* Select field: this expresses the constraint that some set of bits
* in MMCR* needs to be set to a specific value for this event. For a
* select field, the mask contains 1s in every bit of the field, and
* the value contains a unique value for each possible setting of the
* MMCR* bits. The constraint checking code will ensure that two events
* that set the same field in their masks have the same value in their
* value dwords.
*
* Add field: this expresses the constraint that there can be at most
* N events in a particular class. A field of k bits can be used for
* N <= 2^(k-1) - 1. The mask has the most significant bit of the field
* set (and the other bits 0), and the value has only the least significant
* bit of the field set. In addition, the 'add_fields' and 'test_adder'
* in the struct power_pmu for this processor come into play. The
* add_fields value contains 1 in the LSB of the field, and the
* test_adder contains 2^(k-1) - 1 - N in the field.
*
* NAND field: this expresses the constraint that you may not have events
* in all of a set of classes. (For example, on PPC970, you can't select
* events from the FPU, ISU and IDU simultaneously, although any two are
* possible.) For N classes, the field is N+1 bits wide, and each class
* is assigned one bit from the least-significant N bits. The mask has
* only the most-significant bit set, and the value has only the bit
* for the event's class set. The test_adder has the least significant
* bit set in the field.
*
* If an event is not subject to the constraint expressed by a particular
* field, then it will have 0 in both the mask and value for that field.
*/
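The "add field" arithmetic above is easiest to see with concrete numbers. The sketch below is an illustrative userspace program, not code from this commit: it invents a single 4-bit field (k = 4) allowed to hold at most N = 3 events, so the test_adder contribution for that field is 2^(k-1) - 1 - N = 4, and adding it carries into the field's most-significant bit exactly when the class is over-subscribed.

/* Illustrative sketch only, not part of the commit. */
#include <stdio.h>
#include <stdint.h>

#define ADD_FIELD_LSB	0x1ULL	/* add_fields: each event contributes 1 here */
#define ADD_FIELD_MSB	0x8ULL	/* mask: the error bit for this field */
#define TEST_ADDER	0x4ULL	/* 2^(k-1) - 1 - N = 8 - 1 - 3 */

static int class_oversubscribed(int n_events)
{
	uint64_t sum = 0;
	int i;

	/* each event's constraint value has only the field's LSB set */
	for (i = 0; i < n_events; ++i)
		sum += ADD_FIELD_LSB;

	/* adding test_adder carries into the MSB iff n_events > 3 */
	return ((sum + TEST_ADDER) & ADD_FIELD_MSB) != 0;
}

int main(void)
{
	printf("3 events: %s\n", class_oversubscribed(3) ? "reject" : "ok");
	printf("4 events: %s\n", class_oversubscribed(4) ? "reject" : "ok");
	return 0;
}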


@ -492,11 +492,13 @@
#define MMCR0_FCHV 0x00000001UL /* freeze conditions in hypervisor mode */
#define SPRN_MMCR1 798
#define SPRN_MMCRA 0x312
#define MMCRA_SDSYNC 0x80000000UL /* SDAR synced with SIAR */
#define MMCRA_SIHV 0x10000000UL /* state of MSR HV when SIAR set */
#define MMCRA_SIPR 0x08000000UL /* state of MSR PR when SIAR set */
#define MMCRA_SLOT 0x07000000UL /* SLOT bits (37-39) */
#define MMCRA_SLOT_SHIFT 24
#define MMCRA_SAMPLE_ENABLE 0x00000001UL /* enable sampling */
#define POWER6_MMCRA_SDSYNC 0x0000080000000000ULL /* SDAR/SIAR synced */
#define POWER6_MMCRA_SIHV 0x0000040000000000ULL
#define POWER6_MMCRA_SIPR 0x0000020000000000ULL
#define POWER6_MMCRA_THRM 0x00000020UL


@ -322,6 +322,6 @@ SYSCALL_SPU(epoll_create1)
SYSCALL_SPU(dup3)
SYSCALL_SPU(pipe2)
SYSCALL(inotify_init1)
SYSCALL(ni_syscall)
SYSCALL_SPU(perf_counter_open)
COMPAT_SYS_SPU(preadv)
COMPAT_SYS_SPU(pwritev)


@ -341,6 +341,7 @@
#define __NR_dup3 316
#define __NR_pipe2 317
#define __NR_inotify_init1 318
#define __NR_perf_counter_open 319
#define __NR_preadv 320
#define __NR_pwritev 321


@ -94,6 +94,9 @@ obj64-$(CONFIG_AUDIT) += compat_audit.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
obj-$(CONFIG_PERF_COUNTERS) += perf_counter.o power4-pmu.o ppc970-pmu.o \
power5-pmu.o power5+-pmu.o power6-pmu.o \
power7-pmu.o
obj-$(CONFIG_8XX_MINIMAL_FPEMU) += softemu8xx.o


@ -131,6 +131,7 @@ int main(void)
DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr));
DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled));
DEFINE(PACAHARDIRQEN, offsetof(struct paca_struct, hard_enabled));
DEFINE(PACAPERFPEND, offsetof(struct paca_struct, perf_counter_pending));
DEFINE(PACASLBCACHE, offsetof(struct paca_struct, slb_cache));
DEFINE(PACASLBCACHEPTR, offsetof(struct paca_struct, slb_cache_ptr));
DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));


@ -526,6 +526,15 @@ ALT_FW_FTR_SECTION_END_IFCLR(FW_FEATURE_ISERIES)
2:
TRACE_AND_RESTORE_IRQ(r5);
#ifdef CONFIG_PERF_COUNTERS
/* check paca->perf_counter_pending if we're enabling ints */
lbz r3,PACAPERFPEND(r13)
and. r3,r3,r5
beq 27f
bl .perf_counter_do_pending
27:
#endif /* CONFIG_PERF_COUNTERS */
/* extract EE bit and use it to restore paca->hard_enabled */
ld r3,_MSR(r1)
rldicl r4,r3,49,63 /* r0 = (r3 >> 15) & 1 */


@ -135,6 +135,11 @@ notrace void raw_local_irq_restore(unsigned long en)
iseries_handle_interrupts();
}
if (test_perf_counter_pending()) {
clear_perf_counter_pending();
perf_counter_do_pending();
}
/*
* if (get_paca()->hard_enabled) return;
* But again we need to take care that gcc gets hard_enabled directly

The diff for this file is not shown here because of its large size.


@ -0,0 +1,598 @@
/*
* Performance counter support for POWER4 (GP) and POWER4+ (GQ) processors.
*
* Copyright 2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/perf_counter.h>
#include <asm/reg.h>
/*
* Bits in event code for POWER4
*/
#define PM_PMC_SH 12 /* PMC number (1-based) for direct events */
#define PM_PMC_MSK 0xf
#define PM_UNIT_SH 8 /* TTMMUX number and setting - unit select */
#define PM_UNIT_MSK 0xf
#define PM_LOWER_SH 6
#define PM_LOWER_MSK 1
#define PM_LOWER_MSKS 0x40
#define PM_BYTE_SH 4 /* Byte number of event bus to use */
#define PM_BYTE_MSK 3
#define PM_PMCSEL_MSK 7
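As a quick illustration of the field layout defined above (an editorial sketch, not part of the commit): the code 0x1001, listed further down as the generic "instructions completed" event, decodes to PMC 1, unit 0 (a direct rather than bus event), byte 0 and PMCSEL 1.

/* Illustrative sketch only, not part of the commit. */
static void decode_p4_event_example(u64 event)
{
	unsigned int pmc  = (event >> PM_PMC_SH)  & PM_PMC_MSK;	/* 0x1001 -> 1 */
	unsigned int unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;	/* 0x1001 -> 0 */
	unsigned int byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;	/* 0x1001 -> 0 */
	unsigned int psel = event & PM_PMCSEL_MSK;			/* 0x1001 -> 1 */

	pr_debug("pmc=%u unit=%u byte=%u psel=%u\n", pmc, unit, byte, psel);
}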
/*
* Unit code values
*/
#define PM_FPU 1
#define PM_ISU1 2
#define PM_IFU 3
#define PM_IDU0 4
#define PM_ISU1_ALT 6
#define PM_ISU2 7
#define PM_IFU_ALT 8
#define PM_LSU0 9
#define PM_LSU1 0xc
#define PM_GPS 0xf
/*
* Bits in MMCR0 for POWER4
*/
#define MMCR0_PMC1SEL_SH 8
#define MMCR0_PMC2SEL_SH 1
#define MMCR_PMCSEL_MSK 0x1f
/*
* Bits in MMCR1 for POWER4
*/
#define MMCR1_TTM0SEL_SH 62
#define MMCR1_TTC0SEL_SH 61
#define MMCR1_TTM1SEL_SH 59
#define MMCR1_TTC1SEL_SH 58
#define MMCR1_TTM2SEL_SH 56
#define MMCR1_TTC2SEL_SH 55
#define MMCR1_TTM3SEL_SH 53
#define MMCR1_TTC3SEL_SH 52
#define MMCR1_TTMSEL_MSK 3
#define MMCR1_TD_CP_DBG0SEL_SH 50
#define MMCR1_TD_CP_DBG1SEL_SH 48
#define MMCR1_TD_CP_DBG2SEL_SH 46
#define MMCR1_TD_CP_DBG3SEL_SH 44
#define MMCR1_DEBUG0SEL_SH 43
#define MMCR1_DEBUG1SEL_SH 42
#define MMCR1_DEBUG2SEL_SH 41
#define MMCR1_DEBUG3SEL_SH 40
#define MMCR1_PMC1_ADDER_SEL_SH 39
#define MMCR1_PMC2_ADDER_SEL_SH 38
#define MMCR1_PMC6_ADDER_SEL_SH 37
#define MMCR1_PMC5_ADDER_SEL_SH 36
#define MMCR1_PMC8_ADDER_SEL_SH 35
#define MMCR1_PMC7_ADDER_SEL_SH 34
#define MMCR1_PMC3_ADDER_SEL_SH 33
#define MMCR1_PMC4_ADDER_SEL_SH 32
#define MMCR1_PMC3SEL_SH 27
#define MMCR1_PMC4SEL_SH 22
#define MMCR1_PMC5SEL_SH 17
#define MMCR1_PMC6SEL_SH 12
#define MMCR1_PMC7SEL_SH 7
#define MMCR1_PMC8SEL_SH 2 /* note bit 0 is in MMCRA for GP */
static short mmcr1_adder_bits[8] = {
MMCR1_PMC1_ADDER_SEL_SH,
MMCR1_PMC2_ADDER_SEL_SH,
MMCR1_PMC3_ADDER_SEL_SH,
MMCR1_PMC4_ADDER_SEL_SH,
MMCR1_PMC5_ADDER_SEL_SH,
MMCR1_PMC6_ADDER_SEL_SH,
MMCR1_PMC7_ADDER_SEL_SH,
MMCR1_PMC8_ADDER_SEL_SH
};
/*
* Bits in MMCRA
*/
#define MMCRA_PMC8SEL0_SH 17 /* PMC8SEL bit 0 for GP */
/*
* Layout of constraint bits:
* 6666555555555544444444443333333333222222222211111111110000000000
* 3210987654321098765432109876543210987654321098765432109876543210
* |[ >[ >[ >|||[ >[ >< >< >< >< ><><><><><><><><>
* | UC1 UC2 UC3 ||| PS1 PS2 B0 B1 B2 B3 P1P2P3P4P5P6P7P8
* \SMPL ||\TTC3SEL
* |\TTC_IFU_SEL
* \TTM2SEL0
*
* SMPL - SAMPLE_ENABLE constraint
* 56: SAMPLE_ENABLE value 0x0100_0000_0000_0000
*
* UC1 - unit constraint 1: can't have all three of FPU/ISU1/IDU0|ISU2
* 55: UC1 error 0x0080_0000_0000_0000
* 54: FPU events needed 0x0040_0000_0000_0000
* 53: ISU1 events needed 0x0020_0000_0000_0000
* 52: IDU0|ISU2 events needed 0x0010_0000_0000_0000
*
* UC2 - unit constraint 2: can't have all three of FPU/IFU/LSU0
* 51: UC2 error 0x0008_0000_0000_0000
* 50: FPU events needed 0x0004_0000_0000_0000
* 49: IFU events needed 0x0002_0000_0000_0000
* 48: LSU0 events needed 0x0001_0000_0000_0000
*
* UC3 - unit constraint 3: can't have all four of LSU0/IFU/IDU0|ISU2/ISU1
* 47: UC3 error 0x8000_0000_0000
* 46: LSU0 events needed 0x4000_0000_0000
* 45: IFU events needed 0x2000_0000_0000
* 44: IDU0|ISU2 events needed 0x1000_0000_0000
* 43: ISU1 events needed 0x0800_0000_0000
*
* TTM2SEL0
* 42: 0 = IDU0 events needed
* 1 = ISU2 events needed 0x0400_0000_0000
*
* TTC_IFU_SEL
* 41: 0 = IFU.U events needed
* 1 = IFU.L events needed 0x0200_0000_0000
*
* TTC3SEL
* 40: 0 = LSU1.U events needed
* 1 = LSU1.L events needed 0x0100_0000_0000
*
* PS1
* 39: PS1 error 0x0080_0000_0000
* 36-38: count of events needing PMC1/2/5/6 0x0070_0000_0000
*
* PS2
* 35: PS2 error 0x0008_0000_0000
* 32-34: count of events needing PMC3/4/7/8 0x0007_0000_0000
*
* B0
* 28-31: Byte 0 event source 0xf000_0000
* 1 = FPU
* 2 = ISU1
* 3 = IFU
* 4 = IDU0
* 7 = ISU2
* 9 = LSU0
* c = LSU1
* f = GPS
*
* B1, B2, B3
* 24-27, 20-23, 16-19: Byte 1, 2, 3 event sources
*
* P8
* 15: P8 error 0x8000
* 14-15: Count of events needing PMC8
*
* P1..P7
* 0-13: Count of events needing PMC1..PMC7
*
* Note: this doesn't allow events using IFU.U to be combined with events
* using IFU.L, though that is feasible (using TTM0 and TTM2). However
* there are no listed events for IFU.L (they are debug events not
* verified for performance monitoring) so this shouldn't cause a
* problem.
*/
static struct unitinfo {
u64 value, mask;
int unit;
int lowerbit;
} p4_unitinfo[16] = {
[PM_FPU] = { 0x44000000000000ull, 0x88000000000000ull, PM_FPU, 0 },
[PM_ISU1] = { 0x20080000000000ull, 0x88000000000000ull, PM_ISU1, 0 },
[PM_ISU1_ALT] =
{ 0x20080000000000ull, 0x88000000000000ull, PM_ISU1, 0 },
[PM_IFU] = { 0x02200000000000ull, 0x08820000000000ull, PM_IFU, 41 },
[PM_IFU_ALT] =
{ 0x02200000000000ull, 0x08820000000000ull, PM_IFU, 41 },
[PM_IDU0] = { 0x10100000000000ull, 0x80840000000000ull, PM_IDU0, 1 },
[PM_ISU2] = { 0x10140000000000ull, 0x80840000000000ull, PM_ISU2, 0 },
[PM_LSU0] = { 0x01400000000000ull, 0x08800000000000ull, PM_LSU0, 0 },
[PM_LSU1] = { 0x00000000000000ull, 0x00010000000000ull, PM_LSU1, 40 },
[PM_GPS] = { 0x00000000000000ull, 0x00000000000000ull, PM_GPS, 0 }
};
static unsigned char direct_marked_event[8] = {
(1<<2) | (1<<3), /* PMC1: PM_MRK_GRP_DISP, PM_MRK_ST_CMPL */
(1<<3) | (1<<5), /* PMC2: PM_THRESH_TIMEO, PM_MRK_BRU_FIN */
(1<<3), /* PMC3: PM_MRK_ST_CMPL_INT */
(1<<4) | (1<<5), /* PMC4: PM_MRK_GRP_CMPL, PM_MRK_CRU_FIN */
(1<<4) | (1<<5), /* PMC5: PM_MRK_GRP_TIMEO */
(1<<3) | (1<<4) | (1<<5),
/* PMC6: PM_MRK_ST_GPS, PM_MRK_FXU_FIN, PM_MRK_GRP_ISSUED */
(1<<4) | (1<<5), /* PMC7: PM_MRK_FPU_FIN, PM_MRK_INST_FIN */
(1<<4), /* PMC8: PM_MRK_LSU_FIN */
};
/*
* Returns 1 if event counts things relating to marked instructions
* and thus needs the MMCRA_SAMPLE_ENABLE bit set, or 0 if not.
*/
static int p4_marked_instr_event(u64 event)
{
int pmc, psel, unit, byte, bit;
unsigned int mask;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
psel = event & PM_PMCSEL_MSK;
if (pmc) {
if (direct_marked_event[pmc - 1] & (1 << psel))
return 1;
if (psel == 0) /* add events */
bit = (pmc <= 4)? pmc - 1: 8 - pmc;
else if (psel == 6) /* decode events */
bit = 4;
else
return 0;
} else
bit = psel;
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
mask = 0;
switch (unit) {
case PM_LSU1:
if (event & PM_LOWER_MSKS)
mask = 1 << 28; /* byte 7 bit 4 */
else
mask = 6 << 24; /* byte 3 bits 1 and 2 */
break;
case PM_LSU0:
/* byte 3, bit 3; byte 2 bits 0,2,3,4,5; byte 1 */
mask = 0x083dff00;
}
return (mask >> (byte * 8 + bit)) & 1;
}
static int p4_get_constraint(u64 event, u64 *maskp, u64 *valp)
{
int pmc, byte, unit, lower, sh;
u64 mask = 0, value = 0;
int grp = -1;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 8)
return -1;
sh = (pmc - 1) * 2;
mask |= 2 << sh;
value |= 1 << sh;
grp = ((pmc - 1) >> 1) & 1;
}
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
if (unit) {
lower = (event >> PM_LOWER_SH) & PM_LOWER_MSK;
/*
* Bus events on bytes 0 and 2 can be counted
* on PMC1/2/5/6; bytes 1 and 3 on PMC3/4/7/8.
*/
if (!pmc)
grp = byte & 1;
if (!p4_unitinfo[unit].unit)
return -1;
mask |= p4_unitinfo[unit].mask;
value |= p4_unitinfo[unit].value;
sh = p4_unitinfo[unit].lowerbit;
if (sh > 1)
value |= (u64)lower << sh;
else if (lower != sh)
return -1;
unit = p4_unitinfo[unit].unit;
/* Set byte lane select field */
mask |= 0xfULL << (28 - 4 * byte);
value |= (u64)unit << (28 - 4 * byte);
}
if (grp == 0) {
/* increment PMC1/2/5/6 field */
mask |= 0x8000000000ull;
value |= 0x1000000000ull;
} else {
/* increment PMC3/4/7/8 field */
mask |= 0x800000000ull;
value |= 0x100000000ull;
}
/* Marked instruction events need sample_enable set */
if (p4_marked_instr_event(event)) {
mask |= 1ull << 56;
value |= 1ull << 56;
}
/* PMCSEL=6 decode events on byte 2 need sample_enable clear */
if (pmc && (event & PM_PMCSEL_MSK) == 6 && byte == 2)
mask |= 1ull << 56;
*maskp = mask;
*valp = value;
return 0;
}
static unsigned int ppc_inst_cmpl[] = {
0x1001, 0x4001, 0x6001, 0x7001, 0x8001
};
static int p4_get_alternatives(u64 event, unsigned int flags, u64 alt[])
{
int i, j, na;
alt[0] = event;
na = 1;
/* 2 possibilities for PM_GRP_DISP_REJECT */
if (event == 0x8003 || event == 0x0224) {
alt[1] = event ^ (0x8003 ^ 0x0224);
return 2;
}
/* 2 possibilities for PM_ST_MISS_L1 */
if (event == 0x0c13 || event == 0x0c23) {
alt[1] = event ^ (0x0c13 ^ 0x0c23);
return 2;
}
/* several possibilities for PM_INST_CMPL */
for (i = 0; i < ARRAY_SIZE(ppc_inst_cmpl); ++i) {
if (event == ppc_inst_cmpl[i]) {
for (j = 0; j < ARRAY_SIZE(ppc_inst_cmpl); ++j)
if (j != i)
alt[na++] = ppc_inst_cmpl[j];
break;
}
}
return na;
}
static int p4_compute_mmcr(u64 event[], int n_ev,
unsigned int hwc[], u64 mmcr[])
{
u64 mmcr0 = 0, mmcr1 = 0, mmcra = 0;
unsigned int pmc, unit, byte, psel, lower;
unsigned int ttm, grp;
unsigned int pmc_inuse = 0;
unsigned int pmc_grp_use[2];
unsigned char busbyte[4];
unsigned char unituse[16];
unsigned int unitlower = 0;
int i;
if (n_ev > 8)
return -1;
/* First pass to count resource use */
pmc_grp_use[0] = pmc_grp_use[1] = 0;
memset(busbyte, 0, sizeof(busbyte));
memset(unituse, 0, sizeof(unituse));
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc_inuse & (1 << (pmc - 1)))
return -1;
pmc_inuse |= 1 << (pmc - 1);
/* count 1/2/5/6 vs 3/4/7/8 use */
++pmc_grp_use[((pmc - 1) >> 1) & 1];
}
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
lower = (event[i] >> PM_LOWER_SH) & PM_LOWER_MSK;
if (unit) {
if (!pmc)
++pmc_grp_use[byte & 1];
if (unit == 6 || unit == 8)
/* map alt ISU1/IFU codes: 6->2, 8->3 */
unit = (unit >> 1) - 1;
if (busbyte[byte] && busbyte[byte] != unit)
return -1;
busbyte[byte] = unit;
lower <<= unit;
if (unituse[unit] && lower != (unitlower & lower))
return -1;
unituse[unit] = 1;
unitlower |= lower;
}
}
if (pmc_grp_use[0] > 4 || pmc_grp_use[1] > 4)
return -1;
/*
* Assign resources and set multiplexer selects.
*
* Units 1,2,3 are on TTM0, 4,6,7 on TTM1, 8,10 on TTM2.
* Each TTMx can only select one unit, but since
* units 2 and 6 are both ISU1, and 3 and 8 are both IFU,
* we have some choices.
*/
if (unituse[2] & (unituse[1] | (unituse[3] & unituse[9]))) {
unituse[6] = 1; /* Move 2 to 6 */
unituse[2] = 0;
}
if (unituse[3] & (unituse[1] | unituse[2])) {
unituse[8] = 1; /* Move 3 to 8 */
unituse[3] = 0;
unitlower = (unitlower & ~8) | ((unitlower & 8) << 5);
}
/* Check only one unit per TTMx */
if (unituse[1] + unituse[2] + unituse[3] > 1 ||
unituse[4] + unituse[6] + unituse[7] > 1 ||
unituse[8] + unituse[9] > 1 ||
(unituse[5] | unituse[10] | unituse[11] |
unituse[13] | unituse[14]))
return -1;
/* Set TTMxSEL fields. Note, units 1-3 => TTM0SEL codes 0-2 */
mmcr1 |= (u64)(unituse[3] * 2 + unituse[2]) << MMCR1_TTM0SEL_SH;
mmcr1 |= (u64)(unituse[7] * 3 + unituse[6] * 2) << MMCR1_TTM1SEL_SH;
mmcr1 |= (u64)unituse[9] << MMCR1_TTM2SEL_SH;
/* Set TTCxSEL fields. */
if (unitlower & 0xe)
mmcr1 |= 1ull << MMCR1_TTC0SEL_SH;
if (unitlower & 0xf0)
mmcr1 |= 1ull << MMCR1_TTC1SEL_SH;
if (unitlower & 0xf00)
mmcr1 |= 1ull << MMCR1_TTC2SEL_SH;
if (unitlower & 0x7000)
mmcr1 |= 1ull << MMCR1_TTC3SEL_SH;
/* Set byte lane select fields. */
for (byte = 0; byte < 4; ++byte) {
unit = busbyte[byte];
if (!unit)
continue;
if (unit == 0xf) {
/* special case for GPS */
mmcr1 |= 1ull << (MMCR1_DEBUG0SEL_SH - byte);
} else {
if (!unituse[unit])
ttm = unit - 1; /* 2->1, 3->2 */
else
ttm = unit >> 2;
mmcr1 |= (u64)ttm << (MMCR1_TD_CP_DBG0SEL_SH - 2*byte);
}
}
/* Second pass: assign PMCs, set PMCxSEL and PMCx_ADDER_SEL fields */
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
psel = event[i] & PM_PMCSEL_MSK;
if (!pmc) {
/* Bus event or 00xxx direct event (off or cycles) */
if (unit)
psel |= 0x10 | ((byte & 2) << 2);
for (pmc = 0; pmc < 8; ++pmc) {
if (pmc_inuse & (1 << pmc))
continue;
grp = (pmc >> 1) & 1;
if (unit) {
if (grp == (byte & 1))
break;
} else if (pmc_grp_use[grp] < 4) {
++pmc_grp_use[grp];
break;
}
}
pmc_inuse |= 1 << pmc;
} else {
/* Direct event */
--pmc;
if (psel == 0 && (byte & 2))
/* add events on higher-numbered bus */
mmcr1 |= 1ull << mmcr1_adder_bits[pmc];
else if (psel == 6 && byte == 3)
/* seem to need to set sample_enable here */
mmcra |= MMCRA_SAMPLE_ENABLE;
psel |= 8;
}
if (pmc <= 1)
mmcr0 |= psel << (MMCR0_PMC1SEL_SH - 7 * pmc);
else
mmcr1 |= psel << (MMCR1_PMC3SEL_SH - 5 * (pmc - 2));
if (pmc == 7) /* PMC8 */
mmcra |= (psel & 1) << MMCRA_PMC8SEL0_SH;
hwc[i] = pmc;
if (p4_marked_instr_event(event[i]))
mmcra |= MMCRA_SAMPLE_ENABLE;
}
if (pmc_inuse & 1)
mmcr0 |= MMCR0_PMC1CE;
if (pmc_inuse & 0xfe)
mmcr0 |= MMCR0_PMCjCE;
mmcra |= 0x2000; /* mark only one IOP per PPC instruction */
/* Return MMCRx values */
mmcr[0] = mmcr0;
mmcr[1] = mmcr1;
mmcr[2] = mmcra;
return 0;
}
static void p4_disable_pmc(unsigned int pmc, u64 mmcr[])
{
/*
* Setting the PMCxSEL field to 0 disables PMC x.
* (Note that pmc is 0-based here, not 1-based.)
*/
if (pmc <= 1) {
mmcr[0] &= ~(0x1fUL << (MMCR0_PMC1SEL_SH - 7 * pmc));
} else {
mmcr[1] &= ~(0x1fUL << (MMCR1_PMC3SEL_SH - 5 * (pmc - 2)));
if (pmc == 7)
mmcr[2] &= ~(1UL << MMCRA_PMC8SEL0_SH);
}
}
static int p4_generic_events[] = {
[PERF_COUNT_HW_CPU_CYCLES] = 7,
[PERF_COUNT_HW_INSTRUCTIONS] = 0x1001,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0x8c10, /* PM_LD_REF_L1 */
[PERF_COUNT_HW_CACHE_MISSES] = 0x3c10, /* PM_LD_MISS_L1 */
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x330, /* PM_BR_ISSUED */
[PERF_COUNT_HW_BRANCH_MISSES] = 0x331, /* PM_BR_MPRED_CR */
};
#define C(x) PERF_COUNT_HW_CACHE_##x
/*
* Table of generalized cache-related events.
* 0 means not supported, -1 means nonsensical, other values
* are event codes.
*/
static int power4_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x8c10, 0x3c10 },
[C(OP_WRITE)] = { 0x7c10, 0xc13 },
[C(OP_PREFETCH)] = { 0xc35, 0 },
},
[C(L1I)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { 0, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { 0, 0 },
[C(OP_PREFETCH)] = { 0xc34, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x904 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(ITLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x900 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(BPU)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x330, 0x331 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
};
struct power_pmu power4_pmu = {
.n_counter = 8,
.max_alternatives = 5,
.add_fields = 0x0000001100005555ull,
.test_adder = 0x0011083300000000ull,
.compute_mmcr = p4_compute_mmcr,
.get_constraint = p4_get_constraint,
.get_alternatives = p4_get_alternatives,
.disable_pmc = p4_disable_pmc,
.n_generic = ARRAY_SIZE(p4_generic_events),
.generic_events = p4_generic_events,
.cache_events = &power4_cache_events,
};


@ -0,0 +1,671 @@
/*
* Performance counter support for POWER5+/++ (not POWER5) processors.
*
* Copyright 2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/perf_counter.h>
#include <asm/reg.h>
/*
* Bits in event code for POWER5+ (POWER5 GS) and POWER5++ (POWER5 GS DD3)
*/
#define PM_PMC_SH 20 /* PMC number (1-based) for direct events */
#define PM_PMC_MSK 0xf
#define PM_PMC_MSKS (PM_PMC_MSK << PM_PMC_SH)
#define PM_UNIT_SH 16 /* TTMMUX number and setting - unit select */
#define PM_UNIT_MSK 0xf
#define PM_BYTE_SH 12 /* Byte number of event bus to use */
#define PM_BYTE_MSK 7
#define PM_GRS_SH 8 /* Storage subsystem mux select */
#define PM_GRS_MSK 7
#define PM_BUSEVENT_MSK 0x80 /* Set if event uses event bus */
#define PM_PMCSEL_MSK 0x7f
/* Values in PM_UNIT field */
#define PM_FPU 0
#define PM_ISU0 1
#define PM_IFU 2
#define PM_ISU1 3
#define PM_IDU 4
#define PM_ISU0_ALT 6
#define PM_GRS 7
#define PM_LSU0 8
#define PM_LSU1 0xc
#define PM_LASTUNIT 0xc
/*
* Bits in MMCR1 for POWER5+
*/
#define MMCR1_TTM0SEL_SH 62
#define MMCR1_TTM1SEL_SH 60
#define MMCR1_TTM2SEL_SH 58
#define MMCR1_TTM3SEL_SH 56
#define MMCR1_TTMSEL_MSK 3
#define MMCR1_TD_CP_DBG0SEL_SH 54
#define MMCR1_TD_CP_DBG1SEL_SH 52
#define MMCR1_TD_CP_DBG2SEL_SH 50
#define MMCR1_TD_CP_DBG3SEL_SH 48
#define MMCR1_GRS_L2SEL_SH 46
#define MMCR1_GRS_L2SEL_MSK 3
#define MMCR1_GRS_L3SEL_SH 44
#define MMCR1_GRS_L3SEL_MSK 3
#define MMCR1_GRS_MCSEL_SH 41
#define MMCR1_GRS_MCSEL_MSK 7
#define MMCR1_GRS_FABSEL_SH 39
#define MMCR1_GRS_FABSEL_MSK 3
#define MMCR1_PMC1_ADDER_SEL_SH 35
#define MMCR1_PMC2_ADDER_SEL_SH 34
#define MMCR1_PMC3_ADDER_SEL_SH 33
#define MMCR1_PMC4_ADDER_SEL_SH 32
#define MMCR1_PMC1SEL_SH 25
#define MMCR1_PMC2SEL_SH 17
#define MMCR1_PMC3SEL_SH 9
#define MMCR1_PMC4SEL_SH 1
#define MMCR1_PMCSEL_SH(n) (MMCR1_PMC1SEL_SH - (n) * 8)
#define MMCR1_PMCSEL_MSK 0x7f
/*
* Bits in MMCRA
*/
/*
* Layout of constraint bits:
* 6666555555555544444444443333333333222222222211111111110000000000
* 3210987654321098765432109876543210987654321098765432109876543210
* [ ><><>< ><> <><>[ > < >< >< >< ><><><><><><>
* NC G0G1G2 G3 T0T1 UC B0 B1 B2 B3 P6P5P4P3P2P1
*
* NC - number of counters
* 51: NC error 0x0008_0000_0000_0000
* 48-50: number of events needing PMC1-4 0x0007_0000_0000_0000
*
* G0..G3 - GRS mux constraints
* 46-47: GRS_L2SEL value
* 44-45: GRS_L3SEL value
* 41-44: GRS_MCSEL value
* 39-40: GRS_FABSEL value
* Note that these match up with their bit positions in MMCR1
*
* T0 - TTM0 constraint
* 36-37: TTM0SEL value (0=FPU, 2=IFU, 3=ISU1) 0x30_0000_0000
*
* T1 - TTM1 constraint
* 34-35: TTM1SEL value (0=IDU, 3=GRS) 0x0c_0000_0000
*
* UC - unit constraint: can't have all three of FPU|IFU|ISU1, ISU0, IDU|GRS
* 33: UC3 error 0x02_0000_0000
* 32: FPU|IFU|ISU1 events needed 0x01_0000_0000
* 31: ISU0 events needed 0x01_8000_0000
* 30: IDU|GRS events needed 0x00_4000_0000
*
* B0
* 24-27: Byte 0 event source 0x0f00_0000
* Encoding as for the event code
*
* B1, B2, B3
* 20-23, 16-19, 12-15: Byte 1, 2, 3 event sources
*
* P6
* 11: P6 error 0x800
* 10-11: Count of events needing PMC6
*
* P1..P5
* 0-9: Count of events needing PMC1..PMC5
*/
static const int grsel_shift[8] = {
MMCR1_GRS_L2SEL_SH, MMCR1_GRS_L2SEL_SH, MMCR1_GRS_L2SEL_SH,
MMCR1_GRS_L3SEL_SH, MMCR1_GRS_L3SEL_SH, MMCR1_GRS_L3SEL_SH,
MMCR1_GRS_MCSEL_SH, MMCR1_GRS_FABSEL_SH
};
/* Masks and values for using events from the various units */
static u64 unit_cons[PM_LASTUNIT+1][2] = {
[PM_FPU] = { 0x3200000000ull, 0x0100000000ull },
[PM_ISU0] = { 0x0200000000ull, 0x0080000000ull },
[PM_ISU1] = { 0x3200000000ull, 0x3100000000ull },
[PM_IFU] = { 0x3200000000ull, 0x2100000000ull },
[PM_IDU] = { 0x0e00000000ull, 0x0040000000ull },
[PM_GRS] = { 0x0e00000000ull, 0x0c40000000ull },
};
static int power5p_get_constraint(u64 event, u64 *maskp, u64 *valp)
{
int pmc, byte, unit, sh;
int bit, fmask;
u64 mask = 0, value = 0;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 6)
return -1;
sh = (pmc - 1) * 2;
mask |= 2 << sh;
value |= 1 << sh;
if (pmc >= 5 && !(event == 0x500009 || event == 0x600005))
return -1;
}
if (event & PM_BUSEVENT_MSK) {
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
if (unit > PM_LASTUNIT)
return -1;
if (unit == PM_ISU0_ALT)
unit = PM_ISU0;
mask |= unit_cons[unit][0];
value |= unit_cons[unit][1];
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
if (byte >= 4) {
if (unit != PM_LSU1)
return -1;
/* Map LSU1 low word (bytes 4-7) to unit LSU1+1 */
++unit;
byte &= 3;
}
if (unit == PM_GRS) {
bit = event & 7;
fmask = (bit == 6)? 7: 3;
sh = grsel_shift[bit];
mask |= (u64)fmask << sh;
value |= (u64)((event >> PM_GRS_SH) & fmask) << sh;
}
/* Set byte lane select field */
mask |= 0xfULL << (24 - 4 * byte);
value |= (u64)unit << (24 - 4 * byte);
}
if (pmc < 5) {
/* need a counter from PMC1-4 set */
mask |= 0x8000000000000ull;
value |= 0x1000000000000ull;
}
*maskp = mask;
*valp = value;
return 0;
}
static int power5p_limited_pmc_event(u64 event)
{
int pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
return pmc == 5 || pmc == 6;
}
#define MAX_ALT 3 /* at most 3 alternatives for any event */
static const unsigned int event_alternatives[][MAX_ALT] = {
{ 0x100c0, 0x40001f }, /* PM_GCT_FULL_CYC */
{ 0x120e4, 0x400002 }, /* PM_GRP_DISP_REJECT */
{ 0x230e2, 0x323087 }, /* PM_BR_PRED_CR */
{ 0x230e3, 0x223087, 0x3230a0 }, /* PM_BR_PRED_TA */
{ 0x410c7, 0x441084 }, /* PM_THRD_L2MISS_BOTH_CYC */
{ 0x800c4, 0xc20e0 }, /* PM_DTLB_MISS */
{ 0xc50c6, 0xc60e0 }, /* PM_MRK_DTLB_MISS */
{ 0x100005, 0x600005 }, /* PM_RUN_CYC */
{ 0x100009, 0x200009 }, /* PM_INST_CMPL */
{ 0x200015, 0x300015 }, /* PM_LSU_LMQ_SRQ_EMPTY_CYC */
{ 0x300009, 0x400009 }, /* PM_INST_DISP */
};
/*
* Scan the alternatives table for a match and return the
* index into the alternatives table if found, else -1.
*/
static int find_alternative(unsigned int event)
{
int i, j;
for (i = 0; i < ARRAY_SIZE(event_alternatives); ++i) {
if (event < event_alternatives[i][0])
break;
for (j = 0; j < MAX_ALT && event_alternatives[i][j]; ++j)
if (event == event_alternatives[i][j])
return i;
}
return -1;
}
static const unsigned char bytedecode_alternatives[4][4] = {
/* PMC 1 */ { 0x21, 0x23, 0x25, 0x27 },
/* PMC 2 */ { 0x07, 0x17, 0x0e, 0x1e },
/* PMC 3 */ { 0x20, 0x22, 0x24, 0x26 },
/* PMC 4 */ { 0x07, 0x17, 0x0e, 0x1e }
};
/*
* Some direct events for decodes of event bus byte 3 have alternative
* PMCSEL values on other counters. This returns the alternative
* event code for those that do, or -1 otherwise. This also handles
* alternative PCMSEL values for add events.
*/
static s64 find_alternative_bdecode(u64 event)
{
int pmc, altpmc, pp, j;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc == 0 || pmc > 4)
return -1;
altpmc = 5 - pmc; /* 1 <-> 4, 2 <-> 3 */
pp = event & PM_PMCSEL_MSK;
for (j = 0; j < 4; ++j) {
if (bytedecode_alternatives[pmc - 1][j] == pp) {
return (event & ~(PM_PMC_MSKS | PM_PMCSEL_MSK)) |
(altpmc << PM_PMC_SH) |
bytedecode_alternatives[altpmc - 1][j];
}
}
/* new decode alternatives for power5+ */
if (pmc == 1 && (pp == 0x0d || pp == 0x0e))
return event + (2 << PM_PMC_SH) + (0x2e - 0x0d);
if (pmc == 3 && (pp == 0x2e || pp == 0x2f))
return event - (2 << PM_PMC_SH) - (0x2e - 0x0d);
/* alternative add event encodings */
if (pp == 0x10 || pp == 0x28)
return ((event ^ (0x10 ^ 0x28)) & ~PM_PMC_MSKS) |
(altpmc << PM_PMC_SH);
return -1;
}
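A worked example for the routine above, with a made-up event code (editorial sketch, not part of the commit): an event with PMC = 1 and PMCSEL = 0x21 maps, via the table, to PMCSEL 0x07 on PMC 4, so the bare code 0x100021 comes back as 0x400007; any unit, byte or GRS bits outside the PMC and PMCSEL fields would be preserved unchanged.

/* Illustrative sketch only, not part of the commit; 0x100021 is a made-up code. */
static void bdecode_example(void)
{
	s64 alt = find_alternative_bdecode(0x100021ull);	/* PMC1, PMCSEL 0x21 */

	/* expected: 0x400007, i.e. PMC4, PMCSEL 0x07, other fields preserved */
	pr_debug("alternative encoding: %llx\n", (unsigned long long)alt);
}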
static int power5p_get_alternatives(u64 event, unsigned int flags, u64 alt[])
{
int i, j, nalt = 1;
int nlim;
s64 ae;
alt[0] = event;
nalt = 1;
nlim = power5p_limited_pmc_event(event);
i = find_alternative(event);
if (i >= 0) {
for (j = 0; j < MAX_ALT; ++j) {
ae = event_alternatives[i][j];
if (ae && ae != event)
alt[nalt++] = ae;
nlim += power5p_limited_pmc_event(ae);
}
} else {
ae = find_alternative_bdecode(event);
if (ae > 0)
alt[nalt++] = ae;
}
if (flags & PPMU_ONLY_COUNT_RUN) {
/*
* We're only counting in RUN state,
* so PM_CYC is equivalent to PM_RUN_CYC
* and PM_INST_CMPL === PM_RUN_INST_CMPL.
* This doesn't include alternatives that don't provide
* any extra flexibility in assigning PMCs (e.g.
* 0x100005 for PM_RUN_CYC vs. 0xf for PM_CYC).
* Note that even with these additional alternatives
* we never end up with more than 3 alternatives for any event.
*/
j = nalt;
for (i = 0; i < nalt; ++i) {
switch (alt[i]) {
case 0xf: /* PM_CYC */
alt[j++] = 0x600005; /* PM_RUN_CYC */
++nlim;
break;
case 0x600005: /* PM_RUN_CYC */
alt[j++] = 0xf;
break;
case 0x100009: /* PM_INST_CMPL */
alt[j++] = 0x500009; /* PM_RUN_INST_CMPL */
++nlim;
break;
case 0x500009: /* PM_RUN_INST_CMPL */
alt[j++] = 0x100009; /* PM_INST_CMPL */
alt[j++] = 0x200009;
break;
}
}
nalt = j;
}
if (!(flags & PPMU_LIMITED_PMC_OK) && nlim) {
/* remove the limited PMC events */
j = 0;
for (i = 0; i < nalt; ++i) {
if (!power5p_limited_pmc_event(alt[i])) {
alt[j] = alt[i];
++j;
}
}
nalt = j;
} else if ((flags & PPMU_LIMITED_PMC_REQD) && nlim < nalt) {
/* remove all but the limited PMC events */
j = 0;
for (i = 0; i < nalt; ++i) {
if (power5p_limited_pmc_event(alt[i])) {
alt[j] = alt[i];
++j;
}
}
nalt = j;
}
return nalt;
}
/*
* Map of which direct events on which PMCs are marked instruction events.
* Indexed by PMCSEL value, bit i (LE) set if PMC i is a marked event.
* Bit 0 is set if it is marked for all PMCs.
* The 0x80 bit indicates a byte decode PMCSEL value.
*/
static unsigned char direct_event_is_marked[0x28] = {
0, /* 00 */
0x1f, /* 01 PM_IOPS_CMPL */
0x2, /* 02 PM_MRK_GRP_DISP */
0xe, /* 03 PM_MRK_ST_CMPL, PM_MRK_ST_GPS, PM_MRK_ST_CMPL_INT */
0, /* 04 */
0x1c, /* 05 PM_MRK_BRU_FIN, PM_MRK_INST_FIN, PM_MRK_CRU_FIN */
0x80, /* 06 */
0x80, /* 07 */
0, 0, 0,/* 08 - 0a */
0x18, /* 0b PM_THRESH_TIMEO, PM_MRK_GRP_TIMEO */
0, /* 0c */
0x80, /* 0d */
0x80, /* 0e */
0, /* 0f */
0, /* 10 */
0x14, /* 11 PM_MRK_GRP_BR_REDIR, PM_MRK_GRP_IC_MISS */
0, /* 12 */
0x10, /* 13 PM_MRK_GRP_CMPL */
0x1f, /* 14 PM_GRP_MRK, PM_MRK_{FXU,FPU,LSU}_FIN */
0x2, /* 15 PM_MRK_GRP_ISSUED */
0x80, /* 16 */
0x80, /* 17 */
0, 0, 0, 0, 0,
0x80, /* 1d */
0x80, /* 1e */
0, /* 1f */
0x80, /* 20 */
0x80, /* 21 */
0x80, /* 22 */
0x80, /* 23 */
0x80, /* 24 */
0x80, /* 25 */
0x80, /* 26 */
0x80, /* 27 */
};
/*
* Returns 1 if event counts things relating to marked instructions
* and thus needs the MMCRA_SAMPLE_ENABLE bit set, or 0 if not.
*/
static int power5p_marked_instr_event(u64 event)
{
int pmc, psel;
int bit, byte, unit;
u32 mask;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
psel = event & PM_PMCSEL_MSK;
if (pmc >= 5)
return 0;
bit = -1;
if (psel < sizeof(direct_event_is_marked)) {
if (direct_event_is_marked[psel] & (1 << pmc))
return 1;
if (direct_event_is_marked[psel] & 0x80)
bit = 4;
else if (psel == 0x08)
bit = pmc - 1;
else if (psel == 0x10)
bit = 4 - pmc;
else if (psel == 0x1b && (pmc == 1 || pmc == 3))
bit = 4;
} else if ((psel & 0x48) == 0x40) {
bit = psel & 7;
} else if (psel == 0x28) {
bit = pmc - 1;
} else if (pmc == 3 && (psel == 0x2e || psel == 0x2f)) {
bit = 4;
}
if (!(event & PM_BUSEVENT_MSK) || bit == -1)
return 0;
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
if (unit == PM_LSU0) {
/* byte 1 bits 0-7, byte 2 bits 0,2-4,6 */
mask = 0x5dff00;
} else if (unit == PM_LSU1 && byte >= 4) {
byte -= 4;
/* byte 5 bits 6-7, byte 6 bits 0,4, byte 7 bits 0-4,6 */
mask = 0x5f11c000;
} else
return 0;
return (mask >> (byte * 8 + bit)) & 1;
}
static int power5p_compute_mmcr(u64 event[], int n_ev,
unsigned int hwc[], u64 mmcr[])
{
u64 mmcr1 = 0;
u64 mmcra = 0;
unsigned int pmc, unit, byte, psel;
unsigned int ttm;
int i, isbus, bit, grsel;
unsigned int pmc_inuse = 0;
unsigned char busbyte[4];
unsigned char unituse[16];
int ttmuse;
if (n_ev > 6)
return -1;
/* First pass to count resource use */
memset(busbyte, 0, sizeof(busbyte));
memset(unituse, 0, sizeof(unituse));
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 6)
return -1;
if (pmc_inuse & (1 << (pmc - 1)))
return -1;
pmc_inuse |= 1 << (pmc - 1);
}
if (event[i] & PM_BUSEVENT_MSK) {
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
if (unit > PM_LASTUNIT)
return -1;
if (unit == PM_ISU0_ALT)
unit = PM_ISU0;
if (byte >= 4) {
if (unit != PM_LSU1)
return -1;
++unit;
byte &= 3;
}
if (busbyte[byte] && busbyte[byte] != unit)
return -1;
busbyte[byte] = unit;
unituse[unit] = 1;
}
}
/*
* Assign resources and set multiplexer selects.
*
* PM_ISU0 can go either on TTM0 or TTM1, but that's the only
* choice we have to deal with.
*/
if (unituse[PM_ISU0] &
(unituse[PM_FPU] | unituse[PM_IFU] | unituse[PM_ISU1])) {
unituse[PM_ISU0_ALT] = 1; /* move ISU to TTM1 */
unituse[PM_ISU0] = 0;
}
/* Set TTM[01]SEL fields. */
ttmuse = 0;
for (i = PM_FPU; i <= PM_ISU1; ++i) {
if (!unituse[i])
continue;
if (ttmuse++)
return -1;
mmcr1 |= (u64)i << MMCR1_TTM0SEL_SH;
}
ttmuse = 0;
for (; i <= PM_GRS; ++i) {
if (!unituse[i])
continue;
if (ttmuse++)
return -1;
mmcr1 |= (u64)(i & 3) << MMCR1_TTM1SEL_SH;
}
if (ttmuse > 1)
return -1;
/* Set byte lane select fields, TTM[23]SEL and GRS_*SEL. */
for (byte = 0; byte < 4; ++byte) {
unit = busbyte[byte];
if (!unit)
continue;
if (unit == PM_ISU0 && unituse[PM_ISU0_ALT]) {
/* get ISU0 through TTM1 rather than TTM0 */
unit = PM_ISU0_ALT;
} else if (unit == PM_LSU1 + 1) {
/* select lower word of LSU1 for this byte */
mmcr1 |= 1ull << (MMCR1_TTM3SEL_SH + 3 - byte);
}
ttm = unit >> 2;
mmcr1 |= (u64)ttm << (MMCR1_TD_CP_DBG0SEL_SH - 2 * byte);
}
/* Second pass: assign PMCs, set PMCxSEL and PMCx_ADDER_SEL fields */
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
psel = event[i] & PM_PMCSEL_MSK;
isbus = event[i] & PM_BUSEVENT_MSK;
if (!pmc) {
/* Bus event or any-PMC direct event */
for (pmc = 0; pmc < 4; ++pmc) {
if (!(pmc_inuse & (1 << pmc)))
break;
}
if (pmc >= 4)
return -1;
pmc_inuse |= 1 << pmc;
} else if (pmc <= 4) {
/* Direct event */
--pmc;
if (isbus && (byte & 2) &&
(psel == 8 || psel == 0x10 || psel == 0x28))
/* add events on higher-numbered bus */
mmcr1 |= 1ull << (MMCR1_PMC1_ADDER_SEL_SH - pmc);
} else {
/* Instructions or run cycles on PMC5/6 */
--pmc;
}
if (isbus && unit == PM_GRS) {
bit = psel & 7;
grsel = (event[i] >> PM_GRS_SH) & PM_GRS_MSK;
mmcr1 |= (u64)grsel << grsel_shift[bit];
}
if (power5p_marked_instr_event(event[i]))
mmcra |= MMCRA_SAMPLE_ENABLE;
if ((psel & 0x58) == 0x40 && (byte & 1) != ((pmc >> 1) & 1))
/* select alternate byte lane */
psel |= 0x10;
if (pmc <= 3)
mmcr1 |= psel << MMCR1_PMCSEL_SH(pmc);
hwc[i] = pmc;
}
/* Return MMCRx values */
mmcr[0] = 0;
if (pmc_inuse & 1)
mmcr[0] = MMCR0_PMC1CE;
if (pmc_inuse & 0x3e)
mmcr[0] |= MMCR0_PMCjCE;
mmcr[1] = mmcr1;
mmcr[2] = mmcra;
return 0;
}
static void power5p_disable_pmc(unsigned int pmc, u64 mmcr[])
{
if (pmc <= 3)
mmcr[1] &= ~(0x7fUL << MMCR1_PMCSEL_SH(pmc));
}
static int power5p_generic_events[] = {
[PERF_COUNT_HW_CPU_CYCLES] = 0xf,
[PERF_COUNT_HW_INSTRUCTIONS] = 0x100009,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0x1c10a8, /* LD_REF_L1 */
[PERF_COUNT_HW_CACHE_MISSES] = 0x3c1088, /* LD_MISS_L1 */
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x230e4, /* BR_ISSUED */
[PERF_COUNT_HW_BRANCH_MISSES] = 0x230e5, /* BR_MPRED_CR */
};
#define C(x) PERF_COUNT_HW_CACHE_##x
/*
* Table of generalized cache-related events.
* 0 means not supported, -1 means nonsensical, other values
* are event codes.
*/
static int power5p_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x1c10a8, 0x3c1088 },
[C(OP_WRITE)] = { 0x2c10a8, 0xc10c3 },
[C(OP_PREFETCH)] = { 0xc70e7, -1 },
},
[C(L1I)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { 0, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { 0, 0 },
[C(OP_PREFETCH)] = { 0xc50c3, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0xc20e4, 0x800c4 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(ITLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x800c0 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(BPU)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x230e4, 0x230e5 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
};
struct power_pmu power5p_pmu = {
.n_counter = 6,
.max_alternatives = MAX_ALT,
.add_fields = 0x7000000000055ull,
.test_adder = 0x3000040000000ull,
.compute_mmcr = power5p_compute_mmcr,
.get_constraint = power5p_get_constraint,
.get_alternatives = power5p_get_alternatives,
.disable_pmc = power5p_disable_pmc,
.limited_pmc_event = power5p_limited_pmc_event,
.flags = PPMU_LIMITED_PMC5_6,
.n_generic = ARRAY_SIZE(power5p_generic_events),
.generic_events = power5p_generic_events,
.cache_events = &power5p_cache_events,
};


@ -0,0 +1,611 @@
/*
* Performance counter support for POWER5 (not POWER5++) processors.
*
* Copyright 2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/perf_counter.h>
#include <asm/reg.h>
/*
* Bits in event code for POWER5 (not POWER5++)
*/
#define PM_PMC_SH 20 /* PMC number (1-based) for direct events */
#define PM_PMC_MSK 0xf
#define PM_PMC_MSKS (PM_PMC_MSK << PM_PMC_SH)
#define PM_UNIT_SH 16 /* TTMMUX number and setting - unit select */
#define PM_UNIT_MSK 0xf
#define PM_BYTE_SH 12 /* Byte number of event bus to use */
#define PM_BYTE_MSK 7
#define PM_GRS_SH 8 /* Storage subsystem mux select */
#define PM_GRS_MSK 7
#define PM_BUSEVENT_MSK 0x80 /* Set if event uses event bus */
#define PM_PMCSEL_MSK 0x7f
/* Values in PM_UNIT field */
#define PM_FPU 0
#define PM_ISU0 1
#define PM_IFU 2
#define PM_ISU1 3
#define PM_IDU 4
#define PM_ISU0_ALT 6
#define PM_GRS 7
#define PM_LSU0 8
#define PM_LSU1 0xc
#define PM_LASTUNIT 0xc
/*
* Bits in MMCR1 for POWER5
*/
#define MMCR1_TTM0SEL_SH 62
#define MMCR1_TTM1SEL_SH 60
#define MMCR1_TTM2SEL_SH 58
#define MMCR1_TTM3SEL_SH 56
#define MMCR1_TTMSEL_MSK 3
#define MMCR1_TD_CP_DBG0SEL_SH 54
#define MMCR1_TD_CP_DBG1SEL_SH 52
#define MMCR1_TD_CP_DBG2SEL_SH 50
#define MMCR1_TD_CP_DBG3SEL_SH 48
#define MMCR1_GRS_L2SEL_SH 46
#define MMCR1_GRS_L2SEL_MSK 3
#define MMCR1_GRS_L3SEL_SH 44
#define MMCR1_GRS_L3SEL_MSK 3
#define MMCR1_GRS_MCSEL_SH 41
#define MMCR1_GRS_MCSEL_MSK 7
#define MMCR1_GRS_FABSEL_SH 39
#define MMCR1_GRS_FABSEL_MSK 3
#define MMCR1_PMC1_ADDER_SEL_SH 35
#define MMCR1_PMC2_ADDER_SEL_SH 34
#define MMCR1_PMC3_ADDER_SEL_SH 33
#define MMCR1_PMC4_ADDER_SEL_SH 32
#define MMCR1_PMC1SEL_SH 25
#define MMCR1_PMC2SEL_SH 17
#define MMCR1_PMC3SEL_SH 9
#define MMCR1_PMC4SEL_SH 1
#define MMCR1_PMCSEL_SH(n) (MMCR1_PMC1SEL_SH - (n) * 8)
#define MMCR1_PMCSEL_MSK 0x7f
/*
* Bits in MMCRA
*/
/*
* Layout of constraint bits:
* 6666555555555544444444443333333333222222222211111111110000000000
* 3210987654321098765432109876543210987654321098765432109876543210
* <><>[ ><><>< ><> [ >[ >[ >< >< >< >< ><><><><><><>
* T0T1 NC G0G1G2 G3 UC PS1PS2 B0 B1 B2 B3 P6P5P4P3P2P1
*
* T0 - TTM0 constraint
* 54-55: TTM0SEL value (0=FPU, 2=IFU, 3=ISU1) 0xc0_0000_0000_0000
*
* T1 - TTM1 constraint
* 52-53: TTM1SEL value (0=IDU, 3=GRS) 0x30_0000_0000_0000
*
* NC - number of counters
* 51: NC error 0x0008_0000_0000_0000
* 48-50: number of events needing PMC1-4 0x0007_0000_0000_0000
*
* G0..G3 - GRS mux constraints
* 46-47: GRS_L2SEL value
* 44-45: GRS_L3SEL value
* 41-44: GRS_MCSEL value
* 39-40: GRS_FABSEL value
* Note that these match up with their bit positions in MMCR1
*
* UC - unit constraint: can't have all three of FPU|IFU|ISU1, ISU0, IDU|GRS
* 37: UC3 error 0x20_0000_0000
* 36: FPU|IFU|ISU1 events needed 0x10_0000_0000
* 35: ISU0 events needed 0x08_0000_0000
* 34: IDU|GRS events needed 0x04_0000_0000
*
* PS1
* 33: PS1 error 0x2_0000_0000
* 31-32: count of events needing PMC1/2 0x1_8000_0000
*
* PS2
* 30: PS2 error 0x4000_0000
* 28-29: count of events needing PMC3/4 0x3000_0000
*
* B0
* 24-27: Byte 0 event source 0x0f00_0000
* Encoding as for the event code
*
* B1, B2, B3
* 20-23, 16-19, 12-15: Byte 1, 2, 3 event sources
*
* P1..P6
* 0-11: Count of events needing PMC1..PMC6
*/
static const int grsel_shift[8] = {
MMCR1_GRS_L2SEL_SH, MMCR1_GRS_L2SEL_SH, MMCR1_GRS_L2SEL_SH,
MMCR1_GRS_L3SEL_SH, MMCR1_GRS_L3SEL_SH, MMCR1_GRS_L3SEL_SH,
MMCR1_GRS_MCSEL_SH, MMCR1_GRS_FABSEL_SH
};
/* Masks and values for using events from the various units */
static u64 unit_cons[PM_LASTUNIT+1][2] = {
[PM_FPU] = { 0xc0002000000000ull, 0x00001000000000ull },
[PM_ISU0] = { 0x00002000000000ull, 0x00000800000000ull },
[PM_ISU1] = { 0xc0002000000000ull, 0xc0001000000000ull },
[PM_IFU] = { 0xc0002000000000ull, 0x80001000000000ull },
[PM_IDU] = { 0x30002000000000ull, 0x00000400000000ull },
[PM_GRS] = { 0x30002000000000ull, 0x30000400000000ull },
};
static int power5_get_constraint(u64 event, u64 *maskp, u64 *valp)
{
int pmc, byte, unit, sh;
int bit, fmask;
u64 mask = 0, value = 0;
int grp = -1;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 6)
return -1;
sh = (pmc - 1) * 2;
mask |= 2 << sh;
value |= 1 << sh;
if (pmc <= 4)
grp = (pmc - 1) >> 1;
else if (event != 0x500009 && event != 0x600005)
return -1;
}
if (event & PM_BUSEVENT_MSK) {
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
if (unit > PM_LASTUNIT)
return -1;
if (unit == PM_ISU0_ALT)
unit = PM_ISU0;
mask |= unit_cons[unit][0];
value |= unit_cons[unit][1];
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
if (byte >= 4) {
if (unit != PM_LSU1)
return -1;
/* Map LSU1 low word (bytes 4-7) to unit LSU1+1 */
++unit;
byte &= 3;
}
if (unit == PM_GRS) {
bit = event & 7;
fmask = (bit == 6)? 7: 3;
sh = grsel_shift[bit];
mask |= (u64)fmask << sh;
value |= (u64)((event >> PM_GRS_SH) & fmask) << sh;
}
/*
* Bus events on bytes 0 and 2 can be counted
* on PMC1/2; bytes 1 and 3 on PMC3/4.
*/
if (!pmc)
grp = byte & 1;
/* Set byte lane select field */
mask |= 0xfULL << (24 - 4 * byte);
value |= (u64)unit << (24 - 4 * byte);
}
if (grp == 0) {
/* increment PMC1/2 field */
mask |= 0x200000000ull;
value |= 0x080000000ull;
} else if (grp == 1) {
/* increment PMC3/4 field */
mask |= 0x40000000ull;
value |= 0x10000000ull;
}
if (pmc < 5) {
/* need a counter from PMC1-4 set */
mask |= 0x8000000000000ull;
value |= 0x1000000000000ull;
}
*maskp = mask;
*valp = value;
return 0;
}
#define MAX_ALT 3 /* at most 3 alternatives for any event */
static const unsigned int event_alternatives[][MAX_ALT] = {
{ 0x120e4, 0x400002 }, /* PM_GRP_DISP_REJECT */
{ 0x410c7, 0x441084 }, /* PM_THRD_L2MISS_BOTH_CYC */
{ 0x100005, 0x600005 }, /* PM_RUN_CYC */
{ 0x100009, 0x200009, 0x500009 }, /* PM_INST_CMPL */
{ 0x300009, 0x400009 }, /* PM_INST_DISP */
};
/*
* Scan the alternatives table for a match and return the
* index into the alternatives table if found, else -1.
*/
static int find_alternative(u64 event)
{
int i, j;
for (i = 0; i < ARRAY_SIZE(event_alternatives); ++i) {
if (event < event_alternatives[i][0])
break;
for (j = 0; j < MAX_ALT && event_alternatives[i][j]; ++j)
if (event == event_alternatives[i][j])
return i;
}
return -1;
}
static const unsigned char bytedecode_alternatives[4][4] = {
/* PMC 1 */ { 0x21, 0x23, 0x25, 0x27 },
/* PMC 2 */ { 0x07, 0x17, 0x0e, 0x1e },
/* PMC 3 */ { 0x20, 0x22, 0x24, 0x26 },
/* PMC 4 */ { 0x07, 0x17, 0x0e, 0x1e }
};
/*
* Some direct events for decodes of event bus byte 3 have alternative
* PMCSEL values on other counters. This returns the alternative
* event code for those that do, or -1 otherwise.
*/
static s64 find_alternative_bdecode(u64 event)
{
int pmc, altpmc, pp, j;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc == 0 || pmc > 4)
return -1;
altpmc = 5 - pmc; /* 1 <-> 4, 2 <-> 3 */
pp = event & PM_PMCSEL_MSK;
for (j = 0; j < 4; ++j) {
if (bytedecode_alternatives[pmc - 1][j] == pp) {
return (event & ~(PM_PMC_MSKS | PM_PMCSEL_MSK)) |
(altpmc << PM_PMC_SH) |
bytedecode_alternatives[altpmc - 1][j];
}
}
return -1;
}
static int power5_get_alternatives(u64 event, unsigned int flags, u64 alt[])
{
int i, j, nalt = 1;
s64 ae;
alt[0] = event;
nalt = 1;
i = find_alternative(event);
if (i >= 0) {
for (j = 0; j < MAX_ALT; ++j) {
ae = event_alternatives[i][j];
if (ae && ae != event)
alt[nalt++] = ae;
}
} else {
ae = find_alternative_bdecode(event);
if (ae > 0)
alt[nalt++] = ae;
}
return nalt;
}
/*
* Map of which direct events on which PMCs are marked instruction events.
* Indexed by PMCSEL value, bit i (LE) set if PMC i is a marked event.
* Bit 0 is set if it is marked for all PMCs.
* The 0x80 bit indicates a byte decode PMCSEL value.
*/
static unsigned char direct_event_is_marked[0x28] = {
0, /* 00 */
0x1f, /* 01 PM_IOPS_CMPL */
0x2, /* 02 PM_MRK_GRP_DISP */
0xe, /* 03 PM_MRK_ST_CMPL, PM_MRK_ST_GPS, PM_MRK_ST_CMPL_INT */
0, /* 04 */
0x1c, /* 05 PM_MRK_BRU_FIN, PM_MRK_INST_FIN, PM_MRK_CRU_FIN */
0x80, /* 06 */
0x80, /* 07 */
0, 0, 0,/* 08 - 0a */
0x18, /* 0b PM_THRESH_TIMEO, PM_MRK_GRP_TIMEO */
0, /* 0c */
0x80, /* 0d */
0x80, /* 0e */
0, /* 0f */
0, /* 10 */
0x14, /* 11 PM_MRK_GRP_BR_REDIR, PM_MRK_GRP_IC_MISS */
0, /* 12 */
0x10, /* 13 PM_MRK_GRP_CMPL */
0x1f, /* 14 PM_GRP_MRK, PM_MRK_{FXU,FPU,LSU}_FIN */
0x2, /* 15 PM_MRK_GRP_ISSUED */
0x80, /* 16 */
0x80, /* 17 */
0, 0, 0, 0, 0,
0x80, /* 1d */
0x80, /* 1e */
0, /* 1f */
0x80, /* 20 */
0x80, /* 21 */
0x80, /* 22 */
0x80, /* 23 */
0x80, /* 24 */
0x80, /* 25 */
0x80, /* 26 */
0x80, /* 27 */
};
/*
* Returns 1 if event counts things relating to marked instructions
* and thus needs the MMCRA_SAMPLE_ENABLE bit set, or 0 if not.
*/
static int power5_marked_instr_event(u64 event)
{
int pmc, psel;
int bit, byte, unit;
u32 mask;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
psel = event & PM_PMCSEL_MSK;
if (pmc >= 5)
return 0;
bit = -1;
if (psel < sizeof(direct_event_is_marked)) {
if (direct_event_is_marked[psel] & (1 << pmc))
return 1;
if (direct_event_is_marked[psel] & 0x80)
bit = 4;
else if (psel == 0x08)
bit = pmc - 1;
else if (psel == 0x10)
bit = 4 - pmc;
else if (psel == 0x1b && (pmc == 1 || pmc == 3))
bit = 4;
} else if ((psel & 0x58) == 0x40)
bit = psel & 7;
if (!(event & PM_BUSEVENT_MSK))
return 0;
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
if (unit == PM_LSU0) {
/* byte 1 bits 0-7, byte 2 bits 0,2-4,6 */
mask = 0x5dff00;
} else if (unit == PM_LSU1 && byte >= 4) {
byte -= 4;
/* byte 4 bits 1,3,5,7, byte 5 bits 6-7, byte 7 bits 0-4,6 */
mask = 0x5f00c0aa;
} else
return 0;
return (mask >> (byte * 8 + bit)) & 1;
}
static int power5_compute_mmcr(u64 event[], int n_ev,
unsigned int hwc[], u64 mmcr[])
{
u64 mmcr1 = 0;
u64 mmcra = 0;
unsigned int pmc, unit, byte, psel;
unsigned int ttm, grp;
int i, isbus, bit, grsel;
unsigned int pmc_inuse = 0;
unsigned int pmc_grp_use[2];
unsigned char busbyte[4];
unsigned char unituse[16];
int ttmuse;
if (n_ev > 6)
return -1;
/* First pass to count resource use */
pmc_grp_use[0] = pmc_grp_use[1] = 0;
memset(busbyte, 0, sizeof(busbyte));
memset(unituse, 0, sizeof(unituse));
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 6)
return -1;
if (pmc_inuse & (1 << (pmc - 1)))
return -1;
pmc_inuse |= 1 << (pmc - 1);
/* count 1/2 vs 3/4 use */
if (pmc <= 4)
++pmc_grp_use[(pmc - 1) >> 1];
}
if (event[i] & PM_BUSEVENT_MSK) {
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
if (unit > PM_LASTUNIT)
return -1;
if (unit == PM_ISU0_ALT)
unit = PM_ISU0;
if (byte >= 4) {
if (unit != PM_LSU1)
return -1;
++unit;
byte &= 3;
}
if (!pmc)
++pmc_grp_use[byte & 1];
if (busbyte[byte] && busbyte[byte] != unit)
return -1;
busbyte[byte] = unit;
unituse[unit] = 1;
}
}
if (pmc_grp_use[0] > 2 || pmc_grp_use[1] > 2)
return -1;
/*
* Assign resources and set multiplexer selects.
*
* PM_ISU0 can go either on TTM0 or TTM1, but that's the only
* choice we have to deal with.
*/
if (unituse[PM_ISU0] &
(unituse[PM_FPU] | unituse[PM_IFU] | unituse[PM_ISU1])) {
unituse[PM_ISU0_ALT] = 1; /* move ISU to TTM1 */
unituse[PM_ISU0] = 0;
}
/* Set TTM[01]SEL fields. */
ttmuse = 0;
for (i = PM_FPU; i <= PM_ISU1; ++i) {
if (!unituse[i])
continue;
if (ttmuse++)
return -1;
mmcr1 |= (u64)i << MMCR1_TTM0SEL_SH;
}
ttmuse = 0;
for (; i <= PM_GRS; ++i) {
if (!unituse[i])
continue;
if (ttmuse++)
return -1;
mmcr1 |= (u64)(i & 3) << MMCR1_TTM1SEL_SH;
}
if (ttmuse > 1)
return -1;
/* Set byte lane select fields, TTM[23]SEL and GRS_*SEL. */
for (byte = 0; byte < 4; ++byte) {
unit = busbyte[byte];
if (!unit)
continue;
if (unit == PM_ISU0 && unituse[PM_ISU0_ALT]) {
/* get ISU0 through TTM1 rather than TTM0 */
unit = PM_ISU0_ALT;
} else if (unit == PM_LSU1 + 1) {
/* select lower word of LSU1 for this byte */
mmcr1 |= 1ull << (MMCR1_TTM3SEL_SH + 3 - byte);
}
ttm = unit >> 2;
mmcr1 |= (u64)ttm << (MMCR1_TD_CP_DBG0SEL_SH - 2 * byte);
}
/* Second pass: assign PMCs, set PMCxSEL and PMCx_ADDER_SEL fields */
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
psel = event[i] & PM_PMCSEL_MSK;
isbus = event[i] & PM_BUSEVENT_MSK;
if (!pmc) {
/* Bus event or any-PMC direct event */
for (pmc = 0; pmc < 4; ++pmc) {
if (pmc_inuse & (1 << pmc))
continue;
grp = (pmc >> 1) & 1;
if (isbus) {
if (grp == (byte & 1))
break;
} else if (pmc_grp_use[grp] < 2) {
++pmc_grp_use[grp];
break;
}
}
pmc_inuse |= 1 << pmc;
} else if (pmc <= 4) {
/* Direct event */
--pmc;
if ((psel == 8 || psel == 0x10) && isbus && (byte & 2))
/* add events on higher-numbered bus */
mmcr1 |= 1ull << (MMCR1_PMC1_ADDER_SEL_SH - pmc);
} else {
/* Instructions or run cycles on PMC5/6 */
--pmc;
}
if (isbus && unit == PM_GRS) {
bit = psel & 7;
grsel = (event[i] >> PM_GRS_SH) & PM_GRS_MSK;
mmcr1 |= (u64)grsel << grsel_shift[bit];
}
if (power5_marked_instr_event(event[i]))
mmcra |= MMCRA_SAMPLE_ENABLE;
if (pmc <= 3)
mmcr1 |= psel << MMCR1_PMCSEL_SH(pmc);
hwc[i] = pmc;
}
/* Return MMCRx values */
mmcr[0] = 0;
if (pmc_inuse & 1)
mmcr[0] = MMCR0_PMC1CE;
if (pmc_inuse & 0x3e)
mmcr[0] |= MMCR0_PMCjCE;
mmcr[1] = mmcr1;
mmcr[2] = mmcra;
return 0;
}
static void power5_disable_pmc(unsigned int pmc, u64 mmcr[])
{
if (pmc <= 3)
mmcr[1] &= ~(0x7fUL << MMCR1_PMCSEL_SH(pmc));
}
static int power5_generic_events[] = {
[PERF_COUNT_HW_CPU_CYCLES] = 0xf,
[PERF_COUNT_HW_INSTRUCTIONS] = 0x100009,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0x4c1090, /* LD_REF_L1 */
[PERF_COUNT_HW_CACHE_MISSES] = 0x3c1088, /* LD_MISS_L1 */
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x230e4, /* BR_ISSUED */
[PERF_COUNT_HW_BRANCH_MISSES] = 0x230e5, /* BR_MPRED_CR */
};
#define C(x) PERF_COUNT_HW_CACHE_##x
/*
* Table of generalized cache-related events.
* 0 means not supported, -1 means nonsensical, other values
* are event codes.
*/
static int power5_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x4c1090, 0x3c1088 },
[C(OP_WRITE)] = { 0x3c1090, 0xc10c3 },
[C(OP_PREFETCH)] = { 0xc70e7, 0 },
},
[C(L1I)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { 0, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x3c309b },
[C(OP_WRITE)] = { 0, 0 },
[C(OP_PREFETCH)] = { 0xc50c3, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x2c4090, 0x800c4 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(ITLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x800c0 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(BPU)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x230e4, 0x230e5 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
};
struct power_pmu power5_pmu = {
.n_counter = 6,
.max_alternatives = MAX_ALT,
.add_fields = 0x7000090000555ull,
.test_adder = 0x3000490000000ull,
.compute_mmcr = power5_compute_mmcr,
.get_constraint = power5_get_constraint,
.get_alternatives = power5_get_alternatives,
.disable_pmc = power5_disable_pmc,
.n_generic = ARRAY_SIZE(power5_generic_events),
.generic_events = power5_generic_events,
.cache_events = &power5_cache_events,
};


@ -0,0 +1,532 @@
/*
* Performance counter support for POWER6 processors.
*
* Copyright 2008-2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/perf_counter.h>
#include <asm/reg.h>
/*
* Bits in event code for POWER6
*/
#define PM_PMC_SH 20 /* PMC number (1-based) for direct events */
#define PM_PMC_MSK 0x7
#define PM_PMC_MSKS (PM_PMC_MSK << PM_PMC_SH)
#define PM_UNIT_SH 16 /* Unit the event comes from (TTMxSEL encoding) */
#define PM_UNIT_MSK 0xf
#define PM_UNIT_MSKS (PM_UNIT_MSK << PM_UNIT_SH)
#define PM_LLAV 0x8000 /* Load lookahead match value */
#define PM_LLA 0x4000 /* Load lookahead match enable */
#define PM_BYTE_SH 12 /* Byte of event bus to use */
#define PM_BYTE_MSK 3
#define PM_SUBUNIT_SH 8 /* Subunit event comes from (NEST_SEL enc.) */
#define PM_SUBUNIT_MSK 7
#define PM_SUBUNIT_MSKS (PM_SUBUNIT_MSK << PM_SUBUNIT_SH)
#define PM_PMCSEL_MSK 0xff /* PMCxSEL value */
#define PM_BUSEVENT_MSK 0xf3700
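/*
 * As an example of this encoding, the LD_REF_L1 code 0x280030 used in
 * the generic and cache event tables below decodes as PMC 2, unit 8,
 * byte 0 and PMCSEL 0x30; its unit bits fall inside PM_BUSEVENT_MSK,
 * so it is treated as an event-bus event.
 */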
/*
* Bits in MMCR1 for POWER6
*/
#define MMCR1_TTM0SEL_SH 60
#define MMCR1_TTMSEL_SH(n) (MMCR1_TTM0SEL_SH - (n) * 4)
#define MMCR1_TTMSEL_MSK 0xf
#define MMCR1_TTMSEL(m, n) (((m) >> MMCR1_TTMSEL_SH(n)) & MMCR1_TTMSEL_MSK)
#define MMCR1_NESTSEL_SH 45
#define MMCR1_NESTSEL_MSK 0x7
#define MMCR1_NESTSEL(m) (((m) >> MMCR1_NESTSEL_SH) & MMCR1_NESTSEL_MSK)
#define MMCR1_PMC1_LLA ((u64)1 << 44)
#define MMCR1_PMC1_LLA_VALUE ((u64)1 << 39)
#define MMCR1_PMC1_ADDR_SEL ((u64)1 << 35)
#define MMCR1_PMC1SEL_SH 24
#define MMCR1_PMCSEL_SH(n) (MMCR1_PMC1SEL_SH - (n) * 8)
#define MMCR1_PMCSEL_MSK 0xff
/*
* Map of which direct events on which PMCs are marked instruction events.
* Indexed by PMCSEL value >> 1.
* Bottom 4 bits are a map of which PMCs are interesting,
* top 4 bits say what sort of event:
* 0 = direct marked event,
* 1 = byte decode event,
* 4 = add/and event (PMC1 -> bits 0 & 4),
* 5 = add/and event (PMC1 -> bits 1 & 5),
* 6 = add/and event (PMC1 -> bits 2 & 6),
* 7 = add/and event (PMC1 -> bits 3 & 7).
*/
static unsigned char direct_event_is_marked[0x60 >> 1] = {
0, /* 00 */
0, /* 02 */
0, /* 04 */
0x07, /* 06 PM_MRK_ST_CMPL, PM_MRK_ST_GPS, PM_MRK_ST_CMPL_INT */
0x04, /* 08 PM_MRK_DFU_FIN */
0x06, /* 0a PM_MRK_IFU_FIN, PM_MRK_INST_FIN */
0, /* 0c */
0, /* 0e */
0x02, /* 10 PM_MRK_INST_DISP */
0x08, /* 12 PM_MRK_LSU_DERAT_MISS */
0, /* 14 */
0, /* 16 */
0x0c, /* 18 PM_THRESH_TIMEO, PM_MRK_INST_FIN */
0x0f, /* 1a PM_MRK_INST_DISP, PM_MRK_{FXU,FPU,LSU}_FIN */
0x01, /* 1c PM_MRK_INST_ISSUED */
0, /* 1e */
0, /* 20 */
0, /* 22 */
0, /* 24 */
0, /* 26 */
0x15, /* 28 PM_MRK_DATA_FROM_L2MISS, PM_MRK_DATA_FROM_L3MISS */
0, /* 2a */
0, /* 2c */
0, /* 2e */
0x4f, /* 30 */
0x7f, /* 32 */
0x4f, /* 34 */
0x5f, /* 36 */
0x6f, /* 38 */
0x4f, /* 3a */
0, /* 3c */
0x08, /* 3e PM_MRK_INST_TIMEO */
0x1f, /* 40 */
0x1f, /* 42 */
0x1f, /* 44 */
0x1f, /* 46 */
0x1f, /* 48 */
0x1f, /* 4a */
0x1f, /* 4c */
0x1f, /* 4e */
0, /* 50 */
0x05, /* 52 PM_MRK_BR_TAKEN, PM_MRK_BR_MPRED */
0x1c, /* 54 PM_MRK_PTEG_FROM_L3MISS, PM_MRK_PTEG_FROM_L2MISS */
0x02, /* 56 PM_MRK_LD_MISS_L1 */
0, /* 58 */
0, /* 5a */
0, /* 5c */
0, /* 5e */
};
/*
* Masks showing for each unit which bits are marked events.
* These masks are in LE order, i.e. 0x00000001 is byte 0, bit 0.
*/
static u32 marked_bus_events[16] = {
0x01000000, /* direct events set 1: byte 3 bit 0 */
0x00010000, /* direct events set 2: byte 2 bit 0 */
0, 0, 0, 0, /* IDU, IFU, nest: nothing */
0x00000088, /* VMX set 1: byte 0 bits 3, 7 */
0x000000c0, /* VMX set 2: byte 0 bits 4-7 */
0x04010000, /* LSU set 1: byte 2 bit 0, byte 3 bit 2 */
0xff010000u, /* LSU set 2: byte 2 bit 0, all of byte 3 */
0, /* LSU set 3 */
0x00000010, /* VMX set 3: byte 0 bit 4 */
0, /* BFP set 1 */
0x00000022, /* BFP set 2: byte 0 bits 1, 5 */
0, 0
};
/*
* Returns 1 if event counts things relating to marked instructions
* and thus needs the MMCRA_SAMPLE_ENABLE bit set, or 0 if not.
*/
static int power6_marked_instr_event(u64 event)
{
int pmc, psel, ptype;
int bit, byte, unit;
u32 mask;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
psel = (event & PM_PMCSEL_MSK) >> 1; /* drop edge/level bit */
if (pmc >= 5)
return 0;
bit = -1;
if (psel < sizeof(direct_event_is_marked)) {
ptype = direct_event_is_marked[psel];
if (pmc == 0 || !(ptype & (1 << (pmc - 1))))
return 0;
ptype >>= 4;
if (ptype == 0)
return 1;
if (ptype == 1)
bit = 0;
else
bit = ptype ^ (pmc - 1);
} else if ((psel & 0x48) == 0x40)
bit = psel & 7;
if (!(event & PM_BUSEVENT_MSK) || bit == -1)
return 0;
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
mask = marked_bus_events[unit];
return (mask >> (byte * 8 + bit)) & 1;
}
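/*
 * For illustration (the encoding here is hypothetical, chosen only to
 * exercise the table above): an event with PMC field 2 and PMCSEL 0x0a
 * gives psel = 5, and direct_event_is_marked[5] = 0x06 has the PMC2 bit
 * set with a type of 0, so the function returns 1 through the direct
 * marked event path.  Bus events instead derive a bit number and test
 * it against marked_bus_events[unit].
 */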
/*
* Assign PMC numbers and compute MMCR1 value for a set of events
*/
static int p6_compute_mmcr(u64 event[], int n_ev,
unsigned int hwc[], u64 mmcr[])
{
u64 mmcr1 = 0;
u64 mmcra = 0;
int i;
unsigned int pmc, ev, b, u, s, psel;
unsigned int ttmset = 0;
unsigned int pmc_inuse = 0;
if (n_ev > 6)
return -1;
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc_inuse & (1 << (pmc - 1)))
return -1; /* collision! */
pmc_inuse |= 1 << (pmc - 1);
}
}
for (i = 0; i < n_ev; ++i) {
ev = event[i];
pmc = (ev >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
--pmc;
} else {
/* can go on any PMC; find a free one */
for (pmc = 0; pmc < 4; ++pmc)
if (!(pmc_inuse & (1 << pmc)))
break;
if (pmc >= 4)
return -1;
pmc_inuse |= 1 << pmc;
}
hwc[i] = pmc;
psel = ev & PM_PMCSEL_MSK;
if (ev & PM_BUSEVENT_MSK) {
/* this event uses the event bus */
b = (ev >> PM_BYTE_SH) & PM_BYTE_MSK;
u = (ev >> PM_UNIT_SH) & PM_UNIT_MSK;
/* check for conflict on this byte of event bus */
if ((ttmset & (1 << b)) && MMCR1_TTMSEL(mmcr1, b) != u)
return -1;
mmcr1 |= (u64)u << MMCR1_TTMSEL_SH(b);
ttmset |= 1 << b;
if (u == 5) {
/* Nest events have a further mux */
s = (ev >> PM_SUBUNIT_SH) & PM_SUBUNIT_MSK;
if ((ttmset & 0x10) &&
MMCR1_NESTSEL(mmcr1) != s)
return -1;
ttmset |= 0x10;
mmcr1 |= (u64)s << MMCR1_NESTSEL_SH;
}
if (0x30 <= psel && psel <= 0x3d) {
/* these need the PMCx_ADDR_SEL bits */
if (b >= 2)
mmcr1 |= MMCR1_PMC1_ADDR_SEL >> pmc;
}
/* bus select values are different for PMC3/4 */
if (pmc >= 2 && (psel & 0x90) == 0x80)
psel ^= 0x20;
}
if (ev & PM_LLA) {
mmcr1 |= MMCR1_PMC1_LLA >> pmc;
if (ev & PM_LLAV)
mmcr1 |= MMCR1_PMC1_LLA_VALUE >> pmc;
}
if (power6_marked_instr_event(event[i]))
mmcra |= MMCRA_SAMPLE_ENABLE;
if (pmc < 4)
mmcr1 |= (u64)psel << MMCR1_PMCSEL_SH(pmc);
}
mmcr[0] = 0;
if (pmc_inuse & 1)
mmcr[0] = MMCR0_PMC1CE;
if (pmc_inuse & 0xe)
mmcr[0] |= MMCR0_PMCjCE;
mmcr[1] = mmcr1;
mmcr[2] = mmcra;
return 0;
}
/*
* Layout of constraint bits:
*
* 0-1 add field: number of uses of PMC1 (max 1)
* 2-3, 4-5, 6-7, 8-9, 10-11: ditto for PMC2, 3, 4, 5, 6
* 12-15 add field: number of uses of PMC1-4 (max 4)
* 16-19 select field: unit on byte 0 of event bus
* 20-23, 24-27, 28-31 ditto for bytes 1, 2, 3
* 32-34 select field: nest (subunit) event selector
*/
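/*
 * Worked example, using event 0x280030 (LD_REF_L1 from the tables
 * below): PMC field 2, unit 8, byte 0, and the event is a bus event,
 * so the routine below produces mask = 0xf8008 and value = 0x81004:
 * a use count for PMC2 in bits 2-3, the unit select for byte 0 in
 * bits 16-19, and the shared PMC1-4 use count in bits 12-15.
 */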
static int p6_get_constraint(u64 event, u64 *maskp, u64 *valp)
{
int pmc, byte, sh, subunit;
u64 mask = 0, value = 0;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 4 && !(event == 0x500009 || event == 0x600005))
return -1;
sh = (pmc - 1) * 2;
mask |= 2 << sh;
value |= 1 << sh;
}
if (event & PM_BUSEVENT_MSK) {
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
sh = byte * 4 + (16 - PM_UNIT_SH);
mask |= PM_UNIT_MSKS << sh;
value |= (u64)(event & PM_UNIT_MSKS) << sh;
if ((event & PM_UNIT_MSKS) == (5 << PM_UNIT_SH)) {
subunit = (event >> PM_SUBUNIT_SH) & PM_SUBUNIT_MSK;
mask |= (u64)PM_SUBUNIT_MSK << 32;
value |= (u64)subunit << 32;
}
}
if (pmc <= 4) {
mask |= 0x8000; /* add field for count of PMC1-4 uses */
value |= 0x1000;
}
*maskp = mask;
*valp = value;
return 0;
}
static int p6_limited_pmc_event(u64 event)
{
int pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
return pmc == 5 || pmc == 6;
}
#define MAX_ALT 4 /* at most 4 alternatives for any event */
static const unsigned int event_alternatives[][MAX_ALT] = {
{ 0x0130e8, 0x2000f6, 0x3000fc }, /* PM_PTEG_RELOAD_VALID */
{ 0x080080, 0x10000d, 0x30000c, 0x4000f0 }, /* PM_LD_MISS_L1 */
{ 0x080088, 0x200054, 0x3000f0 }, /* PM_ST_MISS_L1 */
{ 0x10000a, 0x2000f4, 0x600005 }, /* PM_RUN_CYC */
{ 0x10000b, 0x2000f5 }, /* PM_RUN_COUNT */
{ 0x10000e, 0x400010 }, /* PM_PURR */
{ 0x100010, 0x4000f8 }, /* PM_FLUSH */
{ 0x10001a, 0x200010 }, /* PM_MRK_INST_DISP */
{ 0x100026, 0x3000f8 }, /* PM_TB_BIT_TRANS */
{ 0x100054, 0x2000f0 }, /* PM_ST_FIN */
{ 0x100056, 0x2000fc }, /* PM_L1_ICACHE_MISS */
{ 0x1000f0, 0x40000a }, /* PM_INST_IMC_MATCH_CMPL */
{ 0x1000f8, 0x200008 }, /* PM_GCT_EMPTY_CYC */
{ 0x1000fc, 0x400006 }, /* PM_LSU_DERAT_MISS_CYC */
{ 0x20000e, 0x400007 }, /* PM_LSU_DERAT_MISS */
{ 0x200012, 0x300012 }, /* PM_INST_DISP */
{ 0x2000f2, 0x3000f2 }, /* PM_INST_DISP */
{ 0x2000f8, 0x300010 }, /* PM_EXT_INT */
{ 0x2000fe, 0x300056 }, /* PM_DATA_FROM_L2MISS */
{ 0x2d0030, 0x30001a }, /* PM_MRK_FPU_FIN */
{ 0x30000a, 0x400018 }, /* PM_MRK_INST_FIN */
{ 0x3000f6, 0x40000e }, /* PM_L1_DCACHE_RELOAD_VALID */
{ 0x3000fe, 0x400056 }, /* PM_DATA_FROM_L3MISS */
};
/*
* This could be made more efficient with a binary search on
* a presorted list, if necessary
*/
static int find_alternatives_list(u64 event)
{
int i, j;
unsigned int alt;
for (i = 0; i < ARRAY_SIZE(event_alternatives); ++i) {
if (event < event_alternatives[i][0])
return -1;
for (j = 0; j < MAX_ALT; ++j) {
alt = event_alternatives[i][j];
if (!alt || event < alt)
break;
if (event == alt)
return i;
}
}
return -1;
}
static int p6_get_alternatives(u64 event, unsigned int flags, u64 alt[])
{
int i, j, nlim;
unsigned int psel, pmc;
unsigned int nalt = 1;
u64 aevent;
alt[0] = event;
nlim = p6_limited_pmc_event(event);
/* check the alternatives table */
i = find_alternatives_list(event);
if (i >= 0) {
/* copy out alternatives from list */
for (j = 0; j < MAX_ALT; ++j) {
aevent = event_alternatives[i][j];
if (!aevent)
break;
if (aevent != event)
alt[nalt++] = aevent;
nlim += p6_limited_pmc_event(aevent);
}
} else {
/* Check for alternative ways of computing sum events */
/* PMCSEL 0x32 counter N == PMCSEL 0x34 counter 5-N */
psel = event & (PM_PMCSEL_MSK & ~1); /* ignore edge bit */
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc && (psel == 0x32 || psel == 0x34))
alt[nalt++] = ((event ^ 0x6) & ~PM_PMC_MSKS) |
((5 - pmc) << PM_PMC_SH);
/* PMCSEL 0x38 counter N == PMCSEL 0x3a counter N+/-2 */
if (pmc && (psel == 0x38 || psel == 0x3a))
alt[nalt++] = ((event ^ 0x2) & ~PM_PMC_MSKS) |
((pmc > 2? pmc - 2: pmc + 2) << PM_PMC_SH);
}
if (flags & PPMU_ONLY_COUNT_RUN) {
/*
* We're only counting in RUN state,
* so PM_CYC is equivalent to PM_RUN_CYC,
* PM_INST_CMPL === PM_RUN_INST_CMPL, PM_PURR === PM_RUN_PURR.
* This doesn't include alternatives that don't provide
* any extra flexibility in assigning PMCs (e.g.
* 0x10000a for PM_RUN_CYC vs. 0x1e for PM_CYC).
* Note that even with these additional alternatives
* we never end up with more than 4 alternatives for any event.
*/
j = nalt;
for (i = 0; i < nalt; ++i) {
switch (alt[i]) {
case 0x1e: /* PM_CYC */
alt[j++] = 0x600005; /* PM_RUN_CYC */
++nlim;
break;
case 0x10000a: /* PM_RUN_CYC */
alt[j++] = 0x1e; /* PM_CYC */
break;
case 2: /* PM_INST_CMPL */
alt[j++] = 0x500009; /* PM_RUN_INST_CMPL */
++nlim;
break;
case 0x500009: /* PM_RUN_INST_CMPL */
alt[j++] = 2; /* PM_INST_CMPL */
break;
case 0x10000e: /* PM_PURR */
alt[j++] = 0x4000f4; /* PM_RUN_PURR */
break;
case 0x4000f4: /* PM_RUN_PURR */
alt[j++] = 0x10000e; /* PM_PURR */
break;
}
}
nalt = j;
}
if (!(flags & PPMU_LIMITED_PMC_OK) && nlim) {
/* remove the limited PMC events */
j = 0;
for (i = 0; i < nalt; ++i) {
if (!p6_limited_pmc_event(alt[i])) {
alt[j] = alt[i];
++j;
}
}
nalt = j;
} else if ((flags & PPMU_LIMITED_PMC_REQD) && nlim < nalt) {
/* remove all but the limited PMC events */
j = 0;
for (i = 0; i < nalt; ++i) {
if (p6_limited_pmc_event(alt[i])) {
alt[j] = alt[i];
++j;
}
}
nalt = j;
}
return nalt;
}
static void p6_disable_pmc(unsigned int pmc, u64 mmcr[])
{
/* Set PMCxSEL to 0 to disable PMCx */
if (pmc <= 3)
mmcr[1] &= ~(0xffUL << MMCR1_PMCSEL_SH(pmc));
}
static int power6_generic_events[] = {
[PERF_COUNT_HW_CPU_CYCLES] = 0x1e,
[PERF_COUNT_HW_INSTRUCTIONS] = 2,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0x280030, /* LD_REF_L1 */
[PERF_COUNT_HW_CACHE_MISSES] = 0x30000c, /* LD_MISS_L1 */
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x410a0, /* BR_PRED */
[PERF_COUNT_HW_BRANCH_MISSES] = 0x400052, /* BR_MPRED */
};
#define C(x) PERF_COUNT_HW_CACHE_##x
/*
* Table of generalized cache-related events.
* 0 means not supported, -1 means nonsensical, other values
* are event codes.
* The "DTLB" and "ITLB" events relate to the DERAT and IERAT.
*/
static int power6_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x80082, 0x80080 },
[C(OP_WRITE)] = { 0x80086, 0x80088 },
[C(OP_PREFETCH)] = { 0x810a4, 0 },
},
[C(L1I)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x100056 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { 0x4008c, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x150730, 0x250532 },
[C(OP_WRITE)] = { 0x250432, 0x150432 },
[C(OP_PREFETCH)] = { 0x810a6, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x20000e },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(ITLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x420ce },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(BPU)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x430e6, 0x400052 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
};
struct power_pmu power6_pmu = {
.n_counter = 6,
.max_alternatives = MAX_ALT,
.add_fields = 0x1555,
.test_adder = 0x3000,
.compute_mmcr = p6_compute_mmcr,
.get_constraint = p6_get_constraint,
.get_alternatives = p6_get_alternatives,
.disable_pmc = p6_disable_pmc,
.limited_pmc_event = p6_limited_pmc_event,
.flags = PPMU_LIMITED_PMC5_6 | PPMU_ALT_SIPR,
.n_generic = ARRAY_SIZE(power6_generic_events),
.generic_events = power6_generic_events,
.cache_events = &power6_cache_events,
};

View file

@ -0,0 +1,357 @@
/*
* Performance counter support for POWER7 processors.
*
* Copyright 2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/perf_counter.h>
#include <asm/reg.h>
/*
* Bits in event code for POWER7
*/
#define PM_PMC_SH 16 /* PMC number (1-based) for direct events */
#define PM_PMC_MSK 0xf
#define PM_PMC_MSKS (PM_PMC_MSK << PM_PMC_SH)
#define PM_UNIT_SH 12 /* TTMMUX number and setting - unit select */
#define PM_UNIT_MSK 0xf
#define PM_COMBINE_SH 11 /* Combined event bit */
#define PM_COMBINE_MSK 1
#define PM_COMBINE_MSKS 0x800
#define PM_L2SEL_SH 8 /* L2 event select */
#define PM_L2SEL_MSK 7
#define PM_PMCSEL_MSK 0xff
/*
* Bits in MMCR1 for POWER7
*/
#define MMCR1_TTM0SEL_SH 60
#define MMCR1_TTM1SEL_SH 56
#define MMCR1_TTM2SEL_SH 52
#define MMCR1_TTM3SEL_SH 48
#define MMCR1_TTMSEL_MSK 0xf
#define MMCR1_L2SEL_SH 45
#define MMCR1_L2SEL_MSK 7
#define MMCR1_PMC1_COMBINE_SH 35
#define MMCR1_PMC2_COMBINE_SH 34
#define MMCR1_PMC3_COMBINE_SH 33
#define MMCR1_PMC4_COMBINE_SH 32
#define MMCR1_PMC1SEL_SH 24
#define MMCR1_PMC2SEL_SH 16
#define MMCR1_PMC3SEL_SH 8
#define MMCR1_PMC4SEL_SH 0
#define MMCR1_PMCSEL_SH(n) (MMCR1_PMC1SEL_SH - (n) * 8)
#define MMCR1_PMCSEL_MSK 0xff
/*
* Bits in MMCRA
*/
/*
* Layout of constraint bits:
* 6666555555555544444444443333333333222222222211111111110000000000
* 3210987654321098765432109876543210987654321098765432109876543210
* [ ><><><><><><>
* NC P6P5P4P3P2P1
*
* NC - number of counters
* 15: NC error 0x8000
* 12-14: number of events needing PMC1-4 0x7000
*
* P6
* 11: P6 error 0x800
* 10-11: Count of events needing PMC6
*
* P1..P5
* 0-9: Count of events needing PMC1..PMC5
*/
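/*
 * Worked example, using event 0x400f6 (BR_MPRED from the generic event
 * table below): the PMC field is 4, so the routine below sets a use
 * count for PMC4 (mask 0x80, value 0x40) plus the shared PMC1-4 count
 * field (mask 0x8000, value 0x1000), giving mask = 0x8080 and
 * value = 0x1040.
 */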
static int power7_get_constraint(u64 event, u64 *maskp, u64 *valp)
{
int pmc, sh;
u64 mask = 0, value = 0;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 6)
return -1;
sh = (pmc - 1) * 2;
mask |= 2 << sh;
value |= 1 << sh;
if (pmc >= 5 && !(event == 0x500fa || event == 0x600f4))
return -1;
}
if (pmc < 5) {
/* need a counter from PMC1-4 set */
mask |= 0x8000;
value |= 0x1000;
}
*maskp = mask;
*valp = value;
return 0;
}
#define MAX_ALT 2 /* at most 2 alternatives for any event */
static const unsigned int event_alternatives[][MAX_ALT] = {
{ 0x200f2, 0x300f2 }, /* PM_INST_DISP */
{ 0x200f4, 0x600f4 }, /* PM_RUN_CYC */
{ 0x400fa, 0x500fa }, /* PM_RUN_INST_CMPL */
};
/*
* Scan the alternatives table for a match and return the
* index into the alternatives table if found, else -1.
*/
static int find_alternative(u64 event)
{
int i, j;
for (i = 0; i < ARRAY_SIZE(event_alternatives); ++i) {
if (event < event_alternatives[i][0])
break;
for (j = 0; j < MAX_ALT && event_alternatives[i][j]; ++j)
if (event == event_alternatives[i][j])
return i;
}
return -1;
}
static s64 find_alternative_decode(u64 event)
{
int pmc, psel;
/* this only handles the 4x decode events */
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
psel = event & PM_PMCSEL_MSK;
if ((pmc == 2 || pmc == 4) && (psel & ~7) == 0x40)
return event - (1 << PM_PMC_SH) + 8;
if ((pmc == 1 || pmc == 3) && (psel & ~7) == 0x48)
return event + (1 << PM_PMC_SH) - 8;
return -1;
}
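/*
 * For instance (hypothetical encodings, shown only to illustrate the
 * arithmetic): an event with PMC 2 and PMCSEL 0x42 maps to the same
 * count on PMC 1 with PMCSEL 0x4a, i.e. 0x20042 <-> 0x1004a, and each
 * direction of the mapping is the inverse of the other.
 */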
static int power7_get_alternatives(u64 event, unsigned int flags, u64 alt[])
{
int i, j, nalt = 1;
s64 ae;
alt[0] = event;
nalt = 1;
i = find_alternative(event);
if (i >= 0) {
for (j = 0; j < MAX_ALT; ++j) {
ae = event_alternatives[i][j];
if (ae && ae != event)
alt[nalt++] = ae;
}
} else {
ae = find_alternative_decode(event);
if (ae > 0)
alt[nalt++] = ae;
}
if (flags & PPMU_ONLY_COUNT_RUN) {
/*
* We're only counting in RUN state,
* so PM_CYC is equivalent to PM_RUN_CYC
* and PM_INST_CMPL === PM_RUN_INST_CMPL.
* This doesn't include alternatives that don't provide
* any extra flexibility in assigning PMCs.
*/
j = nalt;
for (i = 0; i < nalt; ++i) {
switch (alt[i]) {
case 0x1e: /* PM_CYC */
alt[j++] = 0x600f4; /* PM_RUN_CYC */
break;
case 0x600f4: /* PM_RUN_CYC */
alt[j++] = 0x1e;
break;
case 0x2: /* PM_PPC_CMPL */
alt[j++] = 0x500fa; /* PM_RUN_INST_CMPL */
break;
case 0x500fa: /* PM_RUN_INST_CMPL */
alt[j++] = 0x2; /* PM_PPC_CMPL */
break;
}
}
nalt = j;
}
return nalt;
}
/*
* Returns 1 if event counts things relating to marked instructions
* and thus needs the MMCRA_SAMPLE_ENABLE bit set, or 0 if not.
*/
static int power7_marked_instr_event(u64 event)
{
int pmc, psel;
int unit;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
psel = event & PM_PMCSEL_MSK & ~1; /* trim off edge/level bit */
if (pmc >= 5)
return 0;
switch (psel >> 4) {
case 2:
return pmc == 2 || pmc == 4;
case 3:
if (psel == 0x3c)
return pmc == 1;
if (psel == 0x3e)
return pmc != 2;
return 1;
case 4:
case 5:
return unit == 0xd;
case 6:
if (psel == 0x64)
return pmc >= 3;
case 8:
return unit == 0xd;
}
return 0;
}
static int power7_compute_mmcr(u64 event[], int n_ev,
unsigned int hwc[], u64 mmcr[])
{
u64 mmcr1 = 0;
u64 mmcra = 0;
unsigned int pmc, unit, combine, l2sel, psel;
unsigned int pmc_inuse = 0;
int i;
/* First pass to count resource use */
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 6)
return -1;
if (pmc_inuse & (1 << (pmc - 1)))
return -1;
pmc_inuse |= 1 << (pmc - 1);
}
}
/* Second pass: assign PMCs, set all MMCR1 fields */
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
combine = (event[i] >> PM_COMBINE_SH) & PM_COMBINE_MSK;
l2sel = (event[i] >> PM_L2SEL_SH) & PM_L2SEL_MSK;
psel = event[i] & PM_PMCSEL_MSK;
if (!pmc) {
/* Bus event or any-PMC direct event */
for (pmc = 0; pmc < 4; ++pmc) {
if (!(pmc_inuse & (1 << pmc)))
break;
}
if (pmc >= 4)
return -1;
pmc_inuse |= 1 << pmc;
} else {
/* Direct or decoded event */
--pmc;
}
if (pmc <= 3) {
mmcr1 |= (u64) unit << (MMCR1_TTM0SEL_SH - 4 * pmc);
mmcr1 |= (u64) combine << (MMCR1_PMC1_COMBINE_SH - pmc);
mmcr1 |= psel << MMCR1_PMCSEL_SH(pmc);
if (unit == 6) /* L2 events */
mmcr1 |= (u64) l2sel << MMCR1_L2SEL_SH;
}
if (power7_marked_instr_event(event[i]))
mmcra |= MMCRA_SAMPLE_ENABLE;
hwc[i] = pmc;
}
/* Return MMCRx values */
mmcr[0] = 0;
if (pmc_inuse & 1)
mmcr[0] = MMCR0_PMC1CE;
if (pmc_inuse & 0x3e)
mmcr[0] |= MMCR0_PMCjCE;
mmcr[1] = mmcr1;
mmcr[2] = mmcra;
return 0;
}
static void power7_disable_pmc(unsigned int pmc, u64 mmcr[])
{
if (pmc <= 3)
mmcr[1] &= ~(0xffULL << MMCR1_PMCSEL_SH(pmc));
}
static int power7_generic_events[] = {
[PERF_COUNT_HW_CPU_CYCLES] = 0x1e,
[PERF_COUNT_HW_INSTRUCTIONS] = 2,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0xc880, /* LD_REF_L1_LSU */
[PERF_COUNT_HW_CACHE_MISSES] = 0x400f0, /* LD_MISS_L1 */
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x10068, /* BRU_FIN */
[PERF_COUNT_HW_BRANCH_MISSES] = 0x400f6, /* BR_MPRED */
};
#define C(x) PERF_COUNT_HW_CACHE_##x
/*
* Table of generalized cache-related events.
* 0 means not supported, -1 means nonsensical, other values
* are event codes.
*/
static int power7_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x400f0, 0xc880 },
[C(OP_WRITE)] = { 0, 0x300f0 },
[C(OP_PREFETCH)] = { 0xd8b8, 0 },
},
[C(L1I)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x200fc },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { 0x408a, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x6080, 0x6084 },
[C(OP_WRITE)] = { 0x6082, 0x6086 },
[C(OP_PREFETCH)] = { 0, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x300fc },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(ITLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x400fc },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(BPU)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x10068, 0x400f6 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
};
struct power_pmu power7_pmu = {
.n_counter = 6,
.max_alternatives = MAX_ALT + 1,
.add_fields = 0x1555ull,
.test_adder = 0x3000ull,
.compute_mmcr = power7_compute_mmcr,
.get_constraint = power7_get_constraint,
.get_alternatives = power7_get_alternatives,
.disable_pmc = power7_disable_pmc,
.n_generic = ARRAY_SIZE(power7_generic_events),
.generic_events = power7_generic_events,
.cache_events = &power7_cache_events,
};

View file

@ -0,0 +1,482 @@
/*
* Performance counter support for PPC970-family processors.
*
* Copyright 2008-2009 Paul Mackerras, IBM Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/string.h>
#include <linux/perf_counter.h>
#include <asm/reg.h>
/*
* Bits in event code for PPC970
*/
#define PM_PMC_SH 12 /* PMC number (1-based) for direct events */
#define PM_PMC_MSK 0xf
#define PM_UNIT_SH 8 /* TTMMUX number and setting - unit select */
#define PM_UNIT_MSK 0xf
#define PM_SPCSEL_SH 6
#define PM_SPCSEL_MSK 3
#define PM_BYTE_SH 4 /* Byte number of event bus to use */
#define PM_BYTE_MSK 3
#define PM_PMCSEL_MSK 0xf
/* Values in PM_UNIT field */
#define PM_NONE 0
#define PM_FPU 1
#define PM_VPU 2
#define PM_ISU 3
#define PM_IFU 4
#define PM_IDU 5
#define PM_STS 6
#define PM_LSU0 7
#define PM_LSU1U 8
#define PM_LSU1L 9
#define PM_LASTUNIT 9
/*
* Bits in MMCR0 for PPC970
*/
#define MMCR0_PMC1SEL_SH 8
#define MMCR0_PMC2SEL_SH 1
#define MMCR_PMCSEL_MSK 0x1f
/*
* Bits in MMCR1 for PPC970
*/
#define MMCR1_TTM0SEL_SH 62
#define MMCR1_TTM1SEL_SH 59
#define MMCR1_TTM3SEL_SH 53
#define MMCR1_TTMSEL_MSK 3
#define MMCR1_TD_CP_DBG0SEL_SH 50
#define MMCR1_TD_CP_DBG1SEL_SH 48
#define MMCR1_TD_CP_DBG2SEL_SH 46
#define MMCR1_TD_CP_DBG3SEL_SH 44
#define MMCR1_PMC1_ADDER_SEL_SH 39
#define MMCR1_PMC2_ADDER_SEL_SH 38
#define MMCR1_PMC6_ADDER_SEL_SH 37
#define MMCR1_PMC5_ADDER_SEL_SH 36
#define MMCR1_PMC8_ADDER_SEL_SH 35
#define MMCR1_PMC7_ADDER_SEL_SH 34
#define MMCR1_PMC3_ADDER_SEL_SH 33
#define MMCR1_PMC4_ADDER_SEL_SH 32
#define MMCR1_PMC3SEL_SH 27
#define MMCR1_PMC4SEL_SH 22
#define MMCR1_PMC5SEL_SH 17
#define MMCR1_PMC6SEL_SH 12
#define MMCR1_PMC7SEL_SH 7
#define MMCR1_PMC8SEL_SH 2
static short mmcr1_adder_bits[8] = {
MMCR1_PMC1_ADDER_SEL_SH,
MMCR1_PMC2_ADDER_SEL_SH,
MMCR1_PMC3_ADDER_SEL_SH,
MMCR1_PMC4_ADDER_SEL_SH,
MMCR1_PMC5_ADDER_SEL_SH,
MMCR1_PMC6_ADDER_SEL_SH,
MMCR1_PMC7_ADDER_SEL_SH,
MMCR1_PMC8_ADDER_SEL_SH
};
/*
* Bits in MMCRA
*/
/*
* Layout of constraint bits:
* 6666555555555544444444443333333333222222222211111111110000000000
* 3210987654321098765432109876543210987654321098765432109876543210
* <><><>[ >[ >[ >< >< >< >< ><><><><><><><><>
* SPT0T1 UC PS1 PS2 B0 B1 B2 B3 P1P2P3P4P5P6P7P8
*
* SP - SPCSEL constraint
* 48-49: SPCSEL value 0x3_0000_0000_0000
*
* T0 - TTM0 constraint
* 46-47: TTM0SEL value (0=FPU, 2=IFU, 3=VPU) 0xC000_0000_0000
*
* T1 - TTM1 constraint
* 44-45: TTM1SEL value (0=IDU, 3=STS) 0x3000_0000_0000
*
* UC - unit constraint: can't have all three of FPU|IFU|VPU, ISU, IDU|STS
* 43: UC3 error 0x0800_0000_0000
* 42: FPU|IFU|VPU events needed 0x0400_0000_0000
* 41: ISU events needed 0x0200_0000_0000
* 40: IDU|STS events needed 0x0100_0000_0000
*
* PS1
* 39: PS1 error 0x0080_0000_0000
* 36-38: count of events needing PMC1/2/5/6 0x0070_0000_0000
*
* PS2
* 35: PS2 error 0x0008_0000_0000
* 32-34: count of events needing PMC3/4/7/8 0x0007_0000_0000
*
* B0
* 28-31: Byte 0 event source 0xf000_0000
* Encoding as for the event code
*
* B1, B2, B3
* 24-27, 20-23, 16-19: Byte 1, 2, 3 event sources
*
* P1
* 15: P1 error 0x8000
* 14-15: Count of events needing PMC1
*
* P2..P8
* 0-13: Count of events needing PMC2..PMC8
*/
static unsigned char direct_marked_event[8] = {
(1<<2) | (1<<3), /* PMC1: PM_MRK_GRP_DISP, PM_MRK_ST_CMPL */
(1<<3) | (1<<5), /* PMC2: PM_THRESH_TIMEO, PM_MRK_BRU_FIN */
(1<<3) | (1<<5), /* PMC3: PM_MRK_ST_CMPL_INT, PM_MRK_VMX_FIN */
(1<<4) | (1<<5), /* PMC4: PM_MRK_GRP_CMPL, PM_MRK_CRU_FIN */
(1<<4) | (1<<5), /* PMC5: PM_GRP_MRK, PM_MRK_GRP_TIMEO */
(1<<3) | (1<<4) | (1<<5),
/* PMC6: PM_MRK_ST_STS, PM_MRK_FXU_FIN, PM_MRK_GRP_ISSUED */
(1<<4) | (1<<5), /* PMC7: PM_MRK_FPU_FIN, PM_MRK_INST_FIN */
(1<<4) /* PMC8: PM_MRK_LSU_FIN */
};
/*
* Returns 1 if event counts things relating to marked instructions
* and thus needs the MMCRA_SAMPLE_ENABLE bit set, or 0 if not.
*/
static int p970_marked_instr_event(u64 event)
{
int pmc, psel, unit, byte, bit;
unsigned int mask;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
psel = event & PM_PMCSEL_MSK;
if (pmc) {
if (direct_marked_event[pmc - 1] & (1 << psel))
return 1;
if (psel == 0) /* add events */
bit = (pmc <= 4)? pmc - 1: 8 - pmc;
else if (psel == 7 || psel == 13) /* decode events */
bit = 4;
else
return 0;
} else
bit = psel;
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
mask = 0;
switch (unit) {
case PM_VPU:
mask = 0x4c; /* byte 0 bits 2,3,6 */
break;
case PM_LSU0:
/* byte 2 bits 0,2,3,4,6; all of byte 1 */
mask = 0x085dff00;
break;
case PM_LSU1L:
mask = 0x50 << 24; /* byte 3 bits 4,6 */
break;
}
return (mask >> (byte * 8 + bit)) & 1;
}
/* Masks and values for using events from the various units */
static u64 unit_cons[PM_LASTUNIT+1][2] = {
[PM_FPU] = { 0xc80000000000ull, 0x040000000000ull },
[PM_VPU] = { 0xc80000000000ull, 0xc40000000000ull },
[PM_ISU] = { 0x080000000000ull, 0x020000000000ull },
[PM_IFU] = { 0xc80000000000ull, 0x840000000000ull },
[PM_IDU] = { 0x380000000000ull, 0x010000000000ull },
[PM_STS] = { 0x380000000000ull, 0x310000000000ull },
};
static int p970_get_constraint(u64 event, u64 *maskp, u64 *valp)
{
int pmc, byte, unit, sh, spcsel;
u64 mask = 0, value = 0;
int grp = -1;
pmc = (event >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc > 8)
return -1;
sh = (pmc - 1) * 2;
mask |= 2 << sh;
value |= 1 << sh;
grp = ((pmc - 1) >> 1) & 1;
}
unit = (event >> PM_UNIT_SH) & PM_UNIT_MSK;
if (unit) {
if (unit > PM_LASTUNIT)
return -1;
mask |= unit_cons[unit][0];
value |= unit_cons[unit][1];
byte = (event >> PM_BYTE_SH) & PM_BYTE_MSK;
/*
* Bus events on bytes 0 and 2 can be counted
* on PMC1/2/5/6; bytes 1 and 3 on PMC3/4/7/8.
*/
if (!pmc)
grp = byte & 1;
/* Set byte lane select field */
mask |= 0xfULL << (28 - 4 * byte);
value |= (u64)unit << (28 - 4 * byte);
}
if (grp == 0) {
/* increment PMC1/2/5/6 field */
mask |= 0x8000000000ull;
value |= 0x1000000000ull;
} else if (grp == 1) {
/* increment PMC3/4/7/8 field */
mask |= 0x800000000ull;
value |= 0x100000000ull;
}
spcsel = (event >> PM_SPCSEL_SH) & PM_SPCSEL_MSK;
if (spcsel) {
mask |= 3ull << 48;
value |= (u64)spcsel << 48;
}
*maskp = mask;
*valp = value;
return 0;
}
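/*
 * Worked example, using event 0x431 (PM_BR_ISSUED from the generic
 * event table below): no PMC is specified, the unit is the IFU and the
 * byte lane is 3, so the constraint requests the IFU via
 * unit_cons[PM_IFU], sets the byte 3 source field (bits 16-19) to 4 and
 * bumps the PMC3/4/7/8 use count, giving mask = 0xc808000f0000 and
 * value = 0x840100040000.
 */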
static int p970_get_alternatives(u64 event, unsigned int flags, u64 alt[])
{
alt[0] = event;
/* 2 alternatives for LSU empty */
if (event == 0x2002 || event == 0x3002) {
alt[1] = event ^ 0x1000;
return 2;
}
return 1;
}
static int p970_compute_mmcr(u64 event[], int n_ev,
unsigned int hwc[], u64 mmcr[])
{
u64 mmcr0 = 0, mmcr1 = 0, mmcra = 0;
unsigned int pmc, unit, byte, psel;
unsigned int ttm, grp;
unsigned int pmc_inuse = 0;
unsigned int pmc_grp_use[2];
unsigned char busbyte[4];
unsigned char unituse[16];
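/*
 * unitmap[] encodes, for each unit, which TTM mux it feeds and the
 * TTMxSEL value to program: bit 2 selects TTM1 rather than TTM0, and
 * the select value for TTM0 units is pre-shifted by 3 so that the
 * single shift by MMCR1_TTM1SEL_SH below lands it in TTM0SEL.
 */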
unsigned char unitmap[] = { 0, 0<<3, 3<<3, 1<<3, 2<<3, 0|4, 3|4 };
unsigned char ttmuse[2];
unsigned char pmcsel[8];
int i;
int spcsel;
if (n_ev > 8)
return -1;
/* First pass to count resource use */
pmc_grp_use[0] = pmc_grp_use[1] = 0;
memset(busbyte, 0, sizeof(busbyte));
memset(unituse, 0, sizeof(unituse));
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
if (pmc) {
if (pmc_inuse & (1 << (pmc - 1)))
return -1;
pmc_inuse |= 1 << (pmc - 1);
/* count 1/2/5/6 vs 3/4/7/8 use */
++pmc_grp_use[((pmc - 1) >> 1) & 1];
}
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
if (unit) {
if (unit > PM_LASTUNIT)
return -1;
if (!pmc)
++pmc_grp_use[byte & 1];
if (busbyte[byte] && busbyte[byte] != unit)
return -1;
busbyte[byte] = unit;
unituse[unit] = 1;
}
}
if (pmc_grp_use[0] > 4 || pmc_grp_use[1] > 4)
return -1;
/*
* Assign resources and set multiplexer selects.
*
* PM_ISU can go either on TTM0 or TTM1, but that's the only
* choice we have to deal with.
*/
if (unituse[PM_ISU] &
(unituse[PM_FPU] | unituse[PM_IFU] | unituse[PM_VPU]))
unitmap[PM_ISU] = 2 | 4; /* move ISU to TTM1 */
/* Set TTM[01]SEL fields. */
ttmuse[0] = ttmuse[1] = 0;
for (i = PM_FPU; i <= PM_STS; ++i) {
if (!unituse[i])
continue;
ttm = unitmap[i];
++ttmuse[(ttm >> 2) & 1];
mmcr1 |= (u64)(ttm & ~4) << MMCR1_TTM1SEL_SH;
}
/* Check only one unit per TTMx */
if (ttmuse[0] > 1 || ttmuse[1] > 1)
return -1;
/* Set byte lane select fields and TTM3SEL. */
for (byte = 0; byte < 4; ++byte) {
unit = busbyte[byte];
if (!unit)
continue;
if (unit <= PM_STS)
ttm = (unitmap[unit] >> 2) & 1;
else if (unit == PM_LSU0)
ttm = 2;
else {
ttm = 3;
if (unit == PM_LSU1L && byte >= 2)
mmcr1 |= 1ull << (MMCR1_TTM3SEL_SH + 3 - byte);
}
mmcr1 |= (u64)ttm << (MMCR1_TD_CP_DBG0SEL_SH - 2 * byte);
}
/* Second pass: assign PMCs, set PMCxSEL and PMCx_ADDER_SEL fields */
memset(pmcsel, 0x8, sizeof(pmcsel)); /* 8 means don't count */
for (i = 0; i < n_ev; ++i) {
pmc = (event[i] >> PM_PMC_SH) & PM_PMC_MSK;
unit = (event[i] >> PM_UNIT_SH) & PM_UNIT_MSK;
byte = (event[i] >> PM_BYTE_SH) & PM_BYTE_MSK;
psel = event[i] & PM_PMCSEL_MSK;
if (!pmc) {
/* Bus event or any-PMC direct event */
if (unit)
psel |= 0x10 | ((byte & 2) << 2);
else
psel |= 8;
for (pmc = 0; pmc < 8; ++pmc) {
if (pmc_inuse & (1 << pmc))
continue;
grp = (pmc >> 1) & 1;
if (unit) {
if (grp == (byte & 1))
break;
} else if (pmc_grp_use[grp] < 4) {
++pmc_grp_use[grp];
break;
}
}
pmc_inuse |= 1 << pmc;
} else {
/* Direct event */
--pmc;
if (psel == 0 && (byte & 2))
/* add events on higher-numbered bus */
mmcr1 |= 1ull << mmcr1_adder_bits[pmc];
}
pmcsel[pmc] = psel;
hwc[i] = pmc;
spcsel = (event[i] >> PM_SPCSEL_SH) & PM_SPCSEL_MSK;
mmcr1 |= spcsel;
if (p970_marked_instr_event(event[i]))
mmcra |= MMCRA_SAMPLE_ENABLE;
}
for (pmc = 0; pmc < 2; ++pmc)
mmcr0 |= pmcsel[pmc] << (MMCR0_PMC1SEL_SH - 7 * pmc);
for (; pmc < 8; ++pmc)
mmcr1 |= (u64)pmcsel[pmc] << (MMCR1_PMC3SEL_SH - 5 * (pmc - 2));
if (pmc_inuse & 1)
mmcr0 |= MMCR0_PMC1CE;
if (pmc_inuse & 0xfe)
mmcr0 |= MMCR0_PMCjCE;
mmcra |= 0x2000; /* mark only one IOP per PPC instruction */
/* Return MMCRx values */
mmcr[0] = mmcr0;
mmcr[1] = mmcr1;
mmcr[2] = mmcra;
return 0;
}
static void p970_disable_pmc(unsigned int pmc, u64 mmcr[])
{
int shift, i;
if (pmc <= 1) {
shift = MMCR0_PMC1SEL_SH - 7 * pmc;
i = 0;
} else {
shift = MMCR1_PMC3SEL_SH - 5 * (pmc - 2);
i = 1;
}
/*
* Setting the PMCxSEL field to 0x08 disables PMC x.
*/
mmcr[i] = (mmcr[i] & ~(0x1fUL << shift)) | (0x08UL << shift);
}
static int ppc970_generic_events[] = {
[PERF_COUNT_HW_CPU_CYCLES] = 7,
[PERF_COUNT_HW_INSTRUCTIONS] = 1,
[PERF_COUNT_HW_CACHE_REFERENCES] = 0x8810, /* PM_LD_REF_L1 */
[PERF_COUNT_HW_CACHE_MISSES] = 0x3810, /* PM_LD_MISS_L1 */
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x431, /* PM_BR_ISSUED */
[PERF_COUNT_HW_BRANCH_MISSES] = 0x327, /* PM_GRP_BR_MPRED */
};
#define C(x) PERF_COUNT_HW_CACHE_##x
/*
* Table of generalized cache-related events.
* 0 means not supported, -1 means nonsensical, other values
* are event codes.
*/
static int ppc970_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x8810, 0x3810 },
[C(OP_WRITE)] = { 0x7810, 0x813 },
[C(OP_PREFETCH)] = { 0x731, 0 },
},
[C(L1I)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { 0, 0 },
},
[C(LL)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0 },
[C(OP_WRITE)] = { 0, 0 },
[C(OP_PREFETCH)] = { 0x733, 0 },
},
[C(DTLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x704 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(ITLB)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0, 0x700 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
[C(BPU)] = { /* RESULT_ACCESS RESULT_MISS */
[C(OP_READ)] = { 0x431, 0x327 },
[C(OP_WRITE)] = { -1, -1 },
[C(OP_PREFETCH)] = { -1, -1 },
},
};
struct power_pmu ppc970_pmu = {
.n_counter = 8,
.max_alternatives = 2,
.add_fields = 0x001100005555ull,
.test_adder = 0x013300000000ull,
.compute_mmcr = p970_compute_mmcr,
.get_constraint = p970_get_constraint,
.get_alternatives = p970_get_alternatives,
.disable_pmc = p970_disable_pmc,
.n_generic = ARRAY_SIZE(ppc970_generic_events),
.generic_events = ppc970_generic_events,
.cache_events = &ppc970_cache_events,
};

View file

@ -41,6 +41,12 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)
return !!(v->arch.pending_exceptions);
}
int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
{
/* do real check here */
return 1;
}
int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
{
return !(v->arch.msr & MSR_WE);

View file

@ -29,6 +29,7 @@
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/kdebug.h>
#include <linux/perf_counter.h>
#include <asm/firmware.h>
#include <asm/page.h>
@ -170,6 +171,8 @@ int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
die("Weird page fault", regs, SIGSEGV);
}
perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0, regs, address);
/* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in the
* kernel and should generate an OOPS. Unfortunately, in the case of an
@ -309,6 +312,8 @@ good_area:
}
if (ret & VM_FAULT_MAJOR) {
current->maj_flt++;
perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, 0,
regs, address);
#ifdef CONFIG_PPC_SMLPAR
if (firmware_has_feature(FW_FEATURE_CMO)) {
preempt_disable();
@ -316,8 +321,11 @@ good_area:
preempt_enable();
}
#endif
} else
} else {
current->min_flt++;
perf_swcounter_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, 0,
regs, address);
}
up_read(&mm->mmap_sem);
return 0;

View file

@ -1,6 +1,7 @@
config PPC64
bool "64-bit kernel"
default n
select HAVE_PERF_COUNTERS
help
This option selects whether a 32-bit or a 64-bit kernel
will be built.

View file

@ -250,7 +250,7 @@ axon_ram_probe(struct of_device *device, const struct of_device_id *device_id)
set_capacity(bank->disk, bank->size >> AXON_RAM_SECTOR_SHIFT);
blk_queue_make_request(bank->disk->queue, axon_ram_make_request);
blk_queue_hardsect_size(bank->disk->queue, AXON_RAM_SECTOR_SIZE);
blk_queue_logical_block_size(bank->disk->queue, AXON_RAM_SECTOR_SIZE);
add_disk(bank->disk);
bank->irq_id = irq_of_parse_and_map(device->node, 0);

Some files were not shown because too many files changed in this diff