Commit graph

18 Commits

Author SHA1 Message Date
Linus Torvalds f94c128eef metag: Changes for 4.12
Merge tag 'metag-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag

Pull metag updates from James Hogan:
 "These patches primarily make some usercopy improvements (following on
  from the recent usercopy fixes):

   - reformat and simplify rapf copy loops

   - add 64-bit get_user support

  And fix a couple more uaccess issues, partly pointed out by Al:

   - fix serious shortcomings in access_ok()

   - fix strncpy_from_user() address validation

  Also included is a trivial removal of a redundant increment"

* tag 'metag-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag:
  metag/mm: Drop pointless increment
  metag/uaccess: Check access_ok in strncpy_from_user
  metag/uaccess: Fix access_ok()
  metag/usercopy: Add 64-bit get_user support
  metag/usercopy: Simplify rapf loop fixup corner case
  metag/usercopy: Reformat rapf loop inline asm
2017-05-10 11:40:36 -07:00
James Hogan 840db3f938 metag/usercopy: Switch to RAW_COPY_USER
Switch to using raw user copy instead of providing metag specific
[__]copy_{to,from}_user[_inatomic](). This simplifies the metag
uaccess.h and allows us to take advantage of extra checking in the
generic versions.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-metag@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-04-05 11:43:57 -04:00
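
For reference, here is a minimal sketch of the contract an architecture takes on when it selects RAW_COPY_USER (the generic interface of that kernel era, not metag-specific code): the arch supplies only the two raw primitives, and the generic linux/uaccess.h builds copy_{to,from}_user() with its extra checking on top of them.

/*
 * Illustrative sketch of the RAW_COPY_USER contract, not metag source.
 * Both primitives return the number of bytes that could NOT be copied,
 * so 0 means complete success.
 */
unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n);
unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n);
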
James Hogan d3ba2e922d metag/usercopy: Add 64-bit get_user support
Metag already supports 64-bit put_user, so add support for 64-bit
get_user too so that the test_user_copy module can test both.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
2017-04-05 15:27:32 +01:00
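
As a usage illustration (a hypothetical helper, not part of the patch), 64-bit get_user support lets callers fetch a u64 from user space in a single call:

#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical helper, not from the patch: fetch a u64 from user space. */
static int read_user_u64(const u64 __user *uptr, u64 *out)
{
        u64 val;

        if (get_user(val, uptr))        /* non-zero (-EFAULT) on failure */
                return -EFAULT;
        *out = val;
        return 0;
}
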
James Hogan fc1b759ae4 metag/usercopy: Simplify rapf loop fixup corner case
The final fixup in the rapf loops must handle a corner case due to the
intermediate decrementing of the destination pointer before writing the
last element to it again and re-incrementing it. This decrement (and the
associated increment in the fixup code) can be easily avoided by using
SETL/SETD with an offset of -8/-4.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
2017-04-05 15:25:08 +01:00
James Hogan 049520dcb3 metag/usercopy: Reformat rapf loop inline asm
Reformat rapf loop inline assembly to make it more readable and easier
to modify in future.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
2017-04-05 15:25:08 +01:00
James Hogan b884a190af metag/usercopy: Add missing fixups
The rapf copy loops in the Meta usercopy code are missing some extable
entries for HTP cores with unaligned access checking enabled, where
faults occur on the instruction immediately after the faulting access.

Add the fixup labels and extable entries for these cases so that corner
case user copy failures don't cause kernel crashes.

Fixes: 373cd784d0 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 15:25:07 +01:00
James Hogan 2c0b1df88b metag/usercopy: Fix src fixup in from user rapf loops
The fixup code to rewind the source pointer in
__asm_copy_from_user_{32,64}bit_rapf_loop() always rewound the source by
a single unit (4 or 8 bytes); however, this is insufficient if the fault
didn't occur on the first load in the loop, as the source pointer will
have been incremented but nothing will have been stored until all 4
register [pairs] are loaded.

Read the LSM_STEP field of TXSTATUS (which is already loaded into a
register), a bit like the copy_to_user versions, to determine how many
iterations of MGET[DL] have taken place, all of which need rewinding.

Fixes: 373cd784d0 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 15:25:07 +01:00
James Hogan fd40eee129 metag/usercopy: Set flags before ADDZ
The fixup code for the copy_to_user rapf loops reads TXStatus.LSM_STEP
to decide how far to rewind the source pointer. There is a special case
for the last execution of an MGETL/MGETD, since it leaves LSM_STEP=0
even though the number of MGETLs/MGETDs attempted was 4. This uses ADDZ
which is conditional upon the Z condition flag, but the AND instruction
which masked the TXStatus.LSM_STEP field didn't set the condition flags
based on the result.

Fix that now by using ANDS which does set the flags, and also marking
the condition codes as clobbered by the inline assembly.

Fixes: 373cd784d0 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 15:25:06 +01:00
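
Taken together with the previous commit, the rewind arithmetic both fixups rely on can be modelled in plain C roughly as below (a userspace model of the descriptions above; the function names are illustrative). A masked LSM_STEP value of 0 after the final MGETL/MGETD means all 4 loads were issued, which is the case the conditional ADDZ handles and why the masking AND must be an ANDS that actually sets the Z flag.

/* Userspace model of the rewind logic described above (illustrative names). */
static unsigned int rewind_iterations(unsigned int lsm_step)
{
        return lsm_step ? lsm_step : 4; /* 0 means the full set of 4 loads ran */
}

/* Bytes to step the source pointer back: every issued load, not just one. */
static unsigned long rewind_bytes(unsigned int lsm_step, unsigned int unit_bytes)
{
        return (unsigned long)rewind_iterations(lsm_step) * unit_bytes; /* unit_bytes: 4 or 8 */
}
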
James Hogan 563ddc1076 metag/usercopy: Zero rest of buffer from copy_from_user
Currently we try to zero the destination for a failed read from userland
in fixup code in the usercopy.c macros. The rest of the destination
buffer is then zeroed from __copy_user_zeroing(), which is used for both
copy_from_user() and __copy_from_user().

Unfortunately we fail to zero in the fixup code as D1Ar1 is set to 0
before the fixup code entry labels, and __copy_from_user() shouldn't even
be zeroing the rest of the buffer.

Move the zeroing out into copy_from_user() and rename
__copy_user_zeroing() to raw_copy_from_user() since it no longer does
any zeroing. This also conveniently matches the name needed for
RAW_COPY_USER support in a later patch.

Fixes: 373cd784d0 ("metag: Memory handling")
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 15:25:02 +01:00
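
A minimal sketch of the split this describes (illustrative shape only, not the exact metag diff): the raw copy merely reports how many bytes were left uncopied, and copy_from_user() alone zeroes that tail so callers never see stale kernel memory.

/* Illustrative shape only, not the exact metag code. */
static inline unsigned long copy_from_user(void *pdst, const void __user *psrc,
                                           unsigned long n)
{
        unsigned long res = n;

        if (likely(access_ok(VERIFY_READ, psrc, n)))
                res = raw_copy_from_user(pdst, psrc, n); /* bytes NOT copied */
        if (unlikely(res))
                memset(pdst + n - res, 0, res); /* zero the tail here, not in the raw copy */
        return res;
}
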
James Hogan fb8ea062a8 metag/usercopy: Add early abort to copy_to_user
When copying to userland on Meta, if any faults are encountered,
immediately abort the copy instead of continuing on and repeatedly
faulting and, worse, potentially copying further bytes successfully to
subsequent valid pages.

Fixes: 373cd784d0 ("metag: Memory handling")
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 14:49:42 +01:00
James Hogan 2257211942 metag/usercopy: Fix alignment error checking
Fix the error checking of the alignment adjustment code in
raw_copy_from_user(), which mistakenly considers it safe to skip the
error check when aligning the source buffer on a 2 or 4 byte boundary.

If the destination buffer was unaligned it may have started to copy
using byte or word accesses, which could well be at the start of a new
(valid) source page. This would result in it appearing to have copied 1
or 2 bytes at the end of the first (invalid) page rather than none at
all.

Fixes: 373cd784d0 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 14:49:36 +01:00
James Hogan ef62a2d81f metag/usercopy: Drop unused macros
Metag's lib/usercopy.c has a bunch of copy_from_user macros for larger
copies of between 5 and 16 bytes which are completely unused. Before
fixing the zeroing, let's drop these macros so there is less to fix.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
2017-04-05 14:49:26 +01:00
Andrea Gelmini 986724dd35 metag: Fix typos
Fix typos in the metag architecture.

[james.hogan@imgtec.com: squashed patches and fixed "detailed"]

Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
2016-07-15 09:55:49 +01:00
James Hogan c20eb0f1d0 metag: move EXPORT_SYMBOL(csum_partial) to metag_ksyms.c
Move EXPORT_SYMBOL(csum_partial) from lib/checksum.c into metag_ksyms.c
so that it doesn't get omitted by the static linker if it's not used by
any other statically linked code, which can result in undefined symbols
when building modules.

For example a randconfig caused the following error:
ERROR: "csum_partial" [fs/reiserfs/reiserfs.ko] undefined!

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-07-04 10:00:02 +01:00
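
Illustratively, the resulting arrangement looks something like the excerpt below (simplified; the include is indicative): the export lives in an always-built file, so the symbol is kept for modules even when no built-in code references it.

/* arch/metag/kernel/metag_ksyms.c (simplified, illustrative excerpt) */
#include <linux/export.h>
#include <asm/checksum.h>       /* declares csum_partial() */

EXPORT_SYMBOL(csum_partial);
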
James Hogan 9da3ee9aa8 metag: move usercopy.c exports out of metag_ksyms.c
It's less error-prone to have function symbols exported immediately
after the function rather than in metag_ksyms.c. Move each EXPORT_SYMBOL
in metag_ksyms.c for symbols defined in usercopy.c into usercopy.c.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:11:15 +00:00
James Hogan 5633004cc2 metag: Build infrastructure
Add metag build infrastructure.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:54 +00:00
James Hogan 086e9dc0e2 metag: Optimised library functions
Add optimised library functions for metag.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:52 +00:00
James Hogan 373cd784d0 metag: Memory handling
Meta has instructions for accessing:
 - bytes        - GETB (1 byte)
 - words        - GETW (2 bytes)
 - doublewords  - GETD (4 bytes)
 - longwords    - GETL (8 bytes)

All accesses must be aligned. Unaligned accesses can be detected and
made to fault on Meta2, however it isn't possible to fix up unaligned
writes so we don't bother fixing up reads either.

This patch adds metag memory handling code including:
 - I/O memory (io.h, ioremap.c): Actually any virtual memory can be
   accessed with these helpers. A part of the non-MMUable address space
   is used for memory mapped I/O. The ioremap() function is implemented
   one to one for non-MMUable addresses.
 - User memory (uaccess.h, usercopy.c): User memory is directly
   accessible from privileged code.
 - Kernel memory (maccess.c): probe_kernel_write() needs to be
   overridden to use the I/O functions when doing a simple aligned
   write to non-writecombined memory, otherwise the write may be split
   by the generic version.

Note that due to the fact that a portion of the virtual address space is
non-MMUable, and therefore always maps directly to the physical address
space, metag specific I/O functions are made available (metag_in32,
metag_out32 etc). These cast the address argument to a pointer so that
they can be used with raw physical addresses. These accessors are only
to be used for accessing fixed core Meta architecture registers in the
non-MMU region, and not for any SoC/peripheral registers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
2013-03-02 20:09:19 +00:00
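
As a rough illustration of the accessor style described above (hypothetical names and shapes, not the actual metag io.h): the address argument is a plain integer that is cast to an I/O pointer internally, so fixed physical register addresses in the non-MMU region can be passed directly.

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical sketch, not the actual metag io.h. */
static inline u32 example_in32(unsigned long addr)
{
        return readl((void __iomem *)addr);
}

static inline void example_out32(u32 data, unsigned long addr)
{
        writel(data, (void __iomem *)addr);
}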