Add unwinder information so that an oops in the findbit functions can
produce a proper backtrace.
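On ARM this means wrapping the routines in the usual .fnstart/.fnend
unwind annotations; a minimal sketch of the pattern, assuming the
standard UNWIND() helper from asm/unwind.h (the entry point and body
below are placeholders, not the actual findbit code):

  #include <linux/linkage.h>
  #include <asm/assembler.h>
  #include <asm/unwind.h>

  ENTRY(example_findbit_stub)           @ hypothetical entry point
  UNWIND( .fnstart )
          mov     r0, #0                @ placeholder body
          ret     lr
  UNWIND( .fnend )
  ENDPROC(example_findbit_stub)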
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Convert the implementations to operate on words rather than bytes,
which makes bitmap searching faster.
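Schematically (illustrative registers and labels, not the kernel's
exact code; the code that isolates the found bit is omitted), the
inner loop goes from fetching 8 bits per iteration to 32:

  @ before: one byte per iteration
  @ r0 = bitmap, r1 = size in bits, r2 = current bit offset
  1:      ldrb    r3, [r0, r2, lsr #3]  @ fetch 8 bits
          teq     r3, #0
          bne     .L_found              @ a set bit is in this byte
          add     r2, r2, #8
          cmp     r2, r1
          blo     1b

  @ after: one 32-bit word per iteration (r2 stays a multiple of 32)
  1:      ldr     r3, [r0, r2, lsr #3]  @ fetch 32 bits in one access
          teq     r3, #0
          bne     .L_found              @ a set bit is in this word
          add     r2, r2, #32
          cmp     r2, r1
          blo     1b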
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Since the pairs of _find_first and _find_next functions are pretty
similar, use macros to generate this code. This commit does not
change the generated code.
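One possible shape of such a generator, with hypothetical names and a
stubbed-out body (the real macro arguments and code differ): a
_find_first_* entry is just the corresponding _find_next_* search
started at offset zero, so a single assembler macro can emit each pair:

          .macro  findbit_pair, suffix
  ENTRY(_find_first_\suffix)
          mov     r2, #0                @ "first" is "next" from offset 0
          b       .L_search_\suffix
  ENDPROC(_find_first_\suffix)

  ENTRY(_find_next_\suffix)
  .L_search_\suffix:
          cmp     r2, r1                @ anything left to scan?
          bhs     1f
          @ (shared word-at-a-time search loop would go here)
  1:      mov     r0, r1                @ nothing found: return size
          ret     lr
  ENDPROC(_find_next_\suffix)
          .endm

          findbit_pair    bit_le        @ emits _find_first_bit_le and
          findbit_pair    zero_bit_le   @ _find_next_bit_le, and the
                                        @ zero-bit pair likewise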
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Provide a more efficient ARMv7 implementation to determine the first
set bit in the supplied value.
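On ARMv7 the usual trick is the two-instruction rbit/clz sequence,
which yields the index of the least significant set bit directly; a
sketch with a hypothetical entry name:

  @ r0 = word with at least one bit set; returns the index (0..31)
  @ of its lowest set bit
  ENTRY(example_lowest_set_bit)
          rbit    r0, r0                @ reverse: lowest set bit moves
                                        @ to the top
          clz     r0, r0                @ count leading zeros = original
                                        @ bit index
          ret     lr
  ENDPROC(example_lowest_set_bit)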
Signed-off-by: Russell King (Oracle) <rmk@armlinux.org.uk>
When the offset is larger than the size of the bit array, we should
not attempt to access the array at all, since doing so can access
beyond the end of the array. Fix this by changing the pre-condition.
Using "cmp r2, r1; bhs ..." covers us for the size == 0 case, since
this will always take the branch when r1 is zero, irrespective of
the value of r2. This means we can fix this bug without adding any
additional code!
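Schematically, the check sits before any load from the bitmap
(r1 = size in bits, r2 = offset, as in the existing code; the label
and the not-found return are illustrative):

          cmp     r2, r1                @ unsigned compare: offset >= size?
          bhs     3f                    @ yes: nothing to scan; always
                                        @ taken when r1 (size) is zero,
                                        @ whatever r2 holds
          @ (normal search loop follows)
  3:      mov     r0, r1                @ return size, i.e. "not found"
          ret     lr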
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #
Extracted by the scancode license scanner, the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit 4dd1837d75.
Moving the exports for assembly code into the assembly files breaks
not only KSYM trimming but also modversions.
While fixing the KSYM trimming is trivial, fixing modversions brings
us to a technically worse position than we had prior to the above
change:
- We end up with the prototype definitions divorced from everything
  else, which means that adding or removing assembly-level ksyms
  becomes more fragile (see the sketch after this list):
  * when adding a new assembly ksyms export, a missed prototype in
    asm-prototypes.h results in a successful build if no module in
    the selected configuration makes use of the symbol.
  * when removing a ksyms export, asm-prototypes.h will get forgotten;
    with armksyms.c, you get a build error if you forget to touch
    the file.
- We end up with the same number of include files and prototypes;
  they're just in a header file instead of a .c file with their
  exports.
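To illustrate the pairing the reverted scheme requires (symbol name
hypothetical): the EXPORT_SYMBOL() sits in the .S file next to the
code, but modversions still needs a matching C prototype in
asm/asm-prototypes.h so genksyms can compute a CRC, and only some
configurations notice when the two fall out of sync:

  #include <linux/linkage.h>
  #include <asm/export.h>

  ENTRY(example_asm_helper)
          mov     r0, #0
          ret     lr
  ENDPROC(example_asm_helper)
  EXPORT_SYMBOL(example_asm_helper)     @ export lives with the code...

  @ ...while asm/asm-prototypes.h must still carry
  @ "extern int example_asm_helper(void);" for CONFIG_MODVERSIONS.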
As for lines of code, we don't get much of a size reduction:
  (original commit)
    47 files changed, 131 insertions(+), 208 deletions(-)
  (fix for ksyms trimming)
    7 files changed, 18 insertions(+), 5 deletions(-)
  (two fixes for modversions)
    1 file changed, 34 insertions(+)
    3 files changed, 7 insertions(+), 2 deletions(-)
which results in a net total of only 25 lines deleted.
As there does not seem to be much benefit from this change of approach,
revert the change.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
ARMv6 and greater can use the "bx" instruction to return from
function calls. Recent CPUs perform better when "bx lr" is used
rather than "mov pc, lr", and this sequence is strongly recommended
by the ARM architecture manual (section A.4.1.1).
Provide a new "ret" macro, together with variants for each condition
code, which resolves to the appropriate instruction.
Rather than doing this piecemeal and missing some instances, change
all the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code. This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.
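A simplified sketch of the idea behind the macro (the version in
asm/assembler.h also generates a variant for every condition code):

          .macro  ret, reg
  #if __LINUX_ARM_ARCH__ < 6
          mov     pc, \reg              @ pre-ARMv6: no benefit from bx
  #else
          .ifeqs  "\reg", "lr"
          bx      \reg                  @ ARMv6+: predicted as a return
          .else
          mov     pc, \reg              @ other registers keep mov pc
          .endif
  #endif
          .endm

With this in place, a "mov pc, lr" at the end of a routine simply
becomes "ret lr".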
Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The find_next_bit, find_first_bit, find_next_zero_bit
and find_first_zero_bit functions were not properly
clamping to the maxbit argument at the bit level. They
were instead only checking maxbit at the byte level.
To fix this, add a compare and a conditional move
instruction to the end of the common bit-within-the-
byte code used by all the functions and be sure not to
clobber the maxbit argument before it is used.
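Schematically, the clamp at the end of the shared code has this shape
(register choice illustrative):

          cmp     r0, r1                @ computed bit number vs. maxbit
          movhs   r0, r1                @ clamp: never report a bit at or
                                        @ beyond maxbit as found
          mov     pc, lr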
Cc: <stable@kernel.org>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: James Jones <jajones@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This declaration specifies the "function" type and size for various
assembly functions, mainly needed for generating the correct branch
instructions in Thumb-2.
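In kernel assembly this is what the ENDPROC() linkage macro provides:
it marks the symbol as a %function, which lets the toolchain emit the
correct interworking branches when Thumb-2 code calls it, and records
the symbol's size. A sketch with a hypothetical function name:

  ENTRY(example_func)
          mov     r0, #0
          mov     pc, lr
  ENDPROC(example_func)                 @ roughly equivalent to:
                                        @   .type example_func, %function
                                        @   .size example_func, . - example_func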
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
RETINSTR is a left-over from the days when we had 26-bit and
32-bit CPU support integrated into the same tree. Since this
is no longer the case, we can now remove RETINSTR.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
For assembly labels to actually be local they must start with ".L" and
not just ".", otherwise they remain visible in the final link, clutter
kallsyms needlessly, and possibly make for unclear symbolic backtraces.
This patch simply inserts an "L" where appropriate. The code itself is
unchanged.
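For example (illustrative fragment): a label spelled ".found" is still
emitted into the symbol table and survives the final link, whereas
".L_found" is assembler-local and never appears there:

  .found:                               @ visible in the link and kallsyms
          mov     r0, r2
          mov     pc, lr

  .L_found:                             @ assembler-local; discarded
          mov     r0, r2
          mov     pc, lr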
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!