powerpc updates for 5.19
Merge tag 'powerpc-5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 - Convert to the generic mmap support (ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)

 - Add support for outline-only KASAN with 64-bit Radix MMU (P9 or later)

 - Increase SIGSTKSZ and MINSIGSTKSZ and add support for AT_MINSIGSTKSZ

 - Enable the DAWR (Data Address Watchpoint) on POWER9 DD2.3 or later

 - Drop support for system call instruction emulation

 - Many other small features and fixes

Thanks to Alexey Kardashevskiy, Alistair Popple, Andy Shevchenko, Bagas
Sanjaya, Bjorn Helgaas, Bo Liu, Chen Huang, Christophe Leroy, Colin Ian
King, Daniel Axtens, Dwaipayan Ray, Fabiano Rosas, Finn Thain, Frank
Rowand, Fuqian Huang, Guilherme G. Piccoli, Hangyu Hua, Haowen Bai,
Haren Myneni, Hari Bathini, He Ying, Jason Wang, Jiapeng Chong, Jing
Yangyang, Joel Stanley, Julia Lawall, Kajol Jain, Kevin Hao, Krzysztof
Kozlowski, Laurent Dufour, Lv Ruyi, Madhavan Srinivasan, Magali Lemes,
Miaoqian Lin, Minghao Chi, Nathan Chancellor, Naveen N. Rao, Nicholas
Piggin, Oliver O'Halloran, Oscar Salvador, Pali Rohár, Paul Mackerras,
Peng Wu, Qing Wang, Randy Dunlap, Reza Arbab, Russell Currey, Sohaib
Mohamed, Vaibhav Jain, Vasant Hegde, Wang Qing, Wang Wensheng, Xiang
wangx, Xiaomeng Tong, Xu Wang, Yang Guang, Yang Li, Ye Bin, YueHaibing,
Yu Kuai, Zheng Bin, Zou Wei, and Zucheng Zheng.
* tag 'powerpc-5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (200 commits)
  powerpc/64: Include cache.h directly in paca.h
  powerpc/64s: Only set HAVE_ARCH_UNMAPPED_AREA when CONFIG_PPC_64S_HASH_MMU is set
  powerpc/xics: Include missing header
  powerpc/powernv/pci: Drop VF MPS fixup
  powerpc/fsl_book3e: Don't set rodata RO too early
  powerpc/microwatt: Add mmu bits to device tree
  powerpc/powernv/flash: Check OPAL flash calls exist before using
  powerpc/powermac: constify device_node in of_irq_parse_oldworld()
  powerpc/powermac: add missing g5_phy_disable_cpu1() declaration
  selftests/powerpc/pmu: fix spelling mistake "mis-match" -> "mismatch"
  powerpc: Enable the DAWR on POWER9 DD2.3 and above
  powerpc/64s: Add CPU_FTRS_POWER10 to ALWAYS mask
  powerpc/64s: Add CPU_FTRS_POWER9_DD2_2 to CPU_FTRS_ALWAYS mask
  powerpc: Fix all occurences of "the the"
  selftests/powerpc/pmu/ebb: remove fixed_instruction.S
  powerpc/platforms/83xx: Use of_device_get_match_data()
  powerpc/eeh: Drop redundant spinlock initialization
  powerpc/iommu: Add missing of_node_put in iommu_init_early_dart
  powerpc/pseries/vas: Call misc_deregister if sysfs init fails
  powerpc/papr_scm: Fix leaking nvdimm_events_map elements
  ...
commit 6112bd00e8
@@ -103,8 +103,8 @@ What:           /sys/class/cxl/<afu>/api_version_compatible
 Date:           September 2014
 Contact:        linuxppc-dev@lists.ozlabs.org
 Description:    read only
-                Decimal value of the the lowest version of the userspace API
-                this this kernel supports.
+                Decimal value of the lowest version of the userspace API
+                this kernel supports.
 Users:          https://github.com/ibm-capi/libcxl
@@ -1,20 +0,0 @@
-* Freescale PQ3 and QorIQ based Cache SRAM
-
-Freescale's mpc85xx and some QorIQ platforms provide an
-option of configuring a part of (or full) cache memory
-as SRAM. This cache SRAM representation in the device
-tree should be done as under:-
-
-Required properties:
-
-- compatible : should be "fsl,p2020-cache-sram"
-- fsl,cache-sram-ctlr-handle : points to the L2 controller
-- reg : offset and length of the cache-sram.
-
-Example:
-
-	cache-sram@fff00000 {
-		fsl,cache-sram-ctlr-handle = <&L2>;
-		reg = <0 0xfff00000 0 0x10000>;
-		compatible = "fsl,p2020-cache-sram";
-	};
@@ -2,15 +2,23 @@
 DAWR issues on POWER9
 =====================
 
-On POWER9 the Data Address Watchpoint Register (DAWR) can cause a checkstop
-if it points to cache inhibited (CI) memory. Currently Linux has no way to
-distinguish CI memory when configuring the DAWR, so (for now) the DAWR is
-disabled by this commit::
+On older POWER9 processors, the Data Address Watchpoint Register (DAWR) can
+cause a checkstop if it points to cache inhibited (CI) memory. Currently Linux
+has no way to distinguish CI memory when configuring the DAWR, so on affected
+systems, the DAWR is disabled.
 
-    commit 9654153158d3e0684a1bdb76dbababdb7111d5a0
-    Author: Michael Neuling <mikey@neuling.org>
-    Date:   Tue Mar 27 15:37:24 2018 +1100
-        powerpc: Disable DAWR in the base POWER9 CPU features
+Affected processor revisions
+============================
+
+This issue is only present on processors prior to v2.3. The revision can be
+found in /proc/cpuinfo::
+
+    processor       : 0
+    cpu             : POWER9, altivec supported
+    clock           : 3800.000000MHz
+    revision        : 2.3 (pvr 004e 1203)
+
+On a system with the issue, the DAWR is disabled as detailed below.
 
 Technical Details:
 ==================
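The "revision 2.3" string in that /proc/cpuinfo example is decoded from the processor version register (PVR). A small illustrative sketch of the decode for a POWER9 part is below; the field layout follows what the kernel prints for pvr 004e 1203, and the hard-coded value is only an example:

    #include <stdio.h>

    /* Illustrative decode of a POWER9 PVR value such as 0x004e1203,
     * which /proc/cpuinfo reports as "revision 2.3 (pvr 004e 1203)".
     * DD2.3 and later parts get the DAWR re-enabled by this series.
     */
    int main(void)
    {
        unsigned int pvr = 0x004e1203;        /* example: POWER9 DD2.3 */
        unsigned int ver = pvr >> 16;         /* 0x004e = POWER9 */
        unsigned int maj = (pvr >> 8) & 0xf;  /* 2 */
        unsigned int min = pvr & 0xff;        /* 3 */

        printf("revision : %u.%u (pvr %04x %04x)\n",
               maj, min, ver, pvr & 0xffff);
        return 0;
    }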
@@ -0,0 +1,58 @@
+KASAN is supported on powerpc on 32-bit and Radix 64-bit only.
+
+32 bit support
+==============
+
+KASAN is supported on both hash and nohash MMUs on 32-bit.
+
+The shadow area sits at the top of the kernel virtual memory space above the
+fixmap area and occupies one eighth of the total kernel virtual memory space.
+
+Instrumentation of the vmalloc area is optional, unless built with modules,
+in which case it is required.
+
+64 bit support
+==============
+
+Currently, only the radix MMU is supported. There have been versions for hash
+and Book3E processors floating around on the mailing list, but nothing has been
+merged.
+
+KASAN support on Book3S is a bit tricky to get right:
+
+ - It would be good to support inline instrumentation so as to be able to catch
+   stack issues that cannot be caught with outline mode.
+
+ - Inline instrumentation requires a fixed offset.
+
+ - Book3S runs code with translations off ("real mode") during boot, including a
+   lot of generic device-tree parsing code which is used to determine MMU
+   features.
+
+ - Some code - most notably a lot of KVM code - also runs with translations off
+   after boot.
+
+ - Therefore any offset has to point to memory that is valid with
+   translations on or off.
+
+One approach is just to give up on inline instrumentation. This way boot-time
+checks can be delayed until after the MMU is set up, and we can just not
+instrument any code that runs with translations off after booting. This is the
+current approach.
+
+To avoid this limitation, the KASAN shadow would have to be placed inside the
+linear mapping, using the same high-bits trick we use for the rest of the linear
+mapping. This is tricky:
+
+ - We'd like to place it near the start of physical memory. In theory we can do
+   this at run-time based on how much physical memory we have, but this requires
+   being able to arbitrarily relocate the kernel, which is basically the tricky
+   part of KASLR. Not being game to implement both tricky things at once, this
+   is hopefully something we can revisit once we get KASLR for Book3S.
+
+ - Alternatively, we can place the shadow at the _end_ of memory, but this
+   requires knowing how much contiguous physical memory a system has _at compile
+   time_. This is a big hammer, and has some unfortunate consequences: inability
+   to handle discontiguous physical memory, total failure to boot on machines
+   with less memory than specified, and that machines with more memory than
+   specified can't use it. This was deemed unacceptable.
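To make the inline-versus-outline tradeoff concrete, here is a minimal sketch of what outline instrumentation amounts to. The runtime hook names (__asan_load8 and friends) match the generic KASAN runtime; the bodies are simplified illustration, not the kernel implementation. Every access becomes a call into the runtime, which can simply bail out until the MMU and shadow are ready:

    /* Illustrative only: with outline KASAN the compiler emits a call
     * like this before each 8-byte load, so the check can be skipped
     * while the kernel still runs with translations off.
     */
    extern bool kasan_arch_is_ready(void);   /* false until shadow exists */
    extern void kasan_report(unsigned long addr, unsigned long size, bool write);
    extern unsigned char *kasan_mem_to_shadow(const void *addr);

    void __asan_load8(unsigned long addr)
    {
        if (!kasan_arch_is_ready())          /* real mode during boot */
            return;
        if (*kasan_mem_to_shadow((void *)addr))  /* non-zero shadow = poisoned */
            kasan_report(addr, 8, false);
    }

Inline mode would instead emit the shadow lookup directly at every access site, with the shadow offset baked in as a compile-time constant; that is exactly why it needs the fixed offset the list above describes.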
@@ -1019,12 +1019,10 @@ config PAGE_SIZE_LESS_THAN_64KB
	depends on !IA64_PAGE_SIZE_64KB
	depends on !PAGE_SIZE_64KB
	depends on !PARISC_PAGE_SIZE_64KB
-	depends on !PPC_64K_PAGES
	depends on PAGE_SIZE_LESS_THAN_256KB

 config PAGE_SIZE_LESS_THAN_256KB
	def_bool y
-	depends on !PPC_256K_PAGES
	depends on !PAGE_SIZE_256KB

 # This allows to use a set of generic functions to determine mmap base
@@ -92,8 +92,8 @@
 #endif /* CONFIG_COMPAT */

 #ifndef CONFIG_ARM64_FORCE_52BIT
-#define arch_get_mmap_end(addr)	((addr > DEFAULT_MAP_WINDOW) ? TASK_SIZE :\
-				DEFAULT_MAP_WINDOW)
+#define arch_get_mmap_end(addr, len, flags) \
+		(((addr) > DEFAULT_MAP_WINDOW) ? TASK_SIZE : DEFAULT_MAP_WINDOW)

 #define arch_get_mmap_base(addr, base) ((addr > DEFAULT_MAP_WINDOW) ? \
					base + TASK_SIZE - DEFAULT_MAP_WINDOW :\
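For context on how these hooks are consumed: the generic mmap layout code (mm/mmap.c) calls them when searching for an unmapped area, which is why the macro grew the extra len/flags arguments. The sketch below is condensed and simplified from the generic top-down path; vm_unmapped_area() and struct vm_unmapped_area_info are the real generic API, the surrounding simplification is ours:

    /* Condensed sketch of the generic top-down search that consumes
     * arch_get_mmap_end()/arch_get_mmap_base() (based on mm/mmap.c).
     */
    unsigned long sketch_get_unmapped_area_topdown(unsigned long addr,
                                                   unsigned long len,
                                                   unsigned long flags)
    {
        struct mm_struct *mm = current->mm;
        const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
        struct vm_unmapped_area_info info;

        if (len > mmap_end - mmap_min_addr)   /* request can never fit */
            return -ENOMEM;

        info.flags = VM_UNMAPPED_AREA_TOPDOWN;
        info.length = len;
        info.low_limit = max(PAGE_SIZE, mmap_min_addr);
        /* arch_get_mmap_base() picks the base for the top-down search */
        info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
        info.align_mask = 0;
        info.align_offset = 0;
        return vm_unmapped_area(&info);
    }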
@@ -109,6 +109,7 @@ config PPC
	# Please keep this list sorted alphabetically.
	#
	select ARCH_32BIT_OFF_T			if PPC32
+	select ARCH_DISABLE_KASAN_INLINE	if PPC_RADIX_MMU
	select ARCH_ENABLE_MEMORY_HOTPLUG
	select ARCH_ENABLE_MEMORY_HOTREMOVE
	select ARCH_HAS_COPY_MC			if PPC64
@@ -118,7 +119,6 @@ config PPC
	select ARCH_HAS_DEBUG_WX		if STRICT_KERNEL_RWX
	select ARCH_HAS_DEVMEM_IS_ALLOWED
	select ARCH_HAS_DMA_MAP_DIRECT		if PPC_PSERIES
-	select ARCH_HAS_ELF_RANDOMIZE
	select ARCH_HAS_FORTIFY_SOURCE
	select ARCH_HAS_GCOV_PROFILE_ALL
	select ARCH_HAS_HUGEPD			if HUGETLB_PAGE
@@ -155,10 +155,12 @@ config PPC
	select ARCH_USE_MEMTEST
	select ARCH_USE_QUEUED_RWLOCKS		if PPC_QUEUED_SPINLOCKS
	select ARCH_USE_QUEUED_SPINLOCKS	if PPC_QUEUED_SPINLOCKS
+	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
	select ARCH_WANT_IPC_PARSE_VERSION
	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	select ARCH_WANT_LD_ORPHAN_WARN
+	select ARCH_WANTS_MODULES_DATA_IN_VMALLOC	if PPC_BOOK3S_32 || PPC_8xx
	select ARCH_WANTS_NO_INSTR
	select ARCH_WEAK_RELEASE_ACQUIRE
	select BINFMT_ELF
	select BUILDTIME_TABLE_SORT
@@ -190,7 +192,8 @@ config PPC
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_JUMP_LABEL_RELATIVE
	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
-	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
+	select HAVE_ARCH_KASAN			if PPC_RADIX_MMU
+	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
	select HAVE_ARCH_KFENCE			if PPC_BOOK3S_32 || PPC_8xx || 40x
	select HAVE_ARCH_KGDB
	select HAVE_ARCH_MMAP_RND_BITS
@@ -210,7 +213,7 @@ config PPC
	select HAVE_EFFICIENT_UNALIGNED_ACCESS	if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
	select HAVE_FAST_GUP
	select HAVE_FTRACE_MCOUNT_RECORD
-	select HAVE_FUNCTION_DESCRIPTORS	if PPC64 && !CPU_LITTLE_ENDIAN
+	select HAVE_FUNCTION_DESCRIPTORS	if PPC64_ELF_ABI_V1
	select HAVE_FUNCTION_ERROR_INJECTION
	select HAVE_FUNCTION_GRAPH_TRACER
	select HAVE_FUNCTION_TRACER
@@ -760,6 +763,22 @@ config PPC_256K_PAGES

 endchoice

+config PAGE_SIZE_4KB
+	def_bool y
+	depends on PPC_4K_PAGES
+
+config PAGE_SIZE_16KB
+	def_bool y
+	depends on PPC_16K_PAGES
+
+config PAGE_SIZE_64KB
+	def_bool y
+	depends on PPC_64K_PAGES
+
+config PAGE_SIZE_256KB
+	def_bool y
+	depends on PPC_256K_PAGES
+
 config PPC_PAGE_SHIFT
	int
	default 18 if PPC_256K_PAGES
@@ -374,4 +374,5 @@ config PPC_FAST_ENDIAN_SWITCH
 config KASAN_SHADOW_OFFSET
	hex
	depends on KASAN
-	default 0xe0000000
+	default 0xe0000000 if PPC32
+	default 0xa80e000000000000 if PPC64
@@ -89,10 +89,10 @@ endif

 ifdef CONFIG_PPC64
 ifndef CONFIG_CC_IS_CLANG
-cflags-$(CONFIG_CPU_BIG_ENDIAN)		+= $(call cc-option,-mabi=elfv1)
-cflags-$(CONFIG_CPU_BIG_ENDIAN)		+= $(call cc-option,-mcall-aixdesc)
-aflags-$(CONFIG_CPU_BIG_ENDIAN)		+= $(call cc-option,-mabi=elfv1)
-aflags-$(CONFIG_CPU_LITTLE_ENDIAN)	+= -mabi=elfv2
+cflags-$(CONFIG_PPC64_ELF_ABI_V1)	+= $(call cc-option,-mabi=elfv1)
+cflags-$(CONFIG_PPC64_ELF_ABI_V1)	+= $(call cc-option,-mcall-aixdesc)
+aflags-$(CONFIG_PPC64_ELF_ABI_V1)	+= $(call cc-option,-mabi=elfv1)
+aflags-$(CONFIG_PPC64_ELF_ABI_V2)	+= -mabi=elfv2
 endif
 endif
@@ -141,7 +141,7 @@ endif

 CFLAGS-$(CONFIG_PPC64)	:= $(call cc-option,-mtraceback=no)
 ifndef CONFIG_CC_IS_CLANG
-ifdef CONFIG_CPU_LITTLE_ENDIAN
+ifdef CONFIG_PPC64_ELF_ABI_V2
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mabi=elfv2,$(call cc-option,-mcall-aixdesc))
 AFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mabi=elfv2)
 else
@@ -213,7 +213,7 @@ CHECKFLAGS	+= -m$(BITS) -D__powerpc__ -D__powerpc$(BITS)__
 ifdef CONFIG_CPU_BIG_ENDIAN
 CHECKFLAGS	+= -D__BIG_ENDIAN__
 else
-CHECKFLAGS	+= -D__LITTLE_ENDIAN__ -D_CALL_ELF=2
+CHECKFLAGS	+= -D__LITTLE_ENDIAN__
 endif

 ifdef CONFIG_476FPE_ERR46
@@ -38,9 +38,13 @@ BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
		 $(LINUXINCLUDE)

 ifdef CONFIG_PPC64_BOOT_WRAPPER
-BOOTCFLAGS	+= -m64
+ifdef CONFIG_CPU_LITTLE_ENDIAN
+BOOTCFLAGS	+= -m64 -mcpu=powerpc64le
 else
-BOOTCFLAGS	+= -m32
+BOOTCFLAGS	+= -m64 -mcpu=powerpc64
+endif
+else
+BOOTCFLAGS	+= -m32 -mcpu=powerpc
 endif

 BOOTCFLAGS	+= -isystem $(shell $(BOOTCC) -print-file-name=include)
@@ -49,6 +53,8 @@ ifdef CONFIG_CPU_BIG_ENDIAN
 BOOTCFLAGS	+= -mbig-endian
 else
 BOOTCFLAGS	+= -mlittle-endian
 endif
+ifdef CONFIG_PPC64_ELF_ABI_V2
+BOOTCFLAGS	+= $(call cc-option,-mabi=elfv2)
+endif

@@ -8,7 +8,8 @@
 #include "ppc_asm.h"

 RELA = 7
-RELACOUNT = 0x6ffffff9
+RELASZ = 8
+RELAENT = 9

	.data
	/* A procedure descriptor used when booting this as a COFF file.
@@ -75,34 +76,39 @@ p_base:	mflr	r10		/* r10 now points to runtime addr of p_base */
	bne	11f
	lwz	r9,4(r12)	/* get RELA pointer in r9 */
	b	12f
-11:	addis	r8,r8,(-RELACOUNT)@ha
-	cmpwi	r8,RELACOUNT@l
+11:	cmpwi	r8,RELASZ
+	bne	.Lcheck_for_relaent
+	lwz	r0,4(r12)	/* get RELASZ value in r0 */
+	b	12f
+.Lcheck_for_relaent:
+	cmpwi	r8,RELAENT
	bne	12f
-	lwz	r0,4(r12)	/* get RELACOUNT value in r0 */
+	lwz	r14,4(r12)	/* get RELAENT value in r14 */
 12:	addi	r12,r12,8
	b	9b

	/* The relocation section contains a list of relocations.
	 * We now do the R_PPC_RELATIVE ones, which point to words
-	 * which need to be initialized with addend + offset.
-	 * The R_PPC_RELATIVE ones come first and there are RELACOUNT
-	 * of them. */
+	 * which need to be initialized with addend + offset */
 10:	/* skip relocation if we don't have both */
	cmpwi	r0,0
	beq	3f
	cmpwi	r9,0
	beq	3f
+	cmpwi	r14,0
+	beq	3f

	add	r9,r9,r11	/* Relocate RELA pointer */
+	divwu	r0,r0,r14	/* RELASZ / RELAENT */
	mtctr	r0
 2:	lbz	r0,4+3(r9)	/* ELF32_R_INFO(reloc->r_info) */
	cmpwi	r0,22		/* R_PPC_RELATIVE */
-	bne	3f
+	bne	.Lnext
	lwz	r12,0(r9)	/* reloc->r_offset */
	lwz	r0,8(r9)	/* reloc->r_addend */
	add	r0,r0,r11
	stwx	r0,r11,r12
-	addi	r9,r9,12
+.Lnext:	add	r9,r9,r14
	bdnz	2b

	/* Do a cache flush for our text, in case the loader didn't */
@@ -160,32 +166,39 @@ p_base:	mflr	r10		/* r10 now points to runtime addr of p_base */
	bne	10f
	ld	r13,8(r11)	/* get RELA pointer in r13 */
	b	11f
-10:	addis	r12,r12,(-RELACOUNT)@ha
-	cmpdi	r12,RELACOUNT@l
+10:	cmpwi	r12,RELASZ
+	bne	.Lcheck_for_relaent
+	lwz	r8,8(r11)	/* get RELASZ pointer in r8 */
+	b	11f
+.Lcheck_for_relaent:
+	cmpwi	r12,RELAENT
	bne	11f
-	ld	r8,8(r11)	/* get RELACOUNT value in r8 */
+	lwz	r14,8(r11)	/* get RELAENT pointer in r14 */
 11:	addi	r11,r11,16
	b	9b
 12:
-	cmpdi	r13,0		/* check we have both RELA and RELACOUNT */
+	cmpdi	r13,0		/* check we have both RELA, RELASZ, RELAENT */
	cmpdi	cr1,r8,0
	beq	3f
	beq	cr1,3f
+	cmpdi	r14,0
+	beq	3f

	/* Calcuate the runtime offset. */
	subf	r13,r13,r9

	/* Run through the list of relocations and process the
	 * R_PPC64_RELATIVE ones. */
+	divdu	r8,r8,r14	/* RELASZ / RELAENT */
	mtctr	r8
 13:	ld	r0,8(r9)	/* ELF64_R_TYPE(reloc->r_info) */
	cmpdi	r0,22		/* R_PPC64_RELATIVE */
-	bne	3f
+	bne	.Lnext
	ld	r12,0(r9)	/* reloc->r_offset */
	ld	r0,16(r9)	/* reloc->r_addend */
	add	r0,r0,r13
	stdx	r0,r13,r12
-	addi	r9,r9,24
+.Lnext:	add	r9,r9,r14
	bdnz	13b

	/* Do a cache flush for our text, in case the loader didn't */
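In C terms, both of these assembly loops implement the standard dynamic-section walk sketched below (a hedged illustration using the ELF types from <elf.h>, not code from the wrapper): the relocation count is no longer taken from the non-standard DT_RELACOUNT tag but derived as DT_RELASZ / DT_RELAENT, and every entry's type is checked rather than assuming the R_PPC(64)_RELATIVE entries come first.

    #include <elf.h>
    #include <stdint.h>

    /* Sketch of what the wrapper's 64-bit reloc loop does. */
    static void apply_rela(const Elf64_Dyn *dyn, uintptr_t load_offset)
    {
        const Elf64_Rela *rela = 0;
        uint64_t relasz = 0, relaent = 0;

        for (; dyn->d_tag != DT_NULL; dyn++) {
            if (dyn->d_tag == DT_RELA)
                rela = (const Elf64_Rela *)(dyn->d_un.d_ptr + load_offset);
            else if (dyn->d_tag == DT_RELASZ)
                relasz = dyn->d_un.d_val;
            else if (dyn->d_tag == DT_RELAENT)
                relaent = dyn->d_un.d_val;
        }
        if (!rela || !relasz || !relaent)
            return;                      /* need all three, as in the asm */

        for (uint64_t i = 0; i < relasz / relaent; i++) {
            /* R_PPC64_RELATIVE (22): *(base + r_offset) = base + r_addend */
            if (ELF64_R_TYPE(rela[i].r_info) == 22)
                *(uint64_t *)(rela[i].r_offset + load_offset) =
                    rela[i].r_addend + load_offset;
        }
    }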
@@ -70,7 +70,7 @@ static void hotfoot_fixups(void)

	printf("Fixing devtree for 4M Flash\n");

-	/* First fix up the base addresse */
+	/* First fix up the base address */
	getprop(devp, "reg", regs, sizeof(regs));
	regs[0] = 0;
	regs[1] = 0xffc00000;
@@ -198,4 +198,9 @@
		reg = <0xe0000 0x1000>;
		fsl,has-rstcr;
	};
+
+	pmc: power@e0070 {
+		compatible = "fsl,mpc8548-pmc";
+		reg = <0xe0070 0x20>;
+	};
 };
@@ -90,6 +90,8 @@
			64-bit;
			d-cache-size = <0x1000>;
			ibm,chip-id = <0>;
+			ibm,mmu-lpid-bits = <12>;
+			ibm,mmu-pid-bits = <20>;
		};
	};
@@ -200,12 +200,6 @@ void __dt_fixup_mac_addresses(u32 startindex, ...);
	__dt_fixup_mac_addresses(0, __VA_ARGS__, NULL)


-static inline void *find_node_by_linuxphandle(const u32 linuxphandle)
-{
-	return find_node_by_prop_value(NULL, "linux,phandle",
-			(char *)&linuxphandle, sizeof(u32));
-}
-
 static inline char *get_path(const void *phandle, char *buf, int len)
 {
	if (dt_ops.get_path)
@@ -162,7 +162,7 @@ while [ "$#" -gt 0 ]; do
	fi
	;;
     --no-gzip)
-        # a "feature" of the the wrapper script is that it can be used outside
+        # a "feature" of the wrapper script is that it can be used outside
        # the kernel tree. So keeping this around for backwards compatibility.
        compression=
        uboot_comp=none
@@ -404,7 +404,7 @@ static int ppc_xts_decrypt(struct skcipher_request *req)

 /*
  * Algorithm definitions. Disabling alignment (cra_alignmask=0) was chosen
- * because the e500 platform can handle unaligned reads/writes very efficently.
+ * because the e500 platform can handle unaligned reads/writes very efficiently.
  * This improves IPsec thoughput by another few percent. Additionally we assume
  * that AES context is always aligned to at least 8 bytes because it is created
  * with kmalloc() in the crypto infrastructure
@@ -18,6 +18,10 @@
 #include <asm/book3s/64/hash-4k.h>
 #endif

+#define H_PTRS_PER_PTE		(1 << H_PTE_INDEX_SIZE)
+#define H_PTRS_PER_PMD		(1 << H_PMD_INDEX_SIZE)
+#define H_PTRS_PER_PUD		(1 << H_PUD_INDEX_SIZE)
+
 /* Bits to set in a PMD/PUD/PGD entry valid bit*/
 #define HASH_PMD_VAL_BITS		(0x8000000000000000UL)
 #define HASH_PUD_VAL_BITS		(0x8000000000000000UL)
@@ -8,10 +8,6 @@
  */
 void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
 void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
-extern unsigned long
-radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
-				unsigned long len, unsigned long pgoff,
-				unsigned long flags);

 extern void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
						unsigned long addr, pte_t *ptep,
@@ -18,6 +18,7 @@
  * complete pgtable.h but only a portion of it.
  */
 #include <asm/book3s/64/pgtable.h>
+#include <asm/book3s/64/slice.h>
 #include <asm/task_size_64.h>
 #include <asm/cpu_has_feature.h>
@@ -4,12 +4,6 @@

 #include <asm/page.h>

-#ifdef CONFIG_HUGETLB_PAGE
-#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
-#endif
-#define HAVE_ARCH_UNMAPPED_AREA
-#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
-
 #ifndef __ASSEMBLY__
 /*
  * Page size definition
@@ -231,6 +231,9 @@ extern unsigned long __pmd_frag_size_shift;
 #define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
 #define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)

+#define MAX_PTRS_PER_PTE ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? H_PTRS_PER_PTE : R_PTRS_PER_PTE)
+#define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? H_PTRS_PER_PMD : R_PTRS_PER_PMD)
+#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? H_PTRS_PER_PUD : R_PTRS_PER_PUD)
 #define MAX_PTRS_PER_PGD	(1 << (H_PGD_INDEX_SIZE > RADIX_PGD_INDEX_SIZE ? \
				       H_PGD_INDEX_SIZE : RADIX_PGD_INDEX_SIZE))
@@ -35,6 +35,11 @@
 #define RADIX_PMD_SHIFT		(PAGE_SHIFT + RADIX_PTE_INDEX_SIZE)
 #define RADIX_PUD_SHIFT		(RADIX_PMD_SHIFT + RADIX_PMD_INDEX_SIZE)
 #define RADIX_PGD_SHIFT		(RADIX_PUD_SHIFT + RADIX_PUD_INDEX_SIZE)

+#define R_PTRS_PER_PTE		(1 << RADIX_PTE_INDEX_SIZE)
+#define R_PTRS_PER_PMD		(1 << RADIX_PMD_INDEX_SIZE)
+#define R_PTRS_PER_PUD		(1 << RADIX_PUD_INDEX_SIZE)
+
 /*
  * Size of EA range mapped by our pagetables.
  */
@@ -68,11 +73,11 @@
  *
  *
  * 3rd quadrant expanded:
- * +------------------------------+
+ * +------------------------------+  Highest address (0xc010000000000000)
+ * +------------------------------+  KASAN shadow end (0xc00fc00000000000)
  * |                              |
  * |                              |
  * |                              |
- * +------------------------------+  Kernel vmemmap end (0xc010000000000000)
+ * +------------------------------+  Kernel vmemmap end/shadow start (0xc00e000000000000)
  * |                              |
  * |            512TB             |
  * |                              |
@@ -91,6 +96,7 @@
  * +------------------------------+  Kernel linear (0xc.....)
  */

+/* For the sizes of the shadow area, see kasan.h */

 /*
  * If we store section details in page->flags we can't increase the MAX_PHYSMEM_BITS
@@ -2,6 +2,16 @@
 #ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H
 #define _ASM_POWERPC_BOOK3S_64_SLICE_H

+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_PPC_64S_HASH_MMU
+#ifdef CONFIG_HUGETLB_PAGE
+#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+#endif
+#define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#endif
+
 #define SLICE_LOW_SHIFT		28
 #define SLICE_LOW_TOP		(0x100000000ul)
 #define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
@@ -13,4 +23,20 @@

 #define SLB_ADDR_LIMIT_DEFAULT	DEFAULT_MAP_WINDOW_USER64

+struct mm_struct;
+
+unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+				      unsigned long flags, unsigned int psize,
+				      int topdown);
+
+unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
+
+void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+			   unsigned long len, unsigned int psize);
+
+void slice_init_new_context_exec(struct mm_struct *mm);
+void slice_setup_new_exec(void);
+
+#endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
@@ -38,14 +38,15 @@ extern __wsum csum_and_copy_to_user(const void *src, void __user *dst,
  */
 static inline __sum16 csum_fold(__wsum sum)
 {
-	unsigned int tmp;
-
-	/* swap the two 16-bit halves of sum */
-	__asm__("rlwinm %0,%1,16,0,31" : "=r" (tmp) : "r" (sum));
-	/* if there is a carry from adding the two 16-bit halves,
-	   it will carry from the lower half into the upper half,
-	   giving us the correct sum in the upper half. */
-	return (__force __sum16)(~((__force u32)sum + tmp) >> 16);
+	u32 tmp = (__force u32)sum;
+
+	/*
+	 * swap the two 16-bit halves of sum
+	 * if there is a carry from adding the two 16-bit halves,
+	 * it will carry from the lower half into the upper half,
+	 * giving us the correct sum in the upper half.
+	 */
+	return (__force __sum16)(~(tmp + rol32(tmp, 16)) >> 16);
 }

 static inline u32 from64to32(u64 x)
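The fold trick is easy to check in isolation. A standalone sketch of the same computation in plain C (no kernel types; the expected output is worked out by hand):

    #include <stdint.h>
    #include <stdio.h>

    /* Same identity as csum_fold(): add the 32-bit sum to itself rotated
     * by 16 bits, so any carry between the halves lands in the top half,
     * then take the complemented top 16 bits.
     */
    static uint16_t fold(uint32_t sum)
    {
        uint32_t tmp = sum + ((sum << 16) | (sum >> 16));  /* rol32(sum, 16) */
        return (uint16_t)(~tmp >> 16);
    }

    int main(void)
    {
        /* halves 0x0001 + 0xffff = 0x10000 -> wraps to 0x0001, ~ = 0xfffe */
        printf("%#x\n", fold(0x0001ffffu));   /* prints 0xfffe */
        return 0;
    }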
@@ -95,16 +96,15 @@ static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
 {
 #ifdef __powerpc64__
	u64 res = (__force u64)csum;
-
-	res += (__force u64)addend;
-	return (__force __wsum)((u32)res + (res >> 32));
-#else
+#endif
+
+	if (__builtin_constant_p(csum) && csum == 0)
+		return addend;
+	if (__builtin_constant_p(addend) && addend == 0)
+		return csum;
+
+#ifdef __powerpc64__
+	res += (__force u64)addend;
+	return (__force __wsum)((u32)res + (res >> 32));
+#else
	asm("addc %0,%0,%1;"
	    "addze %0,%0;"
	    : "+r" (csum) : "r" (addend) : "xer");
@@ -22,10 +22,55 @@
 #define BRANCH_SET_LINK	0x1
 #define BRANCH_ABSOLUTE	0x2

-bool is_offset_in_branch_range(long offset);
-bool is_offset_in_cond_branch_range(long offset);
-int create_branch(ppc_inst_t *instr, const u32 *addr,
-		  unsigned long target, int flags);
 DECLARE_STATIC_KEY_FALSE(init_mem_is_free);

+/*
+ * Powerpc branch instruction is :
+ *
+ *	0         6                 30   31
+ *	+---------+----------------+---+---+
+ *	| opcode  |     LI         |AA |LK |
+ *	+---------+----------------+---+---+
+ *	Where AA = 0 and LK = 0
+ *
+ * LI is a signed 24 bits integer. The real branch offset is computed
+ * by: imm32 = SignExtend(LI:'0b00', 32);
+ *
+ * So the maximum forward branch should be:
+ *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
+ * The maximum backward branch should be:
+ *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
+ */
+static inline bool is_offset_in_branch_range(long offset)
+{
+	return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
+}
+
+static inline bool is_offset_in_cond_branch_range(long offset)
+{
+	return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
+}
+
+static inline int create_branch(ppc_inst_t *instr, const u32 *addr,
+				unsigned long target, int flags)
+{
+	long offset;
+
+	*instr = ppc_inst(0);
+	offset = target;
+	if (! (flags & BRANCH_ABSOLUTE))
+		offset = offset - (unsigned long)addr;
+
+	/* Check we can represent the target in the instruction format */
+	if (!is_offset_in_branch_range(offset))
+		return 1;
+
+	/* Mask out the flags and target, so they don't step on each other. */
+	*instr = ppc_inst(0x48000000 | (flags & 0x3) | (offset & 0x03FFFFFC));
+
+	return 0;
+}
+
 int create_cond_branch(ppc_inst_t *instr, const u32 *addr,
		       unsigned long target, int flags);
 int patch_branch(u32 *addr, unsigned long target, int flags);
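As a worked example of the encoding described in that comment (illustrative numbers, not taken from the source): a relative unconditional branch at 0x1000 targeting 0x2000 has offset 0x1000, which is word-aligned and in range, so the instruction word is 0x48000000 | 0x1000 = 0x48001000, i.e. "b +0x1000":

    /* Worked example of the I-form branch encoding (illustrative). */
    static unsigned int encode_rel_branch(unsigned long addr, unsigned long target)
    {
        long offset = target - addr;                 /* 0x1000 for our example */
        return 0x48000000u | (offset & 0x03FFFFFC);  /* 0x48001000 == "b +0x1000" */
    }

encode_rel_branch(0x1000, 0x2000) yields exactly what create_branch() would store via ppc_inst() for the same arguments with no flags set.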
@@ -87,7 +132,7 @@ bool is_conditional_branch(ppc_inst_t instr);

 static inline unsigned long ppc_function_entry(void *func)
 {
-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
	u32 *insn = func;

	/*
@@ -112,7 +157,7 @@ static inline unsigned long ppc_function_entry(void *func)
		return (unsigned long)(insn + 2);
	else
		return (unsigned long)func;
-#elif defined(PPC64_ELF_ABI_v1)
+#elif defined(CONFIG_PPC64_ELF_ABI_V1)
	/*
	 * On PPC64 ABIv1 the function pointer actually points to the
	 * function's descriptor. The first entry in the descriptor is the
@@ -126,7 +171,7 @@ static inline unsigned long ppc_function_entry(void *func)

 static inline unsigned long ppc_global_function_entry(void *func)
 {
-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
	/* PPC64 ABIv2 the global entry point is at the address */
	return (unsigned long)func;
 #else
@@ -143,7 +188,7 @@ static inline unsigned long ppc_global_function_entry(void *func)
 static inline unsigned long ppc_kallsyms_lookup_name(const char *name)
 {
	unsigned long addr;
-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
	/* check for dot variant */
	char dot_name[1 + KSYM_NAME_LEN];
	bool dot_appended = false;
@@ -164,7 +209,7 @@ static inline unsigned long ppc_kallsyms_lookup_name(const char *name)
	if (!addr && dot_appended)
		/* Let's try the original non-dot symbol lookup */
		addr = kallsyms_lookup_name(name);
-#elif defined(PPC64_ELF_ABI_v2)
+#elif defined(CONFIG_PPC64_ELF_ABI_V2)
	addr = kallsyms_lookup_name(name);
	if (addr)
		addr = ppc_function_entry((void *)addr);
@@ -174,14 +219,13 @@ static inline unsigned long ppc_kallsyms_lookup_name(const char *name)
	return addr;
 }

-#ifdef CONFIG_PPC64
 /*
  * Some instruction encodings commonly used in dynamic ftracing
  * and function live patching.
  */

 /* This must match the definition of STK_GOT in <asm/ppc_asm.h> */
-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
 #define R2_STACK_OFFSET		24
 #else
 #define R2_STACK_OFFSET		40
@@ -191,6 +235,5 @@ static inline unsigned long ppc_kallsyms_lookup_name(const char *name)

 /* usually preceded by a mflr r0 */
 #define PPC_INST_STD_LR		PPC_RAW_STD(_R0, _R1, PPC_LR_STKOFF)
-#endif /* CONFIG_PPC64 */

 #endif /* _ASM_POWERPC_CODE_PATCHING_H */
@@ -440,6 +440,10 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \
			       CPU_FTR_P9_TM_HV_ASSIST | \
			       CPU_FTR_P9_TM_XER_SO_BUG)
+#define CPU_FTRS_POWER9_DD2_3 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \
+			       CPU_FTR_P9_TM_HV_ASSIST | \
+			       CPU_FTR_P9_TM_XER_SO_BUG | \
+			       CPU_FTR_DAWR)
 #define CPU_FTRS_POWER10 (CPU_FTR_LWSYNC | \
	    CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
	    CPU_FTR_MMCRA | CPU_FTR_SMT | \
@@ -469,14 +473,16 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTRS_POSSIBLE	\
		(CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \
		 CPU_FTR_ALTIVEC_COMP | CPU_FTR_VSX_COMP | CPU_FTRS_POWER9 | \
-		 CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10)
+		 CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \
+		 CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10)
 #else
 #define CPU_FTRS_POSSIBLE	\
		(CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
		 CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
		 CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \
		 CPU_FTR_VSX_COMP | CPU_FTR_ALTIVEC_COMP | CPU_FTRS_POWER9 | \
-		 CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10)
+		 CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \
+		 CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10)
 #endif /* CONFIG_CPU_LITTLE_ENDIAN */
 #endif
 #else
@@ -541,14 +547,16 @@ enum {
 #define CPU_FTRS_ALWAYS \
		(CPU_FTRS_POSSIBLE & ~CPU_FTR_HVMODE & CPU_FTRS_POWER7 & \
		 CPU_FTRS_POWER8E & CPU_FTRS_POWER8 & CPU_FTRS_POWER9 & \
-		 CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_DT_CPU_BASE)
+		 CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \
+		 CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE)
 #else
 #define CPU_FTRS_ALWAYS \
		(CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \
		 CPU_FTRS_POWER6 & CPU_FTRS_POWER7 & CPU_FTRS_CELL & \
		 CPU_FTRS_PA6T & CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \
		 ~CPU_FTR_HVMODE & CPU_FTRS_POSSIBLE & CPU_FTRS_POWER9 & \
-		 CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_DT_CPU_BASE)
+		 CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \
+		 CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE)
 #endif /* CONFIG_CPU_LITTLE_ENDIAN */
 #endif
 #else
@@ -23,6 +23,9 @@ struct drmem_lmb_info {
	u64		lmb_size;
 };

+struct device_node;
+struct property;
+
 extern struct drmem_lmb_info *drmem_info;

 static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb,
@@ -333,8 +333,6 @@ static inline bool eeh_enabled(void)

 static inline void eeh_show_enabled(void) { }

-static inline void eeh_dev_phb_init_dynamic(struct pci_controller *phb) { }
-
 static inline int eeh_check_failure(const volatile void __iomem *token)
 {
	return 0;
@@ -354,11 +352,7 @@ static inline int eeh_phb_pe_create(struct pci_controller *phb) { return 0; }
 #endif /* CONFIG_EEH */

 #if defined(CONFIG_PPC_PSERIES) && defined(CONFIG_EEH)
-void pseries_eeh_init_edev(struct pci_dn *pdn);
 void pseries_eeh_init_edev_recursive(struct pci_dn *pdn);
-#else
-static inline void pseries_eeh_add_device_early(struct pci_dn *pdn) { }
-static inline void pseries_eeh_add_device_tree_early(struct pci_dn *pdn) { }
 #endif

 #ifdef CONFIG_PPC64
@@ -160,7 +160,7 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm,
  * even if DLINFO_ARCH_ITEMS goes to zero or is undefined.
  * update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes
  */
-#define ARCH_DLINFO							\
+#define COMMON_ARCH_DLINFO						\
 do {									\
	/* Handle glibc compatibility. */				\
	NEW_AUX_ENT(AT_IGNOREPPC, AT_IGNOREPPC);			\
@@ -173,6 +173,18 @@ do {									\
	ARCH_DLINFO_CACHE_GEOMETRY;					\
 } while (0)

+#define ARCH_DLINFO							\
+do {									\
+	COMMON_ARCH_DLINFO;						\
+	NEW_AUX_ENT(AT_MINSIGSTKSZ, get_min_sigframe_size());		\
+} while (0)
+
+#define COMPAT_ARCH_DLINFO						\
+do {									\
+	COMMON_ARCH_DLINFO;						\
+	NEW_AUX_ENT(AT_MINSIGSTKSZ, get_min_sigframe_size_compat());	\
+} while (0)
+
 /* Relocate the kernel image to @final_address */
 void relocate(unsigned long final_address);
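On the consuming side, userspace reads this new aux-vector entry to size an alternate signal stack. A minimal sketch using the standard glibc API (AT_MINSIGSTKSZ may need a fallback define on older userspace headers; the headroom amount is an arbitrary example):

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/auxv.h>

    #ifndef AT_MINSIGSTKSZ
    #define AT_MINSIGSTKSZ 51   /* fallback for older headers */
    #endif

    int main(void)
    {
        /* Kernel-advertised minimum signal frame size; 0 if not provided. */
        unsigned long min = getauxval(AT_MINSIGSTKSZ);
        stack_t ss;

        ss.ss_size = (min > SIGSTKSZ ? min : SIGSTKSZ) + 4096; /* headroom */
        ss.ss_sp = malloc(ss.ss_size);
        ss.ss_flags = 0;
        return ss.ss_sp ? sigaltstack(&ss, NULL) : 1;
    }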
@@ -50,7 +50,7 @@ struct fadump_crash_info_header {
	u64		elfcorehdr_addr;
	u32		crashing_cpu;
	struct pt_regs	regs;
-	struct cpumask	online_mask;
+	struct cpumask	cpu_mask;
 };

 struct fadump_memory_range {
@@ -1,35 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Copyright 2009 Freescale Semiconductor, Inc.
- *
- * Cache SRAM handling for QorIQ platform
- *
- * Author: Vivek Mahajan <vivek.mahajan@freescale.com>
- *
- * This file is derived from the original work done
- * by Sylvain Munaut for the Bestcomm SRAM allocator.
- */
-
-#ifndef __ASM_POWERPC_FSL_85XX_CACHE_SRAM_H__
-#define __ASM_POWERPC_FSL_85XX_CACHE_SRAM_H__
-
-#include <asm/rheap.h>
-#include <linux/spinlock.h>
-
-/*
- * Cache-SRAM
- */
-
-struct mpc85xx_cache_sram {
-	phys_addr_t base_phys;
-	void *base_virt;
-	unsigned int size;
-	rh_info_t *rh;
-	spinlock_t lock;
-};
-
-extern void mpc85xx_cache_sram_free(void *ptr);
-extern void *mpc85xx_cache_sram_alloc(unsigned int size,
-				      phys_addr_t *phys, unsigned int align);
-
-#endif /* __AMS_POWERPC_FSL_85XX_CACHE_SRAM_H__ */
@@ -64,7 +64,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
  * those.
  */
 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
 static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
 {
	/* We need to skip past the initial dot, and the __se_sys alias */
@@ -83,10 +83,10 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
	       (!strncmp(sym, "ppc32_", 6) && !strcmp(sym + 6, name + 4)) ||
	       (!strncmp(sym, "ppc64_", 6) && !strcmp(sym + 6, name + 4));
 }
-#endif /* PPC64_ELF_ABI_v1 */
+#endif /* CONFIG_PPC64_ELF_ABI_V1 */
 #endif /* CONFIG_FTRACE_SYSCALLS */

-#ifdef CONFIG_PPC64
+#if defined(CONFIG_PPC64) && defined(CONFIG_FUNCTION_TRACER)
 #include <asm/paca.h>

 static inline void this_cpu_disable_ftrace(void)
@@ -110,11 +110,13 @@ static inline u8 this_cpu_get_ftrace_enabled(void)
	return get_paca()->ftrace_enabled;
 }

+void ftrace_free_init_tramp(void);
 #else /* CONFIG_PPC64 */
 static inline void this_cpu_disable_ftrace(void) { }
 static inline void this_cpu_enable_ftrace(void) { }
 static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { }
 static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; }
+static inline void ftrace_free_init_tramp(void) { }
 #endif /* CONFIG_PPC64 */
 #endif /* !__ASSEMBLY__ */
@@ -24,7 +24,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
					 unsigned long addr,
					 unsigned long len)
 {
-	if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled())
+	if (IS_ENABLED(CONFIG_PPC_64S_HASH_MMU) && !radix_enabled())
		return slice_is_hugepage_only_range(mm, addr, len);
	return 0;
 }
@@ -158,13 +158,10 @@ static inline char *__ppc_inst_as_str(char str[PPC_INST_STR_LEN], ppc_inst_t x)
	__str;				\
 })

-static inline int copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src)
+static inline int __copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src)
 {
	unsigned int val, suffix;

-	if (unlikely(!is_kernel_addr((unsigned long)src)))
-		return -ERANGE;
-
	/* See https://github.com/ClangBuiltLinux/linux/issues/1521 */
 #if defined(CONFIG_CC_IS_CLANG) && CONFIG_CLANG_VERSION < 140000
	val = suffix = 0;
@@ -181,4 +178,12 @@ Efault:
	return -EFAULT;
 }

+static inline int copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src)
+{
+	if (unlikely(!is_kernel_addr((unsigned long)src)))
+		return -ERANGE;
+
+	return __copy_inst_from_kernel_nofault(inst, src);
+}
+
 #endif /* _ASM_POWERPC_INST_H */
@@ -324,22 +324,46 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
	}
 #endif

+	/* If data relocations are enabled, it's safe to use nmi_enter() */
+	if (mfmsr() & MSR_DR) {
+		nmi_enter();
+		return;
+	}
+
	/*
-	 * Do not use nmi_enter() for pseries hash guest taking a real-mode
+	 * But do not use nmi_enter() for pseries hash guest taking a real-mode
	 * NMI because not everything it touches is within the RMA limit.
	 */
-	if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64) ||
-	    !firmware_has_feature(FW_FEATURE_LPAR) ||
-	    radix_enabled() || (mfmsr() & MSR_DR))
-		nmi_enter();
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
+	    firmware_has_feature(FW_FEATURE_LPAR) &&
+	    !radix_enabled())
+		return;
+
+	/*
+	 * Likewise, don't use it if we have some form of instrumentation (like
+	 * KASAN shadow) that is not safe to access in real mode (even on radix)
+	 */
+	if (IS_ENABLED(CONFIG_KASAN))
+		return;
+
+	/* Otherwise, it should be safe to call it */
+	nmi_enter();
 }

 static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
 {
-	if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64) ||
-	    !firmware_has_feature(FW_FEATURE_LPAR) ||
-	    radix_enabled() || (mfmsr() & MSR_DR))
+	if (mfmsr() & MSR_DR) {
+		// nmi_exit if relocations are on
		nmi_exit();
+	} else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) &&
+		   firmware_has_feature(FW_FEATURE_LPAR) &&
+		   !radix_enabled()) {
+		// no nmi_exit for a pseries hash guest taking a real mode exception
+	} else if (IS_ENABLED(CONFIG_KASAN)) {
+		// no nmi_exit for KASAN in real mode
+	} else {
+		nmi_exit();
+	}

	/*
	 * nmi does not call nap_adjust_return because nmi should not create
@@ -407,7 +431,8 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
  * Specific handlers may have additional restrictions.
  */
 #define DEFINE_INTERRUPT_HANDLER_RAW(func)				\
-static __always_inline long ____##func(struct pt_regs *regs);		\
+static __always_inline __no_sanitize_address __no_kcsan long		\
+____##func(struct pt_regs *regs);					\
									\
 interrupt_handler long func(struct pt_regs *regs)			\
 {									\
@@ -421,7 +446,8 @@ interrupt_handler long func(struct pt_regs *regs)			\
 }									\
 NOKPROBE_SYMBOL(func);							\
									\
-static __always_inline long ____##func(struct pt_regs *regs)
+static __always_inline __no_sanitize_address __no_kcsan long		\
+____##func(struct pt_regs *regs)

 /**
  * DECLARE_INTERRUPT_HANDLER - Declare synchronous interrupt handler function
@@ -541,7 +567,8 @@ static __always_inline void ____##func(struct pt_regs *regs)
  * body with a pair of curly brackets.
  */
 #define DEFINE_INTERRUPT_HANDLER_NMI(func)				\
-static __always_inline long ____##func(struct pt_regs *regs);		\
+static __always_inline __no_sanitize_address __no_kcsan long		\
+____##func(struct pt_regs *regs);					\
									\
 interrupt_handler long func(struct pt_regs *regs)			\
 {									\
@@ -558,7 +585,8 @@ interrupt_handler long func(struct pt_regs *regs)			\
 }									\
 NOKPROBE_SYMBOL(func);							\
									\
-static __always_inline long ____##func(struct pt_regs *regs)
+static __always_inline __no_sanitize_address __no_kcsan long		\
+____##func(struct pt_regs *regs)


 /* Interrupt handlers */
@@ -38,8 +38,6 @@ extern struct pci_dev *isa_bridge_pcidev;
 #define SIO_CONFIG_RA	0x398
 #define SIO_CONFIG_RD	0x399

-#define SLOW_DOWN_IO
-
 /* 32 bits uses slightly different variables for the various IO
  * bases. Most of this file only uses _IO_BASE though which we
  * define properly based on the platform
@@ -51,13 +51,11 @@ struct iommu_table_ops {
	int (*xchg_no_kill)(struct iommu_table *tbl,
			long index,
			unsigned long *hpa,
-			enum dma_data_direction *direction,
-			bool realmode);
+			enum dma_data_direction *direction);

	void (*tce_kill)(struct iommu_table *tbl,
			unsigned long index,
-			unsigned long pages,
-			bool realmode);
+			unsigned long pages);

	__be64 *(*useraddrptr)(struct iommu_table *tbl, long index, bool alloc);
 #endif
@@ -30,9 +30,31 @@

 #define KASAN_SHADOW_OFFSET	ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)

+#ifdef CONFIG_PPC32
 #define KASAN_SHADOW_END	(-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT))
+#elif defined(CONFIG_PPC_BOOK3S_64)
+/*
+ * The shadow ends before the highest accessible address
+ * because we don't need a shadow for the shadow. Instead:
+ * c00e000000000000 >> 3 + a80e000000000000 = c00fc00000000000
+ */
+#define KASAN_SHADOW_END	0xc00fc00000000000UL
+#endif

 #ifdef CONFIG_KASAN
+#ifdef CONFIG_PPC_BOOK3S_64
+DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
+
+static __always_inline bool kasan_arch_is_ready(void)
+{
+	if (static_branch_likely(&powerpc_kasan_enabled_key))
+		return true;
+	return false;
+}
+
+#define kasan_arch_is_ready kasan_arch_is_ready
+#endif
+
 void kasan_early_init(void);
 void kasan_mmu_init(void);
 void kasan_init(void);
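The arithmetic in that comment is the generic KASAN shadow mapping instantiated with the PPC64 Radix offset. A standalone check of the constants (illustrative, not kernel code):

    #include <stdint.h>
    #include <assert.h>

    /* Generic KASAN mapping: shadow = (addr >> 3) + offset, i.e. one
     * shadow byte per 8 bytes of memory, with the PPC64 Radix offset.
     */
    #define KASAN_SHADOW_OFFSET 0xa80e000000000000UL

    static uint64_t mem_to_shadow(uint64_t addr)
    {
        return (addr >> 3) + KASAN_SHADOW_OFFSET;
    }

    int main(void)
    {
        /* Shadow of the vmemmap end is exactly KASAN_SHADOW_END. */
        assert(mem_to_shadow(0xc00e000000000000UL) == 0xc00fc00000000000UL);
        return 0;
    }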
@@ -52,7 +52,6 @@ __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
	return false;
 }

-static inline void __kuap_assert_locked(void) { }
 static inline void __kuap_lock(void) { }
 static inline void __kuap_save_and_lock(struct pt_regs *regs) { }
 static inline void kuap_user_restore(struct pt_regs *regs) { }
@@ -14,9 +14,6 @@
 #define XICS_MFRR		0xc
 #define XICS_IPI		2	/* interrupt source # for IPIs */

-/* LPIDs we support with this build -- runtime limit may be lower */
-#define KVMPPC_NR_LPIDS	(LPID_RSVD + 1)
-
 /* Maximum number of threads per physical core */
 #define MAX_SMT_THREADS		8
@@ -36,7 +36,12 @@
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 #include <asm/kvm_book3s_asm.h>		/* for MAX_SMT_THREADS */
 #define KVM_MAX_VCPU_IDS	(MAX_SMT_THREADS * KVM_MAX_VCORES)
-#define KVM_MAX_NESTED_GUESTS	KVMPPC_NR_LPIDS
+
+/*
+ * Limit the nested partition table to 4096 entries (because that's what
+ * hardware supports). Both guest and host use this value.
+ */
+#define KVM_MAX_NESTED_GUESTS_SHIFT	12

 #else
 #define KVM_MAX_VCPU_IDS	KVM_MAX_VCPUS
@@ -327,8 +332,7 @@ struct kvm_arch {
	struct list_head uvmem_pfns;
	struct mutex mmu_setup_lock;	/* nests inside vcpu mutexes */
	u64 l1_ptcr;
-	int max_nested_lpid;
-	struct kvm_nested_guest *nested_guests[KVM_MAX_NESTED_GUESTS];
+	struct idr kvm_nested_guest_idr;
	/* This array can grow quite large, keep it at the end */
	struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
 #endif
@@ -177,8 +177,6 @@ extern void kvmppc_setup_partition_table(struct kvm *kvm);

 extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
				struct kvm_create_spapr_tce_64 *args);
-extern struct kvmppc_spapr_tce_table *kvmppc_find_table(
-		struct kvm *kvm, unsigned long liobn);
 #define kvmppc_ioba_validate(stt, ioba, npages)                         \
		(iommu_tce_check_ioba((stt)->page_shift, (stt)->offset, \
				(stt)->size, (ioba), (npages)) ?        \
@@ -685,7 +683,7 @@ extern int kvmppc_xive_set_irq(struct kvm *kvm, int irq_source_id, u32 irq,
			       int level, bool line_status);
 extern void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu);
 extern void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu);
-extern void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu);
+extern bool kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu);

 static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu)
 {
@@ -723,7 +721,7 @@ static inline int kvmppc_xive_set_irq(struct kvm *kvm, int irq_source_id, u32 ir
				      int level, bool line_status) { return -ENODEV; }
 static inline void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu) { }
 static inline void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu) { }
-static inline void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) { }
+static inline bool kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) { return true; }

 static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu)
 { return 0; }
@@ -789,13 +787,6 @@ long kvmppc_rm_h_page_init(struct kvm_vcpu *vcpu, unsigned long flags,
			   unsigned long dest, unsigned long src);
 long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr,
			  unsigned long slb_v, unsigned int status, bool data);
-unsigned long kvmppc_rm_h_xirr(struct kvm_vcpu *vcpu);
-unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu);
-unsigned long kvmppc_rm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server);
-int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
-		    unsigned long mfrr);
-int kvmppc_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr);
-int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr);
 void kvmppc_guest_entry_inject_int(struct kvm_vcpu *vcpu);

 /*
@@ -877,7 +868,6 @@ int kvm_vcpu_ioctl_dirty_tlb(struct kvm_vcpu *vcpu,
			     struct kvm_dirty_tlb *cfg);

 long kvmppc_alloc_lpid(void);
-void kvmppc_claim_lpid(long lpid);
 void kvmppc_free_lpid(long lpid);
 void kvmppc_init_lpid(unsigned long nr_lpids);
@@ -4,7 +4,7 @@

 #include <asm/types.h>

-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
 #define cond_syscall(x) \
	asm ("\t.weak " #x "\n\t.set " #x ", sys_ni_syscall\n"		\
	     "\t.weak ." #x "\n\t.set ." #x ", .sys_ni_syscall\n")
@@ -34,15 +34,10 @@ extern void mm_iommu_init(struct mm_struct *mm);
 extern void mm_iommu_cleanup(struct mm_struct *mm);
 extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup(struct mm_struct *mm,
		unsigned long ua, unsigned long size);
-extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(
-		struct mm_struct *mm, unsigned long ua, unsigned long size);
 extern struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm,
		unsigned long ua, unsigned long entries);
 extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
-extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
-		unsigned long ua, unsigned int pageshift, unsigned long *hpa);
-extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua);
 extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa,
		unsigned int pageshift, unsigned long *size);
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
@@ -41,9 +41,7 @@ struct mod_arch_specific {

 #ifdef CONFIG_DYNAMIC_FTRACE
	unsigned long tramp;
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
	unsigned long tramp_regs;
-#endif
 #endif

	/* List of BUG addresses, source line numbers and filenames */
@@ -30,7 +30,6 @@ struct mm_struct;

 extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
			    unsigned long end);
-extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);

 #ifdef CONFIG_PPC_8xx
 static inline void local_flush_tlb_mm(struct mm_struct *mm)
@@ -45,7 +44,18 @@ static inline void local_flush_tlb_page(struct vm_area_struct *vma, unsigned lon
 {
	asm volatile ("tlbie %0; sync" : : "r" (vmaddr) : "memory");
 }
+
+static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	start &= PAGE_MASK;
+
+	if (end - start <= PAGE_SIZE)
+		asm volatile ("tlbie %0; sync" : : "r" (start) : "memory");
+	else
+		asm volatile ("sync; tlbia; isync" : : : "memory");
+}
 #else
+extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void local_flush_tlb_mm(struct mm_struct *mm);
 extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
@@ -12,6 +12,7 @@

 #ifdef CONFIG_PPC64

+#include <linux/cache.h>
 #include <linux/string.h>
 #include <asm/types.h>
 #include <asm/lppaca.h>
@@ -152,16 +153,9 @@ struct paca_struct {
	struct tlb_core_data tcd;
 #endif /* CONFIG_PPC_BOOK3E */

 #ifdef CONFIG_PPC_BOOK3S
-#ifdef CONFIG_PPC_MM_SLICES
+#ifdef CONFIG_PPC_64S_HASH_MMU
	unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
	unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
-#else
-	u16 mm_ctx_user_psize;
-	u16 mm_ctx_sllp;
-#endif
 #endif
 #endif

	/*
@@ -216,6 +216,9 @@ static inline bool pfn_valid(unsigned long pfn)
 #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
 #else
 #ifdef CONFIG_PPC64
+
+#define VIRTUAL_WARN_ON(x)	WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))
+
 /*
  * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
  * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
@@ -223,13 +226,13 @@ static inline bool pfn_valid(unsigned long pfn)
  */
 #define __va(x)								\
 ({									\
-	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
+	VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET);		\
	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
 })

 #define __pa(x)								\
 ({									\
-	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
+	VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET);		\
	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
 })
@@ -333,6 +336,5 @@ static inline unsigned long kaslr_offset(void)

 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
-#include <asm/slice.h>

 #endif /* _ASM_POWERPC_PAGE_H */
@@ -11,7 +11,7 @@
 #define _ASM_POWERPC_PARPORT_H
 #ifdef __KERNEL__

-#include <asm/prom.h>
+#include <linux/of_irq.h>

 static int parport_pc_find_nonpci_ports (int autoirq, int autodma)
 {
@@ -170,10 +170,10 @@ static inline struct pci_controller *pci_bus_to_host(const struct pci_bus *bus)
	return bus->sysdata;
 }

-#ifndef CONFIG_PPC64
-
 extern int pci_device_from_OF_node(struct device_node *node,
				   u8 *bus, u8 *devfn);
+#ifndef CONFIG_PPC64
+
 extern void pci_create_OF_bus_map(void);

 #else /* CONFIG_PPC64 */
@ -235,16 +235,6 @@ struct pci_dn *add_sriov_vf_pdns(struct pci_dev *pdev);
|
|||
void remove_sriov_vf_pdns(struct pci_dev *pdev);
|
||||
#endif
|
||||
|
||||
static inline int pci_device_from_OF_node(struct device_node *np,
|
||||
u8 *bus, u8 *devfn)
|
||||
{
|
||||
if (!PCI_DN(np))
|
||||
return -ENODEV;
|
||||
*bus = PCI_DN(np)->busno;
|
||||
*devfn = PCI_DN(np)->devfn;
|
||||
return 0;
|
||||
}
|
||||
|
||||
#if defined(CONFIG_EEH)
|
||||
static inline struct eeh_dev *pdn_to_eeh_dev(struct pci_dn *pdn)
|
||||
{
|
||||
|
|
|
@@ -9,6 +9,7 @@
 #include <linux/pci.h>
 #include <linux/pci_hotplug.h>
 #include <linux/irq.h>
+#include <linux/of.h>
 #include <misc/cxl-base.h>
 #include <asm/opal-api.h>

@@ -127,8 +127,53 @@


 /* opcode and xopcode for instructions */
-#define OP_TRAP 3
-#define OP_TRAP_64 2
+#define OP_PREFIX 1
+#define OP_TRAP_64 2
+#define OP_TRAP 3
+#define OP_SC 17
+#define OP_19 19
+#define OP_31 31
+#define OP_LWZ 32
+#define OP_LWZU 33
+#define OP_LBZ 34
+#define OP_LBZU 35
+#define OP_STW 36
+#define OP_STWU 37
+#define OP_STB 38
+#define OP_STBU 39
+#define OP_LHZ 40
+#define OP_LHZU 41
+#define OP_LHA 42
+#define OP_LHAU 43
+#define OP_STH 44
+#define OP_STHU 45
+#define OP_LMW 46
+#define OP_STMW 47
+#define OP_LFS 48
+#define OP_LFSU 49
+#define OP_LFD 50
+#define OP_LFDU 51
+#define OP_STFS 52
+#define OP_STFSU 53
+#define OP_STFD 54
+#define OP_STFDU 55
+#define OP_LQ 56
+#define OP_LD 58
+#define OP_STD 62
+
+#define OP_19_XOP_RFID		18
+#define OP_19_XOP_RFMCI		38
+#define OP_19_XOP_RFDI		39
+#define OP_19_XOP_RFI		50
+#define OP_19_XOP_RFCI		51
+#define OP_19_XOP_RFSCV		82
+#define OP_19_XOP_HRFID		274
+#define OP_19_XOP_URFID		306
+#define OP_19_XOP_STOP		370
+#define OP_19_XOP_DOZE		402
+#define OP_19_XOP_NAP		434
+#define OP_19_XOP_SLEEP		466
+#define OP_19_XOP_RVWINKLE	498

 #define OP_31_XOP_TRAP 4
 #define OP_31_XOP_LDX 21
@@ -150,6 +195,8 @@
 #define OP_31_XOP_LHZUX 311
+#define OP_31_XOP_MSGSNDP 142
+#define OP_31_XOP_MSGCLRP 174
 #define OP_31_XOP_MTMSR 146
 #define OP_31_XOP_MTMSRD 178
 #define OP_31_XOP_TLBIE 306
 #define OP_31_XOP_MFSPR 339
 #define OP_31_XOP_LWAX 341
@@ -208,42 +255,6 @@
 /* VMX Vector Store Instructions */
 #define OP_31_XOP_STVX 231

-/* Prefixed Instructions */
-#define OP_PREFIX 1
-
-#define OP_31 31
-#define OP_LWZ 32
-#define OP_STFS 52
-#define OP_STFSU 53
-#define OP_STFD 54
-#define OP_STFDU 55
-#define OP_LD 58
-#define OP_LWZU 33
-#define OP_LBZ 34
-#define OP_LBZU 35
-#define OP_STW 36
-#define OP_STWU 37
-#define OP_STD 62
-#define OP_STB 38
-#define OP_STBU 39
-#define OP_LHZ 40
-#define OP_LHZU 41
-#define OP_LHA 42
-#define OP_LHAU 43
-#define OP_STH 44
-#define OP_STHU 45
-#define OP_LMW 46
-#define OP_STMW 47
-#define OP_LFS 48
-#define OP_LFSU 49
-#define OP_LFD 50
-#define OP_LFDU 51
-#define OP_STFS 52
-#define OP_STFSU 53
-#define OP_STFD 54
-#define OP_STFDU 55
-#define OP_LQ 56
-
 /* sorted alphabetically */
 #define PPC_INST_BCCTR_FLUSH 0x4c400420
 #define PPC_INST_COPY 0x7c20060c
@@ -285,13 +296,6 @@
 #define PPC_INST_TRECHKPT 0x7c0007dd
 #define PPC_INST_TRECLAIM 0x7c00075d
 #define PPC_INST_TSR 0x7c0005dd
-#define PPC_INST_LD 0xe8000000
-#define PPC_INST_STD 0xf8000000
-#define PPC_INST_ADDIS 0x3c000000
-#define PPC_INST_ADD 0x7c000214
-#define PPC_INST_DIVD 0x7c0003d2
-#define PPC_INST_BRANCH 0x48000000
-#define PPC_INST_BL 0x48000001
 #define PPC_INST_BRANCH_COND 0x40800000

 /* Prefixes */
@@ -352,6 +356,10 @@
 #define PPC_HIGHER(v) (((v) >> 32) & 0xffff)
 #define PPC_HIGHEST(v) (((v) >> 48) & 0xffff)

+/* LI Field */
+#define PPC_LI_MASK 0x03fffffc
+#define PPC_LI(v) ((v) & PPC_LI_MASK)
+
 /*
  * Only use the larx hint bit on 64bit CPUs. e500v1/v2 based CPUs will treat a
  * larx with EH set as an illegal instruction.
@@ -460,10 +468,10 @@
	(0x100000c7 | ___PPC_RT(vrt) | ___PPC_RA(vra) | ___PPC_RB(vrb) | __PPC_RC21)
 #define PPC_RAW_VCMPEQUB_RC(vrt, vra, vrb) \
	(0x10000006 | ___PPC_RT(vrt) | ___PPC_RA(vra) | ___PPC_RB(vrb) | __PPC_RC21)
-#define PPC_RAW_LD(r, base, i)		(PPC_INST_LD | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i))
+#define PPC_RAW_LD(r, base, i)		(0xe8000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i))
 #define PPC_RAW_LWZ(r, base, i)	(0x80000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i))
 #define PPC_RAW_LWZX(t, a, b)		(0x7c00002e | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
-#define PPC_RAW_STD(r, base, i)	(PPC_INST_STD | ___PPC_RS(r) | ___PPC_RA(base) | IMM_DS(i))
+#define PPC_RAW_STD(r, base, i)	(0xf8000000 | ___PPC_RS(r) | ___PPC_RA(base) | IMM_DS(i))
 #define PPC_RAW_STDCX(s, a, b)		(0x7c0001ad | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
 #define PPC_RAW_LFSX(t, a, b)		(0x7c00042e | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
 #define PPC_RAW_STFSX(s, a, b)		(0x7c00052e | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b))
@@ -474,8 +482,8 @@
 #define PPC_RAW_ADDE(t, a, b)		(0x7c000114 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
 #define PPC_RAW_ADDZE(t, a)		(0x7c000194 | ___PPC_RT(t) | ___PPC_RA(a))
 #define PPC_RAW_ADDME(t, a)		(0x7c0001d4 | ___PPC_RT(t) | ___PPC_RA(a))
-#define PPC_RAW_ADD(t, a, b)		(PPC_INST_ADD | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
-#define PPC_RAW_ADD_DOT(t, a, b)	(PPC_INST_ADD | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1)
+#define PPC_RAW_ADD(t, a, b)		(0x7c000214 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
+#define PPC_RAW_ADD_DOT(t, a, b)	(0x7c000214 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1)
 #define PPC_RAW_ADDC(t, a, b)		(0x7c000014 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b))
 #define PPC_RAW_ADDC_DOT(t, a, b)	(0x7c000014 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1)
 #define PPC_RAW_NOP()			PPC_RAW_ORI(0, 0, 0)
@@ -571,7 +579,8 @@
 #define PPC_RAW_MTSPR(spr, d)		(0x7c0003a6 | ___PPC_RS(d) | __PPC_SPR(spr))
 #define PPC_RAW_EIEIO()		(0x7c0006ac)

-#define PPC_RAW_BRANCH(addr)		(PPC_INST_BRANCH | ((addr) & 0x03fffffc))
+#define PPC_RAW_BRANCH(offset)		(0x48000000 | PPC_LI(offset))
+#define PPC_RAW_BL(offset)		(0x48000001 | PPC_LI(offset))

 /* Deal with instructions that older assemblers aren't aware of */
 #define PPC_BCCTR_FLUSH	stringify_in_c(.long PPC_INST_BCCTR_FLUSH)

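For reference, the new PPC_LI()/PPC_RAW_BRANCH() helpers simply mask a byte offset into the 26-bit LI field of an I-form branch. A small standalone sketch of the encoding (macros copied from the hunk above; the offset value is made up):

    #include <stdint.h>
    #include <stdio.h>

    #define PPC_LI_MASK 0x03fffffc
    #define PPC_LI(v)   ((v) & PPC_LI_MASK)
    #define PPC_RAW_BRANCH(offset) (0x48000000 | PPC_LI(offset))
    #define PPC_RAW_BL(offset)     (0x48000001 | PPC_LI(offset))

    int main(void)
    {
            int32_t offset = -0x100;        /* branch 256 bytes backwards */

            /* 0x48000000 is the "b" opcode; setting bit 0 (LK) makes it "bl". */
            printf("b  %d -> %08x\n", offset, PPC_RAW_BRANCH(offset));
            printf("bl %d -> %08x\n", offset, PPC_RAW_BL(offset));
            return 0;
    }
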
@@ -149,7 +149,7 @@
 #define __STK_REG(i)   (112 + ((i)-14)*8)
 #define STK_REG(i)     __STK_REG(__REG_##i)

-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
 #define STK_GOT		24
 #define __STK_PARAM(i)	(32 + ((i)-3)*8)
 #else
@@ -158,7 +158,7 @@
 #endif
 #define STK_PARAM(i)	__STK_PARAM(__REG_##i)

-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2

 #define _GLOBAL(name) \
	.align 2 ; \

@@ -8,6 +8,7 @@
  * Copyright IBM Corporation, 2012
  */
 #include <linux/types.h>
+#include <asm/disassemble.h>

 typedef u32 ppc_opcode_t;
 #define BREAKPOINT_INSTRUCTION	0x7fe00008	/* trap */
@@ -31,6 +32,41 @@ typedef u32 ppc_opcode_t;
 #define MSR_SINGLESTEP	(MSR_SE)
 #endif

+static inline bool can_single_step(u32 inst)
+{
+	switch (get_op(inst)) {
+	case OP_TRAP_64:	return false;
+	case OP_TRAP:		return false;
+	case OP_SC:		return false;
+	case OP_19:
+		switch (get_xop(inst)) {
+		case OP_19_XOP_RFID:		return false;
+		case OP_19_XOP_RFMCI:		return false;
+		case OP_19_XOP_RFDI:		return false;
+		case OP_19_XOP_RFI:		return false;
+		case OP_19_XOP_RFCI:		return false;
+		case OP_19_XOP_RFSCV:		return false;
+		case OP_19_XOP_HRFID:		return false;
+		case OP_19_XOP_URFID:		return false;
+		case OP_19_XOP_STOP:		return false;
+		case OP_19_XOP_DOZE:		return false;
+		case OP_19_XOP_NAP:		return false;
+		case OP_19_XOP_SLEEP:		return false;
+		case OP_19_XOP_RVWINKLE:	return false;
+		}
+		break;
+	case OP_31:
+		switch (get_xop(inst)) {
+		case OP_31_XOP_TRAP:	return false;
+		case OP_31_XOP_TRAP_64:	return false;
+		case OP_31_XOP_MTMSR:	return false;
+		case OP_31_XOP_MTMSRD:	return false;
+		}
+		break;
+	}
+	return true;
+}
+
 /* Enable single stepping for the current task */
 static inline void enable_single_step(struct pt_regs *regs)
 {

@@ -392,8 +392,6 @@ static inline void prefetchw(const void *x)

 #define spin_lock_prefetch(x)	prefetchw(x)

-#define HAVE_ARCH_PICK_MMAP_LAYOUT
-
 /* asm stubs */
 extern unsigned long isa300_idle_stop_noloss(unsigned long psscr_val);
 extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val);

@@ -120,7 +120,7 @@ struct pt_regs
				 STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE)
 #define STACK_FRAME_MARKER	12

-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
 #define STACK_FRAME_MIN_SIZE	32
 #else
 #define STACK_FRAME_MIN_SIZE	STACK_FRAME_OVERHEAD

@@ -417,7 +417,6 @@
 #define   FSCR_DSCR	__MASK(FSCR_DSCR_LG)
 #define   FSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56)	/* interrupt cause */
 #define SPRN_HFSCR	0xbe	/* HV=1 Facility Status & Control Register */
-#define   HFSCR_PREFIX	__MASK(FSCR_PREFIX_LG)
 #define   HFSCR_MSGP	__MASK(FSCR_MSGP_LG)
 #define   HFSCR_TAR	__MASK(FSCR_TAR_LG)
 #define   HFSCR_EBB	__MASK(FSCR_EBB_LG)
@@ -474,8 +473,6 @@
 #ifndef SPRN_LPID
 #define SPRN_LPID	0x13F	/* Logical Partition Identifier */
 #endif
-#define   LPID_RSVD_POWER7	0x3ff	/* Reserved LPID for partn switching */
-#define   LPID_RSVD		0xfff	/* Reserved LPID for partn switching */
 #define SPRN_HMER	0x150	/* Hypervisor maintenance exception reg */
 #define   HMER_DEBUG_TRIG	(1ul << (63 - 17)) /* Debug trigger */
 #define SPRN_HMEER	0x151	/* Hyp maintenance exception enable reg */

@@ -9,4 +9,9 @@
 struct pt_regs;
 void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags);

+unsigned long get_min_sigframe_size_32(void);
+unsigned long get_min_sigframe_size_64(void);
+unsigned long get_min_sigframe_size(void);
+unsigned long get_min_sigframe_size_compat(void);
+
 #endif  /* _ASM_POWERPC_SIGNAL_H */

@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_POWERPC_SLICE_H
-#define _ASM_POWERPC_SLICE_H
-
-#ifdef CONFIG_PPC_BOOK3S_64
-#include <asm/book3s/64/slice.h>
-#endif
-
-#ifndef __ASSEMBLY__
-
-struct mm_struct;
-
-#ifdef CONFIG_PPC_MM_SLICES
-
-#ifdef CONFIG_HUGETLB_PAGE
-#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
-#endif
-#define HAVE_ARCH_UNMAPPED_AREA
-#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
-
-unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
-				      unsigned long flags, unsigned int psize,
-				      int topdown);
-
-unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
-
-void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
-			   unsigned long len, unsigned int psize);
-
-void slice_init_new_context_exec(struct mm_struct *mm);
-void slice_setup_new_exec(void);
-
-#else /* CONFIG_PPC_MM_SLICES */
-
-static inline void slice_init_new_context_exec(struct mm_struct *mm) {}
-
-static inline unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
-}
-
-#endif /* CONFIG_PPC_MM_SLICES */
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _ASM_POWERPC_SLICE_H */

@@ -189,8 +189,6 @@ extern void __cpu_die(unsigned int cpu);
 #define smp_setup_cpu_maps()
 #define thread_group_shares_l2	0
 #define thread_group_shares_l3	0
-static inline void inhibit_secondary_onlining(void) {}
-static inline void uninhibit_secondary_onlining(void) {}
 static inline const struct cpumask *cpu_sibling_mask(int cpu)
 {
	return cpumask_of(cpu);

@@ -10,6 +10,8 @@

 #ifdef CONFIG_PPC_SVM

+#include <asm/reg.h>
+
 static inline bool is_secure_guest(void)
 {
	return mfmsr() & MSR_S;

@@ -62,6 +62,15 @@ static inline void disable_kernel_altivec(void)
 #else
 static inline void save_altivec(struct task_struct *t) { }
 static inline void __giveup_altivec(struct task_struct *t) { }
+static inline void enable_kernel_altivec(void)
+{
+	BUILD_BUG();
+}
+
+static inline void disable_kernel_altivec(void)
+{
+	BUILD_BUG();
+}
 #endif

 #ifdef CONFIG_VSX

@@ -72,4 +72,12 @@
 #define STACK_TOP_MAX TASK_SIZE_USER64
 #define STACK_TOP (is_32bit_task() ? STACK_TOP_USER32 : STACK_TOP_USER64)

+#define arch_get_mmap_base(addr, base) \
+	(((addr) > DEFAULT_MAP_WINDOW) ? (base) + TASK_SIZE - DEFAULT_MAP_WINDOW : (base))
+
+#define arch_get_mmap_end(addr, len, flags) \
+	(((addr) > DEFAULT_MAP_WINDOW) || \
+	 (((flags) & MAP_FIXED) && ((addr) + (len) > DEFAULT_MAP_WINDOW)) ? TASK_SIZE : \
+							  DEFAULT_MAP_WINDOW)
+
 #endif /* _ASM_POWERPC_TASK_SIZE_64_H */

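Seen from userspace, arch_get_mmap_end() means a hint at or below DEFAULT_MAP_WINDOW keeps allocations inside the window, while a hint above it opens up the full TASK_SIZE. A hedged demo (the 128TB constant is the usual 64K-page Book3S value; actual behaviour depends on the running kernel):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* 128TB, the usual DEFAULT_MAP_WINDOW on 64-bit Book3S. */
            unsigned long window = 1UL << 47;
            void *hint = (void *)(window + (1UL << 30));

            /* With a hint above the window the kernel may map beyond 128TB. */
            void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p != MAP_FAILED)
                    printf("mapped at %p\n", p);
            return 0;
    }
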
@@ -24,6 +24,7 @@ extern unsigned long tb_ticks_per_jiffy;
 extern unsigned long tb_ticks_per_usec;
 extern unsigned long tb_ticks_per_sec;
 extern struct clock_event_device decrementer_clockevent;
+extern u64 decrementer_max;


 extern void generic_calibrate_decr(void);

@@ -111,14 +111,10 @@ static inline void unmap_cpu_from_node(unsigned long cpu) {}
 #endif /* CONFIG_NUMA */

 #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR)
-extern int find_and_online_cpu_nid(int cpu);
+void find_and_update_cpu_nid(int cpu);
 extern int cpu_to_coregroup_id(int cpu);
 #else
-static inline int find_and_online_cpu_nid(int cpu)
-{
-	return 0;
-}
-
+static inline void find_and_update_cpu_nid(int cpu) {}
 static inline int cpu_to_coregroup_id(int cpu)
 {
 #ifdef CONFIG_SMP

@@ -11,14 +11,6 @@

 #include <uapi/asm/types.h>

-#ifdef __powerpc64__
-#if defined(_CALL_ELF) && _CALL_ELF == 2
-#define PPC64_ELF_ABI_v2 1
-#else
-#define PPC64_ELF_ABI_v1 1
-#endif
-#endif /* __powerpc64__ */
-
 #ifndef __ASSEMBLY__

 typedef __vector128 vector128;

@@ -126,7 +126,7 @@ static inline void vas_user_win_add_mm_context(struct vas_user_win_ref *ref)
  * Receive window attributes specified by the (in-kernel) owner of window.
  */
 struct vas_rx_win_attr {
-	void *rx_fifo;
+	u64 rx_fifo;
	int rx_fifo_size;
	int wcreds_max;

@@ -48,6 +48,8 @@
 #define AT_L3_CACHESIZE		46
 #define AT_L3_CACHEGEOMETRY	47

-#define AT_VECTOR_SIZE_ARCH	14 /* entries in ARCH_DLINFO */
+#define AT_MINSIGSTKSZ		51 /* stack needed for signal delivery */
+
+#define AT_VECTOR_SIZE_ARCH	15 /* entries in ARCH_DLINFO */

 #endif

@@ -62,8 +62,13 @@ typedef struct {

 #define SA_RESTORER	0x04000000U

+#ifdef __powerpc64__
+#define MINSIGSTKSZ	8192
+#define SIGSTKSZ	32768
+#else
 #define MINSIGSTKSZ	2048
 #define SIGSTKSZ	8192
+#endif

 #include <asm-generic/signal-defs.h>

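With AT_MINSIGSTKSZ now exported (and MINSIGSTKSZ/SIGSTKSZ raised), portable code should size alternate signal stacks from the aux vector rather than the static constants. A sketch (AT_MINSIGSTKSZ is 51 per the hunk above; glibc's getauxval returns 0 on kernels that don't provide it):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/auxv.h>

    #ifndef AT_MINSIGSTKSZ
    #define AT_MINSIGSTKSZ 51
    #endif

    int main(void)
    {
            unsigned long min = getauxval(AT_MINSIGSTKSZ);
            stack_t ss;

            /* Fall back to the (now larger) static minimum on older kernels. */
            ss.ss_size = min ? min : MINSIGSTKSZ;
            ss.ss_flags = 0;
            ss.ss_sp = malloc(ss.ss_size);
            if (ss.ss_sp && sigaltstack(&ss, NULL) == 0)
                    printf("sigaltstack of %zu bytes installed\n", ss.ss_size);
            return 0;
    }
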
@@ -33,6 +33,17 @@ KASAN_SANITIZE_early_32.o := n
 KASAN_SANITIZE_cputable.o := n
 KASAN_SANITIZE_prom_init.o := n
 KASAN_SANITIZE_btext.o := n
+KASAN_SANITIZE_paca.o := n
+KASAN_SANITIZE_setup_64.o := n
+KASAN_SANITIZE_mce.o := n
+KASAN_SANITIZE_mce_power.o := n
+
+# we have to be particularly careful in ppc64 to exclude code that
+# runs with translations off, as we cannot access the shadow with
+# translations off. However, ppc32 can sanitize this.
+ifdef CONFIG_PPC64
+KASAN_SANITIZE_traps.o := n
+endif

 ifdef CONFIG_KASAN
 CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING
@@ -68,7 +79,7 @@ obj-$(CONFIG_PPC_BOOK3S_IDLE)	+= idle_book3s.o
 procfs-y			:= proc_powerpc.o
 obj-$(CONFIG_PROC_FS)		+= $(procfs-y)
 rtaspci-$(CONFIG_PPC64)-$(CONFIG_PCI)	:= rtas_pci.o
-obj-$(CONFIG_PPC_RTAS)		+= rtas.o rtas-rtc.o $(rtaspci-y-y)
+obj-$(CONFIG_PPC_RTAS)		+= rtas_entry.o rtas.o rtas-rtc.o $(rtaspci-y-y)
 obj-$(CONFIG_PPC_RTAS_DAEMON)	+= rtasd.o
 obj-$(CONFIG_RTAS_FLASH)	+= rtas_flash.o
 obj-$(CONFIG_RTAS_PROC)	+= rtas-proc.o

@@ -10,9 +10,9 @@
 #include <linux/export.h>
 #include <linux/memblock.h>
 #include <linux/pgtable.h>
+#include <linux/of.h>

 #include <asm/sections.h>
-#include <asm/prom.h>
 #include <asm/btext.h>
 #include <asm/page.h>
 #include <asm/mmu.h>
@@ -45,8 +45,7 @@ unsigned long disp_BAT[2] __initdata = {0, 0};

 static unsigned char vga_font[cmapsz];

-int boot_text_mapped __force_data = 0;
-int force_printk_to_btext = 0;
+static int boot_text_mapped __force_data;

 extern void rmci_on(void);
 extern void rmci_off(void);

@@ -18,7 +18,6 @@
 #include <linux/of.h>
 #include <linux/percpu.h>
 #include <linux/slab.h>
-#include <asm/prom.h>
 #include <asm/cputhreads.h>
 #include <asm/smp.h>

@@ -12,9 +12,9 @@
 #include <linux/init.h>
 #include <linux/export.h>
 #include <linux/jump_label.h>
+#include <linux/of.h>

 #include <asm/cputable.h>
-#include <asm/prom.h>		/* for PTRRELOC on ARCH=ppc */
 #include <asm/mce.h>
 #include <asm/mmu.h>
 #include <asm/setup.h>
@@ -487,11 +487,29 @@ static struct cpu_spec __initdata cpu_specs[] = {
		.machine_check_early	= __machine_check_early_realmode_p9,
		.platform		= "power9",
	},
-	{	/* Power9 DD2.2 or later */
+	{	/* Power9 DD2.2 */
+		.pvr_mask		= 0xffffefff,
+		.pvr_value		= 0x004e0202,
+		.cpu_name		= "POWER9 (raw)",
+		.cpu_features		= CPU_FTRS_POWER9_DD2_2,
+		.cpu_user_features	= COMMON_USER_POWER9,
+		.cpu_user_features2	= COMMON_USER2_POWER9,
+		.mmu_features		= MMU_FTRS_POWER9,
+		.icache_bsize		= 128,
+		.dcache_bsize		= 128,
+		.num_pmcs		= 6,
+		.pmc_type		= PPC_PMC_IBM,
+		.oprofile_cpu_type	= "ppc64/power9",
+		.cpu_setup		= __setup_cpu_power9,
+		.cpu_restore		= __restore_cpu_power9,
+		.machine_check_early	= __machine_check_early_realmode_p9,
+		.platform		= "power9",
+	},
+	{	/* Power9 DD2.3 or later */
		.pvr_mask		= 0xffff0000,
		.pvr_value		= 0x004e0000,
		.cpu_name		= "POWER9 (raw)",
-		.cpu_features		= CPU_FTRS_POWER9_DD2_2,
+		.cpu_features		= CPU_FTRS_POWER9_DD2_3,
		.cpu_user_features	= COMMON_USER_POWER9,
		.cpu_user_features2	= COMMON_USER2_POWER9,
		.mmu_features		= MMU_FTRS_POWER9,
@@ -2025,7 +2043,7 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
	 * oprofile_cpu_type already has a value, then we are
	 * possibly overriding a real PVR with a logical one,
	 * and, in that case, keep the current value for
-	 * oprofile_cpu_type. Futhermore, let's ensure that the
+	 * oprofile_cpu_type. Furthermore, let's ensure that the
	 * fix for the PMAO bug is enabled on compatibility mode.
	 */
	if (old.oprofile_cpu_type != NULL) {
@@ -2119,7 +2137,7 @@ void __init cpu_feature_keys_init(void)
 struct static_key_true mmu_feature_keys[NUM_MMU_FTR_KEYS] = {
			[0 ... NUM_MMU_FTR_KEYS - 1] = STATIC_KEY_TRUE_INIT
 };
-EXPORT_SYMBOL_GPL(mmu_feature_keys);
+EXPORT_SYMBOL(mmu_feature_keys);

 void __init mmu_feature_keys_init(void)
 {

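How the new DD2.2 entry gets picked: cpu_specs[] is scanned in order and the first entry whose mask/value pair matches the PVR wins, so the exact-match DD2.2 entry (mask 0xffffefff) must precede the catch-all DD2.3-or-later entry (mask 0xffff0000). A standalone sketch of the match, with PVR values taken from the hunk above:

    #include <stdint.h>
    #include <stdio.h>

    struct cpu_match { uint32_t mask, value; const char *name; };

    /* Ordered as in cpu_specs[]: most specific first. */
    static const struct cpu_match specs[] = {
            { 0xffffefff, 0x004e0202, "POWER9 DD2.2" },
            { 0xffff0000, 0x004e0000, "POWER9 DD2.3 or later" },
    };

    int main(void)
    {
            uint32_t pvr = 0x004e0203;      /* a DD2.3 part */

            for (unsigned int i = 0; i < 2; i++)
                    if ((pvr & specs[i].mask) == specs[i].value) {
                            printf("PVR %#x -> %s\n", pvr, specs[i].name);
                            break;
                    }
            return 0;
    }
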
@@ -12,9 +12,9 @@
 #include <linux/crash_dump.h>
 #include <linux/io.h>
 #include <linux/memblock.h>
+#include <linux/of.h>
 #include <asm/code-patching.h>
 #include <asm/kdump.h>
-#include <asm/prom.h>
 #include <asm/firmware.h>
 #include <linux/uio.h>
 #include <asm/rtas.h>

@@ -27,7 +27,7 @@ int set_dawr(int nr, struct arch_hw_breakpoint *brk)
	dawrx |= (brk->type & (HW_BRK_TYPE_PRIV_ALL)) >> 3;
	/*
	 * DAWR length is stored in field MDR bits 48:53. Matches range in
-	 * doublewords (64 bits) baised by -1 eg. 0b000000=1DW and
+	 * doublewords (64 bits) biased by -1 eg. 0b000000=1DW and
	 * 0b111111=64DW.
	 * brk->hw_len is in bytes.
	 * This aligns up to double word size, shifts and does the bias.

@@ -10,6 +10,7 @@
 #include <linux/jump_label.h>
 #include <linux/libfdt.h>
 #include <linux/memblock.h>
+#include <linux/of_fdt.h>
 #include <linux/printk.h>
 #include <linux/sched.h>
 #include <linux/string.h>
@@ -19,7 +20,6 @@
 #include <asm/dt_cpu_ftrs.h>
 #include <asm/mce.h>
 #include <asm/mmu.h>
-#include <asm/prom.h>
 #include <asm/setup.h>


@@ -774,20 +774,26 @@ static __init void cpufeatures_cpu_quirks(void)
	if ((version & 0xffffefff) == 0x004e0200) {
		/* DD2.0 has no feature flag */
		cur_cpu_spec->cpu_features |= CPU_FTR_P9_RADIX_PREFETCH_BUG;
+		cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
	} else if ((version & 0xffffefff) == 0x004e0201) {
		cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
		cur_cpu_spec->cpu_features |= CPU_FTR_P9_RADIX_PREFETCH_BUG;
+		cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
	} else if ((version & 0xffffefff) == 0x004e0202) {
		cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
		cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
		cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
+		cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
+	} else if ((version & 0xffffefff) == 0x004e0203) {
+		cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
+		cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
+		cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
	} else if ((version & 0xffff0000) == 0x004e0000) {
		/* DD2.1 and up have DD2_1 */
		cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
	}

	if ((version & 0xffff0000) == 0x004e0000) {
-		cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
		cur_cpu_spec->cpu_features |= CPU_FTR_P9_TIDR;
	}

@@ -1329,7 +1329,7 @@ int eeh_pe_set_option(struct eeh_pe *pe, int option)

	/*
	 * EEH functionality could possibly be disabled, just
-	 * return error for the case. And the EEH functinality
+	 * return error for the case. And the EEH functionality
	 * isn't expected to be disabled on one specific PE.
	 */
	switch (option) {
@@ -1804,7 +1804,7 @@ static int eeh_debugfs_break_device(struct pci_dev *pdev)
	 * PE freeze. Using the in_8() accessor skips the eeh detection hook
	 * so the freeze hook so the EEH Detection machinery won't be
	 * triggered here. This is to match the usual behaviour of EEH
-	 * where the HW will asyncronously freeze a PE and it's up to
+	 * where the HW will asynchronously freeze a PE and it's up to
	 * the kernel to notice and deal with it.
	 *
	 * 3. Turn Memory space back on. This is more important for VFs

@@ -16,7 +16,6 @@
 #include <asm/eeh_event.h>
 #include <asm/ppc-pci.h>
 #include <asm/pci-bridge.h>
-#include <asm/prom.h>
 #include <asm/rtas.h>

 struct eeh_rmv_data {

@@ -143,7 +143,7 @@ int __eeh_send_failure_event(struct eeh_pe *pe)
 int eeh_send_failure_event(struct eeh_pe *pe)
 {
	/*
-	 * If we've manually supressed recovery events via debugfs
+	 * If we've manually suppressed recovery events via debugfs
	 * then just drop it on the floor.
	 */
	if (eeh_debugfs_no_recover) {

@@ -13,6 +13,7 @@
 #include <linux/export.h>
 #include <linux/gfp.h>
 #include <linux/kernel.h>
+#include <linux/of.h>
 #include <linux/pci.h>
 #include <linux/string.h>

@@ -301,7 +302,7 @@ struct eeh_pe *eeh_pe_get(struct pci_controller *phb, int pe_no)
  * @new_pe_parent.
  *
  * If @new_pe_parent is NULL then the new PE will be inserted under
- * directly under the the PHB.
+ * directly under the PHB.
  */
 int eeh_pe_tree_insert(struct eeh_dev *edev, struct eeh_pe *new_pe_parent)
 {

@@ -6,6 +6,7 @@
  *
  * Send comments and feedback to Linas Vepstas <linas@austin.ibm.com>
  */
+#include <linux/of.h>
 #include <linux/pci.h>
 #include <linux/stat.h>
 #include <asm/ppc-pci.h>

@@ -555,52 +555,3 @@ ret_from_mcheck_exc:
 _ASM_NOKPROBE_SYMBOL(ret_from_mcheck_exc)
 #endif /* CONFIG_BOOKE */
 #endif /* !(CONFIG_4xx || CONFIG_BOOKE) */
-
-/*
- * PROM code for specific machines follows.  Put it
- * here so it's easy to add arch-specific sections later.
- * -- Cort
- */
-#ifdef CONFIG_PPC_RTAS
-/*
- * On CHRP, the Run-Time Abstraction Services (RTAS) have to be
- * called with the MMU off.
- */
-_GLOBAL(enter_rtas)
-	stwu	r1,-INT_FRAME_SIZE(r1)
-	mflr	r0
-	stw	r0,INT_FRAME_SIZE+4(r1)
-	LOAD_REG_ADDR(r4, rtas)
-	lis	r6,1f@ha	/* physical return address for rtas */
-	addi	r6,r6,1f@l
-	tophys(r6,r6)
-	lwz	r8,RTASENTRY(r4)
-	lwz	r4,RTASBASE(r4)
-	mfmsr	r9
-	stw	r9,8(r1)
-	LOAD_REG_IMMEDIATE(r0,MSR_KERNEL)
-	mtmsr	r0	/* disable interrupts so SRR0/1 don't get trashed */
-	li	r9,MSR_KERNEL & ~(MSR_IR|MSR_DR)
-	mtlr	r6
-	stw	r1, THREAD + RTAS_SP(r2)
-	mtspr	SPRN_SRR0,r8
-	mtspr	SPRN_SRR1,r9
-	rfi
-1:
-	lis	r8, 1f@h
-	ori	r8, r8, 1f@l
-	LOAD_REG_IMMEDIATE(r9,MSR_KERNEL)
-	mtspr	SPRN_SRR0,r8
-	mtspr	SPRN_SRR1,r9
-	rfi			/* Reactivate MMU translation */
-1:
-	lwz	r8,INT_FRAME_SIZE+4(r1)	/* get return address */
-	lwz	r9,8(r1)	/* original msr value */
-	addi	r1,r1,INT_FRAME_SIZE
-	li	r0,0
-	stw	r0, THREAD + RTAS_SP(r2)
-	mtlr	r8
-	mtmsr	r9
-	blr			/* return to caller */
-_ASM_NOKPROBE_SYMBOL(enter_rtas)
-#endif /* CONFIG_PPC_RTAS */

@@ -264,156 +264,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
	addi	r1,r1,SWITCH_FRAME_SIZE
	blr

-#ifdef CONFIG_PPC_RTAS
-/*
- * On CHRP, the Run-Time Abstraction Services (RTAS) have to be
- * called with the MMU off.
- *
- * In addition, we need to be in 32b mode, at least for now.
- *
- * Note: r3 is an input parameter to rtas, so don't trash it...
- */
-_GLOBAL(enter_rtas)
-	mflr	r0
-	std	r0,16(r1)
-	stdu	r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space. */
-
-	/* Because RTAS is running in 32b mode, it clobbers the high order half
-	 * of all registers that it saves.  We therefore save those registers
-	 * RTAS might touch to the stack.  (r0, r3-r13 are caller saved)
-	 */
-	SAVE_GPR(2, r1)			/* Save the TOC */
-	SAVE_GPR(13, r1)		/* Save paca */
-	SAVE_NVGPRS(r1)			/* Save the non-volatiles */
-
-	mfcr	r4
-	std	r4,_CCR(r1)
-	mfctr	r5
-	std	r5,_CTR(r1)
-	mfspr	r6,SPRN_XER
-	std	r6,_XER(r1)
-	mfdar	r7
-	std	r7,_DAR(r1)
-	mfdsisr	r8
-	std	r8,_DSISR(r1)
-
-	/* Temporary workaround to clear CR until RTAS can be modified to
-	 * ignore all bits.
-	 */
-	li	r0,0
-	mtcr	r0
-
-#ifdef CONFIG_BUG
-	/* There is no way it is acceptable to get here with interrupts enabled,
-	 * check it with the asm equivalent of WARN_ON
-	 */
-	lbz	r0,PACAIRQSOFTMASK(r13)
-1:	tdeqi	r0,IRQS_ENABLED
-	EMIT_WARN_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
-#endif
-
-	/* Hard-disable interrupts */
-	mfmsr	r6
-	rldicl	r7,r6,48,1
-	rotldi	r7,r7,16
-	mtmsrd	r7,1
-
-	/* Unfortunately, the stack pointer and the MSR are also clobbered,
-	 * so they are saved in the PACA which allows us to restore
-	 * our original state after RTAS returns.
-	 */
-	std	r1,PACAR1(r13)
-	std	r6,PACASAVEDMSR(r13)
-
-	/* Setup our real return addr */
-	LOAD_REG_ADDR(r4,rtas_return_loc)
-	clrldi	r4,r4,2			/* convert to realmode address */
-	mtlr	r4
-
-	li	r0,0
-	ori	r0,r0,MSR_EE|MSR_SE|MSR_BE|MSR_RI
-	andc	r0,r6,r0
-
-	li	r9,1
-	rldicr	r9,r9,MSR_SF_LG,(63-MSR_SF_LG)
-	ori	r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE
-	andc	r6,r0,r9
-
-__enter_rtas:
-	sync				/* disable interrupts so SRR0/1 */
-	mtmsrd	r0			/* don't get trashed */
-
-	LOAD_REG_ADDR(r4, rtas)
-	ld	r5,RTASENTRY(r4)	/* get the rtas->entry value */
-	ld	r4,RTASBASE(r4)		/* get the rtas->base value */
-
-	mtspr	SPRN_SRR0,r5
-	mtspr	SPRN_SRR1,r6
-	RFI_TO_KERNEL
-	b	.	/* prevent speculative execution */
-
-rtas_return_loc:
-	FIXUP_ENDIAN
-
-	/*
-	 * Clear RI and set SF before anything.
-	 */
-	mfmsr	r6
-	li	r0,MSR_RI
-	andc	r6,r6,r0
-	sldi	r0,r0,(MSR_SF_LG - MSR_RI_LG)
-	or	r6,r6,r0
-	sync
-	mtmsrd	r6
-
-	/* relocation is off at this point */
-	GET_PACA(r4)
-	clrldi	r4,r4,2			/* convert to realmode address */
-
-	bcl	20,31,$+4
-0:	mflr	r3
-	ld	r3,(1f-0b)(r3)		/* get &rtas_restore_regs */
-
-	ld	r1,PACAR1(r4)           /* Restore our SP */
-	ld	r4,PACASAVEDMSR(r4)     /* Restore our MSR */
-
-	mtspr	SPRN_SRR0,r3
-	mtspr	SPRN_SRR1,r4
-	RFI_TO_KERNEL
-	b	.	/* prevent speculative execution */
-_ASM_NOKPROBE_SYMBOL(__enter_rtas)
-_ASM_NOKPROBE_SYMBOL(rtas_return_loc)
-
-	.align	3
-1:	.8byte	rtas_restore_regs
-
-rtas_restore_regs:
-	/* relocation is on at this point */
-	REST_GPR(2, r1)			/* Restore the TOC */
-	REST_GPR(13, r1)		/* Restore paca */
-	REST_NVGPRS(r1)			/* Restore the non-volatiles */
-
-	GET_PACA(r13)
-
-	ld	r4,_CCR(r1)
-	mtcr	r4
-	ld	r5,_CTR(r1)
-	mtctr	r5
-	ld	r6,_XER(r1)
-	mtspr	SPRN_XER,r6
-	ld	r7,_DAR(r1)
-	mtdar	r7
-	ld	r8,_DSISR(r1)
-	mtdsisr	r8
-
-	addi	r1,r1,SWITCH_FRAME_SIZE	/* Unstack our frame */
-	ld	r0,16(r1)	/* get return address */
-
-	mtlr	r0
-	blr			/* return to caller */
-
-#endif /* CONFIG_PPC_RTAS */
-
 _GLOBAL(enter_prom)
	mflr	r0
	std	r0,16(r1)

@@ -25,9 +25,10 @@
 #include <linux/cma.h>
 #include <linux/hugetlb.h>
 #include <linux/debugfs.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>

 #include <asm/page.h>
-#include <asm/prom.h>
 #include <asm/fadump.h>
 #include <asm/fadump-internal.h>
 #include <asm/setup.h>
@@ -73,8 +74,8 @@ static struct cma *fadump_cma;
  * The total size of fadump reserved memory covers for boot memory size
  * + cpu data size + hpte size and metadata.
  * Initialize only the area equivalent to boot memory size for CMA use.
- * The reamining portion of fadump reserved memory will be not given
- * to CMA and pages for thoes will stay reserved. boot memory size is
+ * The remaining portion of fadump reserved memory will be not given
+ * to CMA and pages for those will stay reserved. boot memory size is
  * aligned per CMA requirement to satisy cma_init_reserved_mem() call.
  * But for some reason even if it fails we still have the memory reservation
  * with us and we can still continue doing fadump.
@@ -365,6 +366,11 @@ static unsigned long __init get_fadump_area_size(void)

	size += fw_dump.cpu_state_data_size;
	size += fw_dump.hpte_region_size;
+	/*
+	 * Account for pagesize alignment of boot memory area destination address.
+	 * This faciliates in mmap reading of first kernel's memory.
+	 */
+	size = PAGE_ALIGN(size);
	size += fw_dump.boot_memory_size;
	size += sizeof(struct fadump_crash_info_header);
	size += sizeof(struct elfhdr); /* ELF core header.*/
@@ -728,7 +734,7 @@ void crash_fadump(struct pt_regs *regs, const char *str)
	else
		ppc_save_regs(&fdh->regs);

-	fdh->online_mask = *cpu_online_mask;
+	fdh->cpu_mask = *cpu_online_mask;

	/*
	 * If we came in via system reset, wait a while for the secondary
@@ -867,7 +873,6 @@ static int fadump_alloc_mem_ranges(struct fadump_mrange_info *mrange_info)
				       sizeof(struct fadump_memory_range));
	return 0;
 }
-
 static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
					u64 base, u64 end)
 {
@@ -886,7 +891,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
		start = mem_ranges[mrange_info->mem_range_cnt - 1].base;
		size  = mem_ranges[mrange_info->mem_range_cnt - 1].size;

-		if ((start + size) == base)
+		/*
+		 * Boot memory area needs separate PT_LOAD segment(s) as it
+		 * is moved to a different location at the time of crash.
+		 * So, fold only if the region is not boot memory area.
+		 */
+		if ((start + size) == base && start >= fw_dump.boot_mem_top)
			is_adjacent = true;
	}
	if (!is_adjacent) {
@@ -968,11 +978,14 @@ static int fadump_init_elfcore_header(char *bufp)
	elf->e_entry = 0;
	elf->e_phoff = sizeof(struct elfhdr);
	elf->e_shoff = 0;
-#if defined(_CALL_ELF)
-	elf->e_flags = _CALL_ELF;
-#else
-	elf->e_flags = 0;
-#endif
+
+	if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2))
+		elf->e_flags = 2;
+	else if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1))
+		elf->e_flags = 1;
+	else
+		elf->e_flags = 0;
+
	elf->e_ehsize = sizeof(struct elfhdr);
	elf->e_phentsize = sizeof(struct elf_phdr);
	elf->e_phnum = 0;
@@ -1164,6 +1177,11 @@ static unsigned long init_fadump_header(unsigned long addr)
	fdh->elfcorehdr_addr = addr;
	/* We will set the crashing cpu id in crash_fadump() during crash. */
	fdh->crashing_cpu = FADUMP_CPU_UNKNOWN;
+	/*
+	 * When LPAR is terminated by PYHP, ensure all possible CPUs'
+	 * register data is processed while exporting the vmcore.
+	 */
+	fdh->cpu_mask = *cpu_possible_mask;

	return addr;
 }
@@ -1271,7 +1289,6 @@ static void fadump_release_reserved_area(u64 start, u64 end)
 static void sort_and_merge_mem_ranges(struct fadump_mrange_info *mrange_info)
 {
	struct fadump_memory_range *mem_ranges;
-	struct fadump_memory_range tmp_range;
	u64 base, size;
	int i, j, idx;

@@ -1286,11 +1303,8 @@ static void sort_and_merge_mem_ranges(struct fadump_mrange_info *mrange_info)
			if (mem_ranges[idx].base > mem_ranges[j].base)
				idx = j;
		}
-		if (idx != i) {
-			tmp_range = mem_ranges[idx];
-			mem_ranges[idx] = mem_ranges[i];
-			mem_ranges[i] = tmp_range;
-		}
+		if (idx != i)
+			swap(mem_ranges[idx], mem_ranges[i]);
	}

	/* Merge adjacent reserved ranges */
@@ -1661,8 +1675,8 @@ int __init setup_fadump(void)
 }
 /*
  * Use subsys_initcall_sync() here because there is dependency with
- * crash_save_vmcoreinfo_init(), which mush run first to ensure vmcoreinfo initialization
- * is done before regisering with f/w.
+ * crash_save_vmcoreinfo_init(), which must run first to ensure vmcoreinfo initialization
+ * is done before registering with f/w.
 */
 subsys_initcall_sync(setup_fadump);
 #else	/* !CONFIG_PRESERVE_FA_DUMP */

@@ -111,7 +111,7 @@ __secondary_hold_acknowledge:
 #ifdef CONFIG_RELOCATABLE
	/* This flag is set to 1 by a loader if the kernel should run
	 * at the loaded address instead of the linked address.  This
-	 * is used by kexec-tools to keep the the kdump kernel in the
+	 * is used by kexec-tools to keep the kdump kernel in the
	 * crash_kernel region.  The loader is responsible for
	 * observing the alignment requirement.
	 */
@@ -435,7 +435,7 @@ generic_secondary_common_init:
	ld	r12,CPU_SPEC_RESTORE(r23)
	cmpdi	0,r12,0
	beq	3f
-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
	ld	r12,0(r12)
 #endif
	mtctr	r12

@@ -37,7 +37,7 @@ static int __init powersave_off(char *arg)
 {
	ppc_md.power_save = NULL;
	cpuidle_disable = IDLE_POWERSAVE_OFF;
-	return 0;
+	return 1;
 }
 __setup("powersave=off", powersave_off);

@@ -219,16 +219,6 @@ system_call_vectored common 0x3000
  */
 system_call_vectored sigill 0x7ff0

-
-/*
- * Entered via kernel return set up by kernel/sstep.c, must match entry regs
- */
-	.globl system_call_vectored_emulate
-system_call_vectored_emulate:
-_ASM_NOKPROBE_SYMBOL(system_call_vectored_emulate)
-	li	r10,IRQS_ALL_DISABLED
-	stb	r10,PACAIRQSOFTMASK(r13)
-	b	system_call_vectored_common
 #endif /* CONFIG_PPC_BOOK3S */

	.balign IFETCH_ALIGN_BYTES
@@ -721,7 +711,7 @@ _GLOBAL(ret_from_kernel_thread)
	REST_NVGPRS(r1)
	mtctr	r14
	mr	r3,r15
-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
	mr	r12,r14
 #endif
	bctrl

@@ -27,7 +27,6 @@
 #include <linux/sched.h>
 #include <linux/debugfs.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/iommu.h>
 #include <asm/pci-bridge.h>
 #include <asm/machdep.h>
@@ -1065,7 +1064,7 @@ extern long iommu_tce_xchg_no_kill(struct mm_struct *mm,
	long ret;
	unsigned long size = 0;

-	ret = tbl->it_ops->xchg_no_kill(tbl, entry, hpa, direction, false);
+	ret = tbl->it_ops->xchg_no_kill(tbl, entry, hpa, direction);
	if (!ret && ((*direction == DMA_FROM_DEVICE) ||
			(*direction == DMA_BIDIRECTIONAL)) &&
			!mm_iommu_is_devmem(mm, *hpa, tbl->it_page_shift,
@@ -1080,7 +1079,7 @@ void iommu_tce_kill(struct iommu_table *tbl,
		unsigned long entry, unsigned long pages)
 {
	if (tbl->it_ops->tce_kill)
-		tbl->it_ops->tce_kill(tbl, entry, pages, false);
+		tbl->it_ops->tce_kill(tbl, entry, pages);
 }
 EXPORT_SYMBOL_GPL(iommu_tce_kill);

@@ -52,13 +52,13 @@
 #include <linux/of_irq.h>
 #include <linux/vmalloc.h>
 #include <linux/pgtable.h>
+#include <linux/static_call.h>

 #include <linux/uaccess.h>
 #include <asm/interrupt.h>
 #include <asm/io.h>
 #include <asm/irq.h>
 #include <asm/cache.h>
-#include <asm/prom.h>
 #include <asm/ptrace.h>
 #include <asm/machdep.h>
 #include <asm/udbg.h>
@@ -217,7 +217,6 @@ static inline void replay_soft_interrupts_irqrestore(void)
 #define replay_soft_interrupts_irqrestore() replay_soft_interrupts()
 #endif

-#ifdef CONFIG_CC_HAS_ASM_GOTO
 notrace void arch_local_irq_restore(unsigned long mask)
 {
	unsigned char irq_happened;
@@ -313,82 +312,6 @@ happened:
	__hard_irq_enable();
	preempt_enable();
 }
-#else
-notrace void arch_local_irq_restore(unsigned long mask)
-{
-	unsigned char irq_happened;
-
-	/* Write the new soft-enabled value */
-	irq_soft_mask_set(mask);
-	if (mask)
-		return;
-
-	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
-		WARN_ON_ONCE(in_nmi() || in_hardirq());
-
-	/*
-	 * From this point onward, we can take interrupts, preempt,
-	 * etc... unless we got hard-disabled. We check if an event
-	 * happened. If none happened, we know we can just return.
-	 *
-	 * We may have preempted before the check below, in which case
-	 * we are checking the "new" CPU instead of the old one. This
-	 * is only a problem if an event happened on the "old" CPU.
-	 *
-	 * External interrupt events will have caused interrupts to
-	 * be hard-disabled, so there is no problem, we
-	 * cannot have preempted.
-	 */
-	irq_happened = get_irq_happened();
-	if (!irq_happened) {
-		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
-			WARN_ON_ONCE(!(mfmsr() & MSR_EE));
-		return;
-	}
-
-	/* We need to hard disable to replay. */
-	if (!(irq_happened & PACA_IRQ_HARD_DIS)) {
-		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
-			WARN_ON_ONCE(!(mfmsr() & MSR_EE));
-		__hard_irq_disable();
-		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
-	} else {
-		/*
-		 * We should already be hard disabled here. We had bugs
-		 * where that wasn't the case so let's dbl check it and
-		 * warn if we are wrong. Only do that when IRQ tracing
-		 * is enabled as mfmsr() can be costly.
-		 */
-		if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
-			if (WARN_ON_ONCE(mfmsr() & MSR_EE))
-				__hard_irq_disable();
-		}
-
-		if (irq_happened == PACA_IRQ_HARD_DIS) {
-			local_paca->irq_happened = 0;
-			__hard_irq_enable();
-			return;
-		}
-	}
-
-	/*
-	 * Disable preempt here, so that the below preempt_enable will
-	 * perform resched if required (a replayed interrupt may set
-	 * need_resched).
-	 */
-	preempt_disable();
-	irq_soft_mask_set(IRQS_ALL_DISABLED);
-	trace_hardirqs_off();
-
-	replay_soft_interrupts_irqrestore();
-	local_paca->irq_happened = 0;
-
-	trace_hardirqs_on();
-	irq_soft_mask_set(IRQS_ENABLED);
-	__hard_irq_enable();
-	preempt_enable();
-}
-#endif
 EXPORT_SYMBOL(arch_local_irq_restore);

 /*
@@ -730,6 +653,8 @@ static __always_inline void call_do_irq(struct pt_regs *regs, void *sp)
	);
 }

+DEFINE_STATIC_CALL_RET0(ppc_get_irq, *ppc_md.get_irq);
+
 void __do_irq(struct pt_regs *regs)
 {
	unsigned int irq;
@@ -741,7 +666,7 @@ void __do_irq(struct pt_regs *regs)
	 *
	 * This will typically lower the interrupt line to the CPU
	 */
-	irq = ppc_md.get_irq();
+	irq = static_call(ppc_get_irq)();

	/* We can hard enable interrupts now to allow perf interrupts */
	if (should_hard_irq_enable())
@@ -809,6 +734,9 @@ void __init init_IRQ(void)

	if (ppc_md.init_IRQ)
		ppc_md.init_IRQ();
+
+	if (!WARN_ON(!ppc_md.get_irq))
+		static_call_update(ppc_get_irq, ppc_md.get_irq);
 }

 #ifdef CONFIG_BOOKE_OR_40x

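The static_call conversion replaces an indirect call through ppc_md.get_irq with a patchable direct call; until init_IRQ() registers the real handler, DEFINE_STATIC_CALL_RET0 makes the call site behave as "return 0". A plain-C model of those semantics only (the real kernel mechanism patches the call site rather than dereferencing a pointer):

    #include <stdio.h>

    static unsigned int ret0(void) { return 0; }

    /* Stands in for DEFINE_STATIC_CALL_RET0(ppc_get_irq, ...). */
    static unsigned int (*ppc_get_irq)(void) = ret0;

    static unsigned int my_get_irq(void) { return 42; }

    int main(void)
    {
            printf("before update: %u\n", ppc_get_irq());   /* 0 */
            ppc_get_irq = my_get_irq;       /* models static_call_update() */
            printf("after update:  %u\n", ppc_get_irq());   /* 42 */
            return 0;
    }
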
@@ -18,11 +18,11 @@
 #include <linux/init.h>
 #include <linux/mm.h>
 #include <linux/notifier.h>
+#include <linux/of_address.h>
 #include <linux/vmalloc.h>

 #include <asm/processor.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/pci-bridge.h>
 #include <asm/machdep.h>
 #include <asm/ppc-pci.h>

@@ -45,7 +45,7 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 {
	kprobe_opcode_t *addr = NULL;

-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
	/* PPC64 ABIv2 needs local entry point */
	addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
	if (addr && !offset) {
@@ -63,7 +63,7 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
 #endif
		addr = (kprobe_opcode_t *)ppc_function_entry(addr);
	}
-#elif defined(PPC64_ELF_ABI_v1)
+#elif defined(CONFIG_PPC64_ELF_ABI_V1)
	/*
	 * 64bit powerpc ABIv1 uses function descriptors:
	 * - Check for the dot variant of the symbol first.
@@ -107,7 +107,7 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)

 static bool arch_kprobe_on_func_entry(unsigned long offset)
 {
-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2
 #ifdef CONFIG_KPROBES_ON_FTRACE
	return offset <= 16;
 #else
@@ -150,8 +150,8 @@ int arch_prepare_kprobe(struct kprobe *p)
	if ((unsigned long)p->addr & 0x03) {
		printk("Attempt to register kprobe at an unaligned address\n");
		ret = -EINVAL;
-	} else if (IS_MTMSRD(insn) || IS_RFID(insn)) {
-		printk("Cannot register a kprobe on mtmsr[d]/rfi[d]\n");
+	} else if (!can_single_step(ppc_inst_val(insn))) {
+		printk("Cannot register a kprobe on instructions that can't be single stepped\n");
		ret = -EINVAL;
	} else if ((unsigned long)p->addr & ~PAGE_MASK &&
		   ppc_inst_prefixed(ppc_inst_read(p->addr - 1))) {

@@ -7,10 +7,10 @@
 #include <linux/pci.h>
 #include <linux/of_address.h>
 #include <linux/of_device.h>
+#include <linux/of_irq.h>
 #include <linux/serial_reg.h>
 #include <asm/io.h>
 #include <asm/mmu.h>
-#include <asm/prom.h>
 #include <asm/serial.h>
 #include <asm/udbg.h>
 #include <asm/pci-bridge.h>

@@ -454,7 +454,7 @@ _GLOBAL(kexec_sequence)
	beq	1f

	/* clear out hardware hash page table and tlb */
-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
	ld	r12,0(r27)		/* deref function descriptor */
 #else
	mr	r12,r27

@@ -64,13 +64,13 @@ int module_finalize(const Elf_Ehdr *hdr,
			(void *)sect->sh_addr + sect->sh_size);
 #endif /* CONFIG_PPC64 */

-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
	sect = find_section(hdr, sechdrs, ".opd");
	if (sect != NULL) {
		me->arch.start_opd = sect->sh_addr;
		me->arch.end_opd = sect->sh_addr + sect->sh_size;
	}
-#endif /* PPC64_ELF_ABI_v1 */
+#endif /* CONFIG_PPC64_ELF_ABI_V1 */

 #ifdef CONFIG_PPC_BARRIER_NOSPEC
	sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");

@@ -99,7 +99,7 @@ static unsigned long get_plt_size(const Elf32_Ehdr *hdr,

			/* Sort the relocation information based on a symbol and
			 * addend key. This is a stable O(n*log n) complexity
-			 * alogrithm but it will reduce the complexity of
+			 * algorithm but it will reduce the complexity of
			 * count_relocs() to linear complexity O(n)
			 */
			sort((void *)hdr + sechdrs[i].sh_offset,
@@ -256,9 +256,8 @@ int apply_relocate_add(Elf32_Shdr *sechdrs,
			       value, (uint32_t)location);
			pr_debug("Location before: %08X.\n",
			       *(uint32_t *)location);
-			value = (*(uint32_t *)location & ~0x03fffffc)
-				| ((value - (uint32_t)location)
-				   & 0x03fffffc);
+			value = (*(uint32_t *)location & ~PPC_LI_MASK) |
+				PPC_LI(value - (uint32_t)location);

			if (patch_instruction(location, ppc_inst(value)))
				return -EFAULT;
@@ -266,10 +265,8 @@ int apply_relocate_add(Elf32_Shdr *sechdrs,
			pr_debug("Location after: %08X.\n",
			       *(uint32_t *)location);
			pr_debug("ie. jump to %08X+%08X = %08X\n",
-			       *(uint32_t *)location & 0x03fffffc,
-			       (uint32_t)location,
-			       (*(uint32_t *)location & 0x03fffffc)
-			       + (uint32_t)location);
+				 *(uint32_t *)PPC_LI((uint32_t)location), (uint32_t)location,
+				 (*(uint32_t *)PPC_LI((uint32_t)location)) + (uint32_t)location);
			break;

		case R_PPC_REL32:
@@ -289,23 +286,32 @@ int apply_relocate_add(Elf32_Shdr *sechdrs,
 }

 #ifdef CONFIG_DYNAMIC_FTRACE
-int module_trampoline_target(struct module *mod, unsigned long addr,
-			     unsigned long *target)
+notrace int module_trampoline_target(struct module *mod, unsigned long addr,
+				     unsigned long *target)
 {
-	unsigned int jmp[4];
+	ppc_inst_t jmp[4];

	/* Find where the trampoline jumps to */
-	if (copy_from_kernel_nofault(jmp, (void *)addr, sizeof(jmp)))
+	if (copy_inst_from_kernel_nofault(jmp, (void *)addr))
+		return -EFAULT;
+	if (__copy_inst_from_kernel_nofault(jmp + 1, (void *)addr + 4))
+		return -EFAULT;
+	if (__copy_inst_from_kernel_nofault(jmp + 2, (void *)addr + 8))
+		return -EFAULT;
+	if (__copy_inst_from_kernel_nofault(jmp + 3, (void *)addr + 12))
		return -EFAULT;

	/* verify that this is what we expect it to be */
-	if ((jmp[0] & 0xffff0000) != PPC_RAW_LIS(_R12, 0) ||
-	    (jmp[1] & 0xffff0000) != PPC_RAW_ADDI(_R12, _R12, 0) ||
-	    jmp[2] != PPC_RAW_MTCTR(_R12) ||
-	    jmp[3] != PPC_RAW_BCTR())
+	if ((ppc_inst_val(jmp[0]) & 0xffff0000) != PPC_RAW_LIS(_R12, 0))
+		return -EINVAL;
+	if ((ppc_inst_val(jmp[1]) & 0xffff0000) != PPC_RAW_ADDI(_R12, _R12, 0))
+		return -EINVAL;
+	if (ppc_inst_val(jmp[2]) != PPC_RAW_MTCTR(_R12))
+		return -EINVAL;
+	if (ppc_inst_val(jmp[3]) != PPC_RAW_BCTR())
		return -EINVAL;

-	addr = (jmp[1] & 0xffff) | ((jmp[0] & 0xffff) << 16);
+	addr = (ppc_inst_val(jmp[1]) & 0xffff) | ((ppc_inst_val(jmp[0]) & 0xffff) << 16);
	if (addr & 0x8000)
		addr -= 0x10000;

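The tail of module_trampoline_target() rebuilds the target address from the lis/addi pair: the high half comes from the lis immediate, the low half from the addi immediate, and because addi sign-extends its 16-bit operand the "addr & 0x8000" test subtracts 0x10000 to compensate. A standalone sketch with a made-up target:

    #include <stdint.h>
    #include <stdio.h>

    /* Recover a 32-bit target from lis/addi immediates, as the kernel does. */
    static uint32_t trampoline_target(uint16_t hi, uint16_t lo)
    {
            uint32_t addr = (uint32_t)lo | ((uint32_t)hi << 16);

            /* addi sign-extends its 16-bit immediate, so bias the high half. */
            if (addr & 0x8000)
                    addr -= 0x10000;
            return addr;
    }

    int main(void)
    {
            /* lis r12,0xc001 ; addi r12,r12,-0x7ff0 encodes target 0xc0008010. */
            printf("%#x\n", trampoline_target(0xc001, 0x8010));
            return 0;
    }
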
@@ -31,7 +31,7 @@
    this, and makes other things simpler.  Anton?
    --RR.  */

-#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_PPC64_ELF_ABI_V2

 static func_desc_t func_desc(unsigned long addr)
 {
@@ -122,7 +122,7 @@ static u32 ppc64_stub_insns[] = {
	/* Save current r2 value in magic place on the stack. */
	PPC_RAW_STD(_R2, _R1, R2_STACK_OFFSET),
	PPC_RAW_LD(_R12, _R11, 32),
-#ifdef PPC64_ELF_ABI_v1
+#ifdef CONFIG_PPC64_ELF_ABI_V1
	/* Set up new r2 from function descriptor */
	PPC_RAW_LD(_R2, _R11, 40),
 #endif
@@ -194,7 +194,7 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,

	/* Sort the relocation information based on a symbol and
	 * addend key. This is a stable O(n*log n) complexity
-	 * alogrithm but it will reduce the complexity of
+	 * algorithm but it will reduce the complexity of
	 * count_relocs() to linear complexity O(n)
	 */
	sort((void *)sechdrs[i].sh_addr,
@@ -361,7 +361,7 @@ static inline int create_ftrace_stub(struct ppc64_stub_entry *entry,
	entry->jump[1] |= PPC_HA(reladdr);
	entry->jump[2] |= PPC_LO(reladdr);

-	/* Eventhough we don't use funcdata in the stub, it's needed elsewhere. */
+	/* Even though we don't use funcdata in the stub, it's needed elsewhere. */
	entry->funcdata = func_desc(addr);
	entry->magic = STUB_MAGIC;

@@ -653,8 +653,7 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
		}

		/* Only replace bits 2 through 26 */
-		value = (*(uint32_t *)location & ~0x03fffffc)
-			| (value & 0x03fffffc);
+		value = (*(uint32_t *)location & ~PPC_LI_MASK) | PPC_LI(value);

		if (patch_instruction((u32 *)location, ppc_inst(value)))
			return -EFAULT;

@@ -19,9 +19,9 @@
 #include <linux/pstore.h>
 #include <linux/zlib.h>
 #include <linux/uaccess.h>
+#include <linux/of.h>
 #include <asm/nvram.h>
 #include <asm/rtas.h>
-#include <asm/prom.h>
 #include <asm/machdep.h>

 #undef DEBUG_NVRAM

@@ -344,15 +344,10 @@ void copy_mm_to_paca(struct mm_struct *mm)
 {
	mm_context_t *context = &mm->context;

-#ifdef CONFIG_PPC_MM_SLICES
	VM_BUG_ON(!mm_ctx_slb_addr_limit(context));
	memcpy(&get_paca()->mm_ctx_low_slices_psize, mm_ctx_low_slices(context),
	       LOW_SLICE_ARRAY_SZ);
	memcpy(&get_paca()->mm_ctx_high_slices_psize, mm_ctx_high_slices(context),
	       TASK_SLICE_ARRAY_SZ(context));
-#else /* CONFIG_PPC_MM_SLICES */
-	get_paca()->mm_ctx_user_psize = context->user_psize;
-	get_paca()->mm_ctx_sllp = context->sllp;
-#endif
 }
 #endif /* CONFIG_PPC_64S_HASH_MMU */

@@ -30,10 +30,10 @@
 #include <linux/vgaarb.h>
 #include <linux/numa.h>
 #include <linux/msi.h>
+#include <linux/irqdomain.h>

 #include <asm/processor.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/pci-bridge.h>
 #include <asm/byteorder.h>
 #include <asm/machdep.h>
@@ -42,7 +42,7 @@

 #include "../../../drivers/pci/pci.h"

-/* hose_spinlock protects accesses to the the phb_bitmap. */
+/* hose_spinlock protects accesses to the phb_bitmap. */
 static DEFINE_SPINLOCK(hose_spinlock);
 LIST_HEAD(hose_list);

@@ -1688,7 +1688,7 @@ EXPORT_SYMBOL_GPL(pcibios_scan_phb);
 static void fixup_hide_host_resource_fsl(struct pci_dev *dev)
 {
	int i, class = dev->class >> 8;
-	/* When configured as agent, programing interface = 1 */
+	/* When configured as agent, programming interface = 1 */
	int prog_if = dev->class & 0xf;

	if ((class == PCI_CLASS_PROCESSOR_POWERPC ||