Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Will Deacon:
 "A larger-than-usual batch of arm64 fixes for -rc3.

  The bulk of the fixes deal with a bunch of issues in the compat vDSO
  build system, which unfortunately led to some significant Makefile
  rework to manage the horrible combinations of toolchains that we can
  end up needing to drive simultaneously.

  We came close to disabling the thing entirely, but Vincenzo was quick
  to spin up some patches and I ended up picking up most of the bits
  that were left [*]. Future work will look at disentangling the header
  files properly.

  Other than that, we have some important fixes all over, including one
  papering over the miscompilation fallout from forcing
  CONFIG_OPTIMIZE_INLINING=y, which I'm still unhappy about. Harumph.

  We've still got a couple of open issues, so I'm expecting to have some
  more fixes later this cycle.

  Summary:

   - Numerous fixes to the compat vDSO build system, especially when
     combining gcc and clang

   - Fix parsing of PAR_EL1 in spurious kernel fault detection

   - Partial workaround for Neoverse-N1 erratum #1542419

   - Fix IRQ priority masking on entry from compat syscalls

   - Fix advertisement of FRINT HWCAP to userspace

   - Attempt to work around inlining breakage with '__always_inline'

   - Fix accidental freeing of parent SVE state on fork() error path

   - Add some missing NULL pointer checks in instruction emulation init

   - Some formatting and comment fixes"

[*] Will's final fixes were

        Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
        Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

    but they were already in linux-next by then and he didn't rebase
    just to add those.

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (21 commits)
  arm64: armv8_deprecated: Checking return value for memory allocation
  arm64: Kconfig: Make CONFIG_COMPAT_VDSO a proper Kconfig option
  arm64: vdso32: Rename COMPATCC to CC_COMPAT
  arm64: vdso32: Pass '--target' option to clang via VDSO_CAFLAGS
  arm64: vdso32: Don't use KBUILD_CPPFLAGS unconditionally
  arm64: vdso32: Move definition of COMPATCC into vdso32/Makefile
  arm64: Default to building compat vDSO with clang when CONFIG_CC_IS_CLANG
  lib: vdso: Remove CROSS_COMPILE_COMPAT_VDSO
  arm64: vdso32: Remove jump label config option in Makefile
  arm64: vdso32: Detect binutils support for dmb ishld
  arm64: vdso: Remove stale files from old assembly implementation
  arm64: vdso32: Fix broken compat vDSO build warnings
  arm64: mm: fix spurious fault detection
  arm64: ftrace: Ensure synchronisation in PLT setup for Neoverse-N1 #1542419
  arm64: Fix incorrect irqflag restore for priority masking for compat
  arm64: mm: avoid virt_to_phys(init_mm.pgd)
  arm64: cpufeature: Effectively expose FRINT capability to userspace
  arm64: Mark functions using explicit register variables as '__always_inline'
  docs: arm64: Fix indentation and doc formatting
  arm64/sve: Fix wrong free for task->thread.sve_state
  ...
Linus Torvalds, 2019-10-09 09:27:22 -07:00
Parents: e3280b54af 3e7c93bd04
Commit: e60329c97b
16 changed files with 98 additions and 104 deletions


@@ -154,11 +154,18 @@ return virtual addresses to userspace from a 48-bit range.
Software can "opt-in" to receiving VAs from a 52-bit space by
specifying an mmap hint parameter that is larger than 48-bit.
For example:
maybe_high_address = mmap(~0UL, size, prot, flags,...);
.. code-block:: c
maybe_high_address = mmap(~0UL, size, prot, flags,...);
It is also possible to build a debug kernel that returns addresses
from a 52-bit space by enabling the following kernel config options:
.. code-block:: sh
CONFIG_EXPERT=y && CONFIG_ARM64_FORCE_52BIT=y
Note that this option is only intended for debugging applications
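
As a rough illustration of the opt-in described in the hunk above, a
userspace caller might look like the following sketch; only the ~0UL hint
comes from the documentation, the size and flags are arbitrary:

#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 1UL << 20;    /* arbitrary 1 MiB mapping */

    /*
     * An address hint above the 48-bit boundary (~0UL here) opts this
     * mapping in to the 52-bit VA space where the kernel and hardware
     * support it; otherwise the kernel simply returns a 48-bit address.
     */
    void *p = mmap((void *)~0UL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("mapped at %p (%s the 48-bit range)\n", p,
           (uintptr_t)p >= (1UL << 48) ? "above" : "within");
    munmap(p, size);
    return 0;
}

On kernels or hardware without 52-bit support the same call simply comes
back with an address in the 48-bit range, so the hint is safe to pass
unconditionally.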


@@ -110,7 +110,6 @@ config ARM64
select GENERIC_STRNLEN_USER
select GENERIC_TIME_VSYSCALL
select GENERIC_GETTIMEOFDAY
select GENERIC_COMPAT_VDSO if (!CPU_BIG_ENDIAN && COMPAT)
select HANDLE_DOMAIN_IRQ
select HARDIRQS_SW_RESEND
select HAVE_PCI
@@ -1159,7 +1158,7 @@ menuconfig COMPAT
if COMPAT
config KUSER_HELPERS
bool "Enable kuser helpers page for 32 bit applications"
bool "Enable kuser helpers page for 32-bit applications"
default y
help
Warning: disabling this option may break 32-bit user programs.
@@ -1185,6 +1184,18 @@ config KUSER_HELPERS
Say N here only if you are absolutely certain that you do not
need these helpers; otherwise, the safe option is to say Y.
config COMPAT_VDSO
bool "Enable vDSO for 32-bit applications"
depends on !CPU_BIG_ENDIAN && "$(CROSS_COMPILE_COMPAT)" != ""
select GENERIC_COMPAT_VDSO
default y
help
Place in the process address space of 32-bit applications an
ELF shared object providing fast implementations of gettimeofday
and clock_gettime.
You must have a 32-bit build of glibc 2.22 or later for programs
to seamlessly take advantage of this.
menuconfig ARMV8_DEPRECATED
bool "Emulate deprecated/obsolete ARMv8 instructions"


@@ -53,22 +53,6 @@ $(warning Detected assembler with broken .inst; disassembly will be unreliable)
endif
endif
ifeq ($(CONFIG_GENERIC_COMPAT_VDSO), y)
CROSS_COMPILE_COMPAT ?= $(CONFIG_CROSS_COMPILE_COMPAT_VDSO:"%"=%)
ifeq ($(CONFIG_CC_IS_CLANG), y)
$(warning CROSS_COMPILE_COMPAT is clang, the compat vDSO will not be built)
else ifeq ($(strip $(CROSS_COMPILE_COMPAT)),)
$(warning CROSS_COMPILE_COMPAT not defined or empty, the compat vDSO will not be built)
else ifeq ($(shell which $(CROSS_COMPILE_COMPAT)gcc 2> /dev/null),)
$(error $(CROSS_COMPILE_COMPAT)gcc not found, check CROSS_COMPILE_COMPAT)
else
export CROSS_COMPILE_COMPAT
export CONFIG_COMPAT_VDSO := y
compat_vdso := -DCONFIG_COMPAT_VDSO=1
endif
endif
KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst) \
$(compat_vdso) $(cc_has_k_constraint)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables


@@ -321,7 +321,8 @@ static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
}
#define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...) \
static inline u##sz __lse__cmpxchg_case_##name##sz(volatile void *ptr, \
static __always_inline u##sz \
__lse__cmpxchg_case_##name##sz(volatile void *ptr, \
u##sz old, \
u##sz new) \
{ \
@@ -362,7 +363,8 @@ __CMPXCHG_CASE(x, , mb_, 64, al, "memory")
#undef __CMPXCHG_CASE
#define __CMPXCHG_DBL(name, mb, cl...) \
static inline long __lse__cmpxchg_double##name(unsigned long old1, \
static __always_inline long \
__lse__cmpxchg_double##name(unsigned long old1, \
unsigned long old2, \
unsigned long new1, \
unsigned long new2, \
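
Both hunks above switch wrappers that bind their arguments to explicit
register variables for an inline-asm sequence from 'inline' to
'__always_inline'. A standalone sketch of that pattern, with a made-up
function and asm body rather than the kernel macros, might look like the
code below; once CONFIG_OPTIMIZE_INLINING=y turns plain 'inline' into a
hint, forcing the inlining is what keeps the register set-up and the asm
together at every call site.

/* Illustrative only; build for AArch64, e.g. aarch64-linux-gnu-gcc -O2. */
#include <stdio.h>

#ifndef __always_inline
#define __always_inline inline __attribute__((__always_inline__))
#endif

/*
 * Hypothetical helper in the same shape as the kernel wrappers: the
 * argument must be sitting in x0 when the asm runs, because the asm
 * text names the register directly.  Forcing inlining means the
 * compiler can never emit this as a standalone function.
 */
static __always_inline unsigned long add_one_in_x0(unsigned long v)
{
    register unsigned long x0 asm("x0") = v;

    asm volatile("add x0, x0, #1" : "+r" (x0));

    return x0;
}

int main(void)
{
    printf("%lu\n", add_one_in_x0(41));    /* prints 42 */
    return 0;
}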


@@ -20,7 +20,7 @@
#define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory")
#if __LINUX_ARM_ARCH__ >= 8
#if __LINUX_ARM_ARCH__ >= 8 && defined(CONFIG_AS_DMB_ISHLD)
#define aarch32_smp_mb() dmb(ish)
#define aarch32_smp_rmb() dmb(ishld)
#define aarch32_smp_wmb() dmb(ishst)


@@ -1,33 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2012 ARM Limited
*/
#ifndef __ASM_VDSO_DATAPAGE_H
#define __ASM_VDSO_DATAPAGE_H
#ifndef __ASSEMBLY__
struct vdso_data {
__u64 cs_cycle_last; /* Timebase at clocksource init */
__u64 raw_time_sec; /* Raw time */
__u64 raw_time_nsec;
__u64 xtime_clock_sec; /* Kernel time */
__u64 xtime_clock_nsec;
__u64 xtime_coarse_sec; /* Coarse time */
__u64 xtime_coarse_nsec;
__u64 wtm_clock_sec; /* Wall to monotonic time */
__u64 wtm_clock_nsec;
__u32 tb_seq_count; /* Timebase sequence counter */
/* cs_* members must be adjacent and in this order (ldp accesses) */
__u32 cs_mono_mult; /* NTP-adjusted clocksource multiplier */
__u32 cs_shift; /* Clocksource shift (mono = raw) */
__u32 cs_raw_mult; /* Raw clocksource multiplier */
__u32 tz_minuteswest; /* Whacky timezone stuff */
__u32 tz_dsttime;
__u32 use_syscall;
__u32 hrtimer_res;
};
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_VDSO_DATAPAGE_H */


@@ -174,6 +174,9 @@ static void __init register_insn_emulation(struct insn_emulation_ops *ops)
struct insn_emulation *insn;
insn = kzalloc(sizeof(*insn), GFP_KERNEL);
if (!insn)
return;
insn->ops = ops;
insn->min = INSN_UNDEF;
@@ -233,6 +236,8 @@ static void __init register_insn_emulation_sysctl(void)
insns_sysctl = kcalloc(nr_insn_emulated + 1, sizeof(*sysctl),
GFP_KERNEL);
if (!insns_sysctl)
return;
raw_spin_lock_irqsave(&insn_emulation_lock, flags);
list_for_each_entry(insn, &insn_emulation, node) {


@@ -128,8 +128,8 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
int cpu, slot = -1;
/*
* enable_smccc_arch_workaround_1() passes NULL for the hyp_vecs
* start/end if we're a guest. Skip the hyp-vectors work.
* detect_harden_bp_fw() passes NULL for the hyp_vecs start/end if
* we're a guest. Skip the hyp-vectors work.
*/
if (!hyp_vecs_start) {
__this_cpu_write(bp_hardening_data.fn, fn);


@@ -136,6 +136,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FRINTTS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),


@@ -775,6 +775,7 @@ el0_sync_compat:
b.ge el0_dbg
b el0_inv
el0_svc_compat:
gic_prio_kentry_setup tmp=x1
mov x0, sp
bl el0_svc_compat_handler
b ret_to_user


@@ -121,10 +121,16 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
/*
* Ensure updated trampoline is visible to instruction
* fetch before we patch in the branch.
* fetch before we patch in the branch. Although the
* architecture doesn't require an IPI in this case,
* Neoverse-N1 erratum #1542419 does require one
* if the TLB maintenance in module_enable_ro() is
* skipped due to rodata_enabled. It doesn't seem worth
* it to make it conditional given that this is
* certainly not a fast-path.
*/
__flush_icache_range((unsigned long)&dst[0],
(unsigned long)&dst[1]);
flush_icache_range((unsigned long)&dst[0],
(unsigned long)&dst[1]);
}
addr = (unsigned long)dst;
#else /* CONFIG_ARM64_MODULE_PLTS */


@@ -332,22 +332,27 @@ void arch_release_task_struct(struct task_struct *tsk)
fpsimd_release_task(tsk);
}
/*
* src and dst may temporarily have aliased sve_state after task_struct
* is copied. We cannot fix this properly here, because src may have
* live SVE state and dst's thread_info may not exist yet, so tweaking
* either src's or dst's TIF_SVE is not safe.
*
* The unaliasing is done in copy_thread() instead. This works because
* dst is not schedulable or traceable until both of these functions
* have been called.
*/
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
if (current->mm)
fpsimd_preserve_current_state();
*dst = *src;
/* We rely on the above assignment to initialize dst's thread_flags: */
BUILD_BUG_ON(!IS_ENABLED(CONFIG_THREAD_INFO_IN_TASK));
/*
* Detach src's sve_state (if any) from dst so that it does not
* get erroneously used or freed prematurely. dst's sve_state
* will be allocated on demand later on if dst uses SVE.
* For consistency, also clear TIF_SVE here: this could be done
* later in copy_process(), but to avoid tripping up future
* maintainers it is best not to leave TIF_SVE and sve_state in
* an inconsistent state, even temporarily.
*/
dst->thread.sve_state = NULL;
clear_tsk_thread_flag(dst, TIF_SVE);
return 0;
}
@@ -360,13 +365,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
memset(&p->thread.cpu_context, 0, sizeof(struct cpu_context));
/*
* Unalias p->thread.sve_state (if any) from the parent task
* and disable discard SVE state for p:
*/
clear_tsk_thread_flag(p, TIF_SVE);
p->thread.sve_state = NULL;
/*
* In case p was allocated the same task_struct pointer as some
* other recently-exited task, make sure p is disassociated from


@@ -8,15 +8,21 @@
ARCH_REL_TYPE_ABS := R_ARM_JUMP_SLOT|R_ARM_GLOB_DAT|R_ARM_ABS32
include $(srctree)/lib/vdso/Makefile
COMPATCC := $(CROSS_COMPILE_COMPAT)gcc
# Same as cc-*option, but using CC_COMPAT instead of CC
ifeq ($(CONFIG_CC_IS_CLANG), y)
CC_COMPAT ?= $(CC)
else
CC_COMPAT ?= $(CROSS_COMPILE_COMPAT)gcc
endif
# Same as cc-*option, but using COMPATCC instead of CC
cc32-option = $(call try-run,\
$(COMPATCC) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
$(CC_COMPAT) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
cc32-disable-warning = $(call try-run,\
$(COMPATCC) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
$(CC_COMPAT) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
cc32-ldoption = $(call try-run,\
$(COMPATCC) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
$(CC_COMPAT) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
cc32-as-instr = $(call try-run,\
printf "%b\n" "$(1)" | $(CC_COMPAT) $(VDSO_AFLAGS) -c -x assembler -o "$$TMP" -,$(2),$(3))
# We cannot use the global flags to compile the vDSO files, the main reason
# being that the 32-bit compiler may be older than the main (64-bit) compiler
@@ -25,22 +31,21 @@ cc32-ldoption = $(call try-run,\
# arm64 one.
# As a result we set our own flags here.
# From top-level Makefile
# NOSTDINC_FLAGS
VDSO_CPPFLAGS := -nostdinc -isystem $(shell $(COMPATCC) -print-file-name=include)
# KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include)
VDSO_CPPFLAGS += $(LINUXINCLUDE)
VDSO_CPPFLAGS += $(KBUILD_CPPFLAGS)
# Common C and assembly flags
# From top-level Makefile
VDSO_CAFLAGS := $(VDSO_CPPFLAGS)
ifneq ($(shell $(CC_COMPAT) --version 2>&1 | head -n 1 | grep clang),)
VDSO_CAFLAGS += --target=$(notdir $(CROSS_COMPILE_COMPAT:%-=%))
endif
VDSO_CAFLAGS += $(call cc32-option,-fno-PIE)
ifdef CONFIG_DEBUG_INFO
VDSO_CAFLAGS += -g
endif
ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(COMPATCC)), y)
VDSO_CAFLAGS += -DCC_HAVE_ASM_GOTO
endif
# From arm Makefile
VDSO_CAFLAGS += $(call cc32-option,-fno-dwarf2-cfi-asm)
@@ -55,6 +60,7 @@ endif
VDSO_CAFLAGS += -fPIC -fno-builtin -fno-stack-protector
VDSO_CAFLAGS += -DDISABLE_BRANCH_PROFILING
# Try to compile for ARMv8. If the compiler is too old and doesn't support it,
# fall back to v7. There is no easy way to check for what architecture the code
# is being compiled, so define a macro specifying that (see arch/arm/Makefile).
@@ -91,6 +97,12 @@ VDSO_CFLAGS += -Wno-int-to-pointer-cast
VDSO_AFLAGS := $(VDSO_CAFLAGS)
VDSO_AFLAGS += -D__ASSEMBLY__
# Check for binutils support for dmb ishld
dmbinstr := $(call cc32-as-instr,dmb ishld,-DCONFIG_AS_DMB_ISHLD=1)
VDSO_CFLAGS += $(dmbinstr)
VDSO_AFLAGS += $(dmbinstr)
VDSO_LDFLAGS := $(VDSO_CPPFLAGS)
# From arm vDSO Makefile
VDSO_LDFLAGS += -Wl,-Bsymbolic -Wl,--no-undefined -Wl,-soname=linux-vdso.so.1
@@ -159,14 +171,14 @@ quiet_cmd_vdsold_and_vdso_check = LD32 $@
cmd_vdsold_and_vdso_check = $(cmd_vdsold); $(cmd_vdso_check)
quiet_cmd_vdsold = LD32 $@
cmd_vdsold = $(COMPATCC) -Wp,-MD,$(depfile) $(VDSO_LDFLAGS) \
cmd_vdsold = $(CC_COMPAT) -Wp,-MD,$(depfile) $(VDSO_LDFLAGS) \
-Wl,-T $(filter %.lds,$^) $(filter %.o,$^) -o $@
quiet_cmd_vdsocc = CC32 $@
cmd_vdsocc = $(COMPATCC) -Wp,-MD,$(depfile) $(VDSO_CFLAGS) -c -o $@ $<
cmd_vdsocc = $(CC_COMPAT) -Wp,-MD,$(depfile) $(VDSO_CFLAGS) -c -o $@ $<
quiet_cmd_vdsocc_gettimeofday = CC32 $@
cmd_vdsocc_gettimeofday = $(COMPATCC) -Wp,-MD,$(depfile) $(VDSO_CFLAGS) $(VDSO_CFLAGS_gettimeofday_o) -c -o $@ $<
cmd_vdsocc_gettimeofday = $(CC_COMPAT) -Wp,-MD,$(depfile) $(VDSO_CFLAGS) $(VDSO_CFLAGS_gettimeofday_o) -c -o $@ $<
quiet_cmd_vdsoas = AS32 $@
cmd_vdsoas = $(COMPATCC) -Wp,-MD,$(depfile) $(VDSO_AFLAGS) -c -o $@ $<
cmd_vdsoas = $(CC_COMPAT) -Wp,-MD,$(depfile) $(VDSO_AFLAGS) -c -o $@ $<
quiet_cmd_vdsomunge = MUNGE $@
cmd_vdsomunge = $(obj)/$(munge) $< $@


@@ -113,6 +113,15 @@ static inline bool is_ttbr1_addr(unsigned long addr)
return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
}
static inline unsigned long mm_to_pgd_phys(struct mm_struct *mm)
{
/* Either init_pg_dir or swapper_pg_dir */
if (mm == &init_mm)
return __pa_symbol(mm->pgd);
return (unsigned long)virt_to_phys(mm->pgd);
}
/*
* Dump out the page tables associated with 'addr' in the currently active mm.
*/
@@ -141,7 +150,7 @@ static void show_pte(unsigned long addr)
pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp=%016lx\n",
mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
vabits_actual, (unsigned long)virt_to_phys(mm->pgd));
vabits_actual, mm_to_pgd_phys(mm));
pgdp = pgd_offset(mm, addr);
pgd = READ_ONCE(*pgdp);
pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
@@ -266,7 +275,7 @@ static bool __kprobes is_spurious_el1_translation_fault(unsigned long addr,
* If we got a different type of fault from the AT instruction,
* treat the translation fault as spurious.
*/
dfsc = FIELD_PREP(SYS_PAR_EL1_FST, par);
dfsc = FIELD_GET(SYS_PAR_EL1_FST, par);
return (dfsc & ESR_ELx_FSC_TYPE) != ESR_ELx_FSC_FAULT;
}
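
The one-line change above is easy to gloss over: FIELD_PREP() shifts a
value into position under a mask (for composing a register value to
write), whereas FIELD_GET() extracts the field from a value that was
read, which is what parsing PAR_EL1 needs here. A small userspace sketch
with stand-in macros and a made-up mask in place of SYS_PAR_EL1_FST shows
how different the two results are:

#include <stdio.h>
#include <stdint.h>

/* Stand-ins for the <linux/bitfield.h> helpers; mask and values are made up. */
#define MOCK_MASK   0x7eUL  /* hypothetical field occupying bits [6:1] */
#define MOCK_SHIFT  1

/* FIELD_PREP(): place a field value under the mask, ready to be written. */
#define mock_field_prep(mask, val) (((uint64_t)(val) << MOCK_SHIFT) & (mask))
/* FIELD_GET(): pull the field value back out of a register that was read. */
#define mock_field_get(mask, reg)  (((uint64_t)(reg) & (mask)) >> MOCK_SHIFT)

int main(void)
{
    uint64_t par = 0x4c;    /* pretend PAR value whose field holds 0x26 */

    printf("FIELD_GET  -> %#llx (the fault status we wanted)\n",
           (unsigned long long)mock_field_get(MOCK_MASK, par));
    printf("FIELD_PREP -> %#llx (the value re-packed, not extracted)\n",
           (unsigned long long)mock_field_prep(MOCK_MASK, par));
    return 0;
}

With the extracted dfsc garbled, the '& ESR_ELx_FSC_TYPE' comparison just
above could misclassify the fault, which is what the spurious fault
detection fix in the shortlog addresses.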


@@ -24,13 +24,4 @@ config GENERIC_COMPAT_VDSO
help
This config option enables the compat VDSO layer.
config CROSS_COMPILE_COMPAT_VDSO
string "32 bit Toolchain prefix for compat vDSO"
default ""
depends on GENERIC_COMPAT_VDSO
help
Defines the cross-compiler prefix for compiling compat vDSO.
If a 64 bit compiler (i.e. x86_64) can compile the VDSO for
32 bit, it does not need to define this parameter.
endif