Commit graph

79 Commits

Author SHA1 Message Date
Jiong Wang b85062ac0d arm: bpf: implement jitting of JMP32
This patch implements code-gen for new JMP32 instructions on arm.

For JSET, "ands" (AND with flags updated) is used, so a corresponding
encoding helper is added.
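
A minimal sketch of the JSET case, with hypothetical names modeled on
the file's existing encoding macros:

  /* ANDS is AND with the S bit (bit 20) set in the data-processing
   * encoding, so the flags reflect (rd & rm) and no separate CMP
   * against zero is needed.
   */
  emit(ARM_ANDS_R(tmp, rd, rm), ctx);          /* tmp = rd & rm, flags set */
  _emit(ARM_COND_NE, ARM_B(jmp_offset), ctx);  /* taken when result != 0 */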

Cc: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-01-26 13:33:02 -08:00
Russell King b18bea2a45 ARM: net: bpf: improve 64-bit ALU implementation
Improve the 64-bit ALU implementation from:

  movw    r8, #65532
  movt    r8, #65535
  movw    r9, #65535
  movt    r9, #65535
  ldr     r7, [fp, #-44]
  adds    r7, r7, r8
  str     r7, [fp, #-44]
  ldr     r7, [fp, #-40]
  adc     r7, r7, r9
  str     r7, [fp, #-40]

to:

  movw    r8, #65532
  movt    r8, #65535
  movw    r9, #65535
  movt    r9, #65535
  ldrd    r6, [fp, #-44]
  adds    r6, r6, r8
  adc     r7, r7, r9
  strd    r6, [fp, #-44]

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:42 +02:00
Russell King c5eae69257 ARM: net: bpf: improve 64-bit store implementation
Improve the 64-bit store implementation from:

  ldr     r6, [fp, #-8]
  str     r8, [r6]
  ldr     r6, [fp, #-8]
  mov     r7, #4
  add     r7, r6, r7
  str     r9, [r7]

to:

  ldr     r6, [fp, #-8]
  str     r8, [r6]
  str     r9, [r6, #4]

We leave the store as two separate STR instructions rather than using
STRD as the store may not be aligned, and STR can handle misalignment.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:42 +02:00
Russell King 077513b894 ARM: net: bpf: improve 64-bit sign-extended immediate load
Improve the 64-bit sign-extended immediate from:

  mov     r6, #1
  str     r6, [fp, #-52]  ; 0xffffffcc
  mov     r6, #0
  str     r6, [fp, #-48]  ; 0xffffffd0

to:

  mov     r6, #1
  mov     r7, #0
  strd    r6, [fp, #-52]  ; 0xffffffcc

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:41 +02:00
Russell King f9ff5018c1 ARM: net: bpf: improve 64-bit load immediate implementation
Rather than writing each 32-bit half of the 64-bit immediate value
separately when the register is on the stack:

  movw    r6, #45056      ; 0xb000
  movt    r6, #60979      ; 0xee33
  str     r6, [fp, #-44]  ; 0xffffffd4
  mov     r6, #0
  str     r6, [fp, #-40]  ; 0xffffffd8

arrange to use the double-word store when available instead:

  movw    r6, #45056      ; 0xb000
  movt    r6, #60979      ; 0xee33
  mov     r7, #0
  strd    r6, [fp, #-44]  ; 0xffffffd4

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:41 +02:00
Russell King 8c9602d38c ARM: net: bpf: use double-word load/stores where available
Use double-word loads and stores where these instructions are supported
by the CPU architecture.
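
A minimal sketch of the idea, with hypothetical helper names; LDRD/STRD
exist from ARMv5TE onwards, so the exact architecture gate below is an
assumption:

  static void emit_str64(u8 rt_lo, u8 rn, s16 off, struct jit_ctx *ctx)
  {
      if (__LINUX_ARM_ARCH__ >= 6) {
          /* STRD stores rt_lo at [rn, off] and rt_lo + 1 at
           * [rn, off + 4]; rt_lo must be an even register. */
          emit(ARM_STRD_I(rt_lo, rn, off), ctx);
      } else {
          emit(ARM_STR_I(rt_lo, rn, off), ctx);
          emit(ARM_STR_I(rt_lo + 1, rn, off + 4), ctx);
      }
  }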

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King bef8968df8 ARM: net: bpf: always use odd/even register pair
Always use an odd/even register pair for our 64-bit registers, so that
we're able to use the double-word load/store instructions in the future.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King b504522998 ARM: net: bpf: avoid reloading 'array'
Rearranging the order of the initial tail call code a little allows us
to avoid reloading the 'array' pointer.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King aaffd2f5c3 ARM: net: bpf: avoid reloading 'index'
Avoid reloading 'index' after we have validated it - it remains in
tmp2[1] up to the point that we begin the code to index the pointer
array, so with a little rearrangement of the registers, we can use
the already loaded value.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King 2b6958ef11 ARM: net: bpf: use ldr instructions with shifted rm register
Rather than pre-shifting the rm register for the ldr in the tail call,
shift it in the load instruction.  This eliminates one unnecessary
instruction.
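
A minimal sketch, assuming a hypothetical ARM_LDR_R_SI() helper that
encodes a shifted register offset:

  /* before:  lsl r7, r8, #2            @ scale the index
   *          ldr r6, [r6, r7]          @ load array entry
   * after:   ldr r6, [r6, r8, lsl #2]  @ one instruction
   */
  emit(ARM_LDR_R_SI(r_val, r_array, r_idx, SRTYPE_LSL, 2), ctx);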

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King 828e2b90e8 ARM: net: bpf: use immediate forms of instructions where possible
Rather than moving constants to a register and then using them in a
subsequent instruction, use them directly in the desired instruction
cutting out the "middle" register.  This removes two instructions from
the tail call code path.
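
A minimal sketch of the pattern, with hypothetical names; imm8m() yields
the encoded ARM immediate operand, or -1 when the constant does not fit:

  int imm12 = imm8m(imm);

  if (imm12 >= 0) {
      /* cmp rd, #imm -- no scratch register needed */
      emit(ARM_CMP_I(rd, imm12), ctx);
  } else {
      /* constant not encodable: keep the two-instruction form */
      emit_mov_i(tmp, imm, ctx);
      emit(ARM_CMP_R(rd, tmp), ctx);
  }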

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King 1ca3b17b77 ARM: net: bpf: imm12 constant conversion
Provide a version of the imm8m() function that the compiler can optimise
when used with a constant expression.
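
For reference, a standalone sketch of what such a helper computes; the
constant-expression variant in the patch is presumably an unrolled form
of the same test that the compiler can fold, so only the runtime loop is
shown here:

  #include <stdint.h>

  static uint32_t ror32(uint32_t x, unsigned int n)
  {
      n &= 31;
      return n ? (x >> n) | (x << (32 - n)) : x;
  }

  /* An ARM data-processing immediate is an 8-bit value rotated right
   * by an even amount (0..30).  Return the 12-bit operand encoding
   * (rot << 8 | imm8), or -1 if the constant is not representable.
   */
  static int imm8m(uint32_t x)
  {
      unsigned int rot;

      for (rot = 0; rot < 16; rot++)
          if ((x & ~ror32(0xff, 2 * rot)) == 0)
              return ror32(x, 32 - 2 * rot) | (rot << 8);
      return -1;
  }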

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:23 +02:00
Russell King 96cced4e77 ARM: net: bpf: access eBPF scratch space using ARM FP register
Access the eBPF scratch space using the frame pointer rather than our
stack pointer, as the offsets from the ARM frame pointer are constant
across all eBPF programs.

Since we no longer reference the scratch space registers from the stack
pointer, this simplifies emit_push_r64() as it no longer needs to know
how many words are pushed onto the stack.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Russell King a6eccac507 ARM: net: bpf: 64-bit accessor functions for BPF registers
Provide a couple of 64-bit register accessors, and use them where
appropriate.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Russell King 7a98702563 ARM: net: bpf: provide accessor functions for BPF registers
Many of the code paths need to have knowledge about whether a register
is stacked or in a CPU register.  Move this decision making to a pair
of helper functions instead of having it scattered throughout the
code.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Russell King 47b9c3bf41 ARM: net: bpf: remove is_on_stack() and sstk/dstk
The decision about whether a BPF register is on the stack or in a CPU
register is detected at the top BPF insn processing level, and then
percolated throughout the remainder of the code.  Since we now use
negative register values to represent stacked registers, we can detect
where a BPF register is stored without resorting to carrying this
additional metadata through all code paths.
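
The resulting pattern might look like the sketch below; the names are
modeled on, not copied from, the kernel helpers, and STACK_VAR() is a
hypothetical macro mapping a negative register id to its stack offset:

  /* A negative value in the register map means "spilled to the stack";
   * a non-negative value is an ARM register number.
   */
  static bool is_stacked(s8 reg)
  {
      return reg < 0;
  }

  static s8 arm_bpf_get_reg32(s8 src, s8 tmp, struct jit_ctx *ctx)
  {
      if (is_stacked(src)) {
          emit(ARM_LDR_I(tmp, ARM_SP, STACK_VAR(src)), ctx);
          src = tmp;
      }
      return src;
  }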

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Russell King 1c35ba122d ARM: net: bpf: use negative numbers for stacked registers
Use negative numbers for eBPF registers that live on the stack.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Russell King a8ef95a034 ARM: net: bpf: provide load/store ops with negative immediates
Provide a set of load/store opcode generators that work with negative
immediates as well as positive ones.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Russell King d449ceb11b ARM: net: bpf: enumerate the JIT scratch stack layout
Enumerate the contents of the JIT scratch stack layout used for storing
some of the JIT's 64-bit registers, the tail call counter and the AX
register.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-12 20:45:22 +02:00
Daniel Borkmann 18d405af30 bpf, arm32: fix to use bpf_jit_binary_lock_ro api
Any eBPF JIT whose underlying arch supports ARCH_HAS_SET_MEMORY
needs to use the bpf_jit_binary_{un,}lock_ro() pair instead of the
set_memory_{ro,rw}() pair directly, as otherwise changes to the former
might break. arm32's eBPF conversion missed this change, so fix it
up here.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-06-29 10:47:35 -07:00
Wang YanQing 68565a1af9 bpf, arm32: fix inconsistent naming about emit_a32_lsr_{r64,i64}
The names for BPF_ALU64 | BPF_ARSH are emit_a32_arsh_*,
the names for BPF_ALU64 | BPF_LSH are emit_a32_lsh_*, but
the names for BPF_ALU64 | BPF_RSH are emit_a32_lsr_*.

For consistency, let's rename emit_a32_lsr_* to emit_a32_rsh_*.

This patch also corrects a wrong comment.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Wang YanQing <udknight@gmail.com>
Cc: Shubham Bansal <illusionist.neo@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux@armlinux.org.uk
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-06-05 10:54:53 +02:00
Wang YanQing 2b589a7e2b bpf, arm32: correct check_imm24
imm24 is signed, so the right range is:

  [-(1<<(24 - 1)), (1<<(24 - 1)) - 1]

Note: this patch also fixes a typo.
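
A minimal sketch of the corrected check; the helper name follows the
commit title, the body is illustrative:

  static int check_imm24(s32 imm)
  {
      /* a 24-bit signed field holds [-(1 << 23), (1 << 23) - 1] */
      return imm >= -(1 << 23) && imm <= (1 << 23) - 1;
  }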

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Wang YanQing <udknight@gmail.com>
Cc: Shubham Bansal <illusionist.neo@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux@armlinux.org.uk
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-06-05 10:46:13 +02:00
Daniel Borkmann 38ca930601 bpf, arm32: save 4 bytes of unneeded stack space
The extra skb_copy_bits() buffer is not used anymore, therefore
remove the extra 4-byte stack space requirement.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-05-14 19:11:45 -07:00
Daniel Borkmann 0d2d0cedc0 bpf, arm32: remove ld_abs/ld_ind
Since LD_ABS/LD_IND instructions are now removed from the core and
reimplemented through a combination of inlined BPF instructions and
a slow-path helper, we can get rid of the complexity from arm32 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-05-03 16:49:20 -07:00
Daniel Borkmann 73ae3c0426 bpf, arm: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from arm32 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-01-26 16:42:07 -08:00
David S. Miller ea9722e265 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:

====================
pull-request: bpf-next 2018-01-19

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) bpf array map HW offload, from Jakub.

2) support for bpf_get_next_key() for LPM map, from Yonghong.

3) test_verifier now runs loaded programs, from Alexei.

4) xdp cpumap monitoring, from Jesper.

5) variety of tests, cleanups and small x64 JIT optimization, from Daniel.

6) user space can now retrieve HW JITed program, from Jiong.

Note there is a minor conflict between Russell's arm32 JIT fixes
and the removal of the bpf_jit_enable variable by Daniel, which should
be resolved by keeping Russell's comment and removing that variable.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-01-20 22:03:46 -05:00
David S. Miller 8565d26bcb Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
The BPF verifier conflict was some minor contextual issue.

The TUN conflict was less trivial.  Cong Wang fixed a memory leak of
tfile->tx_array in 'net'.  This is an skb_array.  But meanwhile in
net-next tun changed tfile->tx_array into tfile->tx_ring, which is a
ptr_ring.

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-01-19 22:59:33 -05:00
Daniel Borkmann fa9dd599b4 bpf: get rid of pure_initcall dependency to enable jits
Having a pure_initcall() callback just to permanently enable BPF
JITs under CONFIG_BPF_JIT_ALWAYS_ON is unnecessary and could leave
a small race window in the future where the JIT is still disabled on
boot. Since we know about the setting at compilation time anyway, just
initialize it properly there. Also consolidate all the individual
bpf_jit_enable variables into a single one and move them under one
location. Moreover, don't allow setting unspecified garbage values
on them.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-01-19 18:37:00 -08:00
Russell King 091f02483d ARM: net: bpf: clarify tail_call index
As per 90caccdd8c ("bpf: fix bpf_tail_call() x64 JIT"), the index used
for array lookup is defined to be 32-bit wide. Update a misleading
comment that suggests it is 64-bit wide.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:39:10 +00:00
Russell King ec19e02b34 ARM: net: bpf: fix LDX instructions
When the source and destination register are identical, our JIT does not
generate correct code, which leads to kernel oopses.

Fix this by (a) generating more efficient code, and (b) making use of
the temporary earlier if we will overwrite the address register.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:38:21 +00:00
Russell King 02088d9b39 ARM: net: bpf: fix register saving
When an eBPF program tail-calls another eBPF program, it enters it after
the prologue to avoid having complex stack manipulations.  This can lead
to kernel oopses and similar problems.

Resolve this by always using a fixed stack layout, a CPU register frame
pointer, and using this when reloading registers before returning.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:38:07 +00:00
Russell King 0005e55a79 ARM: net: bpf: correct stack layout documentation
The stack layout documentation incorrectly suggests that the BPF JIT
scratch space starts immediately below BPF_FP. This is not correct,
so let's fix the documentation to reflect reality.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:36:43 +00:00
Russell King 70ec3a6c2c ARM: net: bpf: move stack documentation
Move the stack documentation towards the top of the file, where it's
relevant for things like the register layout.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:36:43 +00:00
Russell King d1220efd23 ARM: net: bpf: fix stack alignment
As per 2dede2d8e9 ("ARM EABI: stack pointer must be 64-bit aligned
after a CPU exception") the stack should be aligned to a 64-bit boundary
on EABI systems.  Ensure that the eBPF JIT appropriately aligns the
stack.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:36:43 +00:00
Russell King f4483f2cc1 ARM: net: bpf: fix tail call jumps
When a tail call fails, it is documented that the tail call should
continue execution at the following instruction.  An example tail call
sequence is:

  12: (85) call bpf_tail_call#12
  13: (b7) r0 = 0
  14: (95) exit

The ARM code generated for the tail call in this case ends up branching
to instruction 14 instead of instruction 13, resulting in the BPF filter
returning a non-zero value:

  178:	ldr	r8, [sp, #588]	; insn 12
  17c:	ldr	r6, [r8, r6]
  180:	ldr	r8, [sp, #580]
  184:	cmp	r8, r6
  188:	bcs	0x1e8
  18c:	ldr	r6, [sp, #524]
  190:	ldr	r7, [sp, #528]
  194:	cmp	r7, #0
  198:	cmpeq	r6, #32
  19c:	bhi	0x1e8
  1a0:	adds	r6, r6, #1
  1a4:	adc	r7, r7, #0
  1a8:	str	r6, [sp, #524]
  1ac:	str	r7, [sp, #528]
  1b0:	mov	r6, #104
  1b4:	ldr	r8, [sp, #588]
  1b8:	add	r6, r8, r6
  1bc:	ldr	r8, [sp, #580]
  1c0:	lsl	r7, r8, #2
  1c4:	ldr	r6, [r6, r7]
  1c8:	cmp	r6, #0
  1cc:	beq	0x1e8
  1d0:	mov	r8, #32
  1d4:	ldr	r6, [r6, r8]
  1d8:	add	r6, r6, #44
  1dc:	bx	r6
  1e0:	mov	r0, #0		; insn 13
  1e4:	mov	r1, #0
  1e8:	add	sp, sp, #596	; insn 14
  1ec:	pop	{r4, r5, r6, r7, r8, sl, pc}

For other sequences, the tail call could end up branching midway through
the following BPF instructions, or maybe off the end of the function,
leading to unknown behaviours.

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:35:51 +00:00
Russell King e906248182 ARM: net: bpf: avoid 'bx' instruction on non-Thumb capable CPUs
Avoid the 'bx' instruction on CPUs that have no support for Thumb and
thus do not implement this instruction by moving the generation of this
opcode to a separate function that selects between:

	bx	reg

and

	mov	pc, reg

according to the capabilities of the CPU.
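
A minimal sketch of such a selector; the hwcap test is an assumption,
and any equivalent CPU-capability check would do:

  static void emit_bx_r(u8 tgt_reg, struct jit_ctx *ctx)
  {
      if (elf_hwcap & HWCAP_THUMB)
          emit(ARM_BX(tgt_reg), ctx);            /* bx reg */
      else
          emit(ARM_MOV_R(ARM_PC, tgt_reg), ctx); /* mov pc, reg */
  }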

Fixes: 39c13c204b ("arm: eBPF JIT compiler")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-01-17 19:35:22 +00:00
Alexei Starovoitov 60b58afc96 bpf: fix net.core.bpf_jit_enable race
The global bpf_jit_enable variable is tested multiple times in the
JITs, blinding and verifier core. A malicious root user can try to
toggle it while programs are being loaded. This race condition was
accounted for and there should be no issues, but it's safer to avoid
the race altogether.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-17 20:34:36 +01:00
Shubham Bansal 39c13c204b arm: eBPF JIT compiler
The JIT compiler emits ARM 32-bit instructions. Currently, it supports
eBPF only; classic BPF is supported through conversion by the BPF core.

This patch essentially changes the current JIT compiler implementation
of the Berkeley Packet Filter from classic to internal, with almost all
instructions from the eBPF ISA supported except the following:
	BPF_ALU64 | BPF_DIV | BPF_K
	BPF_ALU64 | BPF_DIV | BPF_X
	BPF_ALU64 | BPF_MOD | BPF_K
	BPF_ALU64 | BPF_MOD | BPF_X
	BPF_STX | BPF_XADD | BPF_W
	BPF_STX | BPF_XADD | BPF_DW

The implementation uses scratch space to emulate the 64-bit eBPF ISA on
32-bit ARM because of the shortage of general-purpose registers on ARM.
Currently, only little-endian machines are supported by this eBPF JIT
compiler.

Tested on ARMv7 with QEMU by me (Shubham Bansal).

Testing results on ARMv7:

1) test_bpf: Summary: 341 PASSED, 0 FAILED, [312/333 JIT'ed]
2) test_tag: OK (40945 tests)
3) test_progs: Summary: 30 PASSED, 0 FAILED
4) test_lpm: OK
5) test_lru_map: OK

The above tests were all run with each of the following flag
combinations enabled separately:

1) bpf_jit_enable=1
	a) CONFIG_FRAME_POINTER enabled
	b) CONFIG_FRAME_POINTER disabled
2) bpf_jit_enable=1 and bpf_jit_harden=2
	a) CONFIG_FRAME_POINTER enabled
	b) CONFIG_FRAME_POINTER disabled

See Documentation/networking/filter.txt for more information.

Signed-off-by: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-22 09:26:43 -07:00
Laura Abbott 74d86a7063 arm: use set_memory.h header
set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/1488920133-27229-3-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08 17:15:13 -07:00
Rabin Vincent f941461c92 ARM: net: bpf: fix zero right shift
The LSR instruction cannot be used to perform a zero right shift since a
0 as the immediate value (imm5) in the LSR instruction encoding means
that a shift of 32 is performed.  See DecodeIMMShift() in the ARM ARM.

Make the JIT skip generation of the LSR if a zero-shift is requested.

This was found using american fuzzy lop.
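
A minimal sketch of the fix in the right-shift-by-constant case
(register and field names illustrative):

  case BPF_ALU | BPF_RSH | BPF_K:
      /* imm5 == 0 in the LSR encoding means "shift by 32", so a
       * requested shift of zero must emit no instruction at all.
       */
      if (k != 0)
          emit(ARM_LSR_I(r_A, r_A, k), ctx);
      break;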

Signed-off-by: Rabin Vincent <rabin@rab.in>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-01-06 01:32:09 -05:00
Rabin Vincent 55795ef546 net: filter: make JITs zero A for SKF_AD_ALU_XOR_X
The SKF_AD_ALU_XOR_X ancillary is not like the other ancillary data
instructions since it XORs A with X while all the others replace A with
some loaded value.  All the BPF JITs fail to clear A if this is used as
the first instruction in a filter.  This was found using american fuzzy
lop.

Add a helper to determine if A needs to be cleared given the first
instruction in a filter, and use this in the JITs.  Except for ARM, the
rest have only been compile-tested.
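
Such a helper could look like the sketch below; treat the body as
illustrative rather than the exact code the patch adds:

  static inline bool bpf_needs_clear_a(const struct sock_filter *first)
  {
      switch (first->code) {
      case BPF_RET | BPF_K:
      case BPF_LD | BPF_W | BPF_LEN:
          return false;   /* A is written before it is read */

      case BPF_LD | BPF_W | BPF_ABS:
      case BPF_LD | BPF_H | BPF_ABS:
      case BPF_LD | BPF_B | BPF_ABS:
          /* the ALU_XOR_X ancillary reads A, so it must start as 0 */
          return first->k == SKF_AD_OFF + SKF_AD_ALU_XOR_X;

      default:
          return true;    /* be conservative */
      }
  }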

Fixes: 3480593131 ("net: filter: get rid of BPF_S_* enum")
Signed-off-by: Rabin Vincent <rabin@rab.in>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-01-06 00:43:52 -05:00
Daniel Borkmann ebaef649c2 bpf, arm: start flushing icache range from header
During review I noticed that the icache range we're flushing should
start at header already and not at ctx.image.

Reason is that after 55309dd3d4 ("net: bpf: arm: address randomize
and write protect JIT code"), we also want to make sure to flush the
random-sized trap in front of the start of the actual program (analogous
to x86). No operational differences from user side.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Nicolas Schichan <nschichan@freebox.fr>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-11-16 14:40:49 -05:00
David S. Miller 26440c835f Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	drivers/net/usb/asix_common.c
	net/ipv4/inet_connection_sock.c
	net/switchdev/switchdev.c

In the inet_connection_sock.c case the request socket hashing scheme
is completely different in net-next.

The other two conflicts were overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-10-20 06:08:27 -07:00
Nicolas Schichan 4560cdff03 ARM: net: support BPF_ALU | BPF_MOD instructions in the BPF JIT.
For ARMv7 with UDIV instruction support, generate a UDIV instruction
followed by an MLS instruction.

For other ARM variants, generate code calling a C wrapper similar to
the jit_udiv() function used for BPF_ALU | BPF_DIV instructions.
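
A minimal sketch of the ARMv7 sequence; the encoding-helper names and
operand order are assumptions:

  /* remainder via divide plus multiply-and-subtract:
   *   udiv tmp, rn, rm      @ tmp = rn / rm
   *   mls  rd, tmp, rm, rn  @ rd  = rn - tmp * rm
   */
  emit(ARM_UDIV(tmp, rn, rm), ctx);
  emit(ARM_MLS(rd, tmp, rm, rn), ctx);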

Some performance numbers reported by the test_bpf module (the duration
per filter run is reported in nanoseconds, between "jited:<x>" and
"PASS"):

ARMv7 QEMU nojit:	test_bpf: #3 DIV_MOD_KX jited:0 2196 PASS
ARMv7 QEMU jit:		test_bpf: #3 DIV_MOD_KX jited:1 104 PASS
ARMv5 QEMU nojit:	test_bpf: #3 DIV_MOD_KX jited:0 2176 PASS
ARMv5 QEMU jit:		test_bpf: #3 DIV_MOD_KX jited:1 1104 PASS
ARMv5 kirkwood nojit:	test_bpf: #3 DIV_MOD_KX jited:0 1103 PASS
ARMv5 kirkwood jit:	test_bpf: #3 DIV_MOD_KX jited:1 311 PASS

Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-10-05 07:02:42 -07:00
Nicolas Schichan 8690f47d6e ARM: net: make BPF_LD | BPF_IND instruction trigger r_X initialisation to 0.
Without this patch, if the only instructions using r_X are of the
BPF_LD | BPF_IND type, r_X would not be reset to 0 and would use
whatever value was there when entering the jited code. With this patch, r_X
will be correctly marked as used so it will be reset to 0 in the
prologue code.

This fix also makes the test "LD_IND byte default X" pass in the
test_bpf module when the ARM JIT is enabled.

Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-10-05 07:01:08 -07:00
Daniel Borkmann a91263d520 ebpf: migrate bpf_prog's flags to bitfield
As we need to add further flags to the bpf_prog structure, let's migrate
both bools to a bitfield representation. The size of the base structure
(excluding insns) remains unchanged at 40 bytes.

Also add tags for kmemcheck, so that it doesn't throw false positives.
Even if gcc were to generate suboptimal code, these fields are not
accessed in performance-critical paths.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-10-03 05:02:39 -07:00
Nicolas Schichan 5bf705b43b ARM: net: add support for BPF_ANC | SKF_AD_HATYPE in ARM JIT.
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-27 14:57:41 -07:00
Nicolas Schichan 303249ab16 ARM: net: add support for BPF_ANC | SKF_AD_PAY_OFFSET in ARM JIT.
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-27 14:57:40 -07:00
Nicolas Schichan 1447f93f22 ARM: net: add support for BPF_ANC | SKF_AD_PKTTYPE in ARM JIT.
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-27 14:57:40 -07:00
Nicolas Schichan c18fe54b3f ARM: net: fix vlan access instructions in ARM JIT.
This makes BPF_ANC | SKF_AD_VLAN_TAG and BPF_ANC | SKF_AD_VLAN_TAG_PRESENT
behave the same as the in-kernel VM and makes the test_bpf LD_VLAN_TAG
and LD_VLAN_TAG_PRESENT tests pass.

Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-21 22:19:55 -07:00