Commit graph

33 commits

Author SHA1 Message Date
Sara Bird 396b7fd6f9 staging/zsmalloc: Fixed up incorrectly formatted comments
The existing comments are using an odd style. Fixed them up to adhere
to the StyleGuide. No code changes.

Signed-off-by: Sara Bird <sara.bird.iar@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-05-21 10:30:10 -07:00
Marlies Ruck 93ad5ab504 Staging: Fixes string split across lines in zsmalloc zsmalloc-main
Fixes the following checkpatch warning:
WARNING: quoted string split across lines

Signed-off-by: Marlies Ruck <marlies.ruck@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-05-20 10:43:10 -07:00
Arnd Bergmann 796ce5a7e4 staging/zsmalloc: don't use pgtable-mapping from modules
Building zsmalloc as a module does not work on ARM because it uses
an interface that is not exported:

ERROR: "flush_tlb_kernel_range" [drivers/staging/zsmalloc/zsmalloc.ko] undefined!

Since this is only used as a performance optimization and only on ARM,
we can avoid the problem simply by not using that optimization when
building zsmalloc as a loadable module.

flush_tlb_kernel_range is often an inline function, but out of the
architectures that use an extern function, only powerpc exports
it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-04-23 10:34:20 -07:00
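To make the gating concrete, here is a minimal sketch of the kind of change described above, reusing the USE_PGTABLE_MAPPING switch from the zsmalloc sources; it is an illustration, not the literal upstream diff:

    /* Only use the page-table mapping optimization in built-in builds:
     * flush_tlb_kernel_range() is not exported to modules on most
     * architectures, so a modular zsmalloc falls back to the copy path. */
    #if defined(CONFIG_ARM) && !defined(MODULE)
    #define USE_PGTABLE_MAPPING
    #endif
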
Joerg Roedel d95abbbb29 staging: zsmalloc: Fix link error on ARM
Testing the arm chromebook config against the upstream
kernel produces a linker error for the zsmalloc module from
staging. The symbol flush_tlb_kernel_range is not available
there. Fix this by removing the reimplementation of
unmap_kernel_range in the zsmalloc module and using the
function directly. The unmap_kernel_range function is not
usable by modules, so also disallow building the driver as a
module for now.

Cc: stable <stable@vger.kernel.org>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-03-28 16:08:54 -07:00
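As a rough illustration of the fix above, the per-cpu mapping teardown can call the generic helper instead of re-implementing it; the mapping_area/vm_addr names are taken from the zsmalloc sources, and the exact shape is an assumption:

    /* Tear down the two-page per-cpu mapping with the generic helper
     * rather than open-coding the page-table walk and TLB flush. */
    static void __zs_unmap_object(struct mapping_area *area,
                                  struct page *pages[2], int off, int size)
    {
            unmap_kernel_range((unsigned long)area->vm_addr, PAGE_SIZE * 2);
    }
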
Mel Gorman 22b751c3d0 mm: rename page struct field helpers
The function names page_xchg_last_nid(), page_last_nid() and
reset_page_last_nid() were judged to be inconsistent so rename them to a
struct_field_op style pattern.  As it looked jarring to have
reset_page_mapcount() and page_nid_reset_last() beside each other in
memmap_init_zone(), this patch also renames reset_page_mapcount() to
page_mapcount_reset().  There are others like init_page_count() but as
it is used throughout the arch code a rename would likely cause more
conflicts than it is worth.

[akpm@linux-foundation.org: fix zcache]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-23 17:50:18 -08:00
Seth Jennings 0d145a5017 staging: zsmalloc: remove unused pool name
zs_create_pool() currently takes a name argument which is
never used in any useful way.

This patch removes it.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-30 18:22:41 +01:00
Minchan Kim 9915518887 staging: zsmalloc: Fix TLB coherency and build problem
Recently, Matt Sealey reported that he failed to build zsmalloc because
it uses local_flush_tlb_kernel_range, an architecture-dependent function
that ARM does not provide for !CONFIG_SMP, so the build ends up with the
following error.

  MODPOST 216 modules
  LZMA    arch/arm/boot/compressed/piggy.lzma
  AS      arch/arm/boot/compressed/lib1funcs.o
ERROR: "v7wbi_flush_kern_tlb_range"
[drivers/staging/zsmalloc/zsmalloc.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
make: *** Waiting for unfinished jobs....

The reason we used that function is that the copy method from [1]
was really slow on ARM at that time.

A more severe problem is that ARM CPUs can prefetch speculatively, so if
we only flush the local CPU, other CPUs' TLBs can still hold stale
entries. Russell King pointed that out. Thanks!
We don't have many choices except to use flush_tlb_kernel_range.

My experiment on a 4-core ARMv7 processor with zsmapbench [2] showed no
difference between local_flush_tlb_kernel_range and flush_tlb_kernel_range,
but the page-table-based method is still much better than the copy-based one.

* bigger is better.

1. local_flush_tlb_kernel_range: 3918795 mappings
2. flush_tlb_kernel_range : 3989538 mappings
3. copy-based: 635158 mappings

This patch replaces local_flush_tlb_kernel_range with
flush_tlb_kernel_range, which is available on all architectures because
the generic vmalloc allocator already uses it, so the build problem
should go away and there should be no performance loss.

[1] f553646, zsmalloc: add page table mapping method
[2] https://github.com/spartacus06/zsmapbench

Cc: stable@vger.kernel.org
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reported-by: Matt Sealey <matt@genesi-usa.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-29 23:22:16 -05:00
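A minimal sketch of the substitution described above (the surrounding variables are assumed from the zsmalloc mapping code):

    unsigned long addr = (unsigned long)area->vm_addr;

    /* Flush the two-page range on all CPUs, not just the local one, so
     * speculatively prefetched TLB entries on other cores cannot survive. */
    flush_tlb_kernel_range(addr, addr + PAGE_SIZE * 2);
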
Seth Jennings d662b8eba9 staging: zsmalloc: make CLASS_DELTA relative to PAGE_SIZE
Right now ZS_SIZE_CLASS_DELTA is hardcoded to be 16.  This
creates 254 classes for systems with 4k pages. However, on
PPC64 with 64k pages, it creates 4095 classes which is far
too many.

This patch makes ZS_SIZE_CLASS_DELTA relative to PAGE_SIZE
so that regardless of the page size, there will be the same
number of classes.

Acked-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-29 23:16:42 -05:00
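A sketch of making the class step proportional to the page size, as described above; the exact divisor is an assumption chosen so that a 4k-page system keeps a 16-byte step, and ZS_MAX_ALLOC_SIZE/ZS_MIN_ALLOC_SIZE are taken from the existing sources:

    /* The class step scales with PAGE_SIZE, so 4k and 64k page systems end
     * up with the same number of size classes (PAGE_SIZE >> 8 == 16 on 4k). */
    #define ZS_SIZE_CLASS_DELTA   (PAGE_SIZE >> 8)
    #define ZS_SIZE_CLASSES       ((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / \
                                   ZS_SIZE_CLASS_DELTA + 1)
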
Davidlohr Bueso 4bbc0bc06b staging: zsmalloc: comment zs_create_pool function
Just as with zs_malloc() and zs_map_object(), it is worth
formally commenting the zs_create_pool() function.

Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-15 23:49:55 -08:00
Seth Jennings 0959c63f11 zsmalloc: collapse internal .h into .c
This patch collapses the internal zsmalloc_int.h into
the zsmalloc-main.c file.

This is done in preparation for the promotion to mm/ where
separate internal headers are discouraged.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-08-13 19:34:24 -07:00
Seth Jennings f553646a67 staging: zsmalloc: add page table mapping method
This patchset provides page mapping via the page table.
On some archs, most notably ARM, this method has been
demonstrated to be faster than copying.

The logic controlling the method selection (copy vs page table)
is controlled by the definition of USE_PGTABLE_MAPPING which
is/can be defined for any arch that performs better with page
table mapping.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-08-13 19:28:09 -07:00
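A sketch of how the compile-time selection above shows up in the per-cpu mapping area; the field names follow the zsmalloc sources of that era and are stated here as an assumption:

    /* The mapping area carries either a reserved kernel VM area (page-table
     * mapping) or a copy buffer, chosen by USE_PGTABLE_MAPPING. */
    struct mapping_area {
    #ifdef USE_PGTABLE_MAPPING
            struct vm_struct *vm;   /* vm area for objects that span pages */
    #else
            char *vm_buf;           /* copy buffer for objects that span pages */
    #endif
            char *vm_addr;          /* address of kmap_atomic()'ed pages */
    };
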
Seth Jennings c60369f011 staging: zsmalloc: prevent mapping in interrupt context
Because we use per-cpu mapping areas shared among the
pools/users, we can't allow mapping in interrupt context
because it can corrupt another user's mappings.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-08-13 19:28:08 -07:00
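The guard amounts to a single assertion at the top of zs_map_object(); a minimal sketch, with the placement assumed:

    /*
     * Because the per-cpu mapping areas are shared among all pools and
     * users, mapping from interrupt context could clobber a mapping that
     * the interrupted task is still using.
     */
    BUG_ON(in_interrupt());
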
Seth Jennings 6539a36c0c staging: zsmalloc: s/firstpage/page in new copy map funcs
firstpage already has a precedent: it means the first page
of a zspage.  In the case of the copy mapping functions, however,
it is merely the first of a pair of pages needing to be mapped.

This patch just renames the firstpage argument to "page" to
avoid confusion.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-08-13 19:28:08 -07:00
Seth Jennings b741851086 staging: zsmalloc: add mapping modes
This patch improves mapping performance in zsmalloc by getting
usage information from the user in the form of a "mapping mode"
and using it to avoid unnecessary copying for objects that span
pages.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:35:00 -07:00
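The usage hint is passed as an extra argument to zs_map_object(); a sketch of the interface, with the enum values taken from the zsmalloc headers of that era and treated as an assumption:

    /* How the caller intends to use the mapped object; read-only and
     * write-only mappings let zsmalloc skip one direction of the copy for
     * objects that span two pages. */
    enum zs_mapmode {
            ZS_MM_RW,       /* normal read-write mapping */
            ZS_MM_RO,       /* read-only: nothing is copied back on unmap */
            ZS_MM_WO        /* write-only: nothing is copied in on map */
    };

    void *zs_map_object(struct zs_pool *pool, unsigned long handle,
                        enum zs_mapmode mm);
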
Seth Jennings 166cfda752 staging: zsmalloc: add details to zs_map_object boiler plate
Add information on the usage limits of zs_map_object()

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:34:59 -07:00
Seth Jennings 103123305c staging: zsmalloc: add single-page object fastpath in unmap
Improve zs_unmap_object() performance by adding a fast path for
objects that don't span pages.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:34:59 -07:00
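A compressed sketch of the fast path described above, with the surrounding variables (class, off, area, pages) assumed from the zsmalloc unmap path:

    if (off + class->size <= PAGE_SIZE) {
            /* Object fits in a single page: just drop the atomic mapping;
             * no copy-back and no second page are involved. */
            kunmap_atomic(area->vm_addr);
    } else {
            /* Slow path: copy the per-cpu buffer back across the page pair. */
            __zs_unmap_object(area, pages, off, class->size);
    }
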
Seth Jennings 5f601902c6 staging: zsmalloc: remove x86 dependency
This patch replaces the page table assisted object mapping
method, which has x86 dependencies, with an arch-independent
method that does a simple copy into a temporary per-cpu
buffer.

While a copy seems like it would be worse than mapping the pages,
tests demonstrate the copying is always faster and, in the case of
running inside a KVM guest, roughly 4x faster.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:34:59 -07:00
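A sketch of the copy-based method described above: an object that straddles two pages is assembled in a per-cpu buffer via kmap_atomic(); the function and parameter names are modeled on the zsmalloc sources, not quoted from them:

    /* Copy an object that spans two pages into the per-cpu buffer 'buf'. */
    static void zs_copy_map_object(char *buf, struct page *pages[2],
                                   int off, int size)
    {
            void *addr;
            int sizes[2];

            sizes[0] = PAGE_SIZE - off;     /* tail of the first page */
            sizes[1] = size - sizes[0];     /* head of the second page */

            addr = kmap_atomic(pages[0]);
            memcpy(buf, addr + off, sizes[0]);
            kunmap_atomic(addr);

            addr = kmap_atomic(pages[1]);
            memcpy(buf + sizes[0], addr, sizes[1]);
            kunmap_atomic(addr);
    }
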
Ben Hutchings 069f101fa4 staging: zsmalloc: Finish conversion to a separate module
ZSMALLOC is tristate, but the code has no MODULE_LICENSE and since it
depends on GPL-only symbols it cannot be loaded as a module.  This in
turn breaks zram which now depends on it.  I assume it's meant to be
Dual BSD/GPL like the other z-stuff.

There is also no module_exit, which will make it impossible to unload.
Add the appropriate module_init and module_exit declarations suggested
by comments.

Reported-by: Christian Ohm <chr.ohm@gmx.net>
References: http://bugs.debian.org/677273
Cc: stable@vger.kernel.org # v3.4
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-20 16:11:34 -07:00
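A minimal sketch of the module boilerplate this adds; the init/exit function names and the author line are assumptions based on the zsmalloc sources, and the license string follows the "Dual BSD/GPL" guess stated in the message:

    static int __init zs_init(void)
    {
            /* set up the per-cpu mapping areas (and the cpu notifier) */
            return 0;
    }

    static void __exit zs_exit(void)
    {
            /* tear the per-cpu mapping areas down again */
    }

    module_init(zs_init);
    module_exit(zs_exit);

    MODULE_LICENSE("Dual BSD/GPL");
    MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");
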
Seth Jennings b4b700c5a6 staging: zsmalloc: fix uninit'ed variable warning
This patch fixes an uninitialized variable warning in
alloc_zspage().  It also fixes the secondary issue of
prev_page leaving scope on each loop iteration.  The only
reason this ever worked was because prev_page was occupying
the same space on the stack on each iteration.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-13 17:21:46 -07:00
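The scope problem is the classic "loop-local variable carried across iterations" pattern; a sketch of the fixed shape, with names loosely modeled on alloc_zspage() and link_zspage_pages() used as a hypothetical stand-in for the real page chaining:

    struct page *first_page = NULL, *prev_page = NULL;
    int i;

    /* The pointers that must survive across iterations now live outside
     * the loop body instead of being re-declared (uninitialized) inside it. */
    for (i = 0; i < class->pages_per_zspage; i++) {
            struct page *page = alloc_page(flags);

            if (!page)
                    break;
            if (!first_page)
                    first_page = page;
            if (prev_page)
                    link_zspage_pages(prev_page, page);     /* hypothetical helper */
            prev_page = page;
    }
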
Nitin Gupta 2db51dae56 staging: zsmalloc documentation
Documentation of various struct page fields
used by zsmalloc.

Changes for v2:
	- Regroup descriptions as suggested by Konrad

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-11 09:02:09 -07:00
Minchan Kim c234434835 staging: zsmalloc: zsmalloc: use unsigned long instead of void *
We should use unsigned long as the handle instead of void * to avoid any
confusion. Without this, users may just treat the zs_malloc return value as
a pointer and try to dereference it.

This patch passed compile tests (zram, zcache and ramster), and zram was
tested on qemu.

changelog
  * from v2
	- remove hval pointed out by Nitin
	- based on next-20120607
  * from v1
	- change zcache's zv_create return value
	- based on next-20120604

Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-11 08:57:47 -07:00
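The visible effect is in the allocator's signatures; a before/after sketch, given as an approximation of the staging interface at that time:

    /* Before: the handle looked like a pointer, inviting a dereference. */
    void *zs_malloc(struct zs_pool *pool, size_t size);
    void zs_free(struct zs_pool *pool, void *obj);

    /* After: the handle is an opaque unsigned long; the only way to obtain
     * a usable pointer is zs_map_object()/zs_unmap_object(). */
    unsigned long zs_malloc(struct zs_pool *pool, size_t size);
    void zs_free(struct zs_pool *pool, unsigned long obj);
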
Minchan Kim 00a61d8618 staging: zsmalloc: add/fix function comment
Add/fix the comment.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-05-09 13:20:22 -07:00
Minchan Kim 2e3b615471 staging: zsmalloc: rename zspage_order with zspage_pages
zspage_order defines how many pages are needed to make a zspage,
so _order_ is rather awkward naming. It has already misled Jonathan
- http://lwn.net/Articles/477067/
" For each size, the code calculates an optimum number of pages (up to 16)"

Let's change _order_ to _pages_ and rename some functions accordingly.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-05-09 13:20:22 -07:00
Greg Kroah-Hartman d210267741 Merge 3.4-rc5 into staging-next
This resolves the conflict in:
	drivers/staging/vt6656/ioctl.c

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-05-02 11:48:07 -07:00
Minchan Kim a27545bf0b zsmalloc: use PageFlag macro instead of [set|test]_bit
MM code always uses PageXXX to handle page flags.
Let's keep it consistent.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-04-25 10:59:16 -07:00
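A small sketch of the substitution; the specific flag is an assumption, the point is the accessor style:

    /* Before: raw bit operations on page->flags. */
    set_bit(PG_private, &page->flags);
    if (test_bit(PG_private, &page->flags))
            clear_bit(PG_private, &page->flags);

    /* After: the standard page-flag accessors used throughout mm/. */
    SetPagePrivate(page);
    if (PagePrivate(page))
            ClearPagePrivate(page);
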
Nitin Gupta f4477e90b3 staging: zsmalloc: fix memory leak
This patch fixes a memory leak in zsmalloc where the first
subpage of each zspage is leaked when the zspage is freed.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-04-10 09:18:59 -07:00
Seth Jennings b76dee4a41 staging: zsmalloc: remove SPARSEMEM dep from Kconfig
This patch removes the SPARSEMEM dependency from the zsmalloc
Kconfig.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-07 13:31:28 -08:00
Seth Jennings b9ed4f6c9a staging: zsmalloc: change ZS_MIN_ALLOC_SIZE
This patch ensures that the value of ZS_MIN_ALLOC_SIZE, for the
PAGE_SIZE and MAX_PHYSMEM_BITS on the system, allows for all
possible object ids in the lowest storage class to be encoded
in the object handle.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-07 13:31:28 -08:00
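A sketch of the constraint described above: the smallest class must be big enough that every object index in a maximally packed zspage still fits in the handle's index bits. The macro shape (and the MAX() helper) follows the zsmalloc sources and is shown as an assumption:

    #define MAX(a, b) ((a) >= (b) ? (a) : (b))

    /* A zspage of ZS_MAX_PAGES_PER_ZSPAGE pages can hold at most
     * (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT) / ZS_MIN_ALLOC_SIZE objects,
     * and every index must be encodable in OBJ_INDEX_BITS of the handle. */
    #define ZS_MIN_ALLOC_SIZE \
            MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
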
Seth Jennings 6e00ec00b1 staging: zsmalloc: calculate MAX_PHYSMEM_BITS if not defined
This patch provides a way to determine or "set a
reasonable value for" MAX_PHYSMEM_BITS in the case that
it is not defined (i.e. !SPARSEMEM).

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-07 13:31:28 -08:00
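A sketch of the fallback; the concrete values are assumptions modeled on the zsmalloc sources of that era:

    /* SPARSEMEM normally provides MAX_PHYSMEM_BITS; without it, pick a
     * reasonable bound for the build. */
    #ifndef MAX_PHYSMEM_BITS
    #ifdef CONFIG_HIGHMEM64G
    #define MAX_PHYSMEM_BITS        36
    #else
    /* Without PAE-style addressing, physical memory fits in a long. */
    #define MAX_PHYSMEM_BITS        BITS_PER_LONG
    #endif
    #endif
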
Seth Jennings 84d4faaba2 staging: zsmalloc: add ZS_MAX_PAGES_PER_ZSPAGE
This patch moves where max_zspage_order is declared and
changes its meaning.  "Order" typically implies 2^order
of something; however, it is currently being used as the
"maximum number of single pages in a zspage".  To add clarity,
ZS_MAX_ZSPAGE_ORDER is now used to calculate ZS_MAX_PAGES_PER_ZSPAGE,
which is 2^ZS_MAX_ZSPAGE_ORDER and is the upper bound on the number
of pages in a zspage.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-07 13:31:28 -08:00
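A sketch of the two definitions being distinguished; the value of the order is an assumption:

    /* ZS_MAX_ZSPAGE_ORDER really is an order: a zspage contains at most
     * 2^ZS_MAX_ZSPAGE_ORDER single pages, and ZS_MAX_PAGES_PER_ZSPAGE names
     * that bound explicitly instead of overloading the word "order". */
    #define ZS_MAX_ZSPAGE_ORDER     2
    #define ZS_MAX_PAGES_PER_ZSPAGE (1 << ZS_MAX_ZSPAGE_ORDER)
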
Seth Jennings aafefe932a staging: zsmalloc: move object/handle masking defines
This patch moves the definitions of _PFN_BITS, OBJ_INDEX_BITS
and OBJ_INDEX_MASK from zsmalloc-main.c to zsmalloc_int.h.

They will be needed to determine ZS_MIN_ALLOC_SIZE in the next
patch.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-07 13:31:28 -08:00
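These defines split an unsigned long handle into a page frame number and an object index; a sketch of their shape as found in the zsmalloc sources (reproduced here as an assumption):

    /* A handle packs <PFN, object index> into one unsigned long: the high
     * _PFN_BITS identify the page, the remaining bits index the object. */
    #define _PFN_BITS       (MAX_PHYSMEM_BITS - PAGE_SHIFT)
    #define OBJ_INDEX_BITS  (BITS_PER_LONG - _PFN_BITS)
    #define OBJ_INDEX_MASK  ((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
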
Seth Jennings 0cbb613fa8 staging: fix powerpc linux-next break on zsmalloc
linux/vmalloc.h added to zsmalloc-main.c to resolve implicit
declaration errors.

X86 dependency added to zsmalloc and dependent drivers zcache and zram.

This X86 only requirement is not ideal.  Working to find portable
functions for __flush_tlb_one and set_pte.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-13 06:57:17 -08:00
Nitin Gupta 61989a80fb staging: zsmalloc: zsmalloc memory allocation library
This patch creates a new memory allocation library named
zsmalloc.

NOTE: zsmalloc currently depends on SPARSEMEM for the MAX_PHYSMEM_BITS
value needed to determine the format of the object handle. There may
be a better way to do this.  Feedback is welcome.

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-08 17:12:53 -08:00
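For orientation, a minimal sketch of how a client such as zram drives the allocator; the signatures follow the original staging interface described in these commits (handles were still void * here), and store_compressed() itself is hypothetical:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/string.h>
    #include "zsmalloc.h"

    /* Store one compressed object; 'pool' would come from something like
     * zs_create_pool("zram", GFP_NOIO) in this early version. */
    static int store_compressed(struct zs_pool *pool, const void *src, size_t len)
    {
            void *handle = zs_malloc(pool, len);    /* opaque handle, not a pointer */
            void *dst;

            if (!handle)
                    return -ENOMEM;

            dst = zs_map_object(pool, handle);      /* get a usable mapping */
            memcpy(dst, src, len);
            zs_unmap_object(pool, handle);          /* 'dst' is invalid after this */
            return 0;
    }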