memcg: limit change shrink usage

Shrink the memory usage of a cgroup when its limit is changed.

[akpm@linux-foundation.org: coding-style fixes]
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Acked-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KAMEZAWA Hiroyuki, 2008-07-25 01:47:20 -07:00; committed by Linus Torvalds
Parent 12b9804419
Commit 628f423553
2 changed files with 45 additions and 6 deletions
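For illustration only, a minimal userspace sketch of the behaviour this patch introduces. The cgroup mount point /cgroup/0 is an assumption; memory.limit_in_bytes is the memory controller's limit file. With this change, writing a limit below current usage makes the kernel try to reclaim pages from the group, and the write fails with EBUSY if usage cannot be shrunk far enough, or with EINTR if a signal is pending.

/*
 * Hypothetical userspace sketch: lower a memory cgroup's limit and
 * observe the shrink-on-resize behaviour added by this patch.
 * The path "/cgroup/0" is an assumption; adjust it to wherever the
 * memory controller hierarchy is mounted on your system.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *limit_file = "/cgroup/0/memory.limit_in_bytes";
	const char *new_limit = "4194304";	/* 4 MB */
	int fd = open(limit_file, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/*
	 * With this patch the kernel reclaims pages from the group while
	 * setting the limit; EBUSY means usage could not be pushed below
	 * the new value, EINTR means a signal aborted the retry loop.
	 */
	if (write(fd, new_limit, strlen(new_limit)) < 0)
		fprintf(stderr, "limit write failed: %s\n", strerror(errno));
	close(fd);
	return 0;
}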


@@ -242,8 +242,7 @@ rmdir() if there are no tasks.
 1. Add support for accounting huge pages (as a separate controller)
 2. Make per-cgroup scanner reclaim not-shared pages first
 3. Teach controller to account for shared-pages
-4. Start reclamation when the limit is lowered
-5. Start reclamation in the background when the limit is
+4. Start reclamation in the background when the limit is
    not yet hit but the usage is getting closer
 
 Summary


@@ -812,6 +812,30 @@ int mem_cgroup_shrink_usage(struct mm_struct *mm, gfp_t gfp_mask)
 	return 0;
 }
 
+int mem_cgroup_resize_limit(struct mem_cgroup *memcg, unsigned long long val)
+{
+
+	int retry_count = MEM_CGROUP_RECLAIM_RETRIES;
+	int progress;
+	int ret = 0;
+
+	while (res_counter_set_limit(&memcg->res, val)) {
+		if (signal_pending(current)) {
+			ret = -EINTR;
+			break;
+		}
+		if (!retry_count) {
+			ret = -EBUSY;
+			break;
+		}
+		progress = try_to_free_mem_cgroup_pages(memcg, GFP_KERNEL);
+		if (!progress)
+			retry_count--;
+	}
+	return ret;
+}
+
 /*
  * This routine traverse page_cgroup in given list and drop them all.
  * *And* this routine doesn't reclaim page itself, just removes page_cgroup.
@@ -896,13 +920,29 @@ static u64 mem_cgroup_read(struct cgroup *cont, struct cftype *cft)
 	return res_counter_read_u64(&mem_cgroup_from_cont(cont)->res,
 				    cft->private);
 }
 
+/*
+ * The user of this function is...
+ * RES_LIMIT.
+ */
 static int mem_cgroup_write(struct cgroup *cont, struct cftype *cft,
 			    const char *buffer)
 {
-	return res_counter_write(&mem_cgroup_from_cont(cont)->res,
-				cft->private, buffer,
-				res_counter_memparse_write_strategy);
+	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
+	unsigned long long val;
+	int ret;
+	switch (cft->private) {
+	case RES_LIMIT:
+		/* This function does all necessary parse...reuse it */
+		ret = res_counter_memparse_write_strategy(buffer, &val);
+		if (!ret)
+			ret = mem_cgroup_resize_limit(memcg, val);
+		break;
+	default:
+		ret = -EINVAL; /* should be BUG() ? */
+		break;
+	}
+	return ret;
 }
 
 static int mem_cgroup_reset(struct cgroup *cont, unsigned int event)