Merge branch 'akpm' (patches from Andrew)

Merge misc updates from Andrew Morton:
 "Bite-sized chunks this time, to avoid the MTA ratelimiting woes.

   - fs/notify updates

   - ocfs2

   - some of MM"

That laconic "some MM" is mainly the removal of remap_file_pages(),
which is a big simplification of the VM, and which gets rid of a *lot*
of random cruft and special cases because we no longer support the
non-linear mappings that it used.

From a user interface perspective, nothing has changed, because the
remap_file_pages() syscall still exists, it's just done by emulating the
old behavior by creating a lot of individual small mappings instead of
one non-linear one.

The emulation is slower than the old "native" non-linear mappings, but
nobody really uses or cares about remap_file_pages(), and simplifying
the VM is a big advantage.
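
For illustration only (not part of this merge): a minimal userspace sketch of
the ABI that keeps working. The file path and sizes below are made up and
error handling is omitted; the point is only that the remap_file_pages() call
still succeeds, now backed by several ordinary VMAs instead of one non-linear
mapping.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t page = 4096;                          /* assume 4K pages */
        int fd = open("/tmp/example-data", O_RDWR);  /* hypothetical file, >= 4 pages */
        char *win = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        /*
         * Show file page 3 at the first page of the window.  Before this
         * merge the kernel rearranged one non-linear VMA in place; after it,
         * the region is transparently split into ordinary small VMAs.
         */
        remap_file_pages(win, page, 0, 3, 0);

        munmap(win, 4 * page);
        close(fd);
        return 0;
    }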

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (78 commits)
  memcg: zap memcg_slab_caches and memcg_slab_mutex
  memcg: zap memcg_name argument of memcg_create_kmem_cache
  memcg: zap __memcg_{charge,uncharge}_slab
  mm/page_alloc.c: place zone_id check before VM_BUG_ON_PAGE check
  mm: hugetlb: fix type of hugetlb_treat_as_movable variable
  mm, hugetlb: remove unnecessary lower bound on sysctl handlers"?
  mm: memory: merge shared-writable dirtying branches in do_wp_page()
  mm: memory: remove ->vm_file check on shared writable vmas
  xtensa: drop _PAGE_FILE and pte_file()-related helpers
  x86: drop _PAGE_FILE and pte_file()-related helpers
  unicore32: drop pte_file()-related helpers
  um: drop _PAGE_FILE and pte_file()-related helpers
  tile: drop pte_file()-related helpers
  sparc: drop pte_file()-related helpers
  sh: drop _PAGE_FILE and pte_file()-related helpers
  score: drop _PAGE_FILE and pte_file()-related helpers
  s390: drop pte_file()-related helpers
  parisc: drop _PAGE_FILE and pte_file()-related helpers
  openrisc: drop _PAGE_FILE and pte_file()-related helpers
  nios2: drop _PAGE_FILE and pte_file()-related helpers
  ...
Linus Torvalds 2015-02-10 16:45:56 -08:00
Parents b2718bffb4 d5b3cf7139
Commit 992de5a8ec
142 changed files: 641 additions and 2182 deletions

@ -317,10 +317,10 @@ maps this page at its virtual address.
about doing this.
The idea is, first at flush_dcache_page() time, if
page->mapping->i_mmap is an empty tree and ->i_mmap_nonlinear
an empty list, just mark the architecture private page flag bit.
Later, in update_mmu_cache(), a check is made of this flag bit,
and if set the flush is done and the flag bit is cleared.
page->mapping->i_mmap is an empty tree, just mark the architecture
private page flag bit. Later, in update_mmu_cache(), a check is
made of this flag bit, and if set the flush is done and the flag
bit is cleared.
IMPORTANT NOTE: It is often important, if you defer the flush,
that the actual flush occurs on the same CPU
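
For illustration only (not from this patch), the deferred-flush scheme
described above usually takes roughly this shape in an architecture's code.
PG_arch_1 is the arch-private flag bit the text refers to; page_mapping(),
mapping_mapped() and the bit helpers are the generic kernel APIs, while
__flush_dcache_page() stands in for the arch-specific flush routine:

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    void flush_dcache_page(struct page *page)
    {
        struct address_space *mapping = page_mapping(page);

        /* No user mappings yet: defer, and just note that the page is dirty
         * in the D-cache by setting the arch-private flag bit. */
        if (mapping && !mapping_mapped(mapping))
            set_bit(PG_arch_1, &page->flags);
        else
            __flush_dcache_page(page);        /* arch-specific flush */
    }

    void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
                          pte_t *ptep)
    {
        struct page *page = pte_page(*ptep);

        /* The deferred flush happens here, on the CPU that is about to use
         * the mapping, and the flag bit is cleared again. */
        if (test_and_clear_bit(PG_arch_1, &page->flags))
            __flush_dcache_page(page);
    }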

@ -196,7 +196,8 @@ struct fiemap_extent_info {
};
It is intended that the file system should not need to access any of this
structure directly.
structure directly. Filesystem handlers should be tolerant to signals and return
EINTR once fatal signal received.
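
For illustration only (not from this patch), a sketch of how a ->fiemap
handler might honor that rule; myfs_fiemap() and the commented-out extent
walk are placeholders, while fiemap_check_flags(), fatal_signal_pending() and
fiemap_fill_next_extent() are the existing helpers:

    #include <linux/fs.h>
    #include <linux/fiemap.h>
    #include <linux/sched.h>

    static int myfs_fiemap(struct inode *inode,
                           struct fiemap_extent_info *fieinfo,
                           u64 start, u64 len)
    {
        int ret = fiemap_check_flags(fieinfo, FIEMAP_FLAG_SYNC);

        if (ret)
            return ret;

        for (;;) {
            /* A long extent walk should notice a fatal signal and give up
             * rather than keep the dying task busy. */
            if (fatal_signal_pending(current))
                return -EINTR;

            /* ... look up the next extent and report it, e.g.
             * ret = fiemap_fill_next_extent(fieinfo, logical, phys, size, flags);
             * stop when the last extent has been filled in ... */
            break;    /* placeholder: a real handler loops until done */
        }
        return 0;
    }
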
Flag checking should be done at the beginning of the ->fiemap callback via the

@ -4,201 +4,10 @@
Document started 15 Mar 2005 by Robert Love <rml@novell.com>
Document updated 4 Jan 2015 by Zhang Zhen <zhenzhang.zhang@huawei.com>
--Deleted obsoleted interface, just refer to manpages for user interface.
(i) User Interface
Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.
First step in using inotify is to initialise an inotify instance:
int fd = inotify_init ();
Each instance is associated with a unique, ordered queue.
Change events are managed by "watches". A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive. See <linux/inotify.h>
for valid events. A watch is referenced by a watch descriptor, or wd.
Watches are added via a path to the file.
Watches on a directory will return events on any files inside of the directory.
Adding a watch is simple:
int wd = inotify_add_watch (fd, path, mask);
Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).
You can update an existing watch in the same manner, by passing in a new mask.
An existing watch is removed via
int ret = inotify_rm_watch (fd, wd);
Events are provided in the form of an inotify_event structure that is read(2)
from a given inotify instance. The filename is of dynamic length and follows
the struct. It is of size len. The filename is padded with null bytes to
ensure proper alignment. This padding is reflected in len.
You can slurp multiple events by passing a large buffer, for example
size_t len = read (fd, buf, BUF_LEN);
Where "buf" is a pointer to an array of "inotify_event" structures at least
BUF_LEN bytes in size. The above example will return as many events as are
available and fit in BUF_LEN.
Each inotify instance fd is also select()- and poll()-able.
You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().
All watches are destroyed and cleaned up on close.
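
For illustration only (not part of the documentation text), a minimal watcher
that puts the calls above together; the watched path is just an example and
error handling is omitted:

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    #define BUF_LEN (1024 * (sizeof(struct inotify_event) + 16))

    int main(void)
    {
        char buf[BUF_LEN]
            __attribute__((aligned(__alignof__(struct inotify_event))));
        int fd = inotify_init();
        int wd = inotify_add_watch(fd, "/tmp", IN_CREATE | IN_DELETE);
        ssize_t len = read(fd, buf, BUF_LEN);  /* blocks until events arrive */
        ssize_t i = 0;

        while (i < len) {
            struct inotify_event *ev = (struct inotify_event *) &buf[i];

            printf("wd=%d mask=0x%x name=%s\n",
                   ev->wd, (unsigned) ev->mask, ev->len ? ev->name : "");
            i += sizeof(struct inotify_event) + ev->len;  /* len includes padding */
        }

        inotify_rm_watch(fd, wd);
        close(fd);    /* destroys any remaining watches */
        return 0;
    }
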
(ii) Prototypes:
int inotify_init (void);
int inotify_add_watch (int fd, const char *path, __u32 mask);
int inotify_rm_watch (int fd, __u32 mask);
(iii) Kernel Interface
Inotify's kernel API consists a set of functions for managing watches and an
event callback.
To use the kernel API, you must first initialize an inotify instance with a set
of inotify_operations. You are given an opaque inotify_handle, which you use
for any further calls to inotify.
struct inotify_handle *ih = inotify_init(my_event_handler);
You must provide a function for processing events and a function for destroying
the inotify watch.
void handle_event(struct inotify_watch *watch, u32 wd, u32 mask,
u32 cookie, const char *name, struct inode *inode)
watch - the pointer to the inotify_watch that triggered this call
wd - the watch descriptor
mask - describes the event that occurred
cookie - an identifier for synchronizing events
name - the dentry name for affected files in a directory-based event
inode - the affected inode in a directory-based event
void destroy_watch(struct inotify_watch *watch)
You may add watches by providing a pre-allocated and initialized inotify_watch
structure and specifying the inode to watch along with an inotify event mask.
You must pin the inode during the call. You will likely wish to embed the
inotify_watch structure in a structure of your own which contains other
information about the watch. Once you add an inotify watch, it is immediately
subject to removal depending on filesystem events. You must grab a reference if
you depend on the watch hanging around after the call.
inotify_init_watch(&my_watch->iwatch);
inotify_get_watch(&my_watch->iwatch); // optional
s32 wd = inotify_add_watch(ih, &my_watch->iwatch, inode, mask);
inotify_put_watch(&my_watch->iwatch); // optional
You may use the watch descriptor (wd) or the address of the inotify_watch for
other inotify operations. You must not directly read or manipulate data in the
inotify_watch. Additionally, you must not call inotify_add_watch() more than
once for a given inotify_watch structure, unless you have first called either
inotify_rm_watch() or inotify_rm_wd().
To determine if you have already registered a watch for a given inode, you may
call inotify_find_watch(), which gives you both the wd and the watch pointer for
the inotify_watch, or an error if the watch does not exist.
wd = inotify_find_watch(ih, inode, &watchp);
You may use container_of() on the watch pointer to access your own data
associated with a given watch. When an existing watch is found,
inotify_find_watch() bumps the refcount before releasing its locks. You must
put that reference with:
put_inotify_watch(watchp);
Call inotify_find_update_watch() to update the event mask for an existing watch.
inotify_find_update_watch() returns the wd of the updated watch, or an error if
the watch does not exist.
wd = inotify_find_update_watch(ih, inode, mask);
An existing watch may be removed by calling either inotify_rm_watch() or
inotify_rm_wd().
int ret = inotify_rm_watch(ih, &my_watch->iwatch);
int ret = inotify_rm_wd(ih, wd);
A watch may be removed while executing your event handler with the following:
inotify_remove_watch_locked(ih, iwatch);
Call inotify_destroy() to remove all watches from your inotify instance and
release it. If there are no outstanding references, inotify_destroy() will call
your destroy_watch op for each watch.
inotify_destroy(ih);
When inotify removes a watch, it sends an IN_IGNORED event to your callback.
You may use this event as an indication to free the watch memory. Note that
inotify may remove a watch due to filesystem events, as well as by your request.
If you use IN_ONESHOT, inotify will remove the watch after the first event, at
which point you may call the final inotify_put_watch.
(iv) Kernel Interface Prototypes
struct inotify_handle *inotify_init(struct inotify_operations *ops);
inotify_init_watch(struct inotify_watch *watch);
s32 inotify_add_watch(struct inotify_handle *ih,
struct inotify_watch *watch,
struct inode *inode, u32 mask);
s32 inotify_find_watch(struct inotify_handle *ih, struct inode *inode,
struct inotify_watch **watchp);
s32 inotify_find_update_watch(struct inotify_handle *ih,
struct inode *inode, u32 mask);
int inotify_rm_wd(struct inotify_handle *ih, u32 wd);
int inotify_rm_watch(struct inotify_handle *ih,
struct inotify_watch *watch);
void inotify_remove_watch_locked(struct inotify_handle *ih,
struct inotify_watch *watch);
void inotify_destroy(struct inotify_handle *ih);
void get_inotify_watch(struct inotify_watch *watch);
void put_inotify_watch(struct inotify_watch *watch);
(v) Internal Kernel Implementation
Each inotify instance is represented by an inotify_handle structure.
Inotify's userspace consumers also have an inotify_device which is
associated with the inotify_handle, and on which events are queued.
Each watch is associated with an inotify_watch structure. Watches are chained
off of each associated inotify_handle and each associated inode.
See fs/notify/inotify/inotify_fsnotify.c and fs/notify/inotify/inotify_user.c
for the locking and lifetime rules.
(vi) Rationale
(i) Rationale
Q: What is the design decision behind not tying the watch to the open fd of
the watched object?

@ -100,3 +100,7 @@ coherency=full (*) Disallow concurrent O_DIRECT writes, cluster inode
coherency=buffered Allow concurrent O_DIRECT writes without EX lock among
nodes, which gains high performance at risk of getting
stale data on other nodes.
journal_async_commit Commit block can be written to disk without waiting
for descriptor blocks. If enabled older kernels cannot
mount the device. This will enable 'journal_checksum'
internally.

@ -18,10 +18,9 @@ on 32-bit systems to map files bigger than can linearly fit into 32-bit
virtual address space. This use-case is not critical anymore since 64-bit
systems are widely available.
The plan is to deprecate the syscall and replace it with an emulation.
The emulation will create new VMAs instead of nonlinear mappings. It's
going to work slower for rare users of remap_file_pages() but ABI is
preserved.
The syscall is deprecated and replaced with an emulation now. The
emulation creates new VMAs instead of nonlinear mappings. It's going to
work slower for rare users of remap_file_pages() but ABI is preserved.
One side effect of emulation (apart from performance) is that user can hit
vm.max_map_count limit more easily due to additional VMAs. See comment for

@ -73,7 +73,6 @@ struct vm_area_struct;
/* .. and these are ours ... */
#define _PAGE_DIRTY 0x20000
#define _PAGE_ACCESSED 0x40000
#define _PAGE_FILE 0x80000 /* set:pagecache, unset:swap */
/*
* NOTE! The "accessed" bit isn't necessarily exact: it can be kept exactly
@ -268,7 +267,6 @@ extern inline void pgd_clear(pgd_t * pgdp) { pgd_val(*pgdp) = 0; }
extern inline int pte_write(pte_t pte) { return !(pte_val(pte) & _PAGE_FOW); }
extern inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
extern inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
extern inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
extern inline int pte_special(pte_t pte) { return 0; }
extern inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) |= _PAGE_FOW; return pte; }
@ -345,11 +343,6 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#define pte_to_pgoff(pte) (pte_val(pte) >> 32)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 32) | _PAGE_FILE })
#define PTE_FILE_MAX_BITS 32
#ifndef CONFIG_DISCONTIGMEM
#define kern_addr_valid(addr) (1)
#endif

@ -61,7 +61,6 @@
#define _PAGE_WRITE (1<<4) /* Page has user write perm (H) */
#define _PAGE_READ (1<<5) /* Page has user read perm (H) */
#define _PAGE_MODIFIED (1<<6) /* Page modified (dirty) (S) */
#define _PAGE_FILE (1<<7) /* page cache/ swap (S) */
#define _PAGE_GLOBAL (1<<8) /* Page is global (H) */
#define _PAGE_PRESENT (1<<10) /* TLB entry is valid (H) */
@ -73,7 +72,6 @@
#define _PAGE_READ (1<<3) /* Page has user read perm (H) */
#define _PAGE_ACCESSED (1<<4) /* Page is accessed (S) */
#define _PAGE_MODIFIED (1<<5) /* Page modified (dirty) (S) */
#define _PAGE_FILE (1<<6) /* page cache/ swap (S) */
#define _PAGE_GLOBAL (1<<8) /* Page is global (H) */
#define _PAGE_PRESENT (1<<9) /* TLB entry is valid (H) */
#define _PAGE_SHARED_CODE (1<<11) /* Shared Code page with cmn vaddr
@ -268,15 +266,6 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
pte; \
})
/* TBD: Non linear mapping stuff */
static inline int pte_file(pte_t pte)
{
return pte_val(pte) & _PAGE_FILE;
}
#define PTE_FILE_MAX_BITS 30
#define pgoff_to_pte(x) __pte(x)
#define pte_to_pgoff(x) (pte_val(x) >> 2)
#define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT)
#define pfn_pte(pfn, prot) (__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot)))
#define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
@ -364,7 +353,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
/* Encode swap {type,off} tuple into PTE
* We reserve 13 bits for 5-bit @type, keeping bits 12-5 zero, ensuring that
* both PAGE_FILE and PAGE_PRESENT are zero in a PTE holding swap "identifier"
* PAGE_PRESENT is zero in a PTE holding swap "identifier"
*/
#define __swp_entry(type, off) ((swp_entry_t) { \
((type) & 0x1f) | ((off) << 13) })

@ -118,7 +118,6 @@
#define L_PTE_VALID (_AT(pteval_t, 1) << 0) /* Valid */
#define L_PTE_PRESENT (_AT(pteval_t, 1) << 0)
#define L_PTE_YOUNG (_AT(pteval_t, 1) << 1)
#define L_PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !PRESENT */
#define L_PTE_DIRTY (_AT(pteval_t, 1) << 6)
#define L_PTE_RDONLY (_AT(pteval_t, 1) << 7)
#define L_PTE_USER (_AT(pteval_t, 1) << 8)

@ -77,7 +77,6 @@
*/
#define L_PTE_VALID (_AT(pteval_t, 1) << 0) /* Valid */
#define L_PTE_PRESENT (_AT(pteval_t, 3) << 0) /* Present */
#define L_PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !PRESENT */
#define L_PTE_USER (_AT(pteval_t, 1) << 6) /* AP[1] */
#define L_PTE_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */
#define L_PTE_YOUNG (_AT(pteval_t, 1) << 10) /* AF */

@ -54,8 +54,6 @@
typedef pte_t *pte_addr_t;
static inline int pte_file(pte_t pte) { return 0; }
/*
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc..

@ -318,12 +318,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
*
* 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
* 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
* <--------------- offset ----------------------> < type -> 0 0 0
* <--------------- offset ------------------------> < type -> 0 0
*
* This gives us up to 31 swap files and 64GB per swap file. Note that
* This gives us up to 31 swap files and 128GB per swap file. Note that
* the offset field is always non-zero.
*/
#define __SWP_TYPE_SHIFT 3
#define __SWP_TYPE_SHIFT 2
#define __SWP_TYPE_BITS 5
#define __SWP_TYPE_MASK ((1 << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
@ -342,20 +342,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
*/
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
/*
* Encode and decode a file entry. File entries are stored in the Linux
* page tables as follows:
*
* 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
* 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
* <----------------------- offset ------------------------> 1 0 0
*/
#define pte_file(pte) (pte_val(pte) & L_PTE_FILE)
#define pte_to_pgoff(x) (pte_val(x) >> 3)
#define pgoff_to_pte(x) __pte(((x) << 3) | L_PTE_FILE)
#define PTE_FILE_MAX_BITS 29
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
/* FIXME: this is not correct */
#define kern_addr_valid(addr) (1)

@ -98,7 +98,7 @@
#endif
#if !defined (CONFIG_ARM_LPAE) && \
(L_PTE_XN+L_PTE_USER+L_PTE_RDONLY+L_PTE_DIRTY+L_PTE_YOUNG+\
L_PTE_FILE+L_PTE_PRESENT) > L_PTE_SHARED
L_PTE_PRESENT) > L_PTE_SHARED
#error Invalid Linux PTE bit settings
#endif
#endif /* CONFIG_MMU */

@ -25,7 +25,6 @@
* Software defined PTE bits definition.
*/
#define PTE_VALID (_AT(pteval_t, 1) << 0)
#define PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !pte_present() */
#define PTE_DIRTY (_AT(pteval_t, 1) << 55)
#define PTE_SPECIAL (_AT(pteval_t, 1) << 56)
#define PTE_WRITE (_AT(pteval_t, 1) << 57)
@ -469,13 +468,12 @@ extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
/*
* Encode and decode a swap entry:
* bits 0-1: present (must be zero)
* bit 2: PTE_FILE
* bits 3-8: swap type
* bits 9-57: swap offset
* bits 2-7: swap type
* bits 8-57: swap offset
*/
#define __SWP_TYPE_SHIFT 3
#define __SWP_TYPE_SHIFT 2
#define __SWP_TYPE_BITS 6
#define __SWP_OFFSET_BITS 49
#define __SWP_OFFSET_BITS 50
#define __SWP_TYPE_MASK ((1 << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
#define __SWP_OFFSET_MASK ((1UL << __SWP_OFFSET_BITS) - 1)
@ -493,18 +491,6 @@ extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
*/
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
/*
* Encode and decode a file entry:
* bits 0-1: present (must be zero)
* bit 2: PTE_FILE
* bits 3-57: file offset / PAGE_SIZE
*/
#define pte_file(pte) (pte_val(pte) & PTE_FILE)
#define pte_to_pgoff(x) (pte_val(x) >> 3)
#define pgoff_to_pte(x) __pte(((x) << 3) | PTE_FILE)
#define PTE_FILE_MAX_BITS 55
extern int kern_addr_valid(unsigned long addr);
#include <asm-generic/pgtable.h>

@ -86,9 +86,6 @@ extern struct page *empty_zero_page;
#define _PAGE_BIT_PRESENT 10
#define _PAGE_BIT_ACCESSED 11 /* software: page was accessed */
/* The following flags are only valid when !PRESENT */
#define _PAGE_BIT_FILE 0 /* software: pagecache or swap? */
#define _PAGE_WT (1 << _PAGE_BIT_WT)
#define _PAGE_DIRTY (1 << _PAGE_BIT_DIRTY)
#define _PAGE_EXECUTE (1 << _PAGE_BIT_EXECUTE)
@ -101,7 +98,6 @@ extern struct page *empty_zero_page;
/* Software flags */
#define _PAGE_ACCESSED (1 << _PAGE_BIT_ACCESSED)
#define _PAGE_PRESENT (1 << _PAGE_BIT_PRESENT)
#define _PAGE_FILE (1 << _PAGE_BIT_FILE)
/*
* Page types, i.e. sizes. _PAGE_TYPE_NONE corresponds to what is
@ -210,14 +206,6 @@ static inline int pte_special(pte_t pte)
return 0;
}
/*
* The following only work if pte_present() is not true.
*/
static inline int pte_file(pte_t pte)
{
return pte_val(pte) & _PAGE_FILE;
}
/* Mutator functions for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
@ -329,7 +317,6 @@ extern void update_mmu_cache(struct vm_area_struct * vma,
* Encode and decode a swap entry
*
* Constraints:
* _PAGE_FILE at bit 0
* _PAGE_TYPE_* at bits 2-3 (for emulating _PAGE_PROTNONE)
* _PAGE_PRESENT at bit 10
*
@ -346,18 +333,6 @@ extern void update_mmu_cache(struct vm_area_struct * vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/*
* Encode and decode a nonlinear file mapping entry. We have to
* preserve _PAGE_FILE and _PAGE_PRESENT here. _PAGE_TYPE_* isn't
* necessary, since _PAGE_FILE implies !_PAGE_PROTNONE (?)
*/
#define PTE_FILE_MAX_BITS 30
#define pte_to_pgoff(pte) (((pte_val(pte) >> 1) & 0x1ff) \
| ((pte_val(pte) >> 11) << 9))
#define pgoff_to_pte(off) ((pte_t) { ((((off) & 0x1ff) << 1) \
| (((off) >> 9) << 11) \
| _PAGE_FILE) })
typedef pte_t *pte_addr_t;
#define kern_addr_valid(addr) (1)

@ -45,11 +45,6 @@ extern void paging_init(void);
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_file(pte_t pte)
{
return 0;
}
#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)

@ -50,11 +50,6 @@ extern void paging_init(void);
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_file(pte_t pte)
{
return 0;
}
#define set_pte(pteptr, pteval) (*(pteptr) = pteval)
#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)

@ -58,7 +58,6 @@ typedef struct
/* Bits the HW doesn't care about but the kernel uses them in SW */
#define _PAGE_PRESENT (1<<4) /* page present in memory */
#define _PAGE_FILE (1<<5) /* set: pagecache, unset: swap (when !PRESENT) */
#define _PAGE_ACCESSED (1<<5) /* simulated in software using valid bit */
#define _PAGE_MODIFIED (1<<6) /* simulated in software using we bit */
#define _PAGE_READ (1<<7) /* read-enabled */
@ -105,6 +104,4 @@ typedef struct
#define __S110 PAGE_SHARED
#define __S111 PAGE_SHARED
#define PTE_FILE_MAX_BITS 26
#endif

@ -53,7 +53,6 @@ typedef struct
* software.
*/
#define _PAGE_PRESENT (1 << 5) /* Page is present in memory. */
#define _PAGE_FILE (1 << 6) /* 1=pagecache, 0=swap (when !present) */
#define _PAGE_ACCESSED (1 << 6) /* Simulated in software using valid bit. */
#define _PAGE_MODIFIED (1 << 7) /* Simulated in software using we bit. */
#define _PAGE_READ (1 << 8) /* Read enabled. */
@ -108,6 +107,4 @@ typedef struct
#define __S110 PAGE_SHARED_EXEC
#define __S111 PAGE_SHARED_EXEC
#define PTE_FILE_MAX_BITS 25
#endif /* _ASM_CRIS_ARCH_MMU_H */

@ -114,7 +114,6 @@ extern unsigned long empty_zero_page;
static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_MODIFIED; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_wrprotect(pte_t pte)
@ -290,9 +289,6 @@ static inline void update_mmu_cache(struct vm_area_struct * vma,
*/
#define pgtable_cache_init() do { } while (0)
#define pte_to_pgoff(x) (pte_val(x) >> 6)
#define pgoff_to_pte(x) __pte(((x) << 6) | _PAGE_FILE)
typedef pte_t *pte_addr_t;
#endif /* __ASSEMBLY__ */

@ -62,10 +62,6 @@ typedef pte_t *pte_addr_t;
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#ifndef __ASSEMBLY__
static inline int pte_file(pte_t pte) { return 0; }
#endif
#define ZERO_PAGE(vaddr) ({ BUG(); NULL; })
#define swapper_pg_dir ((pgd_t *) NULL)
@ -298,7 +294,6 @@ static inline pmd_t *pmd_offset(pud_t *dir, unsigned long address)
#define _PAGE_RESERVED_MASK (xAMPRx_RESERVED8 | xAMPRx_RESERVED13)
#define _PAGE_FILE 0x002 /* set:pagecache unset:swap */
#define _PAGE_PROTNONE 0x000 /* If not present */
#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
@ -463,27 +458,15 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
* Handle swap and file entries
* - the PTE is encoded in the following format:
* bit 0: Must be 0 (!_PAGE_PRESENT)
* bit 1: Type: 0 for swap, 1 for file (_PAGE_FILE)
* bits 2-7: Swap type
* bits 8-31: Swap offset
* bits 2-31: File pgoff
* bits 1-6: Swap type
* bits 7-31: Swap offset
*/
#define __swp_type(x) (((x).val >> 2) & 0x1f)
#define __swp_offset(x) ((x).val >> 8)
#define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 2) | ((offset) << 8) })
#define __swp_type(x) (((x).val >> 1) & 0x1f)
#define __swp_offset(x) ((x).val >> 7)
#define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 1) | ((offset) << 7) })
#define __pte_to_swp_entry(_pte) ((swp_entry_t) { (_pte).pte })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_file(pte_t pte)
{
return pte.pte & _PAGE_FILE;
}
#define PTE_FILE_MAX_BITS 29
#define pte_to_pgoff(PTE) ((PTE).pte >> 2)
#define pgoff_to_pte(off) __pte((off) << 2 | _PAGE_FILE)
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define PageSkip(page) (0)
#define kern_addr_valid(addr) (1)

@ -61,13 +61,6 @@ extern unsigned long zero_page_mask;
#define _PAGE_DIRTY (1<<1)
#define _PAGE_ACCESSED (1<<2)
/*
* _PAGE_FILE is only meaningful if _PAGE_PRESENT is false, while
* _PAGE_DIRTY is only meaningful if _PAGE_PRESENT is true.
* So we can overload the bit...
*/
#define _PAGE_FILE _PAGE_DIRTY /* set: pagecache, unset = swap */
/*
* For now, let's say that Valid and Present are the same thing.
* Alternatively, we could say that it's the "or" of R, W, and X
@ -456,57 +449,36 @@ static inline int pte_exec(pte_t pte)
#define pgtable_cache_init() do { } while (0)
/*
* Swap/file PTE definitions. If _PAGE_PRESENT is zero, the rest of the
* PTE is interpreted as swap information. Depending on the _PAGE_FILE
bit, the remaining free bits are either interpreted as a file offset
* or a swap type/offset tuple. Rather than have the TLB fill handler
* test _PAGE_PRESENT, we're going to reserve the permissions bits
* and set them to all zeros for swap entries, which speeds up the
* miss handler at the cost of 3 bits of offset. That trade-off can
* be revisited if necessary, but Hexagon processor architecture and
* target applications suggest a lot of TLB misses and not much swap space.
* Swap/file PTE definitions. If _PAGE_PRESENT is zero, the rest of the PTE is
* interpreted as swap information. The remaining free bits are interpreted as
* swap type/offset tuple. Rather than have the TLB fill handler test
* _PAGE_PRESENT, we're going to reserve the permissions bits and set them to
* all zeros for swap entries, which speeds up the miss handler at the cost of
* 3 bits of offset. That trade-off can be revisited if necessary, but Hexagon
* processor architecture and target applications suggest a lot of TLB misses
* and not much swap space.
*
* Format of swap PTE:
* bit 0: Present (zero)
* bit 1: _PAGE_FILE (zero)
* bits 2-6: swap type (arch independent layer uses 5 bits max)
* bits 7-9: bits 2:0 of offset
* bits 10-12: effectively _PAGE_PROTNONE (all zero)
* bits 13-31: bits 21:3 of swap offset
*
* Format of file PTE:
* bit 0: Present (zero)
* bit 1: _PAGE_FILE (zero)
* bits 2-9: bits 7:0 of offset
* bits 10-12: effectively _PAGE_PROTNONE (all zero)
* bits 13-31: bits 26:8 of swap offset
* bits 1-5: swap type (arch independent layer uses 5 bits max)
* bits 6-9: bits 3:0 of offset
* bits 10-12: effectively _PAGE_PROTNONE (all zero)
* bits 13-31: bits 22:4 of swap offset
*
* The split offset makes some of the following macros a little gnarly,
* but there's plenty of precedent for this sort of thing.
*/
#define PTE_FILE_MAX_BITS 27
/* Used for swap PTEs */
#define __swp_type(swp_pte) (((swp_pte).val >> 2) & 0x1f)
#define __swp_type(swp_pte) (((swp_pte).val >> 1) & 0x1f)
#define __swp_offset(swp_pte) \
((((swp_pte).val >> 7) & 0x7) | (((swp_pte).val >> 10) & 0x003ffff8))
((((swp_pte).val >> 6) & 0xf) | (((swp_pte).val >> 9) & 0x7ffff0))
#define __swp_entry(type, offset) \
((swp_entry_t) { \
((type << 2) | \
((offset & 0x3ffff8) << 10) | ((offset & 0x7) << 7)) })
/* Used for file PTEs */
#define pte_file(pte) \
((pte_val(pte) & (_PAGE_FILE | _PAGE_PRESENT)) == _PAGE_FILE)
#define pte_to_pgoff(pte) \
(((pte_val(pte) >> 2) & 0xff) | ((pte_val(pte) >> 5) & 0x07ffff00))
#define pgoff_to_pte(off) \
((pte_t) { ((((off) & 0x7ffff00) << 5) | (((off) & 0xff) << 2)\
| _PAGE_FILE) })
((type << 1) | \
((offset & 0x7ffff0) << 9) | ((offset & 0xf) << 6)) })
/* Oh boy. There are a lot of possible arch overrides found in this file. */
#include <asm-generic/pgtable.h>

@ -57,9 +57,6 @@
#define _PAGE_ED (__IA64_UL(1) << 52) /* exception deferral */
#define _PAGE_PROTNONE (__IA64_UL(1) << 63)
/* Valid only for a PTE with the present bit cleared: */
#define _PAGE_FILE (1 << 1) /* see swap & file pte remarks below */
#define _PFN_MASK _PAGE_PPN_MASK
/* Mask of bits which may be changed by pte_modify(); the odd bits are there for _PAGE_PROTNONE */
#define _PAGE_CHG_MASK (_PAGE_P | _PAGE_PROTNONE | _PAGE_PL_MASK | _PAGE_AR_MASK | _PAGE_ED)
@ -300,7 +297,6 @@ extern unsigned long VMALLOC_END;
#define pte_exec(pte) ((pte_val(pte) & _PAGE_AR_RX) != 0)
#define pte_dirty(pte) ((pte_val(pte) & _PAGE_D) != 0)
#define pte_young(pte) ((pte_val(pte) & _PAGE_A) != 0)
#define pte_file(pte) ((pte_val(pte) & _PAGE_FILE) != 0)
#define pte_special(pte) 0
/*
@ -472,27 +468,16 @@ extern void paging_init (void);
*
* Format of swap pte:
* bit 0 : present bit (must be zero)
* bit 1 : _PAGE_FILE (must be zero)
* bits 2- 8: swap-type
* bits 9-62: swap offset
* bit 63 : _PAGE_PROTNONE bit
*
* Format of file pte:
* bit 0 : present bit (must be zero)
* bit 1 : _PAGE_FILE (must be one)
* bits 2-62: file_offset/PAGE_SIZE
* bits 1- 7: swap-type
* bits 8-62: swap offset
* bit 63 : _PAGE_PROTNONE bit
*/
#define __swp_type(entry) (((entry).val >> 2) & 0x7f)
#define __swp_offset(entry) (((entry).val << 1) >> 10)
#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << 2) | ((long) (offset) << 9) })
#define __swp_type(entry) (((entry).val >> 1) & 0x7f)
#define __swp_offset(entry) (((entry).val << 1) >> 9)
#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << 1) | ((long) (offset) << 8) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#define PTE_FILE_MAX_BITS 61
#define pte_to_pgoff(pte) ((pte_val(pte) << 1) >> 3)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 2) | _PAGE_FILE })
/*
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc..

@ -70,9 +70,5 @@ static inline pmd_t *pmd_offset(pgd_t * dir, unsigned long address)
#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
#define PTE_FILE_MAX_BITS 29
#define pte_to_pgoff(pte) (((pte_val(pte) >> 2) & 0x7f) | (((pte_val(pte) >> 10)) << 7))
#define pgoff_to_pte(off) ((pte_t) { (((off) & 0x7f) << 2) | (((off) >> 7) << 10) | _PAGE_FILE })
#endif /* __KERNEL__ */
#endif /* _ASM_M32R_PGTABLE_2LEVEL_H */

@ -80,8 +80,6 @@ extern unsigned long empty_zero_page[1024];
*/
#define _PAGE_BIT_DIRTY 0 /* software: page changed */
#define _PAGE_BIT_FILE 0 /* when !present: nonlinear file
mapping */
#define _PAGE_BIT_PRESENT 1 /* Valid: page is valid */
#define _PAGE_BIT_GLOBAL 2 /* Global */
#define _PAGE_BIT_LARGE 3 /* Large */
@ -93,7 +91,6 @@ extern unsigned long empty_zero_page[1024];
#define _PAGE_BIT_PROTNONE 9 /* software: if not present */
#define _PAGE_DIRTY (1UL << _PAGE_BIT_DIRTY)
#define _PAGE_FILE (1UL << _PAGE_BIT_FILE)
#define _PAGE_PRESENT (1UL << _PAGE_BIT_PRESENT)
#define _PAGE_GLOBAL (1UL << _PAGE_BIT_GLOBAL)
#define _PAGE_LARGE (1UL << _PAGE_BIT_LARGE)
@ -206,14 +203,6 @@ static inline int pte_write(pte_t pte)
return pte_val(pte) & _PAGE_WRITE;
}
/*
* The following only works if pte_present() is not true.
*/
static inline int pte_file(pte_t pte)
{
return pte_val(pte) & _PAGE_FILE;
}
static inline int pte_special(pte_t pte)
{
return 0;

@ -35,7 +35,6 @@
* hitting hardware.
*/
#define CF_PAGE_DIRTY 0x00000001
#define CF_PAGE_FILE 0x00000200
#define CF_PAGE_ACCESSED 0x00001000
#define _PAGE_CACHE040 0x020 /* 68040 cache mode, cachable, copyback */
@ -243,11 +242,6 @@ static inline int pte_young(pte_t pte)
return pte_val(pte) & CF_PAGE_ACCESSED;
}
static inline int pte_file(pte_t pte)
{
return pte_val(pte) & CF_PAGE_FILE;
}
static inline int pte_special(pte_t pte)
{
return 0;
@ -391,26 +385,13 @@ static inline void cache_page(void *vaddr)
*ptep = pte_mkcache(*ptep);
}
#define PTE_FILE_MAX_BITS 21
#define PTE_FILE_SHIFT 11
static inline unsigned long pte_to_pgoff(pte_t pte)
{
return pte_val(pte) >> PTE_FILE_SHIFT;
}
static inline pte_t pgoff_to_pte(unsigned pgoff)
{
return __pte((pgoff << PTE_FILE_SHIFT) + CF_PAGE_FILE);
}
/*
* Encode and de-code a swap entry (must be !pte_none(e) && !pte_present(e))
*/
#define __swp_type(x) ((x).val & 0xFF)
#define __swp_offset(x) ((x).val >> PTE_FILE_SHIFT)
#define __swp_offset(x) ((x).val >> 11)
#define __swp_entry(typ, off) ((swp_entry_t) { (typ) | \
(off << PTE_FILE_SHIFT) })
(off << 11) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) (__pte((x).val))

@ -28,7 +28,6 @@
#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_NOCACHE)
#define _PAGE_PROTNONE 0x004
#define _PAGE_FILE 0x008 /* pagecache or swap? */
#ifndef __ASSEMBLY__
@ -168,7 +167,6 @@ static inline void pgd_set(pgd_t *pgdp, pmd_t *pmdp)
static inline int pte_write(pte_t pte) { return !(pte_val(pte) & _PAGE_RONLY); }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) |= _PAGE_RONLY; return pte; }
@ -266,19 +264,6 @@ static inline void cache_page(void *vaddr)
}
}
#define PTE_FILE_MAX_BITS 28
static inline unsigned long pte_to_pgoff(pte_t pte)
{
return pte.pte >> 4;
}
static inline pte_t pgoff_to_pte(unsigned off)
{
pte_t pte = { (off << 4) + _PAGE_FILE };
return pte;
}
/* Encode and de-code a swap entry (must be !pte_none(e) && !pte_present(e)) */
#define __swp_type(x) (((x).val >> 4) & 0xff)
#define __swp_offset(x) ((x).val >> 12)

@ -37,8 +37,6 @@ extern void paging_init(void);
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
static inline int pte_file(pte_t pte) { return 0; }
/*
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc..

@ -38,8 +38,6 @@
#define _PAGE_PRESENT (SUN3_PAGE_VALID)
#define _PAGE_ACCESSED (SUN3_PAGE_ACCESSED)
#define PTE_FILE_MAX_BITS 28
/* Compound page protection values. */
//todo: work out which ones *should* have SUN3_PAGE_NOCACHE and fix...
// is it just PAGE_KERNEL and PAGE_SHARED?
@ -168,7 +166,6 @@ static inline void pgd_clear (pgd_t *pgdp) {}
static inline int pte_write(pte_t pte) { return pte_val(pte) & SUN3_PAGE_WRITEABLE; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & SUN3_PAGE_MODIFIED; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & SUN3_PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & SUN3_PAGE_ACCESSED; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) &= ~SUN3_PAGE_WRITEABLE; return pte; }
@ -202,18 +199,6 @@ static inline pmd_t *pmd_offset (pgd_t *pgd, unsigned long address)
return (pmd_t *) pgd;
}
static inline unsigned long pte_to_pgoff(pte_t pte)
{
return pte.pte & SUN3_PAGE_PGNUM_MASK;
}
static inline pte_t pgoff_to_pte(unsigned off)
{
pte_t pte = { off + SUN3_PAGE_ACCESSED };
return pte;
}
/* Find an entry in the third-level pagetable. */
#define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE-1))
#define pte_offset_kernel(pmd, address) ((pte_t *) __pmd_page(*pmd) + pte_index(address))

@ -47,7 +47,6 @@
*/
#define _PAGE_ACCESSED _PAGE_ALWAYS_ZERO_1
#define _PAGE_DIRTY _PAGE_ALWAYS_ZERO_2
#define _PAGE_FILE _PAGE_ALWAYS_ZERO_3
/* Pages owned, and protected by, the kernel. */
#define _PAGE_KERNEL _PAGE_PRIV
@ -219,7 +218,6 @@ extern unsigned long empty_zero_page;
static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_wrprotect(pte_t pte) { pte_val(pte) &= (~_PAGE_WRITE); return pte; }
@ -327,10 +325,6 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#define PTE_FILE_MAX_BITS 22
#define pte_to_pgoff(x) (pte_val(x) >> 10)
#define pgoff_to_pte(x) __pte(((x) << 10) | _PAGE_FILE)
#define kern_addr_valid(addr) (1)
/*

@ -40,10 +40,6 @@ extern int mem_init_done;
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#ifndef __ASSEMBLY__
static inline int pte_file(pte_t pte) { return 0; }
#endif /* __ASSEMBLY__ */
#define ZERO_PAGE(vaddr) ({ BUG(); NULL; })
#define swapper_pg_dir ((pgd_t *) NULL)
@ -207,7 +203,6 @@ static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
/* Definitions for MicroBlaze. */
#define _PAGE_GUARDED 0x001 /* G: page is guarded from prefetch */
#define _PAGE_FILE 0x001 /* when !present: nonlinear file mapping */
#define _PAGE_PRESENT 0x002 /* software: PTE contains a translation */
#define _PAGE_NO_CACHE 0x004 /* I: caching is inhibited */
#define _PAGE_WRITETHRU 0x008 /* W: caching is write-through */
@ -337,7 +332,6 @@ static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_RW; }
static inline int pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_EXEC; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline void pte_uncache(pte_t pte) { pte_val(pte) |= _PAGE_NO_CACHE; }
static inline void pte_cache(pte_t pte) { pte_val(pte) &= ~_PAGE_NO_CACHE; }
@ -499,11 +493,6 @@ static inline pmd_t *pmd_offset(pgd_t *dir, unsigned long address)
#define pte_unmap(pte) kunmap_atomic(pte)
/* Encode and decode a nonlinear file mapping entry */
#define PTE_FILE_MAX_BITS 29
#define pte_to_pgoff(pte) (pte_val(pte) >> 3)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 3) | _PAGE_FILE })
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
/*

@ -161,22 +161,6 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/*
* Encode and decode a nonlinear file mapping entry
*/
#define pte_to_pgoff(_pte) ((((_pte).pte >> 1 ) & 0x07) | \
(((_pte).pte >> 2 ) & 0x38) | \
(((_pte).pte >> 10) << 6 ))
#define pgoff_to_pte(off) ((pte_t) { (((off) & 0x07) << 1 ) | \
(((off) & 0x38) << 2 ) | \
(((off) >> 6 ) << 10) | \
_PAGE_FILE })
/*
* Bits 0, 4, 8, and 9 are taken, split up 28 bits of offset into this range:
*/
#define PTE_FILE_MAX_BITS 28
#else
#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
@ -188,13 +172,6 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_high })
#define __swp_entry_to_pte(x) ((pte_t) { 0, (x).val })
/*
* Bits 0 and 1 of pte_high are taken, use the rest for the page offset...
*/
#define pte_to_pgoff(_pte) ((_pte).pte_high >> 2)
#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) << 2 })
#define PTE_FILE_MAX_BITS 30
#else
/*
* Constraints:
@ -209,19 +186,6 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/*
* Encode and decode a nonlinear file mapping entry
*/
#define pte_to_pgoff(_pte) ((((_pte).pte >> 1) & 0x7) | \
(((_pte).pte >> 2) & 0x8) | \
(((_pte).pte >> 8) << 4))
#define pgoff_to_pte(off) ((pte_t) { (((off) & 0x7) << 1) | \
(((off) & 0x8) << 2) | \
(((off) >> 4) << 8) | \
_PAGE_FILE })
#define PTE_FILE_MAX_BITS 28
#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
#endif /* defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX) */

@ -291,13 +291,4 @@ static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/*
* Bits 0, 4, 6, and 7 are taken. Let's leave bits 1, 2, 3, and 5 alone to
* make things easier, and only use the upper 56 bits for the page offset...
*/
#define PTE_FILE_MAX_BITS 56
#define pte_to_pgoff(_pte) ((_pte).pte >> 8)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 8) | _PAGE_FILE })
#endif /* _ASM_PGTABLE_64_H */

@ -48,8 +48,6 @@
/*
* The following bits are implemented in software
*
* _PAGE_FILE semantics: set:pagecache unset:swap
*/
#define _PAGE_PRESENT_SHIFT (_CACHE_SHIFT + 3)
#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
@ -64,7 +62,6 @@
#define _PAGE_SILENT_READ _PAGE_VALID
#define _PAGE_SILENT_WRITE _PAGE_DIRTY
#define _PAGE_FILE _PAGE_MODIFIED
#define _PFN_SHIFT (PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
@ -72,8 +69,6 @@
/*
* The following are implemented by software
*
* _PAGE_FILE semantics: set:pagecache unset:swap
*/
#define _PAGE_PRESENT_SHIFT 0
#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
@ -85,8 +80,6 @@
#define _PAGE_ACCESSED (1 << _PAGE_ACCESSED_SHIFT)
#define _PAGE_MODIFIED_SHIFT 4
#define _PAGE_MODIFIED (1 << _PAGE_MODIFIED_SHIFT)
#define _PAGE_FILE_SHIFT 4
#define _PAGE_FILE (1 << _PAGE_FILE_SHIFT)
/*
* And these are the hardware TLB bits
@ -116,7 +109,6 @@
* The following bits are implemented in software
*
* _PAGE_READ / _PAGE_READ_SHIFT should be unused if cpu_has_rixi.
* _PAGE_FILE semantics: set:pagecache unset:swap
*/
#define _PAGE_PRESENT_SHIFT (0)
#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
@ -128,7 +120,6 @@
#define _PAGE_ACCESSED (1 << _PAGE_ACCESSED_SHIFT)
#define _PAGE_MODIFIED_SHIFT (_PAGE_ACCESSED_SHIFT + 1)
#define _PAGE_MODIFIED (1 << _PAGE_MODIFIED_SHIFT)
#define _PAGE_FILE (_PAGE_MODIFIED)
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
/* huge tlb page */

@ -231,7 +231,6 @@ extern pgd_t swapper_pg_dir[];
static inline int pte_write(pte_t pte) { return pte.pte_low & _PAGE_WRITE; }
static inline int pte_dirty(pte_t pte) { return pte.pte_low & _PAGE_MODIFIED; }
static inline int pte_young(pte_t pte) { return pte.pte_low & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte.pte_low & _PAGE_FILE; }
static inline pte_t pte_wrprotect(pte_t pte)
{
@ -287,7 +286,6 @@ static inline pte_t pte_mkyoung(pte_t pte)
static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_MODIFIED; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline pte_t pte_wrprotect(pte_t pte)
{

@ -134,7 +134,6 @@ extern pte_t kernel_vmalloc_ptes[(VMALLOC_END - VMALLOC_START) / PAGE_SIZE];
#define _PAGE_NX 0 /* no-execute bit */
/* If _PAGE_VALID is clear, we use these: */
#define _PAGE_FILE xPTEL2_C /* set:pagecache unset:swap */
#define _PAGE_PROTNONE 0x000 /* If not present */
#define __PAGE_PROT_UWAUX 0x010
@ -241,11 +240,6 @@ static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_write(pte_t pte) { return pte_val(pte) & __PAGE_PROT_WRITE; }
static inline int pte_special(pte_t pte){ return 0; }
/*
* The following only works if pte_present() is not true.
*/
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline pte_t pte_rdprotect(pte_t pte)
{
pte_val(pte) &= ~(__PAGE_PROT_USER|__PAGE_PROT_UWAUX); return pte;
@ -338,16 +332,11 @@ static inline int pte_exec_kernel(pte_t pte)
return 1;
}
#define PTE_FILE_MAX_BITS 30
#define pte_to_pgoff(pte) (pte_val(pte) >> 2)
#define pgoff_to_pte(off) __pte((off) << 2 | _PAGE_FILE)
/* Encode and de-code a swap entry */
#define __swp_type(x) (((x).val >> 2) & 0x3f)
#define __swp_offset(x) ((x).val >> 8)
#define __swp_type(x) (((x).val >> 1) & 0x3f)
#define __swp_offset(x) ((x).val >> 7)
#define __swp_entry(type, offset) \
((swp_entry_t) { ((type) << 2) | ((offset) << 8) })
((swp_entry_t) { ((type) << 1) | ((offset) << 7) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) __pte((x).val)

@ -30,6 +30,5 @@
#define _PAGE_PRESENT (1<<25) /* PTE contains a translation */
#define _PAGE_ACCESSED (1<<26) /* page referenced */
#define _PAGE_DIRTY (1<<27) /* dirty page */
#define _PAGE_FILE (1<<28) /* PTE used for file mapping or swap */
#endif /* _ASM_NIOS2_PGTABLE_BITS_H */

@ -112,8 +112,6 @@ static inline int pte_dirty(pte_t pte) \
{ return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) \
{ return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) \
{ return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
#define pgprot_noncached pgprot_noncached
@ -272,8 +270,7 @@ static inline void pte_clear(struct mm_struct *mm,
__FILE__, __LINE__, pgd_val(e))
/*
* Encode and decode a swap entry (must be !pte_none(pte) && !pte_present(pte)
* && !pte_file(pte)):
* Encode and decode a swap entry (must be !pte_none(pte) && !pte_present(pte):
*
* 31 30 29 28 27 26 25 24 23 22 21 20 19 18 ... 1 0
* 0 0 0 0 type. 0 0 0 0 0 0 offset.........
@ -290,11 +287,6 @@ static inline void pte_clear(struct mm_struct *mm,
#define __swp_entry_to_pte(swp) ((pte_t) { (swp).val })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
/* Encode and decode a nonlinear file mapping entry */
#define PTE_FILE_MAX_BITS 25
#define pte_to_pgoff(pte) (pte_val(pte) & 0x1ffffff)
#define pgoff_to_pte(off) __pte(((off) & 0x1ffffff) | _PAGE_FILE)
#define kern_addr_valid(addr) (1)
#include <asm-generic/pgtable.h>

@ -125,7 +125,6 @@ extern void paging_init(void);
#define _PAGE_CC 0x001 /* software: pte contains a translation */
#define _PAGE_CI 0x002 /* cache inhibit */
#define _PAGE_WBC 0x004 /* write back cache */
#define _PAGE_FILE 0x004 /* set: pagecache, unset: swap (when !PRESENT) */
#define _PAGE_WOM 0x008 /* weakly ordered memory */
#define _PAGE_A 0x010 /* accessed */
@ -240,7 +239,6 @@ static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_EXEC; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
@ -438,12 +436,6 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/* Encode and decode a nonlinear file mapping entry */
#define PTE_FILE_MAX_BITS 26
#define pte_to_pgoff(x) (pte_val(x) >> 6)
#define pgoff_to_pte(x) __pte(((x) << 6) | _PAGE_FILE)
#define kern_addr_valid(addr) (1)
#include <asm-generic/pgtable.h>

@ -754,11 +754,6 @@ _dc_enable:
/* ===============================================[ page table masks ]=== */
/* bit 4 is used in hardware as write back cache bit. we never use this bit
* explicitly, so we can reuse it as _PAGE_FILE bit and mask it out when
* writing into hardware pte's
*/
#define DTLB_UP_CONVERT_MASK 0x3fa
#define ITLB_UP_CONVERT_MASK 0x3a

@ -146,7 +146,6 @@ extern void purge_tlb_entries(struct mm_struct *, unsigned long);
#define _PAGE_GATEWAY_BIT 28 /* (0x008) privilege promotion allowed */
#define _PAGE_DMB_BIT 27 /* (0x010) Data Memory Break enable (B bit) */
#define _PAGE_DIRTY_BIT 26 /* (0x020) Page Dirty (D bit) */
#define _PAGE_FILE_BIT _PAGE_DIRTY_BIT /* overload this bit */
#define _PAGE_REFTRAP_BIT 25 /* (0x040) Page Ref. Trap enable (T bit) */
#define _PAGE_NO_CACHE_BIT 24 /* (0x080) Uncached Page (U bit) */
#define _PAGE_ACCESSED_BIT 23 /* (0x100) Software: Page Accessed */
@ -167,13 +166,6 @@ extern void purge_tlb_entries(struct mm_struct *, unsigned long);
/* PFN_PTE_SHIFT defines the shift of a PTE value to access the PFN field */
#define PFN_PTE_SHIFT 12
/* this is how many bits may be used by the file functions */
#define PTE_FILE_MAX_BITS (BITS_PER_LONG - PTE_SHIFT)
#define pte_to_pgoff(pte) (pte_val(pte) >> PTE_SHIFT)
#define pgoff_to_pte(off) ((pte_t) { ((off) << PTE_SHIFT) | _PAGE_FILE })
#define _PAGE_READ (1 << xlate_pabit(_PAGE_READ_BIT))
#define _PAGE_WRITE (1 << xlate_pabit(_PAGE_WRITE_BIT))
#define _PAGE_RW (_PAGE_READ | _PAGE_WRITE)
@ -186,7 +178,6 @@ extern void purge_tlb_entries(struct mm_struct *, unsigned long);
#define _PAGE_ACCESSED (1 << xlate_pabit(_PAGE_ACCESSED_BIT))
#define _PAGE_PRESENT (1 << xlate_pabit(_PAGE_PRESENT_BIT))
#define _PAGE_USER (1 << xlate_pabit(_PAGE_USER_BIT))
#define _PAGE_FILE (1 << xlate_pabit(_PAGE_FILE_BIT))
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_ACCESSED)
#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
@ -344,7 +335,6 @@ static inline void pgd_clear(pgd_t * pgdp) { }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_mkclean(pte_t pte) { pte_val(pte) &= ~_PAGE_DIRTY; return pte; }

@ -249,10 +249,10 @@ static inline int is_module_addr(void *addr)
_PAGE_YOUNG)
/*
* handle_pte_fault uses pte_present, pte_none and pte_file to find out the
* pte type WITHOUT holding the page table lock. The _PAGE_PRESENT bit
* is used to distinguish present from not-present ptes. It is changed only
* with the page table lock held.
* handle_pte_fault uses pte_present and pte_none to find out the pte type
* WITHOUT holding the page table lock. The _PAGE_PRESENT bit is used to
* distinguish present from not-present ptes. It is changed only with the page
* table lock held.
*
* The following table gives the different possible bit combinations for
* the pte hardware and software bits in the last 12 bits of a pte:
@ -279,7 +279,6 @@ static inline int is_module_addr(void *addr)
*
* pte_present is true for the bit pattern .xx...xxxxx1, (pte & 0x001) == 0x001
* pte_none is true for the bit pattern .10...xxxx00, (pte & 0x603) == 0x400
* pte_file is true for the bit pattern .11...xxxxx0, (pte & 0x601) == 0x600
* pte_swap is true for the bit pattern .10...xxxx10, (pte & 0x603) == 0x402
*/
@ -671,13 +670,6 @@ static inline int pte_swap(pte_t pte)
== (_PAGE_INVALID | _PAGE_TYPE);
}
static inline int pte_file(pte_t pte)
{
/* Bit pattern: (pte & 0x601) == 0x600 */
return (pte_val(pte) & (_PAGE_INVALID | _PAGE_PROTECT | _PAGE_PRESENT))
== (_PAGE_INVALID | _PAGE_PROTECT);
}
static inline int pte_special(pte_t pte)
{
return (pte_val(pte) & _PAGE_SPECIAL);
@ -1756,19 +1748,6 @@ static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#ifndef CONFIG_64BIT
# define PTE_FILE_MAX_BITS 26
#else /* CONFIG_64BIT */
# define PTE_FILE_MAX_BITS 59
#endif /* CONFIG_64BIT */
#define pte_to_pgoff(__pte) \
((((__pte).pte >> 12) << 7) + (((__pte).pte >> 1) & 0x7f))
#define pgoff_to_pte(__off) \
((pte_t) { ((((__off) & 0x7f) << 1) + (((__off) >> 7) << 12)) \
| _PAGE_INVALID | _PAGE_PROTECT })
#endif /* !__ASSEMBLY__ */
#define kern_addr_valid(addr) (1)

@ -6,7 +6,6 @@
#define _PAGE_WRITE (1<<7) /* implemented in software */
#define _PAGE_PRESENT (1<<9) /* implemented in software */
#define _PAGE_MODIFIED (1<<10) /* implemented in software */
#define _PAGE_FILE (1<<10)
#define _PAGE_GLOBAL (1<<0)
#define _PAGE_VALID (1<<1)

@ -90,15 +90,6 @@ static inline void pmd_clear(pmd_t *pmdp)
((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address))
#define pte_unmap(pte) ((void)(pte))
/*
* Bits 9(_PAGE_PRESENT) and 10(_PAGE_FILE)are taken,
* split up 30 bits of offset into this range:
*/
#define PTE_FILE_MAX_BITS 30
#define pte_to_pgoff(_pte) \
(((_pte).pte & 0x1ff) | (((_pte).pte >> 11) << 9))
#define pgoff_to_pte(off) \
((pte_t) {((off) & 0x1ff) | (((off) >> 9) << 11) | _PAGE_FILE})
#define __pte_to_swp_entry(pte) \
((swp_entry_t) { pte_val(pte)})
#define __swp_entry_to_pte(x) ((pte_t) {(x).val})
@ -169,8 +160,8 @@ static inline pgprot_t pgprot_noncached(pgprot_t _prot)
}
#define __swp_type(x) ((x).val & 0x1f)
#define __swp_offset(x) ((x).val >> 11)
#define __swp_entry(type, offset) ((swp_entry_t){(type) | ((offset) << 11)})
#define __swp_offset(x) ((x).val >> 10)
#define __swp_entry(type, offset) ((swp_entry_t){(type) | ((offset) << 10)})
extern unsigned long empty_zero_page;
extern unsigned long zero_page_mask;
@ -198,11 +189,6 @@ static inline int pte_young(pte_t pte)
return pte_val(pte) & _PAGE_ACCESSED;
}
static inline int pte_file(pte_t pte)
{
return pte_val(pte) & _PAGE_FILE;
}
#define pte_special(pte) (0)
static inline pte_t pte_wrprotect(pte_t pte)

@ -1,7 +1,7 @@
config SUPERH
def_bool y
select ARCH_MIGHT_HAVE_PC_PARPORT
select EXPERT
select HAVE_PATA_PLATFORM
select CLKDEV_LOOKUP
select HAVE_IDE if HAS_IOPORT_MAP
select HAVE_MEMBLOCK

@ -14,9 +14,6 @@
#define DRV_NAME "SE7343-FPGA"
#define pr_fmt(fmt) DRV_NAME ": " fmt
#define irq_reg_readl ioread16
#define irq_reg_writel iowrite16
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/interrupt.h>

@ -11,9 +11,6 @@
#define DRV_NAME "SE7722-FPGA"
#define pr_fmt(fmt) DRV_NAME ": " fmt
#define irq_reg_readl ioread16
#define irq_reg_writel iowrite16
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/interrupt.h>

@ -26,8 +26,6 @@
* and timing control which (together with bit 0) are moved into the
* old-style PTEA on the parts that support it.
*
* XXX: Leave the _PAGE_FILE and _PAGE_WT overhaul for a rainy day.
*
* SH-X2 MMUs and extended PTEs
*
* SH-X2 supports an extended mode TLB with split data arrays due to the
@ -51,7 +49,6 @@
#define _PAGE_PRESENT 0x100 /* V-bit : page is valid */
#define _PAGE_PROTNONE 0x200 /* software: if not present */
#define _PAGE_ACCESSED 0x400 /* software: page referenced */
#define _PAGE_FILE _PAGE_WT /* software: pagecache or swap? */
#define _PAGE_SPECIAL 0x800 /* software: special page */
#define _PAGE_SZ_MASK (_PAGE_SZ0 | _PAGE_SZ1)
@ -105,14 +102,13 @@ static inline unsigned long copy_ptea_attributes(unsigned long x)
/* Mask which drops unused bits from the PTEL value */
#if defined(CONFIG_CPU_SH3)
#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED| \
_PAGE_FILE | _PAGE_SZ1 | \
_PAGE_HW_SHARED)
_PAGE_SZ1 | _PAGE_HW_SHARED)
#elif defined(CONFIG_X2TLB)
/* Get rid of the legacy PR/SZ bits when using extended mode */
#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | \
_PAGE_FILE | _PAGE_PR_MASK | _PAGE_SZ_MASK)
_PAGE_PR_MASK | _PAGE_SZ_MASK)
#else
#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED | _PAGE_FILE)
#define _PAGE_CLEAR_FLAGS (_PAGE_PROTNONE | _PAGE_ACCESSED)
#endif
#define _PAGE_FLAGS_HARDWARE_MASK (phys_addr_mask() & ~(_PAGE_CLEAR_FLAGS))
@ -343,7 +339,6 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
#define pte_not_present(pte) (!((pte).pte_low & _PAGE_PRESENT))
#define pte_dirty(pte) ((pte).pte_low & _PAGE_DIRTY)
#define pte_young(pte) ((pte).pte_low & _PAGE_ACCESSED)
#define pte_file(pte) ((pte).pte_low & _PAGE_FILE)
#define pte_special(pte) ((pte).pte_low & _PAGE_SPECIAL)
#ifdef CONFIG_X2TLB
@ -445,7 +440,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
* Encode and de-code a swap entry
*
* Constraints:
* _PAGE_FILE at bit 0
* _PAGE_PRESENT at bit 8
* _PAGE_PROTNONE at bit 9
*
@ -453,9 +447,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
* swap offset into bits 10:30. For the 64-bit PTE case, we keep the
* preserved bits in the low 32-bits and use the upper 32 as the swap
* offset (along with a 5-bit type), following the same approach as x86
* PAE. This keeps the logic quite simple, and allows for a full 32
* PTE_FILE_MAX_BITS, as opposed to the 29-bits we're constrained with
* in the pte_low case.
* PAE. This keeps the logic quite simple.
*
* As is evident by the Alpha code, if we ever get a 64-bit unsigned
* long (swp_entry_t) to match up with the 64-bit PTEs, this all becomes
@ -471,13 +463,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high })
#define __swp_entry_to_pte(x) ((pte_t){ 0, (x).val })
/*
* Encode and decode a nonlinear file mapping entry
*/
#define pte_to_pgoff(pte) ((pte).pte_high)
#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
#define PTE_FILE_MAX_BITS 32
#else
#define __swp_type(x) ((x).val & 0xff)
#define __swp_offset(x) ((x).val >> 10)
@ -485,13 +470,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 1 })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val << 1 })
/*
* Encode and decode a nonlinear file mapping entry
*/
#define PTE_FILE_MAX_BITS 29
#define pte_to_pgoff(pte) (pte_val(pte) >> 1)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 1) | _PAGE_FILE })
#endif
#endif /* __ASSEMBLY__ */

Просмотреть файл

@ -107,7 +107,6 @@ static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
#define _PAGE_PRESENT 0x004 /* software: page referenced */
#define _PAGE_FILE 0x004 /* software: only when !present */
#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
@ -129,7 +128,7 @@ static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
#define _PAGE_WIRED _PAGE_EXT(0x001) /* software: wire the tlb entry */
#define _PAGE_SPECIAL _PAGE_EXT(0x002)
#define _PAGE_CLEAR_FLAGS (_PAGE_PRESENT | _PAGE_FILE | _PAGE_SHARED | \
#define _PAGE_CLEAR_FLAGS (_PAGE_PRESENT | _PAGE_SHARED | \
_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_WIRED)
/* Mask which drops software flags */
@ -260,7 +259,6 @@ static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
*/
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_special(pte_t pte){ return pte_val(pte) & _PAGE_SPECIAL; }
@ -304,11 +302,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/* Encode and decode a nonlinear file mapping entry */
#define PTE_FILE_MAX_BITS 29
#define pte_to_pgoff(pte) (pte_val(pte))
#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
#endif /* !__ASSEMBLY__ */
#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))

Просмотреть файл

@ -221,14 +221,6 @@ static inline int pte_young(pte_t pte)
return pte_val(pte) & SRMMU_REF;
}
/*
* The following only work if pte_present() is not true.
*/
static inline int pte_file(pte_t pte)
{
return pte_val(pte) & SRMMU_FILE;
}
static inline int pte_special(pte_t pte)
{
return 0;
@ -375,22 +367,6 @@ static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/* file-offset-in-pte helpers */
static inline unsigned long pte_to_pgoff(pte_t pte)
{
return pte_val(pte) >> SRMMU_PTE_FILE_SHIFT;
}
static inline pte_t pgoff_to_pte(unsigned long pgoff)
{
return __pte((pgoff << SRMMU_PTE_FILE_SHIFT) | SRMMU_FILE);
}
/*
* This is made a constant because mm/fremap.c required a constant.
*/
#define PTE_FILE_MAX_BITS 24
static inline unsigned long
__get_phys (unsigned long addr)
{

Просмотреть файл

@ -137,7 +137,6 @@ bool kern_addr_valid(unsigned long addr);
#define _PAGE_SOFT_4U _AC(0x0000000000001F80,UL) /* Software bits: */
#define _PAGE_EXEC_4U _AC(0x0000000000001000,UL) /* Executable SW bit */
#define _PAGE_MODIFIED_4U _AC(0x0000000000000800,UL) /* Modified (dirty) */
#define _PAGE_FILE_4U _AC(0x0000000000000800,UL) /* Pagecache page */
#define _PAGE_ACCESSED_4U _AC(0x0000000000000400,UL) /* Accessed (ref'd) */
#define _PAGE_READ_4U _AC(0x0000000000000200,UL) /* Readable SW Bit */
#define _PAGE_WRITE_4U _AC(0x0000000000000100,UL) /* Writable SW Bit */
@ -167,7 +166,6 @@ bool kern_addr_valid(unsigned long addr);
#define _PAGE_EXEC_4V _AC(0x0000000000000080,UL) /* Executable Page */
#define _PAGE_W_4V _AC(0x0000000000000040,UL) /* Writable */
#define _PAGE_SOFT_4V _AC(0x0000000000000030,UL) /* Software bits */
#define _PAGE_FILE_4V _AC(0x0000000000000020,UL) /* Pagecache page */
#define _PAGE_PRESENT_4V _AC(0x0000000000000010,UL) /* Present */
#define _PAGE_RESV_4V _AC(0x0000000000000008,UL) /* Reserved */
#define _PAGE_SZ16GB_4V _AC(0x0000000000000007,UL) /* 16GB Page */
@ -332,22 +330,6 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
}
#endif
static inline pte_t pgoff_to_pte(unsigned long off)
{
off <<= PAGE_SHIFT;
__asm__ __volatile__(
"\n661: or %0, %2, %0\n"
" .section .sun4v_1insn_patch, \"ax\"\n"
" .word 661b\n"
" or %0, %3, %0\n"
" .previous\n"
: "=r" (off)
: "0" (off), "i" (_PAGE_FILE_4U), "i" (_PAGE_FILE_4V));
return __pte(off);
}
static inline pgprot_t pgprot_noncached(pgprot_t prot)
{
unsigned long val = pgprot_val(prot);
@ -609,22 +591,6 @@ static inline unsigned long pte_exec(pte_t pte)
return (pte_val(pte) & mask);
}
static inline unsigned long pte_file(pte_t pte)
{
unsigned long val = pte_val(pte);
__asm__ __volatile__(
"\n661: and %0, %2, %0\n"
" .section .sun4v_1insn_patch, \"ax\"\n"
" .word 661b\n"
" and %0, %3, %0\n"
" .previous\n"
: "=r" (val)
: "0" (val), "i" (_PAGE_FILE_4U), "i" (_PAGE_FILE_4V));
return val;
}
static inline unsigned long pte_present(pte_t pte)
{
unsigned long val = pte_val(pte);
@ -971,12 +937,6 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
/* File offset in PTE support. */
unsigned long pte_file(pte_t);
#define pte_to_pgoff(pte) (pte_val(pte) >> PAGE_SHIFT)
pte_t pgoff_to_pte(unsigned long);
#define PTE_FILE_MAX_BITS (64UL - PAGE_SHIFT - 1UL)
int page_in_phys_avail(unsigned long paddr);
/*

Просмотреть файл

@ -80,10 +80,6 @@
#define SRMMU_PRIV 0x1c
#define SRMMU_PRIV_RDONLY 0x18
#define SRMMU_FILE 0x40 /* Implemented in software */
#define SRMMU_PTE_FILE_SHIFT 8 /* == 32-PTE_FILE_MAX_BITS */
#define SRMMU_CHG_MASK (0xffffff00 | SRMMU_REF | SRMMU_DIRTY)
/* SRMMU swap entry encoding
@ -94,13 +90,13 @@
* oooooooooooooooooootttttRRRRRRRR
* fedcba9876543210fedcba9876543210
*
* The bottom 8 bits are reserved for protection and status bits, especially
* FILE and PRESENT.
* The bottom 7 bits are reserved for protection and status bits, especially
* PRESENT.
*/
#define SRMMU_SWP_TYPE_MASK 0x1f
#define SRMMU_SWP_TYPE_SHIFT SRMMU_PTE_FILE_SHIFT
#define SRMMU_SWP_OFF_MASK 0x7ffff
#define SRMMU_SWP_OFF_SHIFT (SRMMU_PTE_FILE_SHIFT + 5)
#define SRMMU_SWP_TYPE_SHIFT 7
#define SRMMU_SWP_OFF_MASK 0xfffff
#define SRMMU_SWP_OFF_SHIFT (SRMMU_SWP_TYPE_SHIFT + 5)
/* Some day I will implement true fine grained access bits for
* user pages because the SRMMU gives us the capabilities to
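The reworked layout above keeps a 5-bit swap type at bit 7 and gives the offset 20 bits starting at bit 12 (SRMMU_SWP_OFF_SHIFT = 7 + 5). A small freestanding C sketch of the arithmetic those masks and shifts imply, as an illustration only, not the actual sparc __swp_entry()/__swp_type()/__swp_offset() macros:

/* Sketch: mirrors the SRMMU_SWP_* values above; offset in the high bits,
 * type below it, the low seven bits left for protection/status. */
#define SWP_TYPE_SHIFT 7
#define SWP_TYPE_MASK  0x1fUL
#define SWP_OFF_SHIFT  12                    /* SWP_TYPE_SHIFT + 5 */
#define SWP_OFF_MASK   0xfffffUL

static unsigned long swp_encode(unsigned long type, unsigned long off)
{
        return ((type & SWP_TYPE_MASK) << SWP_TYPE_SHIFT) |
               ((off & SWP_OFF_MASK) << SWP_OFF_SHIFT);
}

static unsigned long swp_type(unsigned long ent)
{
        return (ent >> SWP_TYPE_SHIFT) & SWP_TYPE_MASK;
}

static unsigned long swp_offset(unsigned long ent)
{
        return (ent >> SWP_OFF_SHIFT) & SWP_OFF_MASK;
}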

Просмотреть файл

@ -284,17 +284,6 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
extern void start_mm_caching(struct mm_struct *mm);
extern void check_mm_caching(struct mm_struct *prev, struct mm_struct *next);
/*
* Support non-linear file mappings (see sys_remap_file_pages).
* This is defined by CLIENT1 set but CLIENT0 and _PAGE_PRESENT clear, and the
* file offset in the 32 high bits.
*/
#define _PAGE_FILE HV_PTE_CLIENT1
#define PTE_FILE_MAX_BITS 32
#define pte_file(pte) (hv_pte_get_client1(pte) && !hv_pte_get_client0(pte))
#define pte_to_pgoff(pte) ((pte).val >> 32)
#define pgoff_to_pte(off) ((pte_t) { (((long long)(off)) << 32) | _PAGE_FILE })
/*
* Encode and de-code a swap entry (see <linux/swapops.h>).
* We put the swap file type+offset in the 32 high bits;

Просмотреть файл

@ -263,10 +263,6 @@ static int pte_to_home(pte_t pte)
/* Update the home of a PTE if necessary (can also be used for a pgprot_t). */
pte_t pte_set_home(pte_t pte, int home)
{
/* Check for non-linear file mapping "PTEs" and pass them through. */
if (pte_file(pte))
return pte;
#if CHIP_HAS_MMIO()
/* Check for MMIO mappings and pass them through. */
if (hv_pte_get_mode(pte) == HV_PTE_MODE_MMIO)

Просмотреть файл

@ -41,13 +41,4 @@ static inline void pgd_mkuptodate(pgd_t pgd) { }
#define pfn_pte(pfn, prot) __pte(pfn_to_phys(pfn) | pgprot_val(prot))
#define pfn_pmd(pfn, prot) __pmd(pfn_to_phys(pfn) | pgprot_val(prot))
/*
* Bits 0 through 4 are taken
*/
#define PTE_FILE_MAX_BITS 27
#define pte_to_pgoff(pte) (pte_val(pte) >> 5)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 5) + _PAGE_FILE })
#endif

Просмотреть файл

@ -112,25 +112,5 @@ static inline pmd_t pfn_pmd(pfn_t page_nr, pgprot_t pgprot)
return __pmd((page_nr << PAGE_SHIFT) | pgprot_val(pgprot));
}
/*
* Bits 0 through 3 are taken in the low part of the pte,
* put the 32 bits of offset into the high part.
*/
#define PTE_FILE_MAX_BITS 32
#ifdef CONFIG_64BIT
#define pte_to_pgoff(p) ((p).pte >> 32)
#define pgoff_to_pte(off) ((pte_t) { ((off) << 32) | _PAGE_FILE })
#else
#define pte_to_pgoff(pte) ((pte).pte_high)
#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
#endif
#endif

Просмотреть файл

@ -18,7 +18,6 @@
#define _PAGE_ACCESSED 0x080
#define _PAGE_DIRTY 0x100
/* If _PAGE_PRESENT is clear, we use these: */
#define _PAGE_FILE 0x008 /* nonlinear file mapping, saved PTE; unset:swap */
#define _PAGE_PROTNONE 0x010 /* if the user mapped it with PROT_NONE;
pte_present gives true */
@ -151,14 +150,6 @@ static inline int pte_write(pte_t pte)
!(pte_get_bits(pte, _PAGE_PROTNONE)));
}
/*
* The following only works if pte_present() is not true.
*/
static inline int pte_file(pte_t pte)
{
return pte_get_bits(pte, _PAGE_FILE);
}
static inline int pte_dirty(pte_t pte)
{
return pte_get_bits(pte, _PAGE_DIRTY);

Просмотреть файл

@ -44,7 +44,6 @@
#define PTE_TYPE_INVALID (3 << 0)
#define PTE_PRESENT (1 << 2)
#define PTE_FILE (1 << 3) /* only when !PRESENT */
#define PTE_YOUNG (1 << 3)
#define PTE_DIRTY (1 << 4)
#define PTE_CACHEABLE (1 << 5)

Просмотреть файл

@ -283,20 +283,6 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
#define MAX_SWAPFILES_CHECK() \
BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
/*
* Encode and decode a file entry. File entries are stored in the Linux
* page tables as follows:
*
* 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
* 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
* <----------------------- offset ----------------------> 1 0 0 0
*/
#define pte_file(pte) (pte_val(pte) & PTE_FILE)
#define pte_to_pgoff(x) (pte_val(x) >> 4)
#define pgoff_to_pte(x) __pte(((x) << 4) | PTE_FILE)
#define PTE_FILE_MAX_BITS 28
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
/* FIXME: this is not correct */
#define kern_addr_valid(addr) (1)

Просмотреть файл

@ -62,44 +62,8 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi
return ((value >> rightshift) & mask) << leftshift;
}
/*
* Bits _PAGE_BIT_PRESENT, _PAGE_BIT_FILE and _PAGE_BIT_PROTNONE are taken,
* split up the 29 bits of offset into this range.
*/
#define PTE_FILE_MAX_BITS 29
#define PTE_FILE_SHIFT1 (_PAGE_BIT_PRESENT + 1)
#define PTE_FILE_SHIFT2 (_PAGE_BIT_FILE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_PROTNONE + 1)
#define PTE_FILE_BITS1 (PTE_FILE_SHIFT2 - PTE_FILE_SHIFT1 - 1)
#define PTE_FILE_BITS2 (PTE_FILE_SHIFT3 - PTE_FILE_SHIFT2 - 1)
#define PTE_FILE_MASK1 ((1U << PTE_FILE_BITS1) - 1)
#define PTE_FILE_MASK2 ((1U << PTE_FILE_BITS2) - 1)
#define PTE_FILE_LSHIFT2 (PTE_FILE_BITS1)
#define PTE_FILE_LSHIFT3 (PTE_FILE_BITS1 + PTE_FILE_BITS2)
static __always_inline pgoff_t pte_to_pgoff(pte_t pte)
{
return (pgoff_t)
(pte_bitop(pte.pte_low, PTE_FILE_SHIFT1, PTE_FILE_MASK1, 0) +
pte_bitop(pte.pte_low, PTE_FILE_SHIFT2, PTE_FILE_MASK2, PTE_FILE_LSHIFT2) +
pte_bitop(pte.pte_low, PTE_FILE_SHIFT3, -1UL, PTE_FILE_LSHIFT3));
}
static __always_inline pte_t pgoff_to_pte(pgoff_t off)
{
return (pte_t){
.pte_low =
pte_bitop(off, 0, PTE_FILE_MASK1, PTE_FILE_SHIFT1) +
pte_bitop(off, PTE_FILE_LSHIFT2, PTE_FILE_MASK2, PTE_FILE_SHIFT2) +
pte_bitop(off, PTE_FILE_LSHIFT3, -1UL, PTE_FILE_SHIFT3) +
_PAGE_FILE,
};
}
/* Encode and de-code a swap entry */
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_TYPE_BITS 5
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)

Просмотреть файл

@ -176,18 +176,6 @@ static inline pmd_t native_pmdp_get_and_clear(pmd_t *pmdp)
#define native_pmdp_get_and_clear(xp) native_local_pmdp_get_and_clear(xp)
#endif
/*
* Bits 0, 6 and 7 are taken in the low part of the pte,
* put the 32 bits of offset into the high part.
*
* For soft-dirty tracking 11 bit is taken from
* the low part of pte as well.
*/
#define pte_to_pgoff(pte) ((pte).pte_high)
#define pgoff_to_pte(off) \
((pte_t) { { .pte_low = _PAGE_FILE, .pte_high = (off) } })
#define PTE_FILE_MAX_BITS 32
/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
#define __swp_type(x) (((x).val) & 0x1f)

Просмотреть файл

@ -115,11 +115,6 @@ static inline int pte_write(pte_t pte)
return pte_flags(pte) & _PAGE_RW;
}
static inline int pte_file(pte_t pte)
{
return pte_flags(pte) & _PAGE_FILE;
}
static inline int pte_huge(pte_t pte)
{
return pte_flags(pte) & _PAGE_PSE;
@ -329,21 +324,6 @@ static inline pmd_t pmd_mksoft_dirty(pmd_t pmd)
return pmd_set_flags(pmd, _PAGE_SOFT_DIRTY);
}
static inline pte_t pte_file_clear_soft_dirty(pte_t pte)
{
return pte_clear_flags(pte, _PAGE_SOFT_DIRTY);
}
static inline pte_t pte_file_mksoft_dirty(pte_t pte)
{
return pte_set_flags(pte, _PAGE_SOFT_DIRTY);
}
static inline int pte_file_soft_dirty(pte_t pte)
{
return pte_flags(pte) & _PAGE_SOFT_DIRTY;
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
/*

Просмотреть файл

@ -133,10 +133,6 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
/* PUD - Level3 access */
/* PMD - Level 2 access */
#define pte_to_pgoff(pte) ((pte_val((pte)) & PHYSICAL_PAGE_MASK) >> PAGE_SHIFT)
#define pgoff_to_pte(off) ((pte_t) { .pte = ((off) << PAGE_SHIFT) | \
_PAGE_FILE })
#define PTE_FILE_MAX_BITS __PHYSICAL_MASK_SHIFT
/* PTE - Level 1 access. */
@ -145,7 +141,7 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
#define pte_unmap(pte) ((void)(pte))/* NOP */
/* Encode and de-code a swap entry */
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_TYPE_BITS 5
#ifdef CONFIG_NUMA_BALANCING
/* Automatic NUMA balancing needs to be distinguishable from swap entries */
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 2)
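Hard-coding SWP_TYPE_BITS to 5, here and in the 32-bit 2-level header above, is not a layout change: as a later hunk below shows, _PAGE_BIT_FILE was just an alias for _PAGE_BIT_DIRTY (bit 6), so the old expression _PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1 already evaluated to 6 - 0 - 1 = 5. The swap-entry format stays the same while the _PAGE_FILE symbols disappear.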

Просмотреть файл

@ -38,8 +38,6 @@
/* If _PAGE_BIT_PRESENT is clear, we use these: */
/* - if the user mapped it with PROT_NONE; pte_present gives true */
#define _PAGE_BIT_PROTNONE _PAGE_BIT_GLOBAL
/* - set: nonlinear file mapping, saved PTE; unset:swap */
#define _PAGE_BIT_FILE _PAGE_BIT_DIRTY
#define _PAGE_PRESENT (_AT(pteval_t, 1) << _PAGE_BIT_PRESENT)
#define _PAGE_RW (_AT(pteval_t, 1) << _PAGE_BIT_RW)
@ -114,7 +112,6 @@
#define _PAGE_NX (_AT(pteval_t, 0))
#endif
#define _PAGE_FILE (_AT(pteval_t, 1) << _PAGE_BIT_FILE)
#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \

Просмотреть файл

@ -178,4 +178,15 @@ static __init int setup_hugepagesz(char *opt)
return 1;
}
__setup("hugepagesz=", setup_hugepagesz);
#ifdef CONFIG_CMA
static __init int gigantic_pages_init(void)
{
/* With CMA we can allocate gigantic pages at runtime */
if (cpu_has_gbpages && !size_to_hstate(1UL << PUD_SHIFT))
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
return 0;
}
arch_initcall(gigantic_pages_init);
#endif
#endif
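As the comment in the new initcall says, registering the PUD-sized (1 GB on x86) hstate when CMA is available and the CPU supports gbpages is what makes gigantic pages allocatable after boot; assuming the standard hugetlb sysfs layout, 1 GB pages can then be requested at runtime via /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages rather than only through the hugepagesz=/hugepages= boot parameters.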

Просмотреть файл

@ -89,8 +89,6 @@
* (PAGE_NONE)| PPN | 0 | 00 | ADW | 01 | 11 | 11 |
* +-----------------------------------------+
* swap | index | type | 01 | 11 | 00 |
* +- - - - - - - - - - - - - - - - - - - - -+
* file | file offset | 01 | 11 | 10 |
* +-----------------------------------------+
*
* For T1050 hardware and earlier the layout differs for present and (PAGE_NONE)
@ -111,7 +109,6 @@
* index swap offset / PAGE_SIZE (bit 11-31: 21 bits -> 8 GB)
* (note that the index is always non-zero)
* type swap type (5 bits -> 32 types)
* file offset 26-bit offset into the file, in increments of PAGE_SIZE
*
* Notes:
* - (PROT_NONE) is a special case of 'present' but causes an exception for
@ -144,7 +141,6 @@
#define _PAGE_HW_VALID 0x00
#define _PAGE_NONE 0x0f
#endif
#define _PAGE_FILE (1<<1) /* file mapped page, only if !present */
#define _PAGE_USER (1<<4) /* user access (ring=1) */
@ -260,7 +256,6 @@ static inline void pgtable_cache_init(void) { }
static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_WRITABLE; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
static inline pte_t pte_wrprotect(pte_t pte)
@ -390,11 +385,6 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
#define PTE_FILE_MAX_BITS 26
#define pte_to_pgoff(pte) (pte_val(pte) >> 6)
#define pgoff_to_pte(off) \
((pte_t) { ((off) << 6) | _PAGE_CA_INVALID | _PAGE_FILE | _PAGE_USER })
#endif /* !defined (__ASSEMBLY__) */
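A quick check of the numbers in the updated comment: with the 4 KiB page size those figures imply, a 21-bit page index (bits 11-31) addresses 2^21 * 4 KiB = 8 GiB of swap, and the 5 type bits give the 32 swap types mentioned above.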

Просмотреть файл

@ -50,8 +50,7 @@
*
* You must not use multiple offset managers on a single address_space.
* Otherwise, mm-core will be unable to tear down memory mappings as the VM will
* no longer be linear. Please use VM_NONLINEAR in that case and implement your
* own offset managers.
* no longer be linear.
*
* This offset manager works on page-based addresses. That is, every argument
* and return code (with the exception of drm_vma_node_offset_addr()) is given

Просмотреть файл

@ -831,7 +831,6 @@ static const struct vm_operations_struct v9fs_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = v9fs_vm_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static const struct vm_operations_struct v9fs_mmap_file_vm_ops = {
@ -839,7 +838,6 @@ static const struct vm_operations_struct v9fs_mmap_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = v9fs_vm_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};

Просмотреть файл

@ -2081,7 +2081,6 @@ static const struct vm_operations_struct btrfs_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = btrfs_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static int btrfs_file_mmap(struct file *filp, struct vm_area_struct *vma)

Просмотреть файл

@ -1569,7 +1569,6 @@ out:
static struct vm_operations_struct ceph_vmops = {
.fault = ceph_filemap_fault,
.page_mkwrite = ceph_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
int ceph_mmap(struct file *file, struct vm_area_struct *vma)

Просмотреть файл

@ -3236,7 +3236,6 @@ static struct vm_operations_struct cifs_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = cifs_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)

Просмотреть файл

@ -195,7 +195,6 @@ static const struct vm_operations_struct ext4_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = ext4_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)

Просмотреть файл

@ -92,7 +92,6 @@ static const struct vm_operations_struct f2fs_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = f2fs_vm_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static int get_parent_ino(struct inode *inode, nid_t *pino)

Просмотреть файл

@ -2062,7 +2062,6 @@ static const struct vm_operations_struct fuse_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = fuse_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)

Просмотреть файл

@ -498,7 +498,6 @@ static const struct vm_operations_struct gfs2_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = gfs2_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
/**

Просмотреть файл

@ -356,7 +356,6 @@ void address_space_init_once(struct address_space *mapping)
INIT_LIST_HEAD(&mapping->private_list);
spin_lock_init(&mapping->private_lock);
mapping->i_mmap = RB_ROOT;
INIT_LIST_HEAD(&mapping->i_mmap_nonlinear);
}
EXPORT_SYMBOL(address_space_init_once);

Просмотреть файл

@ -379,6 +379,11 @@ int __generic_block_fiemap(struct inode *inode,
past_eof = true;
}
cond_resched();
if (fatal_signal_pending(current)) {
ret = -EINTR;
break;
}
} while (1);
/* If ret is 1 then we just hit the end of the extent array */
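In practice this means a large or heavily fragmented FS_IOC_FIEMAP request served by __generic_block_fiemap() no longer has to run to completion when its caller is being killed: once a fatal signal is pending, the extent walk stops and the ioctl can fail with EINTR.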

Просмотреть файл

@ -646,7 +646,6 @@ static const struct vm_operations_struct nfs_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = nfs_vm_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static int nfs_need_sync_write(struct file *filp, struct inode *inode)

Просмотреть файл

@ -128,7 +128,6 @@ static const struct vm_operations_struct nilfs_file_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = nilfs_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
static int nilfs_file_mmap(struct file *file, struct vm_area_struct *vma)

Просмотреть файл

@ -140,7 +140,7 @@ static bool fanotify_should_send_event(struct fsnotify_mark *inode_mark,
}
if (S_ISDIR(path->dentry->d_inode->i_mode) &&
(marks_ignored_mask & FS_ISDIR))
!(marks_mask & FS_ISDIR & ~marks_ignored_mask))
return false;
if (event_mask & marks_mask & ~marks_ignored_mask)

Просмотреть файл

@ -487,19 +487,26 @@ static __u32 fanotify_mark_remove_from_mask(struct fsnotify_mark *fsn_mark,
unsigned int flags,
int *destroy)
{
__u32 oldmask;
__u32 oldmask = 0;
spin_lock(&fsn_mark->lock);
if (!(flags & FAN_MARK_IGNORED_MASK)) {
oldmask = fsn_mark->mask;
fsnotify_set_mark_mask_locked(fsn_mark, (oldmask & ~mask));
} else {
oldmask = fsn_mark->ignored_mask;
fsnotify_set_mark_ignored_mask_locked(fsn_mark, (oldmask & ~mask));
}
spin_unlock(&fsn_mark->lock);
__u32 tmask = fsn_mark->mask & ~mask;
*destroy = !(oldmask & ~mask);
if (flags & FAN_MARK_ONDIR)
tmask &= ~FAN_ONDIR;
oldmask = fsn_mark->mask;
fsnotify_set_mark_mask_locked(fsn_mark, tmask);
} else {
__u32 tmask = fsn_mark->ignored_mask & ~mask;
if (flags & FAN_MARK_ONDIR)
tmask &= ~FAN_ONDIR;
fsnotify_set_mark_ignored_mask_locked(fsn_mark, tmask);
}
*destroy = !(fsn_mark->mask | fsn_mark->ignored_mask);
spin_unlock(&fsn_mark->lock);
return mask & oldmask;
}
@ -569,20 +576,22 @@ static __u32 fanotify_mark_add_to_mask(struct fsnotify_mark *fsn_mark,
spin_lock(&fsn_mark->lock);
if (!(flags & FAN_MARK_IGNORED_MASK)) {
__u32 tmask = fsn_mark->mask | mask;
if (flags & FAN_MARK_ONDIR)
tmask |= FAN_ONDIR;
oldmask = fsn_mark->mask;
fsnotify_set_mark_mask_locked(fsn_mark, (oldmask | mask));
fsnotify_set_mark_mask_locked(fsn_mark, tmask);
} else {
__u32 tmask = fsn_mark->ignored_mask | mask;
if (flags & FAN_MARK_ONDIR)
tmask |= FAN_ONDIR;
fsnotify_set_mark_ignored_mask_locked(fsn_mark, tmask);
if (flags & FAN_MARK_IGNORED_SURV_MODIFY)
fsn_mark->flags |= FSNOTIFY_MARK_FLAG_IGNORED_SURV_MODIFY;
}
if (!(flags & FAN_MARK_ONDIR)) {
__u32 tmask = fsn_mark->ignored_mask | FAN_ONDIR;
fsnotify_set_mark_ignored_mask_locked(fsn_mark, tmask);
}
spin_unlock(&fsn_mark->lock);
return mask & ~oldmask;

Просмотреть файл

@ -245,16 +245,14 @@ int ocfs2_set_acl(handle_t *handle,
ret = posix_acl_equiv_mode(acl, &mode);
if (ret < 0)
return ret;
else {
if (ret == 0)
acl = NULL;
ret = ocfs2_acl_set_mode(inode, di_bh,
handle, mode);
if (ret)
return ret;
if (ret == 0)
acl = NULL;
}
ret = ocfs2_acl_set_mode(inode, di_bh,
handle, mode);
if (ret)
return ret;
}
break;
case ACL_TYPE_DEFAULT:

Просмотреть файл

@ -6873,7 +6873,7 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
mlog_errno(ret);
goto out_unlock;
goto out;
}
ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), di_bh,
@ -6931,7 +6931,7 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
if (ret) {
mlog_errno(ret);
need_free = 1;
goto out_commit;
goto out_unlock;
}
page_end = PAGE_CACHE_SIZE;
@ -6964,12 +6964,16 @@ int ocfs2_convert_inline_data_to_extents(struct inode *inode,
if (ret) {
mlog_errno(ret);
need_free = 1;
goto out_commit;
goto out_unlock;
}
inode->i_blocks = ocfs2_inode_sector_count(inode);
}
out_unlock:
if (pages)
ocfs2_unlock_and_free_pages(pages, num_pages);
out_commit:
if (ret < 0 && did_quota)
dquot_free_space_nodirty(inode,
@ -6989,15 +6993,11 @@ out_commit:
ocfs2_commit_trans(osb, handle);
out_unlock:
out:
if (data_ac)
ocfs2_free_alloc_context(data_ac);
out:
if (pages) {
ocfs2_unlock_and_free_pages(pages, num_pages);
if (pages)
kfree(pages);
}
return ret;
}

Просмотреть файл

@ -1016,7 +1016,8 @@ void o2net_fill_node_map(unsigned long *map, unsigned bytes)
memset(map, 0, bytes);
for (node = 0; node < O2NM_MAX_NODES; ++node) {
o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret);
if (!o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret))
continue;
if (!ret) {
set_bit(node, map);
sc_put(sc);

Просмотреть файл

@ -107,12 +107,12 @@ struct o2net_node {
struct list_head nn_status_list;
/* connects are attempted from when heartbeat comes up until either hb
* goes down, the node is unconfigured, no connect attempts succeed
* before O2NET_CONN_IDLE_DELAY, or a connect succeeds. connect_work
* is queued from set_nn_state both from hb up and from itself if a
* connect attempt fails and so can be self-arming. shutdown is
* careful to first mark the nn such that no connects will be attempted
* before canceling delayed connect work and flushing the queue. */
* goes down, the node is unconfigured, or a connect succeeds.
* connect_work is queued from set_nn_state both from hb up and from
* itself if a connect attempt fails and so can be self-arming.
* shutdown is careful to first mark the nn such that no connects will
* be attempted before canceling delayed connect work and flushing the
* queue. */
struct delayed_work nn_connect_work;
unsigned long nn_last_connect_attempt;

Просмотреть файл

@ -3456,10 +3456,8 @@ static int ocfs2_find_dir_space_el(struct inode *dir, const char *name,
int blocksize = dir->i_sb->s_blocksize;
status = ocfs2_read_dir_block(dir, 0, &bh, 0);
if (status) {
mlog_errno(status);
if (status)
goto bail;
}
rec_len = OCFS2_DIR_REC_LEN(namelen);
offset = 0;
@ -3480,10 +3478,9 @@ static int ocfs2_find_dir_space_el(struct inode *dir, const char *name,
status = ocfs2_read_dir_block(dir,
offset >> sb->s_blocksize_bits,
&bh, 0);
if (status) {
mlog_errno(status);
if (status)
goto bail;
}
/* move to next block */
de = (struct ocfs2_dir_entry *) bh->b_data;
}
@ -3513,7 +3510,6 @@ next:
de = (struct ocfs2_dir_entry *)((char *) de + le16_to_cpu(de->rec_len));
}
status = 0;
bail:
brelse(bh);
if (status)

Просмотреть файл

@ -385,8 +385,12 @@ int dlm_proxy_ast_handler(struct o2net_msg *msg, u32 len, void *data,
head = &res->granted;
list_for_each_entry(lock, head, list) {
if (lock->ml.cookie == cookie)
/* if lock is found but unlock is pending ignore the bast */
if (lock->ml.cookie == cookie) {
if (lock->unlock_pending)
break;
goto do_ast;
}
}
mlog(0, "Got %sast for unknown lock! cookie=%u:%llu, name=%.*s, "

Просмотреть файл

@ -406,7 +406,7 @@ static int debug_purgelist_print(struct dlm_ctxt *dlm, char *buf, int len)
}
spin_unlock(&dlm->spinlock);
out += snprintf(buf + out, len - out, "Total on list: %ld\n", total);
out += snprintf(buf + out, len - out, "Total on list: %lu\n", total);
return out;
}
@ -464,7 +464,7 @@ static int debug_mle_print(struct dlm_ctxt *dlm, char *buf, int len)
spin_unlock(&dlm->master_lock);
out += snprintf(buf + out, len - out,
"Total: %ld, Longest: %ld\n", total, longest);
"Total: %lu, Longest: %lu\n", total, longest);
return out;
}

Просмотреть файл

@ -674,20 +674,6 @@ static void dlm_leave_domain(struct dlm_ctxt *dlm)
spin_unlock(&dlm->spinlock);
}
int dlm_joined(struct dlm_ctxt *dlm)
{
int ret = 0;
spin_lock(&dlm_domain_lock);
if (dlm->dlm_state == DLM_CTXT_JOINED)
ret = 1;
spin_unlock(&dlm_domain_lock);
return ret;
}
int dlm_shutting_down(struct dlm_ctxt *dlm)
{
int ret = 0;

Просмотреть файл

@ -28,7 +28,6 @@
extern spinlock_t dlm_domain_lock;
extern struct list_head dlm_domains;
int dlm_joined(struct dlm_ctxt *dlm);
int dlm_shutting_down(struct dlm_ctxt *dlm);
void dlm_fire_domain_eviction_callbacks(struct dlm_ctxt *dlm,
int node_num);

Просмотреть файл

@ -1070,6 +1070,9 @@ static void dlm_move_reco_locks_to_list(struct dlm_ctxt *dlm,
dead_node, dlm->name);
list_del_init(&lock->list);
dlm_lock_put(lock);
/* Can't schedule DLM_UNLOCK_FREE_LOCK
* - do manually */
dlm_lock_put(lock);
break;
}
}
@ -2346,6 +2349,10 @@ static void dlm_do_local_recovery_cleanup(struct dlm_ctxt *dlm, u8 dead_node)
dead_node, dlm->name);
list_del_init(&lock->list);
dlm_lock_put(lock);
/* Can't schedule
* DLM_UNLOCK_FREE_LOCK
* - do manually */
dlm_lock_put(lock);
break;
}
}

Просмотреть файл

@ -3750,6 +3750,9 @@ static int ocfs2_dentry_convert_worker(struct ocfs2_lock_res *lockres,
break;
spin_unlock(&dentry_attach_lock);
if (S_ISDIR(dl->dl_inode->i_mode))
shrink_dcache_parent(dentry);
mlog(0, "d_delete(%pd);\n", dentry);
/*

Просмотреть файл

@ -569,7 +569,7 @@ static int __ocfs2_extend_allocation(struct inode *inode, u32 logical_start,
handle_t *handle = NULL;
struct ocfs2_alloc_context *data_ac = NULL;
struct ocfs2_alloc_context *meta_ac = NULL;
enum ocfs2_alloc_restarted why;
enum ocfs2_alloc_restarted why = RESTART_NONE;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
struct ocfs2_extent_tree et;
int did_quota = 0;

Просмотреть файл

@ -1447,7 +1447,6 @@ bail:
* requires that we call do_exit(). And it isn't exported, but
* complete_and_exit() seems to be a minimal wrapper around it. */
complete_and_exit(NULL, status);
return status;
}
void ocfs2_recovery_thread(struct ocfs2_super *osb, int node_num)

Просмотреть файл

@ -173,7 +173,6 @@ out:
static const struct vm_operations_struct ocfs2_file_vm_ops = {
.fault = ocfs2_fault,
.page_mkwrite = ocfs2_page_mkwrite,
.remap_pages = generic_file_remap_pages,
};
int ocfs2_mmap(struct file *file, struct vm_area_struct *vma)

Просмотреть файл

@ -279,6 +279,8 @@ enum ocfs2_mount_options
writes */
OCFS2_MOUNT_HB_NONE = 1 << 13, /* No heartbeat */
OCFS2_MOUNT_HB_GLOBAL = 1 << 14, /* Global heartbeat */
OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT = 1 << 15, /* Journal Async Commit */
};
#define OCFS2_OSB_SOFT_RO 0x0001

Просмотреть файл

@ -73,12 +73,6 @@ static loff_t ol_dqblk_off(struct super_block *sb, int c, int off)
ol_dqblk_block_off(sb, c, off);
}
/* Compute block number from given offset */
static inline unsigned int ol_dqblk_file_block(struct super_block *sb, loff_t off)
{
return off >> sb->s_blocksize_bits;
}
static inline unsigned int ol_dqblk_block_offset(struct super_block *sb, loff_t off)
{
return off & ((1 << sb->s_blocksize_bits) - 1);

Просмотреть файл

@ -2428,8 +2428,6 @@ static int ocfs2_calc_refcount_meta_credits(struct super_block *sb,
get_bh(prev_bh);
}
rb = (struct ocfs2_refcount_block *)ref_leaf_bh->b_data;
trace_ocfs2_calc_refcount_meta_credits_iterate(
recs_add, (unsigned long long)cpos, clusters,
(unsigned long long)le64_to_cpu(rec.r_cpos),

Просмотреть файл

@ -39,7 +39,7 @@
#define OCFS2_CHECK_RESERVATIONS
#endif
DEFINE_SPINLOCK(resv_lock);
static DEFINE_SPINLOCK(resv_lock);
#define OCFS2_MIN_RESV_WINDOW_BITS 8
#define OCFS2_MAX_RESV_WINDOW_BITS 1024

Просмотреть файл

@ -191,6 +191,7 @@ enum {
Opt_coherency_full,
Opt_resv_level,
Opt_dir_resv_level,
Opt_journal_async_commit,
Opt_err,
};
@ -222,6 +223,7 @@ static const match_table_t tokens = {
{Opt_coherency_full, "coherency=full"},
{Opt_resv_level, "resv_level=%u"},
{Opt_dir_resv_level, "dir_resv_level=%u"},
{Opt_journal_async_commit, "journal_async_commit"},
{Opt_err, NULL}
};
@ -1470,6 +1472,9 @@ static int ocfs2_parse_options(struct super_block *sb,
option < OCFS2_MAX_RESV_LEVEL)
mopt->dir_resv_level = option;
break;
case Opt_journal_async_commit:
mopt->mount_opt |= OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT;
break;
default:
mlog(ML_ERROR,
"Unrecognized mount option \"%s\" "
@ -1576,6 +1581,9 @@ static int ocfs2_show_options(struct seq_file *s, struct dentry *root)
if (osb->osb_dir_resv_level != osb->osb_resv_level)
seq_printf(s, ",dir_resv_level=%d", osb->osb_resv_level);
if (opts & OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT)
seq_printf(s, ",journal_async_commit");
return 0;
}
@ -2445,6 +2453,15 @@ static int ocfs2_check_volume(struct ocfs2_super *osb)
goto finally;
}
if (osb->s_mount_opt & OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT)
jbd2_journal_set_features(osb->journal->j_journal,
JBD2_FEATURE_COMPAT_CHECKSUM, 0,
JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
else
jbd2_journal_clear_features(osb->journal->j_journal,
JBD2_FEATURE_COMPAT_CHECKSUM, 0,
JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT);
if (dirty) {
/* recover my local alloc if we didn't unmount cleanly. */
status = ocfs2_begin_local_alloc_recovery(osb,
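The new option is purely mount-time configuration: passing journal_async_commit (for example, mount -t ocfs2 -o journal_async_commit <device> <mountpoint>) sets OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT, which in turn enables the JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT feature on the journal at mount, as shown above.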

Просмотреть файл

@ -5334,16 +5334,6 @@ out:
return ret;
}
static inline char *ocfs2_xattr_bucket_get_val(struct inode *inode,
struct ocfs2_xattr_bucket *bucket,
int offs)
{
int block_off = offs >> inode->i_sb->s_blocksize_bits;
offs = offs % inode->i_sb->s_blocksize;
return bucket_block(bucket, block_off) + offs;
}
/*
* Truncate the specified xe_off entry in xattr bucket.
* bucket is indicated by header_bh and len is the new length.

Some files were not shown because too many files changed in this diff.