/*
 * Copyright (c) 2000,2002,2005 Silicon Graphics, Inc.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it would be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 */
#ifndef	__XFS_TRANS_PRIV_H__
#define	__XFS_TRANS_PRIV_H__

struct xfs_log_item;
struct xfs_log_item_desc;
struct xfs_mount;
struct xfs_trans;

/*
 * From xfs_trans_item.c
 */
struct xfs_log_item_desc	*xfs_trans_add_item(struct xfs_trans *,
					    struct xfs_log_item *);
void				xfs_trans_free_item(struct xfs_trans *,
					    struct xfs_log_item_desc *);
struct xfs_log_item_desc	*xfs_trans_find_item(struct xfs_trans *,
					    struct xfs_log_item *);
struct xfs_log_item_desc	*xfs_trans_first_item(struct xfs_trans *);
struct xfs_log_item_desc	*xfs_trans_next_item(struct xfs_trans *,
					    struct xfs_log_item_desc *);
void				xfs_trans_free_items(struct xfs_trans *, int);
void				xfs_trans_unlock_items(struct xfs_trans *,
					    xfs_lsn_t);
void				xfs_trans_free_busy(xfs_trans_t *tp);
xfs_log_busy_slot_t		*xfs_trans_add_busy(xfs_trans_t *tp,
					    xfs_agnumber_t ag,
					    xfs_extlen_t idx);
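
/*
 * Illustrative only: a minimal sketch of how the item-descriptor calls
 * above pair up.  example_attach_and_find() is a hypothetical helper,
 * not part of the real API.
 */
static inline void
example_attach_and_find(struct xfs_trans *tp, struct xfs_log_item *lip)
{
	struct xfs_log_item_desc	*lidp;

	/* Link the log item into the transaction's item list. */
	lidp = xfs_trans_add_item(tp, lip);

	/* Later, recover the descriptor for the same log item. */
	lidp = xfs_trans_find_item(tp, lip);
	(void)lidp;
}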

/*
 * AIL traversal cursor.
 *
 * Rather than using a generation number for detecting changes in the ail, use
 * a cursor that is protected by the ail lock. The aild cursor exists in the
 * struct xfs_ail, but other traversals can declare it on the stack and link it
 * to the ail list.
 *
 * When an object is deleted from or moved in the AIL, the cursor list is
 * searched to see if the object is a designated cursor item. If it is, it is
 * deleted from the cursor so that the next time the cursor is used traversal
 * will return to the start.
 *
 * This means a traversal colliding with a removal will cause a restart of the
 * list scan, rather than any insertion or deletion anywhere in the list. The
 * low bit of the item pointer is set if the cursor has been invalidated so
 * that we can tell the difference between invalidation and reaching the end
 * of the list to trigger traversal restarts.
 */
struct xfs_ail_cursor {
	struct xfs_ail_cursor	*next;
	struct xfs_log_item	*item;
};
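
/*
 * Illustrative only: a minimal sketch (not the kernel's exact code) of
 * the low-bit invalidation scheme described above.  Both helpers are
 * hypothetical names.
 */
static inline void
example_cursor_invalidate(struct xfs_ail_cursor *cur)
{
	/* Tag the saved pointer so the next lookup knows to restart. */
	if (cur->item)
		cur->item = (struct xfs_log_item *)
					((unsigned long)cur->item | 1);
}

static inline struct xfs_log_item *
example_cursor_item(struct xfs_ail_cursor *cur)
{
	/* A set low bit means "invalidated", distinct from NULL at end of list. */
	if ((unsigned long)cur->item & 1)
		return NULL;
	return cur->item;
}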

/*
 * Private AIL structures.
 *
 * Eventually we need to drive the locking in here as well.
*/
struct xfs_ail {
	struct xfs_mount	*xa_mount;
	struct list_head	xa_ail;
	uint			xa_gen;
	struct task_struct	*xa_task;
	xfs_lsn_t		xa_target;
	struct xfs_ail_cursor	xa_cursors;
};

/*
 * From xfs_trans_ail.c
 */
void			xfs_trans_update_ail(struct xfs_mount *mp,
					struct xfs_log_item *lip, xfs_lsn_t lsn)
					__releases(mp->m_ail_lock);
void			xfs_trans_delete_ail(struct xfs_mount *mp,
					struct xfs_log_item *lip)
					__releases(mp->m_ail_lock);
struct xfs_log_item	*xfs_trans_first_ail(struct xfs_mount *mp,
					struct xfs_ail_cursor *cur);
struct xfs_log_item	*xfs_trans_next_ail(struct xfs_mount *mp,
					struct xfs_ail_cursor *cur);

void			xfs_trans_ail_cursor_init(struct xfs_ail *ailp,
					struct xfs_ail_cursor *cur);
void			xfs_trans_ail_cursor_done(struct xfs_ail *ailp,
					struct xfs_ail_cursor *cur);
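
/*
 * Illustrative only: a minimal sketch of an AIL walk using the cursor
 * API above.  example_ail_walk() is a hypothetical helper; real callers
 * in xfs_trans_ail.c also manage the AIL lock around the traversal.
 */
static inline void
example_ail_walk(struct xfs_mount *mp, struct xfs_ail *ailp)
{
	struct xfs_ail_cursor	cur;
	struct xfs_log_item	*lip;

	xfs_trans_ail_cursor_init(ailp, &cur);
	for (lip = xfs_trans_first_ail(mp, &cur);
	     lip != NULL;
	     lip = xfs_trans_next_ail(mp, &cur)) {
		/* Examine or push lip here. */
	}
	xfs_trans_ail_cursor_done(ailp, &cur);
}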

long	xfsaild_push(struct xfs_ail *, xfs_lsn_t *);
void	xfsaild_wakeup(struct xfs_ail *, xfs_lsn_t);
int	xfsaild_start(struct xfs_ail *);
void	xfsaild_stop(struct xfs_ail *);
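
/*
 * Illustrative only: a hypothetical lifecycle sketch for the AIL push
 * thread.  Assumes ailp was allocated and initialised at mount time;
 * example_aild_lifecycle() is not part of the real API.
 */
static inline int
example_aild_lifecycle(struct xfs_ail *ailp, xfs_lsn_t threshold_lsn)
{
	int	error;

	error = xfsaild_start(ailp);	/* spawn the push thread */
	if (error)
		return error;

	/* Set a target and wake the thread to push items below it. */
	xfsaild_wakeup(ailp, threshold_lsn);

	xfsaild_stop(ailp);		/* tear down at unmount */
	return 0;
}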
#endif /* __XFS_TRANS_PRIV_H__ */