Commit 6c06f072 authored by Nathaniel W. Turner, committed by Alex Elder

xfs: copy li_lsn before dropping AIL lock

Access to log items on the AIL is generally protected by m_ail_lock;
this is particularly needed when we're getting or setting the 64-bit
li_lsn on a 32-bit platform.  This patch fixes a couple places where we
were accessing the log item after dropping the AIL lock on 32-bit
machines.

This can result in a partially-zeroed log->l_tail_lsn if
xfs_trans_ail_delete is racing with xfs_trans_ail_update, and in at
least some cases, this can leave the l_tail_lsn with a zero cycle
number, which means xlog_space_left will think the log is full (unless
CONFIG_XFS_DEBUG is set, in which case we'll trip an ASSERT), leading to
processes stuck forever in xlog_grant_log_space.

Thanks to Adrian VanderSpek for first spotting the race potential and to
Dave Chinner for debug assistance.
Signed-off-by: Nathaniel W. Turner <nate@houseofnate.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
parent 8ec6dba2
@@ -467,6 +467,7 @@ xfs_trans_ail_update(
 {
 	xfs_log_item_t		*dlip = NULL;
 	xfs_log_item_t		*mlip;	/* ptr to minimum lip */
+	xfs_lsn_t		tail_lsn;
 
 	mlip = xfs_ail_min(ailp);
 
@@ -483,8 +484,16 @@ xfs_trans_ail_update(
 
 	if (mlip == dlip) {
 		mlip = xfs_ail_min(ailp);
+		/*
+		 * It is not safe to access mlip after the AIL lock is
+		 * dropped, so we must get a copy of li_lsn before we do
+		 * so.  This is especially important on 32-bit platforms
+		 * where accessing and updating 64-bit values like li_lsn
+		 * is not atomic.
+		 */
+		tail_lsn = mlip->li_lsn;
 		spin_unlock(&ailp->xa_lock);
-		xfs_log_move_tail(ailp->xa_mount, mlip->li_lsn);
+		xfs_log_move_tail(ailp->xa_mount, tail_lsn);
 	} else {
 		spin_unlock(&ailp->xa_lock);
 	}
@@ -514,6 +523,7 @@ xfs_trans_ail_delete(
 {
 	xfs_log_item_t		*dlip;
 	xfs_log_item_t		*mlip;
+	xfs_lsn_t		tail_lsn;
 
 	if (lip->li_flags & XFS_LI_IN_AIL) {
 		mlip = xfs_ail_min(ailp);
@@ -527,9 +537,16 @@ xfs_trans_ail_delete(
 
 		if (mlip == dlip) {
 			mlip = xfs_ail_min(ailp);
+			/*
+			 * It is not safe to access mlip after the AIL lock
+			 * is dropped, so we must get a copy of li_lsn
+			 * before we do so.  This is especially important
+			 * on 32-bit platforms where accessing and updating
+			 * 64-bit values like li_lsn is not atomic.
+			 */
+			tail_lsn = mlip ? mlip->li_lsn : 0;
 			spin_unlock(&ailp->xa_lock);
-			xfs_log_move_tail(ailp->xa_mount,
-					(mlip ? mlip->li_lsn : 0));
+			xfs_log_move_tail(ailp->xa_mount, tail_lsn);
 		} else {
 			spin_unlock(&ailp->xa_lock);
 		}