Commit a280df32 authored by J. Bruce Fields, committed by Linus Torvalds

nfsd: fix possible read-ahead cache and export table corruption

The value of nperbucket calculated here is too small--we should be rounding up
instead of down--with the result that the index j in the following loop can
overflow the raparm_hash array.  At least in my case, the next thing in memory
turns out to be export_table, so the symptoms I see are crashes caused by the
appearance of four zeroed-out export entries in the first bucket of the hash
table of exports (which were actually entries in the readahead cache, a
pointer to which had been written to the export table in this initialization
code).

It looks like the bug was probably introduced with commit
fce1456a ("knfsd: make the readahead params
cache SMP-friendly").

Cc: <stable@kernel.org>
Cc: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
Acked-by: NeilBrown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d688abf5
@@ -1916,7 +1916,7 @@ nfsd_racache_init(int cache_size)
 		raparm_hash[i].pb_head = NULL;
 		spin_lock_init(&raparm_hash[i].pb_lock);
 	}
-	nperbucket = cache_size >> RAPARM_HASH_BITS;
+	nperbucket = DIV_ROUND_UP(cache_size, RAPARM_HASH_SIZE);
 	for (i = 0; i < cache_size - 1; i++) {
 		if (i % nperbucket == 0)
 			raparm_hash[j++].pb_head = raparml + i;