- 09 Nov, 2007 2 commits
-
Alan D. Brunelle authored
Added blk_unplug interface, allowing all invocations of unplugs to result in a generated blktrace UNPLUG.
Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
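For orientation, a minimal sketch of what such a wrapper can look like against the 2.6.23-era block layer; the fields q->unplug_fn and q->rq.count[] and the helper blk_add_trace_pdu_int() are assumptions based on that era's code, not a quote of the actual patch:

    #include <linux/blkdev.h>
    #include <linux/blktrace_api.h>
    #include <linux/module.h>

    void blk_unplug(struct request_queue *q)
    {
            /* devices don't necessarily have an ->unplug_fn defined */
            if (q && q->unplug_fn) {
                    /* emit the blktrace UNPLUG event on every unplug path */
                    blk_add_trace_pdu_int(q, BLK_TA_UNPLUG_IO, NULL,
                                          q->rq.count[READ] + q->rq.count[WRITE]);
                    q->unplug_fn(q);
            }
    }
    EXPORT_SYMBOL(blk_unplug);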
-
Jens Axboe authored
Credit goes to juergen.kadidlo@exasol.com for diagnosing this issue and supplying the initial patch. blk_queue_invalidate_tags() must use the proper requeueing paths instead of open coding the re-add of the request, otherwise we bug out in rq accounting. Just switch to using blk_requeue_request(), that takes care of end-tag handling as well and also adds the blktrace REQUEUE notify event that is appropriate here.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
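A sketch of the change described, assuming the busy list has already become per-queue (q->tag_busy_list, see the 29 Oct commit further down); illustrative only, not the verbatim in-tree function:

    #include <linux/blkdev.h>

    void blk_queue_invalidate_tags(struct request_queue *q)
    {
            struct list_head *tmp, *n;

            /* requeue every tagged request through the proper path;
             * blk_requeue_request() ends the tag and emits the blktrace
             * REQUEUE notify event, so no open-coded list_add() is needed */
            list_for_each_safe(tmp, n, &q->tag_busy_list)
                    blk_requeue_request(q, list_entry_rq(tmp));
    }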
-
- 02 Nov, 2007 2 commits
-
Jens Axboe authored
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
sg_mark_end() overwrites the page_link information, but all users want __sg_mark_end() behaviour where we just set the end bit. That is the most natural way to use the sg list, since you'll fill it in and then mark the end point. So change sg_mark_end() to only set the termination bit. Add a sg_magic debug check as well, and clear a chain pointer if it is set.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
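Roughly what the helper becomes after this change — a sketch based on the 2.6.24 scatterlist layout, where bit 0x01 of page_link marks a chained entry and bit 0x02 the end of the list; not a verbatim quote of the header:

    static inline void sg_mark_end(struct scatterlist *sg)
    {
    #ifdef CONFIG_DEBUG_SG
            BUG_ON(sg->sg_magic != SG_MAGIC);
    #endif
            sg->page_link |= 0x02;          /* set the termination bit ... */
            sg->page_link &= ~0x01;         /* ... and clear any chain bit */
    }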
-
- 29 Oct, 2007 4 commits
-
Jens Axboe authored
For the locking to work, only the tag map and tag bit map may be shared (incidentally, I was just explaining this to Nick yesterday, but I apparently didn't review the code well enough myself). But we also share the busy list! The busy_list must be queue private, or we need a block_queue_tag covering lock as well. So we have to move the busy_list to the queue. This'll work fine, and it'll actually also fix a problem with blk_queue_invalidate_tags(), which will invalidate tags across all shared queues. This is a bit confusing; the low level driver should call it for each queue separately, since otherwise you cannot kill tags on just a single queue for e.g. a hard drive that stops responding. Since the function has no callers currently, it's not an issue.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
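Schematically, the data-structure move looks like the fragment below (the field name tag_busy_list is how it ended up in 2.6.24; treat the fragment as illustrative rather than the full definitions):

    struct blk_queue_tag {
            struct request **tag_index;     /* still shareable between queues */
            unsigned long *tag_map;         /* still shareable between queues */
            /* ... no busy list in here any more ... */
    };

    struct request_queue {
            /* ... */
            struct blk_queue_tag *queue_tags;       /* possibly shared map */
            struct list_head tag_busy_list;         /* private to this queue */
            /* ... */
    };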
-
Nick Piggin authored
The block queue tag map can use lock bitops.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
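A sketch of what "lock bitops" means for the tag map: allocation and release go through test_and_set_bit_lock()/clear_bit_unlock(), which carry the acquire/release ordering that explicit barriers around plain bitops had to provide before. The helper names are hypothetical, not the in-tree functions:

    #include <linux/bitops.h>

    static int tag_alloc_sketch(unsigned long *tag_map, int max_depth)
    {
            int tag;

            do {
                    tag = find_first_zero_bit(tag_map, max_depth);
                    if (tag >= max_depth)
                            return -1;              /* map is full */
                    /* acquire semantics on success, retry if we lost the race */
            } while (test_and_set_bit_lock(tag, tag_map));

            return tag;
    }

    static void tag_free_sketch(unsigned long *tag_map, int tag)
    {
            clear_bit_unlock(tag, tag_map);         /* release semantics */
    }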
-
Oleg Nesterov authored
blk_sync_queue() cancels the timer, but forgets to cancel the work.
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
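The shape of the fix, as a sketch: the unplug timer and unplug work both live on struct request_queue in this era; whether the flush goes through cancel_work_sync() or a kblockd-specific helper is an assumption here, not a claim about the exact patch:

    #include <linux/blkdev.h>
    #include <linux/workqueue.h>

    void blk_sync_queue(struct request_queue *q)
    {
            del_timer_sync(&q->unplug_timer);       /* was already here */
            cancel_work_sync(&q->unplug_work);      /* the missing piece */
    }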
-
Jerome Marchand authored
The nr_sector argument of drive_stat_acct() is not used anymore since the read and write sectors statistics are now updated in end_that_request_first(). This patch removes the useless argument.
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 24 Oct, 2007 1 commit
-
Jens Axboe authored
Most drivers need to set length and offset as well, so may as well fold those three lines into one. Add sg_assign_page() for those two locations that only needed to set the page, where the offset/length is set outside of the function context.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
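A sketch of the resulting pair of helpers; the 0x03 masking that preserves the chain/termination bits follows the 2.6.24 scatterlist encoding, and the fragment is illustrative rather than a verbatim copy of the header:

    static inline void sg_assign_page(struct scatterlist *sg, struct page *page)
    {
            unsigned long flags = sg->page_link & 0x03;     /* keep chain/end bits */

            sg->page_link = flags | (unsigned long)page;
    }

    static inline void sg_set_page(struct scatterlist *sg, struct page *page,
                                   unsigned int len, unsigned int offset)
    {
            sg_assign_page(sg, page);       /* page only */
            sg->offset = offset;            /* plus the two fields most callers set */
            sg->length = len;
    }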
-
- 23 Oct, 2007 2 commits
-
Jens Axboe authored
Since blk_rq_map_sg() sets the termination bit at the end of the sg table, we could see it prematurely on the next mapping unless we force drivers to do a full sg_init_table() prior to each mapping. So force clear the termination bit to avoid having to put that clear in the driver for every mapping.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
It's not a proper lvalue on all archs.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 22 Oct, 2007 1 commit
-
Jens Axboe authored
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 19 Oct, 2007 2 commits
-
Pavel Emelyanov authored
The task_struct->pid member is going to be deprecated, so start using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in the kernel. The first thing to start with is the pid, printed to dmesg - in this case we may safely use task_pid_nr(). Besides, printks produce more (much more) than half of all the explicit pid usage.
[akpm@linux-foundation.org: git-drm went and changed lots of stuff]
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
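A minimal example of the printk conversion this describes (the caller is hypothetical, the helper is real):

    #include <linux/sched.h>

    static void report_current_task(void)
    {
            /* task_pid_nr() instead of dereferencing current->pid directly */
            printk(KERN_INFO "%s: pid %d\n", current->comm, task_pid_nr(current));
    }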
-
Randy Dunlap authored
Fix kernel-api docbook contents problems.
docproc: linux-2.6.23-git13/include/asm-x86/unaligned_32.h: No such file or directory
Warning(linux-2.6.23-git13//include/linux/list.h:482): bad line: of list entry
Warning(linux-2.6.23-git13//mm/filemap.c:864): No description found for parameter 'ra'
Warning(linux-2.6.23-git13//block/ll_rw_blk.c:3760): No description found for parameter 'req'
Warning(linux-2.6.23-git13//include/linux/input.h:1077): No description found for parameter 'private'
Warning(linux-2.6.23-git13//include/linux/input.h:1077): No description found for parameter 'cdev'
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Oct, 2007 3 commits
-
Jens Axboe authored
Don't ever use sg_next() on the last entry, it may not be valid!
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Peter Zijlstra authored
Provide BDI constructor/destructor hooks.
[akpm@linux-foundation.org: compile fix]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
The memset() of the sg entry was originally removed, because it could overwrite a chain pointer. But it's quite OK to memset() it when we know it's a valid entry, since it can't contain a chain pointer.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 16 Oct, 2007 8 commits
-
Fengguang Wu authored
Remove the size limit max_sectors_kb imposed on max_readahead_kb. The size restriction is unreasonable, especially when max_sectors_kb cannot grow larger than max_hw_sectors_kb, which can be rather small for some disk drives.
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
Expose this setting for now, so that users can play with enabling large commands without defaulting it to on globally. This is a debug patch; it will be dropped for the final versions.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
Convert the main rq mapper (blk_rq_map_sg()) to the sg helper setup.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
Then we can get rid of ->issue_flush_fn() and all the driver private implementations of that.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
This implements functionality to pass down or insert a barrier in a queue, without having data attached to it. The ->prepare_flush_fn() infrastructure from data barriers is reused to provide this functionality.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
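To illustrate what an empty barrier enables, here is a sketch of issuing a cache flush as a zero-length barrier bio; the function is hypothetical (the real conversion of blkdev_issue_flush() is the commit just above), allocation failure and EOPNOTSUPP handling are omitted:

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/completion.h>

    static void empty_barrier_end_io(struct bio *bio, int err)
    {
            complete(bio->bi_private);
    }

    static int issue_empty_barrier(struct block_device *bdev)
    {
            DECLARE_COMPLETION_ONSTACK(wait);
            struct bio *bio;
            int ret = 0;

            bio = bio_alloc(GFP_KERNEL, 0);         /* no data pages at all */
            bio->bi_bdev = bdev;
            bio->bi_end_io = empty_barrier_end_io;
            bio->bi_private = &wait;

            submit_bio(1 << BIO_RW_BARRIER, bio);   /* barrier without payload */
            wait_for_completion(&wait);

            if (!bio_flagged(bio, BIO_UPTODATE))
                    ret = -EIO;
            bio_put(bio);
            return ret;
    }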
-
Jens Axboe authored
End of device check is done twice in __generic_make_request() and it's fully inlined each time. Factor out bio_check_eod().
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
We can use this helper in the elevator core for BLKPREP_KILL, and it'll also be useful for the empty barrier patch.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
Fix ?: construct, a typo, whitespace, and similar.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 12 Oct, 2007 1 commit
-
Greg Kroah-Hartman authored
A number of different drivers incorrectly access the kobject name field directly. This is not correct, as the name might not be in the array. Use the proper accessor function instead.
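For example, using a gendisk as the kind of object involved (the caller is hypothetical, the accessor is real), the fix amounts to:

    #include <linux/genhd.h>
    #include <linux/kobject.h>

    static void log_disk_name(struct gendisk *disk)
    {
            /* kobject_name() instead of disk->kobj.name: the name may not
             * live in the embedded array */
            printk(KERN_INFO "disk %s\n", kobject_name(&disk->kobj));
    }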
-
- 10 Oct, 2007 10 commits
-
NeilBrown authored
As bi_end_io is only called once when the request is complete, the 'size' argument is now redundant. Remove it. Now there is no need for bio_endio to subtract the size completed from bi_size. So don't do that either. While we are at it, change bi_end_io to return void.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
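After this change a completion callback looks roughly like the sketch below: called once per bio, no partial-size argument, void return. The handler and submit helper names are hypothetical:

    #include <linux/bio.h>

    static void my_bi_end_io(struct bio *bio, int error)
    {
            /* the whole bio is finished; no bi_size bookkeeping needed here */
            if (error)
                    printk(KERN_ERR "bio error %d\n", error);
            bio_put(bio);
    }

    static void submit_one(struct bio *bio)
    {
            bio->bi_end_io = my_bi_end_io;
            submit_bio(READ, bio);
    }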
-
NeilBrown authored
The only caller of bio_endio that does not pass the full bi_size is end_that_request_first. Also, no ->bi_end_io method is really interested in bi_size being decremented. So move the decrement and related code into ll_rw_blk and merge it with order_bio_endio to form req_bio_endio, which does endio functionality specific to request completion. As some ->bi_end_io methods do check bi_size of 0, we set it thus for now, but that will go in the next patch.
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./block/ll_rw_blk.c | 42 +++++++++++++++++++++++++++---------------
./fs/bio.c | 23 +++++++++++------------
2 files changed, 38 insertions(+), 27 deletions(-)
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
NeilBrown authored
The entire function of flush_dry_bio_endio is to undo the effects of bio_endio (when called on a barrier request). So remove the function and the call to bio_endio. This allows us to remove "bi_size" from "struct request_queue".
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./block/ll_rw_blk.c | 39 ++-------------------------------------
./include/linux/blkdev.h | 1 -
2 files changed, 2 insertions(+), 38 deletions(-)
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Satyam Sharma authored
blk_cpu_notifier is marked as __devinitdata, but __devinitdata need not be __init even if HOTPLUG_CPU=n, which wastes space. It should be marked __cpuinitdata, and the callback itself as __cpuinit.
Signed-off-by: Satyam Sharma <satyam@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Jens Axboe authored
Remove one level of nesting where appropriate.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
NeilBrown authored
These have very similar functions and should share code where possible.
Signed-off-by: Neil Brown <neilb@suse.de>
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
NeilBrown authored
blk_rq_bio_prep is exported for use in exactly one place. That place can benefit from using the new blk_rq_append_bio instead. So:
- change dm-emc to call blk_rq_append_bio
- stop exporting blk_rq_bio_prep, and
- initialise rq_disk in blk_rq_bio_prep, as dm-emc needs it.
Signed-off-by: Neil Brown <neilb@suse.de>
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
NeilBrown authored
ll_back_merge_fn is currently exported to SCSI, where it is used, together with blk_rq_bio_prep, in exactly the same way these functions are used in __blk_rq_map_user. So move the common code into a new function (blk_rq_append_bio), and don't export ll_back_merge_fn any longer.
Signed-off-by: Neil Brown <neilb@suse.de>
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
NeilBrown authored
Every usage of rq_for_each_bio wraps a usage of bio_for_each_segment, so these can be combined into rq_for_each_segment. We define "struct req_iterator" to hold the 'bio' and 'index' that are needed for the double iteration.
Signed-off-by: Neil Brown <neilb@suse.de>
Various compile fixes by me...
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
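A short example of the combined iterator replacing the old nested rq_for_each_bio()/bio_for_each_segment() loops; the helper function is hypothetical, the macro and struct are the ones introduced here:

    #include <linux/blkdev.h>

    static unsigned int count_request_bytes(struct request *rq)
    {
            struct req_iterator iter;
            struct bio_vec *bvec;
            unsigned int bytes = 0;

            /* one loop walks every bio_vec of every bio in the request */
            rq_for_each_segment(bvec, rq, iter)
                    bytes += bvec->bv_len;

            return bytes;
    }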
-
NeilBrown authored
blk_recalc_rq_segments calls blk_recount_segments on each bio, then does some extra calculations to handle segments that overlap two bios. If we merge the code from blk_recount_segments into blk_recalc_rq_segments, we can process the whole request one bio_vec at a time, and not need the messy cross-bio calculations. Then blk_recount_segments can be implemented by calling blk_recalc_rq_segments, passing it a simple on-stack request which stores just the bio.
Signed-off-by: Neil Brown <neilb@suse.de>
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 14 Sep, 2007 1 commit
-
Nick Piggin authored
Should add some comments for the tag barriers (they won't be so important if we can switch over to the explicit _lock bitops, but for now we should make it clear). Jens' original patch said a barrier after the test_and_clear_bit was also required. I can't see why (and it would prevent the use of the _lock bitop).
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 Sep, 2007 1 commit
-
Jens Axboe authored
There's a race condition in blk_queue_end_tag() for shared tag maps; users include stex (promise supertrak thingy) and qla2xxx. The former at least has reported bugs in this area, not sure why we haven't seen any for the latter. It could be because the window is narrow and that other conditions in the qla2xxx code hide this. It's a real bug, though, as the stex smp users can attest. We need to ensure two things: the tag bit clearing needs to happen AFTER we cleared the tag pointer, as the tag bit clearing/setting is what protects this map. Secondly, we need to ensure that the visibility of the tag pointer and tag bit clear are ordered properly.
[ I removed the SMP barriers - "test_and_clear_bit()" already implies all the required barriers. -- Linus ]
Also see http://bugzilla.kernel.org/show_bug.cgi?id=7842
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
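The ordering the message describes, as a sketch; this is not the full blk_queue_end_tag() (error reporting is trimmed, the function name is hypothetical, and the usual queue-lock context is assumed):

    #include <linux/blkdev.h>

    static void end_tag_sketch(struct blk_queue_tag *bqt, struct request *rq, int tag)
    {
            /* 1: drop the request pointer first */
            bqt->tag_index[tag] = NULL;

            /* 2: only then clear the tag bit; test_and_clear_bit() implies the
             * barriers needed to order the NULL store before the tag can be
             * handed out again */
            if (unlikely(!test_and_clear_bit(tag, bqt->tag_map)))
                    return;         /* tag was not even set: double completion */

            rq->cmd_flags &= ~REQ_QUEUED;
            rq->tag = -1;
    }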
-
- 11 Aug, 2007 1 commit
-
Alan D. Brunelle authored
This patch provides more information concerning REMAP operations on block IOs. The additional information provides clearer details at the user level, and supports post-processing analysis in btt.
o Adds in partition remaps on the same device.
o Fixed up the remap information in DM to be in the right order.
o Sent up mapped-from and mapped-to device information.
Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 24 Jul, 2007 1 commit
-
Jens Axboe authored
Some of the code has been gradually transitioned to using the proper struct request_queue, but there's lots left. So do a full sweep of the kernel, get rid of this typedef, and replace its uses with the proper type.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-