Commit 33f775ee authored by Paolo 'Blaisorblade' Giarrusso, committed by Linus Torvalds

[PATCH] uml ubd driver: ubd_io_lock usage fixup

Add some comments about requirements for ubd_io_lock and expand its use.

When an irq signals that the "controller" (i.e. another thread on the host,
which performs the actual requests and is the only one blocked on I/O on the
host) has done some work, we call the request function (do_ubd_request) again
ourselves.

We now do that with ubd_io_lock held - that's useful to protect against
concurrent calls to elv_next_request and so on.

XXX: Maybe we shouldn't call the request function at all.  Input needed on
this.  Are we supposed to plug and unplug the queue?  That code "indirectly"
does that by setting a flag, called do_ubd, which makes the request function
return (it's a residue of the 2.4 block layer interface).

Meanwhile, however, merge this patch, which improves things.

Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent d7fb2c38
@@ -106,6 +106,8 @@ static inline void ubd_set_bit(__u64 bit, unsigned char *data)

 #define DRIVER_NAME "uml-blkdev"

+/* Can be taken in interrupt context, and is passed to the block layer to lock
+ * the request queue. Kernel side code knows that. */
 static DEFINE_SPINLOCK(ubd_io_lock);
 static DEFINE_MUTEX(ubd_lock);
@@ -497,6 +499,8 @@ static void __ubd_finish(struct request *req, int error)
 	end_request(req, 1);
 }

+/* Callable only from interrupt context - otherwise you need to do
+ * spin_lock_irq()/spin_lock_irqsave() */
 static inline void ubd_finish(struct request *req, int error)
 {
 	spin_lock(&ubd_io_lock);
@@ -504,7 +508,7 @@ static inline void ubd_finish(struct request *req, int error)
 	spin_unlock(&ubd_io_lock);
 }

-/* Called without ubd_io_lock held */
+/* Called without ubd_io_lock held, and only in interrupt context. */
 static void ubd_handler(void)
 {
 	struct io_thread_req req;
@@ -525,7 +529,9 @@ static void ubd_handler(void)
 	ubd_finish(rq, req.error);
 	reactivate_fd(thread_fd, UBD_IRQ);
+	spin_lock(&ubd_io_lock);
 	do_ubd_request(ubd_queue);
+	spin_unlock(&ubd_io_lock);
 }

 static irqreturn_t ubd_intr(int irq, void *dev)