1. 07 Sep, 2009 1 commit
    • IB/mad: Fix possible lock-lock-timer deadlock · 6b2eef8f
      Roland Dreier authored
      Lockdep reported a possible deadlock with cm_id_priv->lock,
      mad_agent_priv->lock and mad_agent_priv->timed_work.timer; this
      happens because the mad module does
      
      	cancel_delayed_work(&mad_agent_priv->timed_work);
      
      while holding mad_agent_priv->lock.  cancel_delayed_work() internally
      does del_timer_sync(&mad_agent_priv->timed_work.timer).
      
      This can turn into a deadlock because mad_agent_priv->lock is taken
      inside cm_id_priv->lock, so we can get the following set of contexts
      that deadlock each other:
      
       A: holding cm_id_priv->lock, waiting for mad_agent_priv->lock
       B: holding mad_agent_priv->lock, waiting for del_timer_sync()
       C: interrupt during mad_agent_priv->timed_work.timer that takes
          cm_id_priv->lock
      
      Fix this by using the new __cancel_delayed_work() interface (which
      internally does del_timer() instead of del_timer_sync()) in all the
      places where we are holding a lock.
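      
      For reference, a minimal sketch of the before/after pattern (the locking calls
      around it are simplified; only cancel_delayed_work(), __cancel_delayed_work(),
      del_timer()/del_timer_sync() and the field names above come from the code and
      message being discussed):
      
      	spin_lock_irqsave(&mad_agent_priv->lock, flags);
      
      	/*
      	 * Old: cancel_delayed_work() ends in del_timer_sync(), which waits
      	 * for a running timer handler -- and that handler's call chain can
      	 * need cm_id_priv->lock, whose holder may in turn be waiting for
      	 * mad_agent_priv->lock, closing the loop described above.
      	 *
      	 *	cancel_delayed_work(&mad_agent_priv->timed_work);
      	 *
      	 * New: __cancel_delayed_work() only does del_timer(), so it never
      	 * waits for a running handler and is safe to call under the lock.
      	 */
      	__cancel_delayed_work(&mad_agent_priv->timed_work);
      
      	spin_unlock_irqrestore(&mad_agent_priv->lock, flags);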
      
      Addresses: http://bugzilla.kernel.org/show_bug.cgi?id=13757
      Reported-by: Bart Van Assche <bart.vanassche@gmail.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  2. 06 Sep, 2009 1 commit
  3. 05 Sep, 2009 31 commits
  4. 04 Sep, 2009 7 commits
    • ocfs2: ocfs2_write_begin_nolock() should handle len=0 · 8379e7c4
      Sunil Mushran authored
      This bug was introduced by mainline commit e7432675 and causes
      ocfs2_write_begin_nolock() to oops when len=0.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
      Cc: stable@kernel.org
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
    • dm snapshot: fix on disk chunk size validation · ae0b7448
      Mikulas Patocka authored
      Fix some problems seen in the chunk size processing when activating a
      pre-existing snapshot.
      
      For a new snapshot, the chunk size can either be supplied by the creator
      or a default value can be used.  For an existing snapshot, the
      chunk size in the snapshot header on disk should always be used.
      
      If someone attempts to load an existing snapshot and has the 'default
      chunk size' option set, the kernel uses its default value even when it
      is incorrect for the snapshot being loaded.  This patch ensures the
      correct on-disk value is always used.
      
      Secondly, when the code does use the chunk size stored on the disk it is
      prudent to revalidate it, so the code can exit cleanly if it got
      corrupted as happened in
      https://bugzilla.redhat.com/show_bug.cgi?id=461506 .
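      
      As a rough illustration (the helper name and error string below are hypothetical,
      not taken from the patch), the revalidation amounts to applying the same sanity
      checks to the on-disk value that a user-supplied value would get, and failing the
      load cleanly if they do not hold:
      
      	static int check_on_disk_chunk_size(unsigned chunk_size, char **error)
      	{
      		/* Zero or a non-power-of-2 can only come from corruption. */
      		if (!chunk_size || !is_power_of_2(chunk_size)) {
      			*error = "Invalid on-disk chunk size";
      			return -EINVAL;
      		}
      
      		return 0;
      	}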
      
      Cc: stable@kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm exception store: split set_chunk_size · 2defcc3f
      Mikulas Patocka authored
      Break the function set_chunk_size to two functions in preparation for
      the fix in the following patch.
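      
      Roughly, the split has the following shape (a sketch, not the literal patch; the
      validator's name is written in the style of the exception store API but should be
      treated as illustrative). Keeping only the string parsing in set_chunk_size()
      leaves a second entry point that later code can call directly with a numeric
      value, for example one read from an on-disk header:
      
      	/* Validates and records a numeric chunk size; callable on its own. */
      	static int dm_exception_store_set_chunk_size(struct dm_exception_store *store,
      						     unsigned long chunk_size,
      						     char **error);
      
      	/* The original entry point now only parses the table argument. */
      	static int set_chunk_size(struct dm_exception_store *store,
      				  const char *chunk_size_arg, char **error)
      	{
      		unsigned long chunk_size;
      		char *value;
      
      		chunk_size = simple_strtoul(chunk_size_arg, &value, 10);
      		if (*chunk_size_arg == '\0' || *value != '\0') {
      			*error = "Invalid chunk size";
      			return -EINVAL;
      		}
      
      		return dm_exception_store_set_chunk_size(store, chunk_size, error);
      	}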
      
      Cc: stable@kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm snapshot: fix header corruption race on invalidation · 61578dcd
      Mikulas Patocka authored
      If a persistent snapshot fills up, a race can corrupt the on-disk header
      which causes a crash on any future attempt to activate the snapshot
      (typically while booting).  This patch fixes the race.
      
      When the snapshot overflows, __invalidate_snapshot is called, which calls
      the snapshot store's drop_snapshot method.  That reaches
      persistent_drop_snapshot, which calls write_header, and write_header
      constructs the new header in the "area" buffer.
      
      Concurrently, an existing kcopyd job may finish and invoke copy_callback
      and the commit_exception method, which reaches persistent_commit_exception.
      persistent_commit_exception does no locking, relying on callbacks being
      single-threaded, but it can race with the invalidation path and overwrite
      the header while it is being written.
      
      The result of this race is a corrupted header being written that can
      lead to a crash on further reactivation (if chunk_size is zero in the
      corrupted header).
      
      The fix is to use separate memory areas for each.
      
      See the bug: https://bugzilla.redhat.com/show_bug.cgi?id=461506
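      
      A sketch of what "separate memory areas" means in practice (field and constant
      names are given from memory and simplified; this is not the literal patch):
      struct pstore gains a buffer used only by write_header(), so invalidation can no
      longer scribble over the exception area that persistent_commit_exception() is
      filling in:
      
      	static int write_header(struct pstore *ps)
      	{
      		struct disk_header *dh;
      
      		/* Build the header in its own buffer, not in ps->area. */
      		memset(ps->header_area, 0, ps->store->chunk_size << SECTOR_SHIFT);
      
      		dh = ps->header_area;
      		dh->magic = cpu_to_le32(SNAP_MAGIC);
      		dh->valid = cpu_to_le32(ps->valid);
      		dh->version = cpu_to_le32(ps->version);
      		dh->chunk_size = cpu_to_le32(ps->store->chunk_size);
      
      		/* Metadata write of chunk 0 via the area-aware chunk_io(). */
      		return chunk_io(ps, ps->header_area, 0, WRITE, 1);
      	}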
      
      Cc: stable@kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm snapshot: refactor zero_disk_area to use chunk_io · 02d2fd31
      Mikulas Patocka authored
      Refactor chunk_io to prepare for the fix in the following patch.
      
      Pass an area pointer to chunk_io and simplify zero_disk_area to use
      chunk_io.  No functional change.
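      
      In sketch form (signatures simplified from memory, not the literal diff):
      chunk_io() now takes the buffer to transfer explicitly, and zero_disk_area()
      becomes a thin wrapper around it:
      
      	static int chunk_io(struct pstore *ps, void *area, chunk_t chunk,
      			    int rw, int metadata);
      
      	static int zero_disk_area(struct pstore *ps, chunk_t area)
      	{
      		/* Write the pre-zeroed buffer over the given on-disk area. */
      		return chunk_io(ps, ps->zero_area, area_location(ps, area), WRITE, 0);
      	}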
      
      Cc: stable@kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm log: userspace add luid to distinguish between concurrent log instances · 7ec23d50
      Jonathan Brassow authored
      Device-mapper userspace logs (like the clustered log) are
      identified by a universally unique identifier (UUID).  This
      identifier is used to associate requests from the kernel to
      a specific log in userspace.  The UUID must be unique everywhere,
      since multiple machines may use this identifier when communicating
      about a particular log, as is the case for cluster logs.
      
      Sometimes, device-mapper/LVM may re-use a UUID.  This is the
      case during pvmoves, when moving from one segment of an LV
      to another, or when resizing a mirror, etc.  In these cases,
      a new log is created with the same UUID and loaded in the
      "inactive" slot.  When a device-mapper "resume" is issued,
      the "live" table is deactivated and the new "inactive" table
      becomes "live".  (The "inactive" table can also be removed
      via a device-mapper 'clear' command.)
      
      The above two issues were colliding.  More than one log was being
      created with the same UUID, and there was no way to distinguish
      between them.  So, sometimes the wrong log would be swapped
      out during the exchange.
      
      The solution is to create a locally unique identifier,
      'luid', to go along with the UUID.  This new identifier is used
      to determine exactly which log is being referenced by the kernel
      when the log exchange is made.  The identifier is not
      universally safe, but it does not need to be, since
      create/destroy/suspend/resume operations are bound to a specific
      machine; and these are the operations that make up the exchange.
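      
      Purely as an illustration of the idea (this is not the actual implementation; the
      structure, helper and counter below are invented for the example), an identifier
      that only has to be unique on the local machine can come from something as simple
      as a monotonically increasing counter carried next to the UUID in each request:
      
      	static atomic64_t log_instance_seq = ATOMIC64_INIT(0);
      
      	struct log_instance_id {
      		char uuid[DM_UUID_LEN];		/* shared cluster-wide */
      		uint64_t luid;			/* meaningful only locally */
      	};
      
      	static void init_log_instance_id(struct log_instance_id *id, const char *uuid)
      	{
      		strlcpy(id->uuid, uuid, sizeof(id->uuid));
      		id->luid = atomic64_inc_return(&log_instance_seq);
      	}
      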
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm raid1: do not allow log_failure variable to unset after being set · d2b69864
      Jonathan Brassow authored
      This patch fixes a bug that prevented the primary leg from being switched
      on failure even when the mirror was in-sync.
      
      The case involves the failure of the primary device along with
      the transient failure of the log device.  The problem is that
      bios can be put on the 'failures' list (due to log failure)
      before 'fail_mirror' is called due to the primary device failure.
      Normally, this is fine, but if the log device failure is transient,
      a subsequent iteration of the work thread, 'do_mirror', will
      reset 'log_failure'.  The 'do_failures' function then resets
      the 'in_sync' variable when processing bios on the failures list.
      The 'in_sync' variable is what is used to determine if the
      primary device can be switched in the event of a failure.  Since
      this has been reset, the primary device is incorrectly assumed
      to be not switchable.
      
      The case has been seen in the cluster mirror context, where one
      machine realizes the log device is dead before the other machines.
      As the responsibilities of the server migrate from one node to
      another (because the mirror is being reconfigured due to the failure),
      the new server may think for a moment that the log device is fine -
      thus resetting the 'log_failure' variable.
      
      In any case, it is inappropriate for us to reset the 'log_failure'
      variable.  The above bug simply illustrates that it can actually
      hurt us.
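      
      The shape of the fix, sketched (the wrapper below is invented for illustration;
      only struct mirror_set and the log_failure field come from the message above):
      the flag becomes one-way, set on failure and never cleared by a later
      successful flush:
      
      	static void note_log_flush(struct mirror_set *ms, int flush_error)
      	{
      		/*
      		 * Before: ms->log_failure = flush_error; -- a transiently
      		 * healthy log erases the recorded failure, and do_failures()
      		 * then resets in_sync while draining the failures list.
      		 *
      		 * After: once set, log_failure stays set.
      		 */
      		if (flush_error)
      			ms->log_failure = 1;
      	}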
      
      Cc: stable@kernel.org
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>