Commit 2668dab7 authored by Carsten Otte, committed by Avi Kivity

KVM: s390: Fix memory slot versus run - v3

This patch fixes incorrect behaviour in the kvm backend for s390.
In case virtual cpus are created before the corresponding
memory slot is registered, we need to update the sie
control blocks for those virtual cpus.

*updates in v3*
In consideration of the s390 memslot constraints, locking was changed
to trylock. These locks should never be contended, as vcpus can't run
without the single memslot we assign when running this code. To ensure
this never deadlocks even if other code changes, the code uses trylocks
and bails out if it can't get all the locks.
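
For readers unfamiliar with the pattern, here is a minimal user-space
sketch of trylock-all-or-bail, rebuilt with pthreads for illustration;
NR_VCPUS, vcpu_mutex and lock_all_vcpus are hypothetical names, not
kernel symbols. Note that the failure path only releases mutexes that
were actually acquired before the failing trylock:

/* build with: cc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define NR_VCPUS 4

static pthread_mutex_t vcpu_mutex[NR_VCPUS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/* Try to take every vcpu lock; on contention, release what we got and bail. */
static int lock_all_vcpus(void)
{
	int i;

	for (i = 0; i < NR_VCPUS; i++) {
		if (pthread_mutex_trylock(&vcpu_mutex[i]) != 0)
			goto fail_out;
	}
	return 0;

fail_out:
	/* unlock only the mutexes acquired before the failing trylock */
	while (--i >= 0)
		pthread_mutex_unlock(&vcpu_mutex[i]);
	return -1;
}

int main(void)
{
	if (lock_all_vcpus() == 0) {
		printf("all vcpu locks taken, safe to update the memslot\n");
		/* ...update shared state here, then unlock in reverse order... */
		for (int i = NR_VCPUS - 1; i >= 0; i--)
			pthread_mutex_unlock(&vcpu_mutex[i]);
	} else {
		printf("a vcpu lock was contended, bailing out\n");
	}
	return 0;
}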

Additionally, most of the discussed special conditions for s390, like
allowing only one memslot and requiring user_alloc, are now checked
for validity in kvm_arch_set_memory_region.
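
As an illustration, the checks described above can be restated as a
stand-alone function; struct demo_region, demo_check_region and the
4 KiB page size are assumptions for this sketch, not the kernel's
definitions:

#include <stdio.h>

#define DEMO_PAGE_SIZE 4096ul

struct demo_region {
	unsigned int slot;              /* memslot number */
	unsigned long guest_phys_addr;  /* guest physical start */
	unsigned long userspace_addr;   /* start address in userland */
	unsigned long memory_size;      /* size in bytes */
	int user_alloc;                 /* user-allocated memory? */
};

static int demo_check_region(const struct demo_region *mem,
			     unsigned long existing_memsize)
{
	if (mem->slot || existing_memsize)  /* exactly one slot, set only once */
		return -1;
	if (mem->guest_phys_addr)           /* must start at guest address 0 */
		return -1;
	if (mem->userspace_addr & (DEMO_PAGE_SIZE - 1)) /* page-aligned start */
		return -1;
	if (mem->memory_size & (DEMO_PAGE_SIZE - 1))    /* page-aligned size */
		return -1;
	if (!mem->user_alloc)               /* only user-allocated memory */
		return -1;
	return 0;
}

int main(void)
{
	struct demo_region ok = { 0, 0, 0x100000, 256 * DEMO_PAGE_SIZE, 1 };

	printf("valid region: %s\n", demo_check_region(&ok, 0) ? "no" : "yes");
	return 0;
}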
Reported-by: Mijo Safradin <mijo@linux.vnet.ibm.com>
Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Christian Ehrhardt <ehrhardt@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
parent 58f8ac27
arch/s390/kvm/kvm-s390.c

@@ -657,6 +657,8 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot old,
 				   int user_alloc)
 {
+	int i;
+
 	/* A few sanity checks. We can have exactly one memory slot which has
 	   to start at guest virtual zero and which has to be located at a
 	   page boundary in userland and which has to end at a page boundary.
@@ -664,7 +666,7 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
 	   vmas. It is okay to mmap() and munmap() stuff in this slot after
 	   doing this call at any time */
 
-	if (mem->slot)
+	if (mem->slot || kvm->arch.guest_memsize)
 		return -EINVAL;
 
 	if (mem->guest_phys_addr)
@@ -676,15 +678,39 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
 	if (mem->memory_size & (PAGE_SIZE - 1))
 		return -EINVAL;
 
+	if (!user_alloc)
+		return -EINVAL;
+
+	/* lock all vcpus */
+	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+		if (!kvm->vcpus[i])
+			continue;
+		if (!mutex_trylock(&kvm->vcpus[i]->mutex))
+			goto fail_out;
+	}
+
 	kvm->arch.guest_origin = mem->userspace_addr;
 	kvm->arch.guest_memsize = mem->memory_size;
 
-	/* FIXME: we do want to interrupt running CPUs and update their memory
-	   configuration now to avoid race conditions. But hey, changing the
-	   memory layout while virtual CPUs are running is usually bad
-	   programming practice. */
+	/* update sie control blocks, and unlock all vcpus */
+	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+		if (kvm->vcpus[i]) {
+			kvm->vcpus[i]->arch.sie_block->gmsor =
+				kvm->arch.guest_origin;
+			kvm->vcpus[i]->arch.sie_block->gmslm =
+				kvm->arch.guest_memsize +
+				kvm->arch.guest_origin +
+				VIRTIODESCSPACE - 1ul;
+			mutex_unlock(&kvm->vcpus[i]->mutex);
+		}
+	}
 
 	return 0;
+
+fail_out:
+	for (; i >= 0; i--)
+		mutex_unlock(&kvm->vcpus[i]->mutex);
+	return -EINVAL;
 }
 
 void kvm_arch_flush_shadow(struct kvm *kvm)
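
As a worked example of the gmsor/gmslm arithmetic in the last hunk,
here is a minimal sketch; the guest origin, the guest size, and the
VIRTIODESCSPACE value of three 4 KiB pages are assumptions for
illustration and may differ from the real kernel headers:

#include <stdio.h>

#define DEMO_PAGE_SIZE  4096ul
#define VIRTIODESCSPACE (3ul * DEMO_PAGE_SIZE) /* assumed: 3 pages */

int main(void)
{
	unsigned long guest_origin  = 0x80000000ul; /* example userland mapping */
	unsigned long guest_memsize = 128ul << 20;  /* example: 128 MiB guest */

	/* guest memory start offset (gmsor) and last addressable byte (gmslm),
	   including the virtio descriptor pages appended after guest memory */
	unsigned long gmsor = guest_origin;
	unsigned long gmslm = guest_memsize + guest_origin +
			      VIRTIODESCSPACE - 1ul;

	printf("gmsor = 0x%lx\n", gmsor); /* 0x80000000 */
	printf("gmslm = 0x%lx\n", gmslm); /* 0x88002fff */
	return 0;
}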