- 16 Jul, 2007 23 commits
-
-
Avi Kivity authored
Needs to be set on vcpu 0 only.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Shani Moideen authored
Signed-off-by: Shani Moideen <shani.moideen@wipro.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
When a vcpu causes a shadow tlb entry to have reduced permissions, it must also clear the tlb on remote vcpus. We do that by:
- setting a bit on the vcpu that requests a tlb flush before the next entry
- if the vcpu is currently executing, sending an IPI to make sure it exits before we continue
Signed-off-by: Avi Kivity <avi@qumranet.com>
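The request-bit scheme described above can be sketched in plain C. This is a hypothetical userspace simulation for illustration; names like `vcpu_sim` and `request_remote_tlb_flush` are not the actual KVM API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simulation of the scheme above: a flush-request bit is set
 * on the remote vcpu, and an IPI is sent only if that vcpu is currently
 * executing guest code, forcing it to exit promptly. */
struct vcpu_sim {
    bool tlb_flush_requested;  /* flush shadow tlb before next guest entry */
    bool running;              /* currently executing guest code */
    bool ipi_sent;             /* recorded for illustration only */
};

static void request_remote_tlb_flush(struct vcpu_sim *v)
{
    v->tlb_flush_requested = true;
    if (v->running)
        v->ipi_sent = true;    /* kick the vcpu out of guest mode */
}

/* Returns true if a flush happened on this entry. */
static bool vcpu_enter_guest(struct vcpu_sim *v)
{
    bool flushed = v->tlb_flush_requested;
    if (flushed)
        v->tlb_flush_requested = false;  /* flush the shadow tlb here */
    v->running = true;
    return flushed;
}
```

Note that no IPI is needed when the target vcpu is not running: the pending request bit alone guarantees the flush happens before it next enters the guest.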
-
Avi Kivity authored
This has two use cases: the bios can't boot from disk, and guest smp bootstrap.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Will soon have a third user.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Robert P. J. Day authored
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Switching the guest paging context may require us to allocate memory, which might fail. Instead of wiring up error paths everywhere, make context switching lazy and actually do the switch before the next guest entry, where we can return an error if allocation fails.
Signed-off-by: Avi Kivity <avi@qumranet.com>
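The lazy scheme can be sketched as follows. This is a hypothetical simulation, not the real mmu code: the point is simply that deferring the reload funnels every caller through one place where an allocation failure can be reported.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: instead of reloading the paging context (which may
 * allocate) at every point that changes paging state, mark the context
 * dirty and do the real work, with error handling, in one place before
 * the next guest entry. */
struct mmu_sim {
    bool context_dirty;
    bool alloc_fails;   /* simulate an allocation failure */
};

static void set_cr3_sim(struct mmu_sim *m)
{
    m->context_dirty = true;   /* defer the actual reload */
}

/* Single error path: returns 0 on success, -1 (think -ENOMEM) on failure,
 * leaving the context dirty so the switch is retried on the next entry. */
static int vcpu_pre_entry(struct mmu_sim *m)
{
    if (!m->context_dirty)
        return 0;
    if (m->alloc_fails)
        return -1;
    m->context_dirty = false;  /* context switched successfully */
    return 0;
}
```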
-
Eddie Dong authored
MSR_EFER.LME/LMA bits are automatically saved/restored by VMX hardware; KVM only needs to save the NX/SCE bits at the time of a heavyweight VM exit. But clearing the NX bit in the host environment may cause the system to hang if the host page table is using EXB bits, thus we leave the NX bit as it is. If host NX=1 and guest NX=0, we can do a guest page table EXB bit check before inserting a shadow pte (though no guest is expecting to see this kind of gp fault). If host NX=0, we present no Execute-Disable feature to the guest, so the host NX=0, guest NX=1 combination cannot occur. This patch reduces raw vmexit time by ~27%. Me: fix compile warnings on i386.
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Eddie Dong authored
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Eddie Dong authored
In a lightweight exit (where we exit and reenter the guest without scheduling or exiting to userspace in between), we don't need various msrs on the host, and avoiding shuffling them around reduces raw exit time by 8%. i386 compile fix by Daniel Hecken <dh@bahntechnik.de>.
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Nitin A Kamble authored
Instructions with the address size override prefix (opcode 0x67) cause a #SS fault with error code 0 in VM86 mode. Forward them to the emulator.
Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
This makes oprofile dumps and disassembly easier to read.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
kunmap() expects a struct page, not a virtual address. Fixes an oops loading kvm-intel.ko on i386 with CONFIG_HIGHMEM. Thanks to Michael Ivanov <deruhu@peterstar.ru> for reporting.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
The real mode tr needs to be set to a specific tss so that I/O instructions can function. Divert the new tr values to the real mode save area, from where they will be restored on transition to protected mode. This fixes some crashes on reboot when the bios accesses an I/O instruction.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
If we set an msr via an ioctl() instead of by handling a guest exit, we have the host state loaded, so reloading the msrs would clobber host state instead of guest state. This fixes a host oops (and loss of a cpu) on a guest reboot.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Easier to keep track of where the fpu is this way.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Everyone owns a piece of the exception bitmap, but they happily write to the entire thing like there's no tomorrow. Centralize handling in update_exception_bitmap() and have everyone call that.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
The lightweight vmexit path avoids saving and reloading certain host state. However, in certain cases lightweight vmexit handling can schedule(), which requires reloading the host state. So we store the host state in the vcpu structure, and reload it if we relinquish the vcpu.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
i386 wants fs for accessing the pda even on a lightweight exit, so ensure we can always restore it. This fixes a regression on i386 introduced by the lightweight vmexit patch.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Many msrs and the like will only be used by the host if we schedule() or return to userspace. Therefore, we avoid saving them if we handle the exit within the kernel, and if a reschedule is not requested. Based on a patch from Eddie Dong <eddie.dong@intel.com> with a couple of fixes by me.
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
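The lightweight/heavyweight distinction running through these patches can be sketched as follows. This is a hypothetical simulation, not the actual vmx code; it only shows the bookkeeping idea of deferring host MSR restores.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: host MSRs are restored only on a heavyweight exit
 * (a reschedule or a return to userspace). A lightweight exit handled
 * entirely in the kernel skips the expensive save/restore. */
struct msr_state {
    bool guest_msrs_loaded;
    int host_restores;   /* count of (expensive) host MSR restores */
};

static void handle_vmexit(struct msr_state *s, bool heavyweight)
{
    if (heavyweight && s->guest_msrs_loaded) {
        s->guest_msrs_loaded = false;
        s->host_restores++;   /* restore host MSRs here */
    }
    /* lightweight path: reenter the guest with guest MSRs still loaded */
}
```

Several lightweight exits in a row thus cost no MSR traffic at all; the restore happens once, at the first heavyweight exit.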
-
He, Qing authored
This patch enables I/O bitmap control on vmx and unmasks port 0x80 to avoid VM exits caused by accessing it. Port 0x80 is used for delays (see include/asm/io.h), so handling VM exits on its access is unnecessary and slows things down. This patch improves the kernel build test by around 3%~5%. Because every VM uses the same io bitmap, it is shared between all VMs rather than being a per-VM data structure.
Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
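The bitmap idea can be sketched in a few lines: one bit per port, set meaning "exit on access". This is a simplified, hypothetical model (on VMX, I/O bitmap A covers ports 0x0000-0x7fff; the real code programs the bitmap addresses into the VMCS).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: intercept every port by default, then clear the
 * bit for port 0x80 so the common io_delay() access passes through
 * without a VM exit. */
#define IO_BITMAP_BYTES (0x8000 / 8)   /* ports 0x0000-0x7fff */
static uint8_t io_bitmap_a[IO_BITMAP_BYTES];

static void init_io_bitmap(void)
{
    memset(io_bitmap_a, 0xff, sizeof(io_bitmap_a)); /* 1 = exit on access */
    io_bitmap_a[0x80 / 8] &= (uint8_t)~(1u << (0x80 % 8));
}

static int port_causes_exit(uint16_t port)
{
    return (io_bitmap_a[port / 8] >> (port % 8)) & 1;
}
```

Since the bitmap contents are identical for every guest, a single shared copy suffices, exactly as the commit message describes.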
-
- 15 Jun, 2007 1 commit
-
-
Avi Kivity authored
The lazy fpu changes did not take into account that some vmexit handlers can sleep. Move loading the guest state into the inner loop so that it can be reloaded if necessary, and move loading the host state into vmx_vcpu_put() so it can be performed whenever we relinquish the vcpu.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 01 Jun, 2007 1 commit
-
-
Sam Ravnborg authored
Fix the following section mismatch warning in kvm-intel.o: WARNING: o-i386/drivers/kvm/kvm-intel.o(.init.text+0xbd): Section mismatch: reference to .exit.text: (between 'hardware_setup' and 'vmx_disabled_by_bios') The function free_kvm_area is used in the function alloc_kvm_area, which is marked __init. The __exit area is discarded by some archs during link-time if a module is built in, resulting in an oops. Note: This warning is only seen by my local copy of modpost, but the change will soon hit upstream.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 May, 2007 1 commit
-
-
Alexey Dobriyan authored
The first thing mm.h does is include sched.h, solely for the can_do_mlock() inline function, which dereferences "current". By dealing with can_do_mlock(), mm.h can be detached from sched.h, which is good. See below why. This patch:
a) removes the unconditional inclusion of sched.h from mm.h
b) makes can_do_mlock() a normal function in mm/mlock.c
c) exports can_do_mlock() so compilation does not break
d) adds sched.h inclusions back to files that were getting it indirectly
e) adds less bloated headers (asm/signal.h, jiffies.h) to some files that were getting them indirectly
Net result:
a) mm.h users get less code to open, read, preprocess, parse, ... if they don't need sched.h
b) sched.h stops being a dependency for a significant number of files: on x86_64 allmodconfig, touching sched.h results in a recompile of 4083 files; after the patch it's only 3744 (-8.3%).
Cross-compile tested on all arm defconfigs, all mips defconfigs, all powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64 x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig as well as my two usual configs.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 May, 2007 14 commits
-
-
Avi Kivity authored
As we no longer emulate in userspace, this is meaningless. We don't compute it on SVM anyway.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Anthony Liguori authored
Only save/restore the FPU host state when the guest is actually using the FPU.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
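The lazy FPU handling enabled here and in the CR0 shadow patch below keys off CR0.TS. A hypothetical, simplified simulation of that decision (not the actual vmx code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of lazy FPU switching: the FPU is swapped only when
 * the guest actually owns it, i.e. its CR0.TS is clear. When TS is set,
 * the guest's first FPU use traps (#NM), so nothing need be done now. */
#define CR0_TS (1u << 3)

struct fpu_sim {
    uint32_t guest_cr0;
    int fpu_switches;   /* count of (expensive) fpu save/restore pairs */
};

static void vcpu_switch(struct fpu_sim *v)
{
    if (!(v->guest_cr0 & CR0_TS))
        v->fpu_switches++;  /* guest is using the FPU: save/restore it */
}
```

A guest that never touches the FPU thus never pays the switching cost.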
-
Anthony Liguori authored
Set all of the host mask bits for CR0 so that we can maintain a proper shadow of CR0. This exposes CR0.TS, paving the way for lazy fpu handling.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
It slows down Windows x64 horribly.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Make the exit statistics per-vcpu instead of global. This gives a 3.5% boost when running one virtual machine per core on my two socket dual core (4 cores total) machine.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Intel hosts only support syscall/sysret in long mode (and only if efer.sce is enabled), so only reload the related MSR_K6_STAR if the guest will actually be able to use it. This reduces vmexit cost by about 500 cycles (6400 -> 5870) on my setup.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
No meat in that file.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Some msrs are only used by x86_64 instructions, and are therefore not needed when the guest is in legacy mode. By not bothering to switch them, we reduce vmexit latency by 2400 cycles (from about 8800) when running a 32-bit guest on a 64-bit host.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
The automatically switched msrs are never changed on the host (with the exception of MSR_KERNEL_GS_BASE) and thus there is no need to save them on every vm entry. This reduces vmexit latency by ~400 cycles on i386 and by ~900 cycles (10%) on x86_64.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Eric Sesterhenn / Snakebyte authored
The expression sp - 6 < sp, where sp is a u16, does not behave as intended in C: 'sp - 6' is promoted to int, so the subtraction never wraps and the comparison is always true. gcc 4.2 actually warns about it. Replace it with a simpler test.
Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Avi Kivity <avi@qumranet.com>
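The promotion pitfall can be demonstrated in a few lines. This is a hypothetical standalone example, not the actual emulator code:

```c
#include <assert.h>
#include <stdint.h>

/* With a 16-bit sp, both operands of "sp - 6 < sp" are promoted to int,
 * so the subtraction is done in int arithmetic and never wraps: the
 * comparison is always true and cannot detect 16-bit wraparound. */
static int old_check(uint16_t sp)
{
    return sp - 6 < sp;   /* int arithmetic: always 1 */
}

/* The simpler, well-defined replacement test. */
static int new_check(uint16_t sp)
{
    return sp < 6;        /* true exactly when sp - 6 would wrap a u16 */
}
```

The replacement states the intent directly (is there room to push 6 bytes?) instead of relying on unsigned wraparound that the promotion rules silently defeat.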
-
Avi Kivity authored
Mapping a guest page to a host page is a common operation. Currently, one has first to find the memory slot where the page belongs (gfn_to_memslot), then locate the page itself (gfn_to_page()). This is clumsy, and also won't work well with memory aliases. So simplify gfn_to_page() not to require memory slot translation first, and instead do it internally.
Signed-off-by: Avi Kivity <avi@qumranet.com>
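The folded lookup can be sketched as follows. Types and names (`memslot_sim`, `gfn_to_page_sim`) are illustrative stand-ins, not the real KVM structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: the slot search happens inside the lookup, so the
 * caller passes a gfn and gets back the page directly, without calling a
 * separate gfn_to_memslot() first. */
struct memslot_sim {
    uint64_t base_gfn;   /* first guest frame number in the slot */
    uint64_t npages;     /* number of pages in the slot */
    void **pages;        /* host pages backing the slot */
};

static void *gfn_to_page_sim(struct memslot_sim *slots, int nslots,
                             uint64_t gfn)
{
    for (int i = 0; i < nslots; i++) {
        struct memslot_sim *s = &slots[i];
        if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
            return s->pages[gfn - s->base_gfn];  /* slot found internally */
    }
    return NULL;  /* no slot maps this gfn */
}
```

Hiding the slot step also gives the lookup a single place to apply memory aliases later, which is exactly the motivation the commit cites.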
-
Avi Kivity authored
No longer interesting.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
As usual, we need to mangle segment registers when emulating real mode, as vm86 has specific constraints. We special-case the reset segment base, and set the "access rights" (or descriptor flags) to vm86-compatible values. This fixes reboot on vmx.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
set_cr0_no_modeswitch() was a hack to avoid corrupting segment registers. As we now cache the protected mode values on entry to real mode, this isn't an issue anymore, and it interferes with reboot (which usually _is_ a modeswitch).
Signed-off-by: Avi Kivity <avi@qumranet.com>
-