- 31 Dec, 2008 21 commits
-
Marcelo Tosatti authored
Skip syncing global pages on cr3 switch (but not on cr4/cr0). This is important for Linux 32-bit guests with PAE, where the kmap page is marked as global.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Marcelo Tosatti authored
Instead of invoking the handler directly, collect pages into an array so the caller can work with them. This simplifies TLB flush collapsing.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
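For illustration, the collect-then-process shape this describes can be sketched as below; every identifier here is made up for the example, none of them are the patch's actual names.

    /* Hedged sketch: the walker only fills an array, so the caller can
     * batch the follow-up work (e.g. collapse many TLB flushes into one). */
    #define MAX_COLLECTED 16

    struct collected {
            void *page[MAX_COLLECTED];
            int nr;
    };

    static int collect_page(struct collected *c, void *page)
    {
            if (c->nr >= MAX_COLLECTED)
                    return -1;              /* caller processes and retries */
            c->page[c->nr++] = page;
            return 0;
    }

    /* caller, instead of a per-page handler callback:
     *      walk_children(root, &c);        // only collects
     *      for (i = 0; i < c.nr; i++)
     *              sync_page(c.page[i]);
     *      flush_tlb();                    // one flush for the whole batch
     */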
-
Guillaume Thouvenin authored
Instructions like shld have three operands, so we need to add a Src2 decode set. We start with Src2None, Src2CL, Src2ImmByte, and Src2One to support shld/shrd, and we will expand it later.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
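As an illustration, such a third-operand ("Src2") decode set can be carried in spare bits of the per-opcode flags; the bit positions below are assumptions for the sketch, not necessarily the emulator's actual layout.

    /* Hedged sketch: encode the Src2 operand type in the decode flags. */
    #define Src2None    (0u << 29)
    #define Src2CL      (1u << 29)   /* shift count in CL, e.g. shld r/m, r, CL */
    #define Src2ImmByte (2u << 29)   /* shift count as an immediate byte        */
    #define Src2One     (3u << 29)   /* implicit constant 1                     */
    #define Src2Mask    (7u << 29)

    /* decode step (sketch only, helpers are hypothetical):
     *      switch (flags & Src2Mask) {
     *      case Src2CL:      src2 = read_reg_cl();  break;
     *      case Src2ImmByte: src2 = fetch_imm8();   break;
     *      case Src2One:     src2 = 1;              break;
     *      }
     */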
-
Eduardo Habkost authored
This function can be used by the reboot or kdump code to forcibly disable SVM on the CPU.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eduardo Habkost authored
Create cpu_svm_disable() function.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
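At its core, disabling SVM on a CPU means clearing the SVME bit in the EFER MSR; the sketch below shows that idea only and is not necessarily the exact body of cpu_svm_disable().

    /* Hedged sketch: clear EFER.SVME on the current CPU. MSR_EFER and
     * EFER_SVME come from asm/msr-index.h; the real function may also
     * clear the VM_HSAVE_PA MSR or do additional checks. */
    static inline void example_svm_disable(void)
    {
            u64 efer;

            rdmsrl(MSR_EFER, efer);
            wrmsrl(MSR_EFER, efer & ~EFER_SVME);
    }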
-
Eduardo Habkost authored
Use a trick to keep the printk()s in has_svm() working as before. gcc will take care of not generating code for the 'msg' handling when the function is called with a NULL msg argument.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
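The pattern being described is roughly the following; svm_supported_by_cpuid() and the message string are made up for the sketch, only the NULL-msg idea comes from the commit.

    /* Hedged sketch of the NULL-msg trick: callers that pass a literal
     * NULL let gcc prove the 'if (msg)' branches dead and drop the
     * string handling, while the existing printk-style callers still
     * receive their diagnostic text. */
    static int example_has_svm(const char **msg)
    {
            if (!svm_supported_by_cpuid()) {        /* hypothetical helper */
                    if (msg)
                            *msg = "svm not available";
                    return 0;
            }
            return 1;
    }

    /* emergency path, no printing wanted:  example_has_svm(NULL);  */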
-
Eduardo Habkost authored
Add cpu_emergency_vmxoff() and its friends: cpu_vmx_enabled() and __cpu_emergency_vmxoff().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
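A minimal sketch of what such helpers boil down to follows; the real definitions may differ in naming and detail, and the error handling a true emergency path needs is omitted.

    /* Hedged sketch: CR4.VMXE says VMX operation has been enabled on
     * this CPU; if so, VMXOFF leaves VMX root mode so the machine can
     * be rebooted or kdump'ed safely. */
    static inline int example_cpu_vmx_enabled(void)
    {
            return read_cr4() & X86_CR4_VMXE;
    }

    static inline void example_cpu_emergency_vmxoff(void)
    {
            if (example_cpu_vmx_enabled())
                    asm volatile ("vmxoff");
    }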
-
Eduardo Habkost authored
Unfortunately we can't reuse the code from vmx hardware_disable() exactly as-is, because the KVM function relies on the __kvm_handle_fault_on_reboot() trick.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eduardo Habkost authored
It will be used by core code on kdump and reboot, to disable vmx if needed.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eduardo Habkost authored
Those definitions will be used by code outside KVM, so move them out of the KVM-specific source file. They are currently used only in kvm/vmx.c, which already includes asm/vmx.h, so they can be moved safely.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eduardo Habkost authored
svm.h will be used by core code that is independent of KVM, so I am moving it outside the arch/x86/kvm directory.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eduardo Habkost authored
vmx.h will be used by core code that is independent of KVM, so I am moving it outside the arch/x86/kvm directory.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Izik Eidus authored
Some areas of the kvm x86 mmu use a gfn offset inside a slot without unaliasing the gfn first. This patch makes sure the gfn is unaliased, and adds gfn_to_memslot_unaliased() to avoid recomputing the unaliased gfn when we already have it.
Signed-off-by: Izik Eidus <ieidus@redhat.com>
Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
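The relationship between the two lookups can be pictured as a thin wrapper; the body below is only a sketch of that relationship, not the patch's code.

    /* Hedged sketch: the plain lookup unaliases first, then defers to
     * the _unaliased variant; callers that already hold an unaliased
     * gfn call gfn_to_memslot_unaliased() directly and skip the
     * recomputation. */
    static struct kvm_memory_slot *example_gfn_to_memslot(struct kvm *kvm,
                                                          gfn_t gfn)
    {
            gfn = unalias_gfn(kvm, gfn);
            return gfn_to_memslot_unaliased(kvm, gfn);
    }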
-
Jan Kiszka authored
As suggested by Avi, this patch introduces a counter of VCPUs that have LVT0 set to NMI mode. Only if the counter is > 0 do we push the PIT ticks via all LAPIC LVT0 lines, to enable NMI watchdog support.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Acked-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
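The counter idea can be sketched as follows; the field and helper names are assumptions for illustration, not necessarily the patch's.

    /* Hedged sketch: bump the counter when a vcpu programs LVT0 into
     * NMI delivery mode, drop it when LVT0 leaves that mode, and only
     * fan PIT ticks out over LVT0 while the counter is non-zero. */
    static void example_track_lvt0_nmi(atomic_t *vcpus_in_nmi_mode,
                                       bool was_nmi, bool is_nmi)
    {
            if (was_nmi == is_nmi)
                    return;
            if (is_nmi)
                    atomic_inc(vcpus_in_nmi_mode);
            else
                    atomic_dec(vcpus_in_nmi_mode);
    }

    /* PIT tick path (sketch, hypothetical names):
     *      if (atomic_read(&kvm->arch.vcpus_in_nmi_mode) > 0)
     *              deliver_tick_via_lvt0(kvm);   // NMI watchdog ticks
     */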
-
Sheng Yang authored
Otherwise set_bit() for a private memory slot (above KVM_MEMORY_SLOTS) would corrupt memory on a 32-bit host.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Sheng Yang authored
Remove a leftover, no longer accurate comment about the removed CR2.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Sheng Yang authored
The effective memory type under EPT is a combination of MSR_IA32_CR_PAT and the memory type field of the EPT entry.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Sheng Yang authored
Also reset the mmu context when the MTRRs are set.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Sheng Yang authored
This lets KVM reuse the MTRR type definitions; they are needed to support shadow MTRRs.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Jan Kiszka authored
Introduces the KVM_NMI IOCTL to the generic x86 part of KVM for injecting NMIs from user space, and also extends the statistics report accordingly. Based on the original patch by Sheng Yang.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
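From user space the new request is a plain, argument-less vcpu ioctl; a minimal usage sketch, assuming kernel headers that already define KVM_NMI:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hedged sketch: inject an NMI into the vcpu behind vcpu_fd.
     * Returns 0 on success, -1 with errno set on failure. */
    static int inject_nmi(int vcpu_fd)
    {
            return ioctl(vcpu_fd, KVM_NMI);
    }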
-
Jan Kiszka authored
There are currently two ways in VMX to check if an IRQ or NMI can be injected: vmx_{nmi|irq}_enabled, and vcpu.arch.{nmi|interrupt}_window_open. Even worse, one test (at the end of vmx_vcpu_run) uses an inconsistent, likely incorrect logic. This patch consolidates and unifies the tests around {nmi|interrupt}_window_open as a cache, with vmx_update_window_states updating the cache content.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
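A cache-plus-updater split of this kind could look roughly like the sketch below; the VMCS field and interruptibility-state names follow the Intel VMX definitions in the kernel headers, but the exact expressions used by the patch may differ.

    /* Hedged sketch: derive the injection-window flags once from guest
     * state; everything else reads the cached flags. */
    static void example_update_window_states(struct kvm_vcpu *vcpu)
    {
            u32 intr_state = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);

            vcpu->arch.interrupt_window_open =
                    (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
                    !(intr_state & (GUEST_INTR_STATE_STI |
                                    GUEST_INTR_STATE_MOV_SS));

            vcpu->arch.nmi_window_open =
                    !(intr_state & (GUEST_INTR_STATE_STI |
                                    GUEST_INTR_STATE_MOV_SS |
                                    GUEST_INTR_STATE_NMI));
    }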
-
- 25 Dec, 2008 2 commits
-
Martin Schwidefsky authored
arch_setup_additional_pages currently gets two arguments: the binary format description and an indication of whether the process uses an executable stack. The second argument is not used by anybody; it could be removed without replacement. What actually does make sense is to pass an indication of whether the process uses the elf interpreter. The glibc code will not use anything from the vdso if the process does not use the dynamic linker, so for statically linked binaries the architecture backend can choose not to map the vdso.
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
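Under that description the hook's new shape would be along these lines; this is a sketch and the exact prototype in the patch may differ.

    /* Hedged sketch: the second parameter now says whether the ELF
     * interpreter (dynamic linker) is in use, so an architecture
     * backend can skip the vdso for static binaries. */
    int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp);

    /* in an arch backend (sketch):
     *      if (!uses_interp)
     *              return 0;       // static binary: no vdso mapping
     */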
-
Frederic Weisbecker authored
Impact: fix a crash/hard-reboot on certain configs while enabling a cpu at runtime.
On some archs, the boot of a secondary cpu can go through an early fragile state. On x86-64, the pda is not initialized in the first stage of a cpu boot, but it is needed to get the cpu number and the current task pointer. This data is needed during tracing. Because it was dereferenced at this stage, we got a crash while tracing a cpu being enabled at runtime. Some other archs like ia64 can have this kind of issue too.
Changes in v2: we dropped the previous solution of a per-arch function to guess the current state of a cpu, since that could slow down tracing. This patch instead removes the -pg instrumentation from arch/x86/kernel/cpu/common.c, where the low level cpu boot functions live, and from start_secondary() and a helper function used at this stage.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
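For reference, the two standard kernel mechanisms for keeping mcount instrumentation out of a file or a function are shown below as a generic sketch; these are not the patch's exact hunks.

    /* Hedged sketch of the usual mechanisms:
     *
     *   per object file, in the Makefile:
     *       CFLAGS_REMOVE_common.o = -pg
     *
     *   per function, in C: */
    #include <linux/compiler.h>     /* notrace */

    static void notrace example_early_secondary_setup(void)
    {
            /* runs before per-cpu state (e.g. the x86-64 pda) exists,
             * so it must never call into the function tracer */
    }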
-
- 20 Dec, 2008 1 commit
-
Markus Metzger authored
Impact: introduce new ptrace facility.
Add an arch_ptrace_untrace() function that is called when the tracer detaches (either voluntarily or when the tracing task dies); ptrace_disable() is only called on a voluntary detach. Add ptrace_fork() and arch_ptrace_fork(); they are called when a traced task is forked. Clear DS- and BTS-related fields on fork. Release DS resources and reclaim memory in ptrace_untrace(). This releases resources as soon as the tracing task dies; we used to do that only when the traced task died.
Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 19 Dec, 2008 2 commits
-
venkatesh.pallipadi@intel.com authored
Impact: Cleanup and branch hints only.
Move the track and untrack pfn stub routines from memory.c to asm-generic. Also add unlikely() to pfnmap-related calls in the fork and exit paths.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
venkatesh.pallipadi@intel.com authored
Impact: Cleanup - removes a new function in favor of a recently modified older one.
Replace follow_pfnmap_pte in the pat code with follow_phys. follow_phys also returns the protection, eliminating the need for the pte_pgprot call. Using follow_phys also eliminates the need for pte_pa.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 18 Dec, 2008 10 commits
-
Hiroshi Shimamoto authored
Impact: cleanup.
Remove struct sigfram32 and rt_sigframe32 because they have no users.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Hiroshi Shimamoto authored
Impact: cleanup.
Include the following headers for dependencies:
asm/sigcontext.h
asm/siginfo.h
asm/ucontext.h
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Jaswinder Singh authored
Impact: cleanup.
In asm/traps.h:
do_double_fault: added under X86_64
sync_regs: added under X86_64
math_error: moved out of X86_32 as it is common to both 32 and 64 bit
math_emulate: moved out of X86_32 as it is common to both 32 and 64 bit
smp_thermal_interrupt: added under X86_64
mce_threshold_interrupt: added under X86_64
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
venkatesh.pallipadi@intel.com authored
Impact: New mm functionality.
Add pgprot_writecombine. pgprot_writecombine will be aliased to pgprot_noncached when not supported by the architecture.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
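The fallback alias can be expressed with the usual asm-generic pattern; a sketch under the assumption that the generic header guards it with #ifndef:

    /* Hedged sketch: an architecture that provides its own
     * pgprot_writecombine wins; everyone else falls back to the
     * noncached mapping. */
    #ifndef pgprot_writecombine
    #define pgprot_writecombine(prot) pgprot_noncached(prot)
    #endif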
-
venkatesh.pallipadi@intel.com authored
Impact: mm behavior change.
Make pgprot_noncached uc_minus instead of strong UC. This brings pgprot_noncached in line with ioremap_nocache() and all the other APIs that map pages uc_minus on a uc request.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
venkatesh.pallipadi@intel.com authored
Impact: New mm functionality.
Hook up remap_pfn_range and vm_insert_pfn, and the corresponding copy and free routines, with reserve and free tracking. reserve and free here only take care of non-RAM region mappings. For RAM regions, the driver should use set_memory_[uc|wc|wb] to set the cache type and then set up the mapping for the user pte; we can bypass the reserve/free below in that case.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Hiroshi Shimamoto authored
Impact: cleanup.
Add the missing include-guard macro _ASM_X86_SIGFRAME_H.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
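For reference, the standard include-guard shape with the macro named in the commit (the header's contents are elided):

    #ifndef _ASM_X86_SIGFRAME_H
    #define _ASM_X86_SIGFRAME_H

    /* ... sigframe definitions ... */

    #endif /* _ASM_X86_SIGFRAME_H */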
-
Jaswinder Singh authored
Impact: cleanup.
In asm/syscalls.h, move out sys_set_thread_area() and sys_get_thread_area(), as they are common to both 32 and 64 bit.
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hiroshi Shimamoto authored
Impact: cleanup, prepare for use from ia32_signal.c.
Make struct sigframe_ia32 and rt_sigframe_ia32 visible to ia32_signal.c.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Hiroshi Shimamoto authored
Impact: cleanup, move header file.
Move arch/x86/kernel/sigframe.h to arch/x86/include/asm/sigframe.h. It will be used in arch/x86/ia32/ia32_signal.c.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 Dec, 2008 1 commit
-
Jeremy Fitzhardinge authored
swiotlb on 32 bit will be used by Xen domain 0 support.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 Dec, 2008 3 commits
-
Cyrill Gorcunov authored
Impact: clean up.
Introduce MCOUNT_SAVE/RESTORE_FRAME, which allow us to save a number of lines at the source level. Also fix a comment in ftrace.h.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Russ Anderson authored
Impact: fix crash.
xpc needs to pass the physical address, not the virtual one. Testing uncovered this problem. The virtual address happens to work most of the time due to the way the bios was masking off the node bits; passing the physical address makes it work all of the time.
Signed-off-by: Russ Anderson <rja@sgi.com>
Acked-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jack Steiner authored
Impact: fix UV boot crash.
This fixes a UV bug related to generating global memory addresses on partitioned systems. Partitioned systems do not have physical memory at address 0. Instead, a chunk of high memory is remapped by the chipset so that it appears to be at address 0. This remapping is INVISIBLE to most of the OS; the only OS functions that need to be aware of the remapping are functions that directly interface with the chipset. The GRU is one example. Also, delete a couple of unused macros related to global memory addresses.
Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-