Git Repo - linux.git/commit
x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()
author Peter Zijlstra <[email protected]>
Tue, 15 Nov 2016 15:47:06 +0000 (16:47 +0100)
committer Ingo Molnar <[email protected]>
Tue, 22 Nov 2016 11:48:11 +0000 (12:48 +0100)
commit 3cded41794818d788aa1dc028ede4a1c1222d937
tree b38f7540e28ee21ccec1e2fb850ff4235041984e
parent 05ffc951392df57edecc2519327b169210c3df75

Avoid the pointless function call to pv_lock_ops.vcpu_is_preempted()
when a paravirt spinlock enabled kernel is run on native hardware.

Do this by patching out the CALL instruction with "XOR %RAX,%RAX"
which has the same effect (0 return value).

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Cc: Pan Xinhui <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>
arch/x86/include/asm/paravirt.h
arch/x86/include/asm/paravirt_types.h
arch/x86/include/asm/qspinlock.h
arch/x86/include/asm/spinlock.h
arch/x86/kernel/kvm.c
arch/x86/kernel/paravirt-spinlocks.c
arch/x86/kernel/paravirt_patch_32.c
arch/x86/kernel/paravirt_patch_64.c
arch/x86/xen/spinlock.c