Git Repo - linux.git/commitdiff
x86: Align skb w/ start of cacheline on newer core 2/Xeon Arch
author		Alexander Duyck <[email protected]>
		Tue, 29 Jun 2010 18:38:00 +0000 (18:38 +0000)
committer	David S. Miller <[email protected]>
		Wed, 30 Jun 2010 21:34:09 +0000 (14:34 -0700)
The x86 architecture handles unaligned accesses in hardware, and it has
been shown that unaligned DMA accesses can be expensive on Nehalem-class
processors.  As such we should override NET_IP_ALIGN (to 0) to resolve
this issue.
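
For context, the generic fallback in include/linux/skbuff.h defines NET_IP_ALIGN
as 2, so that the IP header lands on a 4-byte boundary after the 14-byte Ethernet
header, at the cost of starting the NIC's DMA write 2 bytes into a cacheline.
A minimal sketch of how a receive path typically consumes the constant (the
helper name below is illustrative, not part of this patch):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Illustrative RX allocation: drivers commonly reserve NET_IP_ALIGN bytes
 * so the IP header ends up 4-byte aligned.  With this patch in effect
 * (CONFIG_MCORE2, NET_IP_ALIGN == 0) no padding is reserved, so the DMA
 * target begins at the start of the buffer and stays aligned with the
 * cacheline, while the CPU absorbs the unaligned IP header loads.
 */
static struct sk_buff *example_alloc_rx_skb(struct net_device *netdev,
					    unsigned int bufsz)
{
	struct sk_buff *skb;

	skb = netdev_alloc_skb(netdev, bufsz + NET_IP_ALIGN);
	if (!skb)
		return NULL;

	skb_reserve(skb, NET_IP_ALIGN);	/* no-op when NET_IP_ALIGN == 0 */
	return skb;
}

The trade-off is that the padding which helps the CPU hurts the NIC: a DMA write
that starts 2 bytes into a cacheline can force a read-modify-write of the partial
line, which is what this patch avoids on the affected processors.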

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Signed-off-by: Alexander Duyck <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Acked-by: H. Peter Anvin <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
arch/x86/include/asm/system.h

index b8fe48ee2ed971d648fa4ad940a210ddfc9fb90c..b4293fc8b7980cc1881d1ed25f6d9d4a4e3d8f41 100644 (file)
@@ -457,4 +457,13 @@ static inline void rdtsc_barrier(void)
        alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
 }
 
+#ifdef CONFIG_MCORE2
+/*
+ * We handle most unaligned accesses in hardware.  On the other hand
+ * unaligned DMA can be quite expensive on some Nehalem processors.
+ *
+ * Based on this we disable the IP header alignment in network drivers.
+ */
+#define NET_IP_ALIGN   0
+#endif
 #endif /* _ASM_X86_SYSTEM_H */