---------------------------
- What: USER_SCHED
- When: 2.6.34
-
- Why: USER_SCHED was implemented as a proof of concept for group scheduling.
- The effect of USER_SCHED can already be achieved from userspace with
- the help of libcgroup. The removal of USER_SCHED will also simplify
- the scheduler code with the removal of one major ifdef. There are also
- issues USER_SCHED has with USER_NS. A decision was taken not to fix
- those and instead remove USER_SCHED. Also new group scheduling
- features will not be implemented for USER_SCHED.
-
-
- ---------------------------
-
What: PRISM54
When: 2.6.34
---------------------------
+ What: Deprecated snapshot ioctls
+ When: 2.6.36
+
+ Why: The ioctls in kernel/power/user.c were marked as deprecated a long
+ time ago. Now they warn users that their user space needs to be
+ updated. After some more time, remove them completely.
+
+
+ ---------------------------
+
What: The ieee80211_regdom module parameter
When: March 2010 / desktop catchup
---------------------------
-What: CONFIG_WIRELESS_OLD_REGULATORY - old static regulatory information
-When: March 2010 / desktop catchup
-
-Why: The old regulatory infrastructure has been replaced with a new one
- which does not require statically defined regulatory domains. We do
- not want to keep static regulatory domains in the kernel due to
- the dynamic nature of regulatory law and localization. We kept around
- the old static definitions for the regulatory domains of:
-
- * US
- * JP
- * EU
-
- and used by default the US when CONFIG_WIRELESS_OLD_REGULATORY was
- set. We will remove this option once the standard Linux desktop catches
- up with the new userspace APIs we have implemented.
-
-
----------------------------
-
What: dev->power.power_state
When: July 2007
Why: Broken design for runtime control over driver power states, confusing
sensors) which are also supported by the gspca_zc3xx driver
(which supports 53 USB-ID's in total)
+
+----------------------------
+
+What: capifs
+When: February 2011
+Files: drivers/isdn/capi/capifs.*
+Why: udev fully replaces this special file system that only contains CAPI
+ NCCI TTY device nodes. User space (pppdcapiplugin) works without
+ noticing the difference.
IMA Integrity measurement architecture is enabled.
IOSCHED More than one I/O scheduler is enabled.
IP_PNP IP DHCP, BOOTP, or RARP is enabled.
+ IPV6 IPv6 support is enabled.
ISAPNP ISA PnP code is enabled.
ISDN Appropriate ISDN support is enabled.
JOY Appropriate joystick support is enabled.
acpi_display_output=video
See above.
+ acpi_early_pdc_eval [HW,ACPI] Evaluate processor _PDC methods
+ early. Needed on some platforms to properly
+ initialize the EC.
+
acpi_irq_balance [HW,ACPI]
ACPI will balance active IRQs
default in APIC mode
aic79xx= [HW,SCSI]
See Documentation/scsi/aic79xx.txt.
+ alignment= [KNL,ARM]
+ Allow the default userspace alignment fault handler
+ behaviour to be specified. Bit 0 enables warnings,
+ bit 1 enables fixups, and bit 2 sends a segfault.
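+ For example, alignment=3 sets bits 0 and 1, so
+ faults are both warned about and fixed up.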
+
amd_iommu= [HW,X86-64]
Pass parameters to the AMD IOMMU driver in the system.
Possible values are:
Change the amount of debugging information output
when initialising the APIC and IO-APIC components.
+ autoconf= [IPV6]
+ See Documentation/networking/ipv6.txt.
+
show_lapic= [APIC,X86] Advanced Programmable Interrupt Controller
Limit apic dumping. The parameter defines the maximal
number of local apics being dumped. Also it is possible
See drivers/char/README.epca and
Documentation/serial/digiepca.txt.
+ disable= [IPV6]
+ See Documentation/networking/ipv6.txt.
+
+ disable_ipv6= [IPV6]
+ See Documentation/networking/ipv6.txt.
+
disable_mtrr_cleanup [X86]
The kernel tries to adjust MTRR layout from continuous
to discrete, to make X server driver able to add WB
nomfgpt [X86-32] Disable Multi-Function General Purpose
Timer usage (for AMD Geode machines).
+ nopat [X86] Disable PAT (page attribute table extension of
+ pagetables) support.
+
norandmaps Don't use address space randomization. Equivalent to
echo 0 > /proc/sys/kernel/randomize_va_space
IRQ routing is enabled.
noacpi [X86] Do not use ACPI for IRQ routing
or for PCI scanning.
- use_crs [X86] Use _CRS for PCI resource
- allocation.
+ use_crs [X86] Use PCI host bridge window information
+ from ACPI. On BIOSes from 2008 or later, this
+ is enabled by default. If you need to use this,
+ please report a bug.
+ nocrs [X86] Ignore PCI host bridge windows from ACPI.
+ If you need to use this, please report a bug.
routeirq Do IRQ routing for all PCI devices.
This is normally done in pci_enable_device(),
so this option is a temporary workaround
force Enable ASPM even on devices that claim not to support it.
WARNING: Forcing ASPM on may cause system lockups.
+ pcie_pme= [PCIE,PM] Native PCIe PME signaling options:
+ off Do not use native PCIe PME signaling.
+ force Use native PCIe PME signaling even if the BIOS refuses
+ to allow the kernel to control the relevant PCIe config
+ registers.
+ nomsi Do not use MSI for native PCIe PME signaling (this makes
+ all PCIe root ports use INTx for everything).
+
pcmv= [HW,PCMCIA] BadgePAD 4
pd. [PARIDE]
medium is write-protected).
Example: quirks=0419:aaf5:rl,0421:0433:rc
+ userpte=
+ [X86] Flags controlling user PTE allocations.
+
+ nohigh = do not allocate PTE pages in
+ HIGHMEM regardless of setting
+ of CONFIG_HIGHPTE.
+
vdso= [X86,SH]
vdso=2: enable compat VDSO (default with COMPAT_VDSO)
vdso=1: enable VDSO (default)
ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
W: http://piie.net/?section=acerhdf
S: Maintained
F: drivers/platform/x86/acerhdf.c
ACER WMI LAPTOP EXTRAS
W: http://code.google.com/p/aceracpi
S: Maintained
F: drivers/platform/x86/acer-wmi.c
ACPI WMI DRIVER
- L: linux-acpi@vger.kernel.org
+ L: platform-driver-x86@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
S: Maintained
F: drivers/platform/x86/wmi.c
S: Maintained
ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE
- M: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
+ M: Paulius Zaleckas <paulius.zaleckas@gmail.com>
T: git git://gitorious.org/linux-gemini/mainline.git
- S: Maintained
+ S: Odd Fixes
F: arch/arm/mach-gemini/
ARM/EBSA110 MACHINE SUPPORT
F: arch/arm/mach-pxa/ezx.c
ARM/FARADAY FA526 PORT
- M: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
+ M: Paulius Zaleckas <paulius.zaleckas@gmail.com>
- S: Maintained
+ S: Odd Fixes
F: arch/arm/mm/*-fa*
ARM/FOOTBRIDGE ARCHITECTURE
W: http://acpi4asus.sf.net
S: Maintained
F: drivers/platform/x86/asus_acpi.c
ASUS LAPTOP EXTRAS DRIVER
W: http://acpi4asus.sf.net
S: Maintained
F: drivers/platform/x86/asus-laptop.c
CMPC ACPI DRIVER
S: Supported
F: drivers/platform/x86/classmate-laptop.c
COMPAL LAPTOP SUPPORT
S: Maintained
F: drivers/platform/x86/compal-laptop.c
DELL LAPTOP DRIVER
S: Maintained
F: drivers/platform/x86/dell-laptop.c
EEEPC LAPTOP EXTRAS DRIVER
W: http://acpi4asus.sf.net
S: Maintained
F: drivers/platform/x86/eeepc-laptop.c
F: Documentation/fault-injection/
F: lib/fault-inject.c
+ FCOE SUBSYSTEM (libfc, libfcoe, fcoe)
+ W: www.Open-FCoE.org
+ S: Supported
+ F: drivers/scsi/libfc/
+ F: drivers/scsi/fcoe/
+ F: include/scsi/fc/
+ F: include/scsi/libfc.h
+ F: include/scsi/libfcoe.h
+
FILE LOCKING (flock() and fcntl()/lockf())
FUJITSU LAPTOP EXTRAS
- L: linux-acpi@vger.kernel.org
+ L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/fujitsu-laptop.c
F: drivers/isdn/gigaset/
F: include/linux/gigaset_dev.h
+GRETH 10/100/1G Ethernet MAC device driver
+S: Maintained
+F: drivers/net/greth*
+
HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
S: Odd Fixes
F: drivers/char/hvc_*
+ VIRTIO CONSOLE DRIVER
+ S: Maintained
+ F: drivers/char/virtio_console.c
+
+ iSCSI BOOT FIRMWARE TABLE (iBFT) DRIVER
+ S: Maintained
+ F: drivers/firmware/iscsi_ibft*
+
GSPCA FINEPIX SUBDRIVER
HP COMPAQ TC1100 TABLET WMI EXTRAS DRIVER
S: Odd Fixes
F: drivers/platform/x86/tc1100-wmi.c
INTEL MENLOW THERMAL DRIVER
- L: linux-acpi@vger.kernel.org
+ L: platform-driver-x86@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
S: Supported
F: drivers/platform/x86/intel_menlow.c
F: include/linux/mv643xx.h
MARVELL MWL8K WIRELESS DRIVER
-M: Lennert Buytenhek <buytenh@marvell.com>
+M: Lennert Buytenhek <buytenh@wantstofly.org>
-S: Supported
+S: Maintained
F: drivers/net/wireless/mwl8k.c
MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER
MSI LAPTOP SUPPORT
W: https://tango.0pointer.de/mailman/listinfo/s270-linux
W: http://0pointer.de/lennart/tchibo.html
S: Maintained
MSI WMI SUPPORT
S: Supported
F: drivers/platform/x86/msi-wmi.c
PANASONIC LAPTOP ACPI EXTRAS DRIVER
S: Maintained
F: drivers/platform/x86/panasonic-laptop.c
F: Documentation/networking/LICENSE.qla3xxx
F: drivers/net/qla3xxx.*
+QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER
+S: Supported
+F: drivers/net/qlcnic/
+
QLOGIC QLGE 10Gb ETHERNET DRIVER
RCUTORTURE MODULE
- S: Maintained
+ S: Supported
F: Documentation/RCU/torture.txt
F: kernel/rcutorture.c
W: http://www.rdrop.com/users/paulmck/rclock/
S: Supported
- F: Documentation/RCU/rcu.txt
- F: Documentation/RCU/rcuref.txt
- F: include/linux/rcupdate.h
- F: include/linux/srcu.h
- F: kernel/rcupdate.c
+ F: Documentation/RCU/
+ F: include/linux/rcu*
+ F: include/linux/srcu*
+ F: kernel/rcu*
+ F: kernel/srcu*
+ X: kernel/rcutorture.c
REAL TIME CLOCK DRIVER
F: drivers/media/video/*7146*
F: include/media/*7146*
+ TLG2300 VIDEO4LINUX-2 DRIVER
+ S: Supported
+ F: drivers/media/video/tlg2300
+
SC1200 WDT DRIVER
S: Maintained
SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
W: http://www.serverengines.com
S: Supported
SONY VAIO CONTROL DEVICE DRIVER
- L: linux-acpi@vger.kernel.org
+ L: platform-driver-x86@vger.kernel.org
W: http://www.linux.it/~malattia/wiki/index.php/Sony_drivers
S: Maintained
F: Documentation/laptops/sony-laptop.txt
THINKPAD ACPI EXTRAS DRIVER
W: http://ibm-acpi.sourceforge.net
W: http://thinkwiki.org/wiki/Ibm-acpi
T: git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git
TOPSTAR LAPTOP EXTRAS DRIVER
S: Maintained
F: drivers/platform/x86/topstar-laptop.c
TOSHIBA ACPI EXTRAS DRIVER
S: Orphan
F: drivers/platform/x86/toshiba_acpi.c
F: Documentation/filesystems/vfat.txt
F: fs/fat/
+VIRTIO HOST (VHOST)
+S: Maintained
+F: drivers/vhost/
+F: include/linux/vhost.h
+
VIA RHINE NETWORK DRIVER
S: Maintained
F: drivers/input/misc/wistron_btns.c
WL1251 WIRELESS DRIVER
-M: Kalle Valo <kalle.valo@nokia.com>
+M: Kalle Valo <kalle.valo@iki.fi>
W: http://wireless.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
F: Documentation/x86/
F: arch/x86/
+ X86 PLATFORM DRIVERS
+ S: Maintained
+ F: drivers/platform/x86
+
XEN HYPERVISOR INTERFACE
CONFIG_MARKERS=y
CONFIG_OPROFILE=y
CONFIG_HAVE_OPROFILE=y
- # CONFIG_KPROBES is not set
+ CONFIG_KPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+ CONFIG_KRETPROBES=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_LPARCFG=y
CONFIG_PPC_SMLPAR=y
CONFIG_CMM=y
+ CONFIG_DTL=y
CONFIG_PPC_ISERIES=y
#
CONFIG_KEXEC=y
# CONFIG_PHYP_DUMP is not set
CONFIG_IRQ_ALL_CPUS=y
- # CONFIG_NUMA is not set
+ CONFIG_NUMA=y
+ CONFIG_NODES_SHIFT=8
+ CONFIG_MAX_ACTIVE_REGIONS=256
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
- CONFIG_ARCH_FLATMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_POPULATES_NODE_MAP=y
# CONFIG_DISCONTIGMEM_MANUAL is not set
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
+ CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_UNEVICTABLE_LRU=y
+ CONFIG_NODES_SPAN_OTHER_NODES=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_PPC_HAS_HASH_64K=y
# CONFIG_PPC_64K_PAGES is not set
CONFIG_FORCE_MAX_ZONEORDER=13
- # CONFIG_SCHED_SMT is not set
+ CONFIG_SCHED_SMT=y
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_EXTRA_TARGETS=""
CONFIG_SCSI_IPR_TRACE=y
CONFIG_SCSI_IPR_DUMP=y
# CONFIG_SCSI_QLOGIC_1280 is not set
- # CONFIG_SCSI_QLA_FC is not set
+ CONFIG_SCSI_QLA_FC=m
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
- CONFIG_SCSI_DEBUG=m
+ # CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_SRP is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
- CONFIG_MD_RAID10=y
- CONFIG_MD_RAID456=y
- CONFIG_MD_RAID5_RESHAPE=y
+ CONFIG_MD_RAID10=m
+ CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=y
CONFIG_ACENIC_OMIT_TIGON_I=y
# CONFIG_DL2K is not set
CONFIG_E1000=y
- # CONFIG_E1000E is not set
+ CONFIG_E1000E=m
# CONFIG_IP1000 is not set
# CONFIG_IGB is not set
# CONFIG_NS83820 is not set
CONFIG_SPIDER_NET=m
CONFIG_GELIC_NET=m
CONFIG_GELIC_WIRELESS=y
-# CONFIG_GELIC_WIRELESS_OLD_PSK_INTERFACE is not set
# CONFIG_QLA3XXX is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_JME is not set
CONFIG_NETDEV_10000=y
- # CONFIG_CHELSIO_T1 is not set
- # CONFIG_CHELSIO_T3 is not set
+ CONFIG_CHELSIO_T1=m
+ CONFIG_CHELSIO_T3=m
CONFIG_EHEA=m
# CONFIG_ENIC is not set
- # CONFIG_IXGBE is not set
+ CONFIG_IXGBE=m
CONFIG_IXGB=m
- # CONFIG_S2IO is not set
- # CONFIG_MYRI10GE is not set
- # CONFIG_NETXEN_NIC is not set
+ CONFIG_S2IO=m
+ CONFIG_MYRI10GE=m
+ CONFIG_NETXEN_NIC=m
# CONFIG_NIU is not set
CONFIG_PASEMI_MAC=y
- # CONFIG_MLX4_EN is not set
- # CONFIG_MLX4_CORE is not set
+ CONFIG_MLX4_EN=m
+ CONFIG_MLX4_CORE=m
# CONFIG_TEHUTI is not set
# CONFIG_BNX2X is not set
# CONFIG_QLGE is not set
CONFIG_HAS_TXX9_SERIAL=y
CONFIG_SERIAL_TXX9_NR_UARTS=6
CONFIG_SERIAL_TXX9_CONSOLE=y
- # CONFIG_SERIAL_JSM is not set
+ CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_OF_PLATFORM is not set
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_USB_DEVICE_CLASS=y
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
- # CONFIG_USB_MON is not set
+ CONFIG_USB_MON=m
# CONFIG_USB_WUSB is not set
# CONFIG_USB_WUSB_CBAF is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
CONFIG_INFINIBAND=m
- # CONFIG_INFINIBAND_USER_MAD is not set
- # CONFIG_INFINIBAND_USER_ACCESS is not set
+ CONFIG_INFINIBAND_USER_MAD=m
+ CONFIG_INFINIBAND_USER_ACCESS=m
+ CONFIG_INFINIBAND_USER_MEM=y
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_INFINIBAND_MTHCA=m
CONFIG_INFINIBAND_MTHCA_DEBUG=y
- # CONFIG_INFINIBAND_IPATH is not set
+ CONFIG_INFINIBAND_IPATH=m
CONFIG_INFINIBAND_EHCA=m
# CONFIG_INFINIBAND_AMSO1100 is not set
- # CONFIG_MLX4_INFINIBAND is not set
+ CONFIG_MLX4_INFINIBAND=m
# CONFIG_INFINIBAND_NES is not set
CONFIG_INFINIBAND_IPOIB=m
- # CONFIG_INFINIBAND_IPOIB_CM is not set
+ CONFIG_INFINIBAND_IPOIB_CM=y
CONFIG_INFINIBAND_IPOIB_DEBUG=y
# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
- # CONFIG_INFINIBAND_SRP is not set
+ CONFIG_INFINIBAND_SRP=m
CONFIG_INFINIBAND_ISER=m
CONFIG_EDAC=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
- CONFIG_JFS_FS=y
+ CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
# CONFIG_XFS_RT is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
- # CONFIG_OCFS2_FS is not set
+ CONFIG_OCFS2_FS=m
+ CONFIG_OCFS2_FS_O2CB=m
+ CONFIG_OCFS2_FS_STATS=y
+ CONFIG_OCFS2_DEBUG_MASKLOG=y
+ # CONFIG_OCFS2_DEBUG_FS is not set
+ # CONFIG_OCFS2_COMPAT_JBD is not set
+ CONFIG_BTRFS_FS=m
+ CONFIG_BTRFS_FS_POSIX_ACL=y
+ CONFIG_NILFS2_FS=m
CONFIG_DNOTIFY=y
CONFIG_INOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_QUOTA is not set
# CONFIG_AUTOFS_FS is not set
CONFIG_AUTOFS4_FS=m
- # CONFIG_FUSE_FS is not set
+ CONFIG_FUSE_FS=m
#
# CD-ROM/DVD Filesystems
# CONFIG_TMPFS_POSIX_ACL is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
- # CONFIG_CONFIGFS_FS is not set
+ CONFIG_CONFIGFS_FS=m
#
# Miscellaneous filesystems
CONFIG_XMON_DISASSEMBLY=y
CONFIG_DEBUGGER=y
CONFIG_IRQSTACKS=y
- # CONFIG_VIRQ_DEBUG is not set
+ CONFIG_VIRQ_DEBUG=y
CONFIG_BOOTX_TEXT=y
# CONFIG_PPC_EARLY_DEBUG is not set
} else if (machine_is(mpc8569_mds)) {
#define BCSR7_UCC12_GETHnRST (0x1 << 2)
#define BCSR8_UEM_MARVELL_RST (0x1 << 1)
+#define BCSR_UCC_RGMII (0x1 << 6)
+#define BCSR_UCC_RTBI (0x1 << 5)
/*
* U-Boot mangles interrupt polarity for Marvell PHYs,
* so reset built-in and UEM Marvell PHYs, this puts
setbits8(&bcsr_regs[7], BCSR7_UCC12_GETHnRST);
clrbits8(&bcsr_regs[8], BCSR8_UEM_MARVELL_RST);
+
+ for (np = NULL; (np = of_find_compatible_node(np,
+ "network",
+ "ucc_geth")) != NULL;) {
+ const unsigned int *prop;
+ int ucc_num;
+
+ prop = of_get_property(np, "cell-index", NULL);
+ if (prop == NULL)
+ continue;
+
+ ucc_num = *prop - 1;
+
+ prop = of_get_property(np, "phy-connection-type", NULL);
+ if (prop == NULL)
+ continue;
+
+ if (strcmp("rtbi", (const char *)prop) == 0)
+ clrsetbits_8(&bcsr_regs[7 + ucc_num],
+ BCSR_UCC_RGMII, BCSR_UCC_RTBI);
+ }
+
}
iounmap(bcsr_regs);
}
{ .compatible = "gianfar", },
{ .compatible = "fsl,rapidio-delta", },
{ .compatible = "fsl,mpc8548-guts", },
+ { .compatible = "gpio-leds", },
{},
};
static int __init mpc85xx_publish_devices(void)
{
+ if (machine_is(mpc8568_mds))
+ simple_gpiochip_init("fsl,mpc8568mds-bcsr-gpio");
if (machine_is(mpc8569_mds))
simple_gpiochip_init("fsl,mpc8569mds-bcsr-gpio");
}
mpic = mpic_alloc(np, r.start,
- MPIC_PRIMARY | MPIC_WANTS_RESET | MPIC_BIG_ENDIAN,
+ MPIC_PRIMARY | MPIC_WANTS_RESET | MPIC_BIG_ENDIAN |
+ MPIC_BROKEN_FRR_NIRQS,
0, 256, " OpenPIC ");
BUG_ON(mpic == NULL);
of_node_put(np);
struct ibft_nic *nic = entry->nic;
void *ibft_loc = entry->header;
char *str = buf;
- int val;
- char *mac;
+ __be32 val;
if (!nic)
return 0;
str += sprintf_ipaddr(str, nic->ip_addr);
break;
case ibft_eth_subnet_mask:
- val = ~((1 << (32-nic->subnet_mask_prefix))-1);
- str += sprintf(str, NIPQUAD_FMT,
- (u8)(val >> 24), (u8)(val >> 16),
- (u8)(val >> 8), (u8)(val));
+ val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1));
+ str += sprintf(str, "%pI4", &val);
break;
case ibft_eth_origin:
str += sprintf(str, "%d\n", nic->origin);
str += sprintf(str, "%d\n", nic->vlan);
break;
case ibft_eth_mac:
- mac = nic->mac;
- str += sprintf(str, "%02x:%02x:%02x:%02x:%02x:%02x\n",
- (u8)mac[0], (u8)mac[1], (u8)mac[2],
- (u8)mac[3], (u8)mac[4], (u8)mac[5]);
+ str += sprintf(str, "%pM\n", nic->mac);
break;
case ibft_eth_hostname:
str += sprintf_string(str, nic->hostname_len,
"bytes left in TS. Resyncing.\n", ts_remain);
priv->ule_sndu_len = 0;
priv->need_pusi = 1;
+ ts += TS_SZ;
continue;
}
(*secfilter)->filter_mask[10] = mac_mask[1];
(*secfilter)->filter_mask[11]=mac_mask[0];
- dprintk("%s: filter mac=%02x %02x %02x %02x %02x %02x\n",
- dev->name, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
- dprintk("%s: filter mask=%02x %02x %02x %02x %02x %02x\n",
- dev->name, mac_mask[0], mac_mask[1], mac_mask[2],
- mac_mask[3], mac_mask[4], mac_mask[5]);
+ dprintk("%s: filter mac=%pM\n", dev->name, mac);
+ dprintk("%s: filter mask=%pM\n", dev->name, mac_mask);
return 0;
}
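
Both conversions above lean on the kernel's printk pointer extensions:
%pM prints a colon-separated MAC address and %pI4 prints a dotted-quad
IPv4 address from a big-endian 32-bit value passed by reference, so the
subnet-mask math in the iBFT hunk (/24 -> 0xffffff00) prints as
255.255.255.0. A minimal sketch of the idiom, with made-up values:

	#include <linux/kernel.h>
	#include <asm/byteorder.h>

	static void print_addrs_example(void)
	{
		u8 mac[6] = { 0x00, 0x16, 0x3e, 0x12, 0x34, 0x56 }; /* hypothetical */
		__be32 mask = cpu_to_be32(~((1 << (32 - 24)) - 1)); /* /24 mask */

		/* %pM takes a pointer to 6 bytes; %pI4 a pointer to a __be32 */
		printk(KERN_INFO "mac=%pM mask=%pI4\n", mac, &mask);
	}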
} else if ((dev->flags & IFF_ALLMULTI)) {
dprintk("%s: allmulti mode\n", dev->name);
priv->rx_mode = RX_MODE_ALL_MULTI;
- } else if (dev->mc_count) {
+ } else if (!netdev_mc_empty(dev)) {
int mci;
struct dev_mc_list *mc;
dprintk("%s: set_mc_list, %d entries\n",
- dev->name, dev->mc_count);
+ dev->name, netdev_mc_count(dev));
priv->rx_mode = RX_MODE_MULTI;
priv->multi_num = 0;
for (mci = 0, mc=dev->mc_list;
- mci < dev->mc_count;
+ mci < netdev_mc_count(dev);
mc = mc->next, mci++) {
dvb_set_mc_filter(dev, mc);
}
dev->header_ops = &dvb_header_ops;
dev->netdev_ops = &dvb_netdev_ops;
dev->mtu = 4096;
- dev->mc_count = 0;
dev->flags |= IFF_NOARP;
}
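
The same mc_list rewrite recurs below in au1000, cpmac, mace, macmace and
smc91c92: netdev_mc_empty(), netdev_mc_count() and netdev_for_each_mc_addr()
replace open-coded walks of dev->mc_list/dev->mc_count. A sketch of the
resulting pattern, reusing the CRC hash these drivers compute:

	#include <linux/bitops.h>
	#include <linux/crc32.h>
	#include <linux/etherdevice.h>
	#include <linux/netdevice.h>

	static void build_mc_hash_example(struct net_device *dev, u32 hash[2])
	{
		struct dev_mc_list *mc;

		hash[0] = hash[1] = 0;
		if (netdev_mc_empty(dev))	/* was: dev->mc_count == 0 */
			return;

		/* was: for (mc = dev->mc_list; mc; mc = mc->next) */
		netdev_for_each_mc_addr(mc, dev)
			set_bit(ether_crc(ETH_ALEN, mc->dmi_addr) >> 26,
				(unsigned long *)hash);
	}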
#include <linux/delay.h>
#include <linux/crc32.h>
#include <linux/phy.h>
+ #include <linux/platform_device.h>
#include <asm/cpu.h>
#include <asm/mipsregs.h>
#include <asm/processor.h>
#include <au1000.h>
+ #include <au1xxx_eth.h>
#include <prom.h>
#include "au1000_eth.h"
*
* PHY detection algorithm
*
- * If AU1XXX_PHY_STATIC_CONFIG is undefined, the PHY setup is
+ * If phy_static_config is not set, the PHY setup is
* autodetected:
*
* mii_probe() first searches the current MAC's MII bus for a PHY,
- * selecting the first (or last, if AU1XXX_PHY_SEARCH_HIGHEST_ADDR is
+ * selecting the first (or last, if phy_search_highest_addr is
* defined) PHY address not already claimed by another netdev.
*
* If nothing was found that way when searching for the 2nd ethernet
- * controller's PHY and AU1XXX_PHY1_SEARCH_ON_MAC0 is defined, then
+ * controller's PHY and phy1_search_mac0 is set, then
* the first MII bus is searched as well for an unclaimed PHY; this is
* needed in case of a dual-PHY accessible only through the MAC0's MII
* bus.
* controller is not registered to the network subsystem.
*/
- /* autodetection defaults */
- #undef AU1XXX_PHY_SEARCH_HIGHEST_ADDR
- #define AU1XXX_PHY1_SEARCH_ON_MAC0
+ /* autodetection defaults: phy1_search_mac0 */
/* static PHY setup
*
* specific irq-map
*/
- #if defined(CONFIG_MIPS_BOSPORUS)
- /*
- * Micrel/Kendin 5 port switch attached to MAC0,
- * MAC0 is associated with PHY address 5 (== WAN port)
- * MAC1 is not associated with any PHY, since it's connected directly
- * to the switch.
- * no interrupts are used
- */
- # define AU1XXX_PHY_STATIC_CONFIG
-
- # define AU1XXX_PHY0_ADDR 5
- # define AU1XXX_PHY0_BUSID 0
- # undef AU1XXX_PHY0_IRQ
-
- # undef AU1XXX_PHY1_ADDR
- # undef AU1XXX_PHY1_BUSID
- # undef AU1XXX_PHY1_IRQ
- #endif
-
- #if defined(AU1XXX_PHY0_BUSID) && (AU1XXX_PHY0_BUSID > 0)
- # error MAC0-associated PHY attached 2nd MACs MII bus not supported yet
- #endif
-
static void enable_mac(struct net_device *dev, int force_reset)
{
unsigned long flags;
struct au1000_private *const aup = netdev_priv(dev);
struct phy_device *phydev = NULL;
- #if defined(AU1XXX_PHY_STATIC_CONFIG)
- BUG_ON(aup->mac_id < 0 || aup->mac_id > 1);
+ if (aup->phy_static_config) {
+ BUG_ON(aup->mac_id < 0 || aup->mac_id > 1);
- if(aup->mac_id == 0) { /* get PHY0 */
- # if defined(AU1XXX_PHY0_ADDR)
- phydev = au_macs[AU1XXX_PHY0_BUSID]->mii_bus->phy_map[AU1XXX_PHY0_ADDR];
- # else
- printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
- dev->name);
- return 0;
- # endif /* defined(AU1XXX_PHY0_ADDR) */
- } else if (aup->mac_id == 1) { /* get PHY1 */
- # if defined(AU1XXX_PHY1_ADDR)
- phydev = au_macs[AU1XXX_PHY1_BUSID]->mii_bus->phy_map[AU1XXX_PHY1_ADDR];
- # else
- printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
- dev->name);
+ if (aup->phy_addr)
+ phydev = aup->mii_bus->phy_map[aup->phy_addr];
+ else
+ printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
+ dev->name);
return 0;
- # endif /* defined(AU1XXX_PHY1_ADDR) */
- }
-
- #else /* defined(AU1XXX_PHY_STATIC_CONFIG) */
- int phy_addr;
-
- /* find the first (lowest address) PHY on the current MAC's MII bus */
- for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++)
- if (aup->mii_bus->phy_map[phy_addr]) {
- phydev = aup->mii_bus->phy_map[phy_addr];
- # if !defined(AU1XXX_PHY_SEARCH_HIGHEST_ADDR)
- break; /* break out with first one found */
- # endif
- }
+ } else {
+ int phy_addr;
+
+ /* find the first (lowest address) PHY on the current MAC's MII bus */
+ for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++)
+ if (aup->mii_bus->phy_map[phy_addr]) {
+ phydev = aup->mii_bus->phy_map[phy_addr];
+ if (!aup->phy_search_highest_addr)
+ break; /* break out with first one found */
+ }
- # if defined(AU1XXX_PHY1_SEARCH_ON_MAC0)
- /* try harder to find a PHY */
- if (!phydev && (aup->mac_id == 1)) {
- /* no PHY found, maybe we have a dual PHY? */
- printk (KERN_INFO DRV_NAME ": no PHY found on MAC1, "
- "let's see if it's attached to MAC0...\n");
+ if (aup->phy1_search_mac0) {
+ /* try harder to find a PHY */
+ if (!phydev && (aup->mac_id == 1)) {
+ /* no PHY found, maybe we have a dual PHY? */
+ printk (KERN_INFO DRV_NAME ": no PHY found on MAC1, "
+ "let's see if it's attached to MAC0...\n");
- BUG_ON(!au_macs[0]);
+ /* find the first (lowest address) non-attached PHY on
+ * the MAC0 MII bus */
+ for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) {
+ struct phy_device *const tmp_phydev =
+ aup->mii_bus->phy_map[phy_addr];
- /* find the first (lowest address) non-attached PHY on
- * the MAC0 MII bus */
- for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) {
- struct phy_device *const tmp_phydev =
- au_macs[0]->mii_bus->phy_map[phy_addr];
+ if (aup->mac_id == 1)
+ break;
- if (!tmp_phydev)
- continue; /* no PHY here... */
+ if (!tmp_phydev)
+ continue; /* no PHY here... */
- if (tmp_phydev->attached_dev)
- continue; /* already claimed by MAC0 */
+ if (tmp_phydev->attached_dev)
+ continue; /* already claimed by MAC0 */
- phydev = tmp_phydev;
- break; /* found it */
+ phydev = tmp_phydev;
+ break; /* found it */
+ }
+ }
}
}
- # endif /* defined(AU1XXX_PHY1_SEARCH_OTHER_BUS) */
- #endif /* defined(AU1XXX_PHY_STATIC_CONFIG) */
if (!phydev) {
printk (KERN_ERR DRV_NAME ":%s: no PHY found\n", dev->name);
return -1;
}
}
- static struct {
- u32 base_addr;
- u32 macen_addr;
- int irq;
- struct net_device *dev;
- } iflist[2] = {
- #ifdef CONFIG_SOC_AU1000
- {AU1000_ETH0_BASE, AU1000_MAC0_ENABLE, AU1000_MAC0_DMA_INT},
- {AU1000_ETH1_BASE, AU1000_MAC1_ENABLE, AU1000_MAC1_DMA_INT}
- #endif
- #ifdef CONFIG_SOC_AU1100
- {AU1100_ETH0_BASE, AU1100_MAC0_ENABLE, AU1100_MAC0_DMA_INT}
- #endif
- #ifdef CONFIG_SOC_AU1500
- {AU1500_ETH0_BASE, AU1500_MAC0_ENABLE, AU1500_MAC0_DMA_INT},
- {AU1500_ETH1_BASE, AU1500_MAC1_ENABLE, AU1500_MAC1_DMA_INT}
- #endif
- #ifdef CONFIG_SOC_AU1550
- {AU1550_ETH0_BASE, AU1550_MAC0_ENABLE, AU1550_MAC0_DMA_INT},
- {AU1550_ETH1_BASE, AU1550_MAC1_ENABLE, AU1550_MAC1_DMA_INT}
- #endif
- };
-
- static int num_ifs;
-
/*
* ethtool operations
*/
static inline void update_rx_stats(struct net_device *dev, u32 status)
{
- struct au1000_private *aup = netdev_priv(dev);
struct net_device_stats *ps = &dev->stats;
ps->rx_packets++;
}
pDB = aup->tx_db_inuse[aup->tx_head];
- skb_copy_from_linear_data(skb, pDB->vaddr, skb->len);
+ skb_copy_from_linear_data(skb, (void *)pDB->vaddr, skb->len);
if (skb->len < ETH_ZLEN) {
for (i=skb->len; i<ETH_ZLEN; i++) {
((char *)pDB->vaddr)[i] = 0;
if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
aup->mac->control |= MAC_PROMISCUOUS;
} else if ((dev->flags & IFF_ALLMULTI) ||
- dev->mc_count > MULTICAST_FILTER_LIMIT) {
+ netdev_mc_count(dev) > MULTICAST_FILTER_LIMIT) {
aup->mac->control |= MAC_PASS_ALL_MULTI;
aup->mac->control &= ~MAC_PROMISCUOUS;
printk(KERN_INFO "%s: Pass all multicast\n", dev->name);
} else {
- int i;
struct dev_mc_list *mclist;
u32 mc_filter[2]; /* Multicast hash filter */
mc_filter[1] = mc_filter[0] = 0;
- for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
- i++, mclist = mclist->next) {
+ netdev_for_each_mc_addr(mclist, dev)
set_bit(ether_crc(ETH_ALEN, mclist->dmi_addr)>>26,
(long *)mc_filter);
- }
aup->mac->multi_hash_high = mc_filter[1];
aup->mac->multi_hash_low = mc_filter[0];
aup->mac->control &= ~MAC_PROMISCUOUS;
.ndo_change_mtu = eth_change_mtu,
};
- static struct net_device * au1000_probe(int port_num)
+ static int __devinit au1000_probe(struct platform_device *pdev)
{
static unsigned version_printed = 0;
struct au1000_private *aup = NULL;
+ struct au1000_eth_platform_data *pd;
struct net_device *dev = NULL;
db_dest_t *pDB, *pDBfree;
+ int irq, i, err = 0;
+ struct resource *base, *macen;
char ethaddr[6];
- int irq, i, err;
- u32 base, macen;
- if (port_num >= NUM_ETH_INTERFACES)
- return NULL;
+ base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!base) {
+ printk(KERN_ERR DRV_NAME ": failed to retrieve base register\n");
+ err = -ENODEV;
+ goto out;
+ }
- base = CPHYSADDR(iflist[port_num].base_addr );
- macen = CPHYSADDR(iflist[port_num].macen_addr);
- irq = iflist[port_num].irq;
+ macen = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (!macen) {
+ printk(KERN_ERR DRV_NAME ": failed to retrieve MAC Enable register\n");
+ err = -ENODEV;
+ goto out;
+ }
- if (!request_mem_region( base, MAC_IOSIZE, "Au1x00 ENET") ||
- !request_mem_region(macen, 4, "Au1x00 ENET"))
- return NULL;
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ printk(KERN_ERR DRV_NAME ": failed to retrieve IRQ\n");
+ err = -ENODEV;
+ goto out;
+ }
- if (version_printed++ == 0)
- printk("%s version %s %s\n", DRV_NAME, DRV_VERSION, DRV_AUTHOR);
+ if (!request_mem_region(base->start, resource_size(base), pdev->name)) {
+ printk(KERN_ERR DRV_NAME ": failed to request memory region for base registers\n");
+ err = -ENXIO;
+ goto out;
+ }
+
+ if (!request_mem_region(macen->start, resource_size(macen), pdev->name)) {
+ printk(KERN_ERR DRV_NAME ": failed to request memory region for MAC enable register\n");
+ err = -ENXIO;
+ goto err_request;
+ }
dev = alloc_etherdev(sizeof(struct au1000_private));
if (!dev) {
printk(KERN_ERR "%s: alloc_etherdev failed\n", DRV_NAME);
- return NULL;
+ err = -ENOMEM;
+ goto err_alloc;
}
- dev->base_addr = base;
- dev->irq = irq;
- dev->netdev_ops = &au1000_netdev_ops;
- SET_ETHTOOL_OPS(dev, &au1000_ethtool_ops);
- dev->watchdog_timeo = ETH_TX_TIMEOUT;
-
- err = register_netdev(dev);
- if (err != 0) {
- printk(KERN_ERR "%s: Cannot register net device, error %d\n",
- DRV_NAME, err);
- free_netdev(dev);
- return NULL;
- }
-
- printk("%s: Au1xx0 Ethernet found at 0x%x, irq %d\n",
- dev->name, base, irq);
-
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ platform_set_drvdata(pdev, dev);
aup = netdev_priv(dev);
spin_lock_init(&aup->lock);
(NUM_TX_BUFFS + NUM_RX_BUFFS),
&aup->dma_addr, 0);
if (!aup->vaddr) {
- free_netdev(dev);
- release_mem_region( base, MAC_IOSIZE);
- release_mem_region(macen, 4);
- return NULL;
+ printk(KERN_ERR DRV_NAME ": failed to allocate data buffers\n");
+ err = -ENOMEM;
+ goto err_vaddr;
}
/* aup->mac is the base address of the MAC's registers */
- aup->mac = (volatile mac_reg_t *)iflist[port_num].base_addr;
+ aup->mac = (volatile mac_reg_t *)ioremap_nocache(base->start, resource_size(base));
+ if (!aup->mac) {
+ printk(KERN_ERR DRV_NAME ": failed to ioremap MAC registers\n");
+ err = -ENXIO;
+ goto err_remap1;
+ }
- /* Setup some variables for quick register address access */
- aup->enable = (volatile u32 *)iflist[port_num].macen_addr;
- aup->mac_id = port_num;
- au_macs[port_num] = aup;
+ /* Setup some variables for quick register address access */
+ aup->enable = (volatile u32 *)ioremap_nocache(macen->start, resource_size(macen));
+ if (!aup->enable) {
+ printk(KERN_ERR DRV_NAME ": failed to ioremap MAC enable register\n");
+ err = -ENXIO;
+ goto err_remap2;
+ }
+ aup->mac_id = pdev->id;
- if (port_num == 0) {
+ if (pdev->id == 0) {
if (prom_get_ethernet_addr(ethaddr) == 0)
memcpy(au1000_mac_addr, ethaddr, sizeof(au1000_mac_addr));
else {
}
setup_hw_rings(aup, MAC0_RX_DMA_ADDR, MAC0_TX_DMA_ADDR);
- } else if (port_num == 1)
+ } else if (pdev->id == 1)
setup_hw_rings(aup, MAC1_RX_DMA_ADDR, MAC1_TX_DMA_ADDR);
/*
* to match those that are printed on their stickers
*/
memcpy(dev->dev_addr, au1000_mac_addr, sizeof(au1000_mac_addr));
- dev->dev_addr[5] += port_num;
+ dev->dev_addr[5] += pdev->id;
*aup->enable = 0;
aup->mac_enabled = 0;
+ pd = pdev->dev.platform_data;
+ if (!pd) {
+ printk(KERN_INFO DRV_NAME ": no platform_data passed, PHY search on MAC0\n");
+ aup->phy1_search_mac0 = 1;
+ } else {
+ aup->phy_static_config = pd->phy_static_config;
+ aup->phy_search_highest_addr = pd->phy_search_highest_addr;
+ aup->phy1_search_mac0 = pd->phy1_search_mac0;
+ aup->phy_addr = pd->phy_addr;
+ aup->phy_busid = pd->phy_busid;
+ aup->phy_irq = pd->phy_irq;
+ }
+
+ if (aup->phy_busid && aup->phy_busid > 0) {
+ printk(KERN_ERR DRV_NAME ": MAC0-associated PHY attached 2nd MACs MII"
+ "bus not supported yet\n");
+ err = -ENODEV;
+ goto err_mdiobus_alloc;
+ }
+
aup->mii_bus = mdiobus_alloc();
- if (aup->mii_bus == NULL)
- goto err_out;
+ if (aup->mii_bus == NULL) {
+ printk(KERN_ERR DRV_NAME ": failed to allocate mdiobus structure\n");
+ err = -ENOMEM;
+ goto err_mdiobus_alloc;
+ }
aup->mii_bus->priv = dev;
aup->mii_bus->read = au1000_mdiobus_read;
for(i = 0; i < PHY_MAX_ADDR; ++i)
aup->mii_bus->irq[i] = PHY_POLL;
-
/* if known, set corresponding PHY IRQs */
- #if defined(AU1XXX_PHY_STATIC_CONFIG)
- # if defined(AU1XXX_PHY0_IRQ)
- if (AU1XXX_PHY0_BUSID == aup->mac_id)
- aup->mii_bus->irq[AU1XXX_PHY0_ADDR] = AU1XXX_PHY0_IRQ;
- # endif
- # if defined(AU1XXX_PHY1_IRQ)
- if (AU1XXX_PHY1_BUSID == aup->mac_id)
- aup->mii_bus->irq[AU1XXX_PHY1_ADDR] = AU1XXX_PHY1_IRQ;
- # endif
- #endif
- mdiobus_register(aup->mii_bus);
+ if (aup->phy_static_config)
+ if (aup->phy_irq && aup->phy_busid == aup->mac_id)
+ aup->mii_bus->irq[aup->phy_addr] = aup->phy_irq;
+
+ err = mdiobus_register(aup->mii_bus);
+ if (err) {
+ printk(KERN_ERR DRV_NAME " failed to register MDIO bus\n");
+ goto err_mdiobus_reg;
+ }
- if (mii_probe(dev) != 0) {
+ if (mii_probe(dev) != 0)
goto err_out;
- }
pDBfree = NULL;
/* setup the data buffer descriptors and attach a buffer to each one */
aup->tx_db_inuse[i] = pDB;
}
+ dev->base_addr = base->start;
+ dev->irq = irq;
+ dev->netdev_ops = &au1000_netdev_ops;
+ SET_ETHTOOL_OPS(dev, &au1000_ethtool_ops);
+ dev->watchdog_timeo = ETH_TX_TIMEOUT;
+
/*
* The boot code uses the ethernet controller, so reset it to start
* fresh. au1000_init() expects that the device is in reset state.
*/
reset_mac(dev);
- return dev;
+ err = register_netdev(dev);
+ if (err) {
+ printk(KERN_ERR DRV_NAME "%s: Cannot register net device, aborting.\n",
+ dev->name);
+ goto err_out;
+ }
+
+ printk("%s: Au1xx0 Ethernet found at 0x%lx, irq %d\n",
+ dev->name, (unsigned long)base->start, irq);
+ if (version_printed++ == 0)
+ printk("%s version %s %s\n", DRV_NAME, DRV_VERSION, DRV_AUTHOR);
+
+ return 0;
err_out:
- if (aup->mii_bus != NULL) {
+ if (aup->mii_bus != NULL)
mdiobus_unregister(aup->mii_bus);
- mdiobus_free(aup->mii_bus);
- }
/* here we should have a valid dev plus aup-> register addresses
* so we can reset the mac properly.*/
if (aup->tx_db_inuse[i])
ReleaseDB(aup, aup->tx_db_inuse[i]);
}
+ err_mdiobus_reg:
+ mdiobus_free(aup->mii_bus);
+ err_mdiobus_alloc:
+ iounmap(aup->enable);
+ err_remap2:
+ iounmap(aup->mac);
+ err_remap1:
dma_free_noncoherent(NULL, MAX_BUF_SIZE * (NUM_TX_BUFFS + NUM_RX_BUFFS),
(void *)aup->vaddr, aup->dma_addr);
- unregister_netdev(dev);
+ err_vaddr:
free_netdev(dev);
- release_mem_region( base, MAC_IOSIZE);
- release_mem_region(macen, 4);
- return NULL;
+ err_alloc:
+ release_mem_region(macen->start, resource_size(macen));
+ err_request:
+ release_mem_region(base->start, resource_size(base));
+ out:
+ return err;
}
- /*
- * Setup the base address and interrupt of the Au1xxx ethernet macs
- * based on cpu type and whether the interface is enabled in sys_pinfunc
- * register. The last interface is enabled if SYS_PF_NI2 (bit 4) is 0.
- */
- static int __init au1000_init_module(void)
+ static int __devexit au1000_remove(struct platform_device *pdev)
{
- int ni = (int)((au_readl(SYS_PINFUNC) & (u32)(SYS_PF_NI2)) >> 4);
- struct net_device *dev;
- int i, found_one = 0;
+ struct net_device *dev = platform_get_drvdata(pdev);
+ struct au1000_private *aup = netdev_priv(dev);
+ int i;
+ struct resource *base, *macen;
- num_ifs = NUM_ETH_INTERFACES - ni;
+ platform_set_drvdata(pdev, NULL);
+
+ unregister_netdev(dev);
+ mdiobus_unregister(aup->mii_bus);
+ mdiobus_free(aup->mii_bus);
+
+ for (i = 0; i < NUM_RX_DMA; i++)
+ if (aup->rx_db_inuse[i])
+ ReleaseDB(aup, aup->rx_db_inuse[i]);
+
+ for (i = 0; i < NUM_TX_DMA; i++)
+ if (aup->tx_db_inuse[i])
+ ReleaseDB(aup, aup->tx_db_inuse[i]);
+
+ dma_free_noncoherent(NULL, MAX_BUF_SIZE *
+ (NUM_TX_BUFFS + NUM_RX_BUFFS),
+ (void *)aup->vaddr, aup->dma_addr);
+
+ iounmap(aup->mac);
+ iounmap(aup->enable);
+
+ base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ release_mem_region(base->start, resource_size(base));
+
+ macen = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ release_mem_region(macen->start, resource_size(macen));
+
+ free_netdev(dev);
- for(i = 0; i < num_ifs; i++) {
- dev = au1000_probe(i);
- iflist[i].dev = dev;
- if (dev)
- found_one++;
- }
- if (!found_one)
- return -ENODEV;
return 0;
}
- static void __exit au1000_cleanup_module(void)
+ static struct platform_driver au1000_eth_driver = {
+ .probe = au1000_probe,
+ .remove = __devexit_p(au1000_remove),
+ .driver = {
+ .name = "au1000-eth",
+ .owner = THIS_MODULE,
+ },
+ };
+ MODULE_ALIAS("platform:au1000-eth");
+
+
+ static int __init au1000_init_module(void)
+ {
+ return platform_driver_register(&au1000_eth_driver);
+ }
+
+ static void __exit au1000_exit_module(void)
{
- int i, j;
- struct net_device *dev;
- struct au1000_private *aup;
-
- for (i = 0; i < num_ifs; i++) {
- dev = iflist[i].dev;
- if (dev) {
- aup = netdev_priv(dev);
- unregister_netdev(dev);
- mdiobus_unregister(aup->mii_bus);
- mdiobus_free(aup->mii_bus);
- for (j = 0; j < NUM_RX_DMA; j++)
- if (aup->rx_db_inuse[j])
- ReleaseDB(aup, aup->rx_db_inuse[j]);
- for (j = 0; j < NUM_TX_DMA; j++)
- if (aup->tx_db_inuse[j])
- ReleaseDB(aup, aup->tx_db_inuse[j]);
- dma_free_noncoherent(NULL, MAX_BUF_SIZE *
- (NUM_TX_BUFFS + NUM_RX_BUFFS),
- (void *)aup->vaddr, aup->dma_addr);
- release_mem_region(dev->base_addr, MAC_IOSIZE);
- release_mem_region(CPHYSADDR(iflist[i].macen_addr), 4);
- free_netdev(dev);
- }
- }
+ platform_driver_unregister(&au1000_eth_driver);
}
module_init(au1000_init_module);
- module_exit(au1000_cleanup_module);
+ module_exit(au1000_exit_module);
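
After this conversion the MACs are described by board code as platform
devices; a hypothetical registration matching what au1000_probe() expects
(two IORESOURCE_MEM entries plus one IRQ, and optional platform data from
au1xxx_eth.h -- all names and addresses below are illustrative only):

	#include <linux/platform_device.h>
	#include <au1xxx_eth.h>

	/* Hypothetical static PHY setup: PHY at address 5 on this MAC's bus. */
	static struct au1000_eth_platform_data board_eth_pd = {
		.phy_static_config	= 1,
		.phy_addr		= 5,
		.phy_busid		= 0,
	};

	static struct resource board_eth_res[] = {
		/* MAC registers and MAC enable register (example addresses) */
		{ .start = 0x10500000, .end = 0x10500027, .flags = IORESOURCE_MEM },
		{ .start = 0x10520000, .end = 0x10520003, .flags = IORESOURCE_MEM },
		{ .start = 28, .end = 28, .flags = IORESOURCE_IRQ },
	};

	static struct platform_device board_eth_dev = {
		.name		= "au1000-eth",	/* matches au1000_eth_driver */
		.id		= 0,		/* becomes aup->mac_id */
		.resource	= board_eth_res,
		.num_resources	= ARRAY_SIZE(board_eth_res),
		.dev		= { .platform_data = &board_eth_pd, },
	};

	/* board init would then call platform_device_register(&board_eth_dev); */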
#include <linux/phy_fixed.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
+ #include <linux/clk.h>
#include <asm/gpio.h>
#include <asm/atomic.h>
static int cpmac_mdio_reset(struct mii_bus *bus)
{
+ struct clk *cpmac_clk;
+
+ cpmac_clk = clk_get(&bus->dev, "cpmac");
+ if (IS_ERR(cpmac_clk)) {
+ printk(KERN_ERR "unable to get cpmac clock\n");
+ return -1;
+ }
ar7_device_reset(AR7_RESET_BIT_MDIO);
cpmac_write(bus->priv, CPMAC_MDIO_CONTROL, MDIOC_ENABLE |
- MDIOC_CLKDIV(ar7_cpmac_freq() / 2200000 - 1));
+ MDIOC_CLKDIV(clk_get_rate(cpmac_clk) / 2200000 - 1));
return 0;
}
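
clk_get() as used above takes a reference on the clock; the generic clk
API pairs it with clk_put() once the consumer is done. A sketch of
reading a rate with balanced references, assuming the same "cpmac"
clock name:

	#include <linux/clk.h>
	#include <linux/err.h>

	static unsigned long cpmac_rate_example(struct device *dev)
	{
		struct clk *clk = clk_get(dev, "cpmac");
		unsigned long rate;

		if (IS_ERR(clk))
			return 0;	/* 0 means "rate unknown" here */

		rate = clk_get_rate(clk);
		clk_put(clk);		/* balance the clk_get() */
		return rate;
	}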
static void cpmac_set_multicast_list(struct net_device *dev)
{
struct dev_mc_list *iter;
- int i;
u8 tmp;
u32 mbp, bit, hash[2] = { 0, };
struct cpmac_priv *priv = netdev_priv(dev);
* cpmac uses some strange mac address hashing
* (not crc32)
*/
- for (i = 0, iter = dev->mc_list; i < dev->mc_count;
- i++, iter = iter->next) {
+ netdev_for_each_mc_addr(iter, dev) {
bit = 0;
tmp = iter->dmi_addr[0];
bit ^= (tmp >> 2) ^ (tmp << 4);
mp->port_aaui = port_aaui;
else {
/* Apple Network Server uses the AAUI port */
- if (machine_is_compatible("AAPL,ShinerESB"))
+ if (of_machine_is_compatible("AAPL,ShinerESB"))
mp->port_aaui = 1;
else {
#ifdef CONFIG_MACE_AAUI_PORT
{
struct mace_data *mp = netdev_priv(dev);
volatile struct mace __iomem *mb = mp->mace;
- int i, j;
+ int i;
u32 crc;
unsigned long flags;
mp->maccc |= PROM;
} else {
unsigned char multicast_filter[8];
- struct dev_mc_list *dmi = dev->mc_list;
+ struct dev_mc_list *dmi;
if (dev->flags & IFF_ALLMULTI) {
for (i = 0; i < 8; i++)
} else {
for (i = 0; i < 8; i++)
multicast_filter[i] = 0;
- for (i = 0; i < dev->mc_count; i++) {
+ netdev_for_each_mc_addr(dmi, dev) {
crc = ether_crc_le(6, dmi->dmi_addr);
- j = crc >> 26; /* bit number in multicast_filter */
- multicast_filter[j >> 3] |= 1 << (j & 7);
- dmi = dmi->next;
+ i = crc >> 26; /* bit number in multicast_filter */
+ multicast_filter[i >> 3] |= 1 << (i & 7);
}
}
#if 0
#include "mace.h"
static char mac_mace_string[] = "macmace";
- static struct platform_device *mac_mace_device;
#define N_TX_BUFF_ORDER 0
#define N_TX_RING (1 << N_TX_BUFF_ORDER)
{
struct mace_data *mp = netdev_priv(dev);
volatile struct mace *mb = mp->mace;
- int i, j;
+ int i;
u32 crc;
u8 maccc;
unsigned long flags;
mb->maccc |= PROM;
} else {
unsigned char multicast_filter[8];
- struct dev_mc_list *dmi = dev->mc_list;
+ struct dev_mc_list *dmi;
if (dev->flags & IFF_ALLMULTI) {
for (i = 0; i < 8; i++) {
} else {
for (i = 0; i < 8; i++)
multicast_filter[i] = 0;
- for (i = 0; i < dev->mc_count; i++) {
+ netdev_for_each_mc_addr(dmi, dev) {
crc = ether_crc_le(6, dmi->dmi_addr);
- j = crc >> 26; /* bit number in multicast_filter */
- multicast_filter[j >> 3] |= 1 << (j & 7);
- dmi = dmi->next;
+ /* bit number in multicast_filter */
+ i = crc >> 26;
+ multicast_filter[i >> 3] |= 1 << (i & 7);
}
}
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Macintosh MACE ethernet driver");
+ MODULE_ALIAS("platform:macmace");
static int __devexit mac_mace_device_remove (struct platform_device *pdev)
{
.probe = mace_probe,
.remove = __devexit_p(mac_mace_device_remove),
.driver = {
- .name = mac_mace_string,
+ .name = mac_mace_string,
+ .owner = THIS_MODULE,
},
};
static int __init mac_mace_init_module(void)
{
- int err;
-
if (!MACH_IS_MAC)
return -ENODEV;
- if ((err = platform_driver_register(&mac_mace_driver))) {
- printk(KERN_ERR "Driver registration failed\n");
- return err;
- }
-
- mac_mace_device = platform_device_alloc(mac_mace_string, 0);
- if (!mac_mace_device)
- goto out_unregister;
-
- if (platform_device_add(mac_mace_device)) {
- platform_device_put(mac_mace_device);
- mac_mace_device = NULL;
- }
-
- return 0;
-
- out_unregister:
- platform_driver_unregister(&mac_mace_driver);
-
- return -ENOMEM;
+ return platform_driver_register(&mac_mace_driver);
}
static void __exit mac_mace_cleanup_module(void)
{
platform_driver_unregister(&mac_mace_driver);
-
- if (mac_mace_device) {
- platform_device_unregister(mac_mace_device);
- mac_mace_device = NULL;
- }
}
module_init(mac_mace_init_module);
link->conf.Attributes |= CONF_ENABLE_SPKR;
link->conf.Status = CCSR_AUDIO_ENA;
- link->irq.Attributes =
- IRQ_TYPE_DYNAMIC_SHARING;
+ link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->io.IOAddrLines = 16;
link->io.Attributes2 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts2 = 8;
link->conf.Attributes |= CONF_ENABLE_SPKR;
link->conf.Status = CCSR_AUDIO_ENA;
- link->irq.Attributes =
- IRQ_TYPE_DYNAMIC_SHARING|IRQ_FIRST_SHARED;
+ link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING;
link->io.NumPorts1 = 64;
link->io.Attributes2 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts2 = 8;
return;
}
-/*======================================================================
-
- Calculate values for the hardware multicast filter hash table.
-
-======================================================================*/
-
-static void fill_multicast_tbl(int count, struct dev_mc_list *addrs,
- u_char *multicast_table)
-{
- struct dev_mc_list *mc_addr;
-
- for (mc_addr = addrs; mc_addr && count-- > 0; mc_addr = mc_addr->next) {
- u_int position = ether_crc(6, mc_addr->dmi_addr);
-#ifndef final_version /* Verify multicast address. */
- if ((mc_addr->dmi_addr[0] & 1) == 0)
- continue;
-#endif
- multicast_table[position >> 29] |= 1 << ((position >> 26) & 7);
- }
-}
-
/*======================================================================
Set the receive mode.
} else if (dev->flags & IFF_ALLMULTI)
rx_cfg_setting = RxStripCRC | RxEnable | RxAllMulti;
else {
- if (dev->mc_count) {
- fill_multicast_tbl(dev->mc_count, dev->mc_list,
- (u_char *)multicast_table);
+ if (!netdev_mc_empty(dev)) {
+ struct dev_mc_list *mc_addr;
+
+ netdev_for_each_mc_addr(mc_addr, dev) {
+ u_int position = ether_crc(6, mc_addr->dmi_addr);
+#ifndef final_version /* Verify multicast address. */
+ if ((mc_addr->dmi_addr[0] & 1) == 0)
+ continue;
+#endif
+ multicast_table[position >> 29] |= 1 << ((position >> 26) & 7);
+ }
}
rx_cfg_setting = RxStripCRC | RxEnable;
}
# Makefile for the PCI bus specific drivers.
#
- obj-y += access.o bus.o probe.o remove.o pci.o quirks.o \
+ obj-y += access.o bus.o probe.o remove.o pci.o \
pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \
- irq.o
+ irq.o vpd.o
obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_SYSFS) += slot.o
- obj-$(CONFIG_PCI_LEGACY) += legacy.o
- CFLAGS_legacy.o += -Wno-deprecated-declarations
+ obj-$(CONFIG_PCI_QUIRKS) += quirks.o
# Build PCI Express stuff if needed
obj-$(CONFIG_PCIEPORTBUS) += pcie/
#include <linux/dmi.h>
#include <linux/pci-aspm.h>
#include <linux/ioport.h>
+ #include <asm/dma.h> /* isa_dma_bridge_buggy */
#include "pci.h"
- int isa_dma_bridge_buggy;
- EXPORT_SYMBOL(isa_dma_bridge_buggy);
- int pci_pci_problems;
- EXPORT_SYMBOL(pci_pci_problems);
-
- #ifdef CONFIG_PCI_QUIRKS
/*
* This quirk function disables memory decoding and releases memory resources
* of the device specified by kernel's boot parameter 'pci=resource_alignment='.
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x10e8, quirk_i82576_sriov);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x150a, quirk_i82576_sriov);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x150d, quirk_i82576_sriov);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1518, quirk_i82576_sriov);
#endif /* CONFIG_PCI_IOV */
}
pci_do_fixups(dev, start, end);
}
+ EXPORT_SYMBOL(pci_fixup_device);
static int __init pci_apply_final_quirks(void)
{
return -ENOTTY;
}
-
- #else
- void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev) {}
- int pci_dev_specific_reset(struct pci_dev *dev, int probe) { return -ENOTTY; }
- #endif
- EXPORT_SYMBOL(pci_fixup_device);
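
With quirks.c now built only under CONFIG_PCI_QUIRKS, the stubs removed
here must live where all callers can see them; the usual idiom is static
inlines in a header (a sketch, assuming the declarations moved to
include/linux/pci.h):

	#ifdef CONFIG_PCI_QUIRKS
	void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev);
	int pci_dev_specific_reset(struct pci_dev *dev, int probe);
	#else
	static inline void pci_fixup_device(enum pci_fixup_pass pass,
					    struct pci_dev *dev) { }
	static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
	{
		return -ENOTTY;
	}
	#endif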
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
- #include <linux/delay.h>
#include <linux/phy.h>
#include <net/dst.h>
#include "ethernet-tx.h"
#include "ethernet-mdio.h"
#include "ethernet-util.h"
- #include "ethernet-proc.h"
-
#include "cvmx-pip.h"
#include "cvmx-pko.h"
"\t\"eth2,spi3,spi7\" would cause these three devices to transmit\n"
"\tusing the pow_send_group.");
- static int disable_core_queueing = 1;
- module_param(disable_core_queueing, int, 0444);
- MODULE_PARM_DESC(disable_core_queueing, "\n"
- "\tWhen set the networking core's tx_queue_len is set to zero. This\n"
- "\tallows packets to be sent without lock contention in the packet\n"
- "\tscheduler resulting in some cases in improved throughput.\n");
+ int max_rx_cpus = -1;
+ module_param(max_rx_cpus, int, 0444);
+ MODULE_PARM_DESC(max_rx_cpus, "\n"
+ "\t\tThe maximum number of CPUs to use for packet reception.\n"
+ "\t\tUse -1 to use all available CPUs.");
+ int rx_napi_weight = 32;
+ module_param(rx_napi_weight, int, 0444);
+ MODULE_PARM_DESC(rx_napi_weight, "The NAPI WEIGHT parameter.");
/*
* The offset from mac_addr_base that should be used for the next port
static unsigned int cvm_oct_mac_addr_offset;
/**
- * Periodic timer to check auto negotiation
+ * cvm_oct_poll_queue - Workqueue for polling operations.
+ */
+ struct workqueue_struct *cvm_oct_poll_queue;
+
+ /**
+ * cvm_oct_poll_queue_stopping - flag to indicate polling should stop.
+ *
+ * Set to one right before cvm_oct_poll_queue is destroyed.
*/
- static struct timer_list cvm_oct_poll_timer;
+ atomic_t cvm_oct_poll_queue_stopping = ATOMIC_INIT(0);
/**
* Array of every ethernet device owned by this driver indexed by
*/
struct net_device *cvm_oct_device[TOTAL_NUMBER_OF_PORTS];
- /**
- * Periodic timer tick for slow management operations
- *
- * @arg: Device to check
- */
- static void cvm_do_timer(unsigned long arg)
+ u64 cvm_oct_tx_poll_interval;
+
+ static void cvm_oct_rx_refill_worker(struct work_struct *work);
+ static DECLARE_DELAYED_WORK(cvm_oct_rx_refill_work, cvm_oct_rx_refill_worker);
+
+ static void cvm_oct_rx_refill_worker(struct work_struct *work)
{
- int32_t skb_to_free, undo;
- int queues_per_port;
- int qos;
- struct octeon_ethernet *priv;
- static int port;
+ /*
+ * FPA 0 may have been drained, try to refill it if we need
+ * more than num_packet_buffers / 2, otherwise normal receive
+ * processing will refill it. If it were drained, no packets
+ * could be received so cvm_oct_napi_poll would never be
+ * invoked to do the refill.
+ */
+ cvm_oct_rx_refill_pool(num_packet_buffers / 2);
- if (port >= CVMX_PIP_NUM_INPUT_PORTS) {
- /*
- * All ports have been polled. Start the next
- * iteration through the ports in one second.
- */
- port = 0;
- mod_timer(&cvm_oct_poll_timer, jiffies + HZ);
- return;
- }
- if (!cvm_oct_device[port])
- goto out;
+ if (!atomic_read(&cvm_oct_poll_queue_stopping))
+ queue_delayed_work(cvm_oct_poll_queue,
+ &cvm_oct_rx_refill_work, HZ);
+ }
+
+ static void cvm_oct_periodic_worker(struct work_struct *work)
+ {
+ struct octeon_ethernet *priv = container_of(work,
+ struct octeon_ethernet,
+ port_periodic_work.work);
- priv = netdev_priv(cvm_oct_device[port]);
if (priv->poll)
- priv->poll(cvm_oct_device[port]);
-
- queues_per_port = cvmx_pko_get_num_queues(port);
- /* Drain any pending packets in the free list */
- for (qos = 0; qos < queues_per_port; qos++) {
- if (skb_queue_len(&priv->tx_free_list[qos]) == 0)
- continue;
- skb_to_free = cvmx_fau_fetch_and_add32(priv->fau + qos * 4,
- MAX_SKB_TO_FREE);
- undo = skb_to_free > 0 ?
- MAX_SKB_TO_FREE : skb_to_free + MAX_SKB_TO_FREE;
- if (undo > 0)
- cvmx_fau_atomic_add32(priv->fau+qos*4, -undo);
- skb_to_free = -skb_to_free > MAX_SKB_TO_FREE ?
- MAX_SKB_TO_FREE : -skb_to_free;
- cvm_oct_free_tx_skbs(priv, skb_to_free, qos, 1);
- }
- cvm_oct_device[port]->netdev_ops->ndo_get_stats(cvm_oct_device[port]);
+ priv->poll(cvm_oct_device[priv->port]);
- out:
- port++;
- /* Poll the next port in a 50th of a second.
- This spreads the polling of ports out a little bit */
- mod_timer(&cvm_oct_poll_timer, jiffies + HZ / 50);
- }
+ cvm_oct_device[priv->port]->netdev_ops->ndo_get_stats(cvm_oct_device[priv->port]);
+
+ if (!atomic_read(&cvm_oct_poll_queue_stopping))
+ queue_delayed_work(cvm_oct_poll_queue, &priv->port_periodic_work, HZ);
+ }
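
Both workers above follow the same self-rearming delayed-work shape that
replaces the old cvm_do_timer(); distilled to a sketch:

	#include <linux/workqueue.h>
	#include <asm/atomic.h>

	static struct workqueue_struct *wq;	/* dedicated polling queue */
	static atomic_t stopping = ATOMIC_INIT(0);

	static void tick_worker(struct work_struct *work);
	static DECLARE_DELAYED_WORK(tick_work, tick_worker);

	static void tick_worker(struct work_struct *work)
	{
		/* ... periodic processing ... */

		if (!atomic_read(&stopping))	/* re-arm unless shutting down */
			queue_delayed_work(wq, &tick_work, HZ);
	}

	/* Teardown mirrors the module exit below: bump "stopping", then
	 * cancel_delayed_work_sync() each work before destroy_workqueue(). */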
- /**
- * Configure common hardware for all interfaces
- */
static __init void cvm_oct_configure_common_hw(void)
{
- int r;
/* Setup the FPA */
cvmx_fpa_enable();
cvm_oct_mem_fill_fpa(CVMX_FPA_PACKET_POOL, CVMX_FPA_PACKET_POOL_SIZE,
cvmx_helper_setup_red(num_packet_buffers / 4,
num_packet_buffers / 8);
- /* Enable the MII interface */
- if (!octeon_is_simulation())
- cvmx_write_csr(CVMX_SMIX_EN(0), 1);
-
- /* Register an IRQ hander for to receive POW interrupts */
- r = request_irq(OCTEON_IRQ_WORKQ0 + pow_receive_group,
- cvm_oct_do_interrupt, IRQF_SHARED, "Ethernet",
- cvm_oct_device);
-
- #if defined(CONFIG_SMP) && 0
- if (USE_MULTICORE_RECEIVE) {
- irq_set_affinity(OCTEON_IRQ_WORKQ0 + pow_receive_group,
- cpu_online_mask);
- }
- #endif
}
/**
- * Free a work queue entry received in a intercept callback.
+ * cvm_oct_free_work - Free a work queue entry
+ *
+ * @work_queue_entry: Work queue entry to free
*
- * @work_queue_entry:
- * Work queue entry to free
* Returns Zero on success, Negative on failure.
*/
int cvm_oct_free_work(void *work_queue_entry)
EXPORT_SYMBOL(cvm_oct_free_work);
/**
- * Get the low level ethernet statistics
- *
+ * cvm_oct_common_get_stats - get the low level ethernet statistics
* @dev: Device to get the statistics from
+ *
* Returns Pointer to the statistics
*/
static struct net_device_stats *cvm_oct_common_get_stats(struct net_device *dev)
}
/**
- * Change the link MTU. Unimplemented
- *
+ * cvm_oct_common_change_mtu - change the link MTU
* @dev: Device to change
* @new_mtu: The new MTU
*
}
/**
- * Set the multicast list. Currently unimplemented.
- *
+ * cvm_oct_common_set_multicast_list - set the multicast list
* @dev: Device to work on
*/
static void cvm_oct_common_set_multicast_list(struct net_device *dev)
control.u64 = 0;
control.s.bcst = 1; /* Allow broadcast MAC addresses */
- if (dev->mc_list || (dev->flags & IFF_ALLMULTI) ||
+ if (!netdev_mc_empty(dev) || (dev->flags & IFF_ALLMULTI) ||
(dev->flags & IFF_PROMISC))
/* Force accept multicast packets */
control.s.mcst = 2;
}
/**
- * Set the hardware MAC address for a device
- *
- * @dev: Device to change the MAC address for
- * @addr: Address structure to change it too. MAC address is addr + 2.
+ * cvm_oct_common_set_mac_address - set the hardware MAC address for a device
+ * @dev: The device in question.
+ * @addr: Address structure to change it to.
+ *
* Returns Zero on success
*/
static int cvm_oct_common_set_mac_address(struct net_device *dev, void *addr)
}
/**
- * Per network device initialization
- *
+ * cvm_oct_common_init - per network device initialization
* @dev: Device to initialize
+ *
* Returns Zero on success
*/
int cvm_oct_common_init(struct net_device *dev)
&& (always_use_pow || strstr(pow_send_list, dev->name)))
priv->queue = -1;
- if (priv->queue != -1 && USE_HW_TCPUDP_CHECKSUM)
- dev->features |= NETIF_F_IP_CSUM;
+ if (priv->queue != -1) {
+ dev->features |= NETIF_F_SG;
+ if (USE_HW_TCPUDP_CHECKSUM)
+ dev->features |= NETIF_F_IP_CSUM;
+ }
/* We do our own locking, Linux doesn't need to */
dev->features |= NETIF_F_LLTX;
extern void octeon_mdiobus_force_mod_depencency(void);
- /**
- * Module/ driver initialization. Creates the linux network
- * devices.
- *
- * Returns Zero on success
- */
static int __init cvm_oct_init_module(void)
{
int num_interfaces;
else
cvm_oct_mac_addr_offset = 0;
- cvm_oct_proc_initialize();
- cvm_oct_rx_initialize();
+ cvm_oct_poll_queue = create_singlethread_workqueue("octeon-ethernet");
+ if (cvm_oct_poll_queue == NULL) {
+ pr_err("octeon-ethernet: Cannot create workqueue");
+ return -ENOMEM;
+ }
+
cvm_oct_configure_common_hw();
cvmx_helper_initialize_packet_io_global();
*/
cvmx_fau_atomic_write32(FAU_NUM_PACKET_BUFFERS_TO_FREE, 0);
+ /* Initialize the FAU used for counting tx SKBs that need to be freed */
+ cvmx_fau_atomic_write32(FAU_TOTAL_TX_TO_CLEAN, 0);
+
if ((pow_send_group != -1)) {
struct net_device *dev;
pr_info("\tConfiguring device for POW only access\n");
if (dev) {
/* Initialize the device private structure. */
struct octeon_ethernet *priv = netdev_priv(dev);
- memset(priv, 0, sizeof(struct octeon_ethernet));
dev->netdev_ops = &cvm_oct_pow_netdev_ops;
priv->imode = CVMX_HELPER_INTERFACE_MODE_DISABLED;
skb_queue_head_init(&priv->tx_free_list[qos]);
if (register_netdev(dev) < 0) {
- pr_err("Failed to register ethernet "
- "device for POW\n");
+ pr_err("Failed to register ethernet device for POW\n");
kfree(dev);
} else {
cvm_oct_device[CVMX_PIP_NUM_INPUT_PORTS] = dev;
- pr_info("%s: POW send group %d, receive "
- "group %d\n",
- dev->name, pow_send_group,
- pow_receive_group);
+ pr_info("%s: POW send group %d, receive group %d\n",
+ dev->name, pow_send_group,
+ pow_receive_group);
}
} else {
- pr_err("Failed to allocate ethernet device "
- "for POW\n");
+ pr_err("Failed to allocate ethernet device for POW\n");
}
}
struct net_device *dev =
alloc_etherdev(sizeof(struct octeon_ethernet));
if (!dev) {
- pr_err("Failed to allocate ethernet device "
- "for port %d\n", port);
+ pr_err("Failed to allocate ethernet device for port %d\n", port);
continue;
}
- if (disable_core_queueing)
- dev->tx_queue_len = 0;
/* Initialize the device private structure. */
priv = netdev_priv(dev);
- memset(priv, 0, sizeof(struct octeon_ethernet));
+ INIT_DELAYED_WORK(&priv->port_periodic_work,
+ cvm_oct_periodic_worker);
priv->imode = imode;
priv->port = port;
priv->queue = cvmx_pko_get_base_queue(priv->port);
fau -=
cvmx_pko_get_num_queues(priv->port) *
sizeof(uint32_t);
+ queue_delayed_work(cvm_oct_poll_queue,
+ &priv->port_periodic_work, HZ);
}
}
}
- if (INTERRUPT_LIMIT) {
- /*
- * Set the POW timer rate to give an interrupt at most
- * INTERRUPT_LIMIT times per second.
- */
- cvmx_write_csr(CVMX_POW_WQ_INT_PC,
- octeon_bootinfo->eclock_hz / (INTERRUPT_LIMIT *
- 16 * 256) << 8);
+ cvm_oct_tx_initialize();
+ cvm_oct_rx_initialize();
- /*
- * Enable POW timer interrupt. It will count when
- * there are packets available.
- */
- cvmx_write_csr(CVMX_POW_WQ_INT_THRX(pow_receive_group),
- 0x1ful << 24);
- } else {
- /* Enable POW interrupt when our port has at least one packet */
- cvmx_write_csr(CVMX_POW_WQ_INT_THRX(pow_receive_group), 0x1001);
- }
+ /*
+ * 150 uS: about 10 1500-byte packets (~12 uS each) at 1GE.
+ */
+ cvm_oct_tx_poll_interval = 150 * (octeon_get_clock_rate() / 1000000);
- /* Enable the poll timer for checking RGMII status */
- init_timer(&cvm_oct_poll_timer);
- cvm_oct_poll_timer.data = 0;
- cvm_oct_poll_timer.function = cvm_do_timer;
- mod_timer(&cvm_oct_poll_timer, jiffies + HZ);
+ queue_delayed_work(cvm_oct_poll_queue, &cvm_oct_rx_refill_work, HZ);
return 0;
}
- /**
- * Module / driver shutdown
- *
- * Returns Zero on success
- */
static void __exit cvm_oct_cleanup_module(void)
{
int port;
/* Free the interrupt handler */
free_irq(OCTEON_IRQ_WORKQ0 + pow_receive_group, cvm_oct_device);
- del_timer(&cvm_oct_poll_timer);
+ atomic_inc_return(&cvm_oct_poll_queue_stopping);
+ cancel_delayed_work_sync(&cvm_oct_rx_refill_work);
+
cvm_oct_rx_shutdown();
+ cvm_oct_tx_shutdown();
+
cvmx_pko_disable();
/* Free the ethernet devices */
for (port = 0; port < TOTAL_NUMBER_OF_PORTS; port++) {
if (cvm_oct_device[port]) {
- cvm_oct_tx_shutdown(cvm_oct_device[port]);
- unregister_netdev(cvm_oct_device[port]);
- kfree(cvm_oct_device[port]);
+ struct net_device *dev = cvm_oct_device[port];
+ struct octeon_ethernet *priv = netdev_priv(dev);
+ cancel_delayed_work_sync(&priv->port_periodic_work);
+
+ cvm_oct_tx_shutdown_dev(dev);
+ unregister_netdev(dev);
+ kfree(dev);
cvm_oct_device[port] = NULL;
}
}
+ destroy_workqueue(cvm_oct_poll_queue);
+
cvmx_pko_shutdown();
- cvm_oct_proc_shutdown();
cvmx_ipd_free_ptr();
PCI_BUS_FLAGS_NO_MMRBC = (__force pci_bus_flags_t) 2,
};
+ /* Based on the PCI Hotplug Spec, but some values are made up by us */
+ enum pci_bus_speed {
+ PCI_SPEED_33MHz = 0x00,
+ PCI_SPEED_66MHz = 0x01,
+ PCI_SPEED_66MHz_PCIX = 0x02,
+ PCI_SPEED_100MHz_PCIX = 0x03,
+ PCI_SPEED_133MHz_PCIX = 0x04,
+ PCI_SPEED_66MHz_PCIX_ECC = 0x05,
+ PCI_SPEED_100MHz_PCIX_ECC = 0x06,
+ PCI_SPEED_133MHz_PCIX_ECC = 0x07,
+ PCI_SPEED_66MHz_PCIX_266 = 0x09,
+ PCI_SPEED_100MHz_PCIX_266 = 0x0a,
+ PCI_SPEED_133MHz_PCIX_266 = 0x0b,
+ AGP_UNKNOWN = 0x0c,
+ AGP_1X = 0x0d,
+ AGP_2X = 0x0e,
+ AGP_4X = 0x0f,
+ AGP_8X = 0x10,
+ PCI_SPEED_66MHz_PCIX_533 = 0x11,
+ PCI_SPEED_100MHz_PCIX_533 = 0x12,
+ PCI_SPEED_133MHz_PCIX_533 = 0x13,
+ PCIE_SPEED_2_5GT = 0x14,
+ PCIE_SPEED_5_0GT = 0x15,
+ PCIE_SPEED_8_0GT = 0x16,
+ PCI_SPEED_UNKNOWN = 0xff,
+ };
+
struct pci_cap_saved_state {
struct hlist_node next;
char cap_nr;
configuration space */
unsigned int pme_support:5; /* Bitmask of states from which PME#
can be generated */
+ unsigned int pme_interrupt:1;
unsigned int d1_support:1; /* Low power state D1 is supported */
unsigned int d2_support:1; /* Low power state D2 is supported */
unsigned int no_d1d2:1; /* Only allow D0 and D3 */
unsigned int msix_enabled:1;
unsigned int ari_enabled:1; /* ARI forwarding */
unsigned int is_managed:1;
- unsigned int is_pcie:1;
+ unsigned int is_pcie:1; /* Obsolete. Will be removed.
+ Use pci_is_pcie() instead */
unsigned int needs_freset:1; /* Dev requires fundamental reset */
unsigned int state_saved:1;
unsigned int is_physfn:1;
hlist_add_head(&new_cap->next, &pci_dev->saved_cap_space);
}
- #ifndef PCI_BUS_NUM_RESOURCES
- #define PCI_BUS_NUM_RESOURCES 16
- #endif
+ /*
+ * The first PCI_BRIDGE_RESOURCE_NUM PCI bus resources (those that correspond
+ * to P2P or CardBus bridge windows) go in a table. Additional ones (for
+ * buses below host bridges or subtractive decode bridges) go in the list.
+ * Use pci_bus_for_each_resource() to iterate through all the resources.
+ */
+
+ /*
+ * PCI_SUBTRACTIVE_DECODE means the bridge forwards the window implicitly
+ * and there's no way to program the bridge with the details of the window.
+ * This does not apply to ACPI _CRS windows, even with the _DEC subtractive-
+ * decode bit set, because they are explicit and can be programmed with _SRS.
+ */
+ #define PCI_SUBTRACTIVE_DECODE 0x1
+
+ struct pci_bus_resource {
+ struct list_head list;
+ struct resource *res;
+ unsigned int flags;
+ };
#define PCI_REGION_FLAG_MASK 0x0fU /* These bits of resource flags tell us the PCI region flags */
struct list_head devices; /* list of devices on this bus */
struct pci_dev *self; /* bridge device as seen by parent */
struct list_head slots; /* list of slots on this bus */
- struct resource *resource[PCI_BUS_NUM_RESOURCES];
- /* address space routed to this bus */
+ struct resource *resource[PCI_BRIDGE_RESOURCE_NUM];
+ struct list_head resources; /* address space routed to this bus */
struct pci_ops *ops; /* configuration access functions */
void *sysdata; /* hook for sys-specific extension */
unsigned char primary; /* number of primary bridge */
unsigned char secondary; /* number of secondary bridge */
unsigned char subordinate; /* max number of subordinate buses */
+ unsigned char max_bus_speed; /* enum pci_bus_speed */
+ unsigned char cur_bus_speed; /* enum pci_bus_speed */
char name[48];
char *pcibios_setup(char *str);
/* Used only when drivers/pci/setup.c is used */
- void pcibios_align_resource(void *, struct resource *, resource_size_t,
+ resource_size_t pcibios_align_resource(void *, const struct resource *,
+ resource_size_t,
resource_size_t);
void pcibios_update_irq(struct pci_dev *, int irq);
struct pci_ops *ops, void *sysdata);
struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev,
int busnr);
+ void pcie_update_link_speed(struct pci_bus *bus, u16 link_status);
struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
const char *name,
struct hotplug_slot *hotplug);
extern void pci_stop_bus_device(struct pci_dev *dev);
void pci_setup_cardbus(struct pci_bus *bus);
extern void pci_sort_breadthfirst(void);
+#define dev_is_pci(d) ((d)->bus == &pci_bus_type)
+#define dev_is_pf(d) ((dev_is_pci(d) ? to_pci_dev(d)->is_physfn : false))
+#define dev_num_vf(d) ((dev_is_pci(d) ? pci_num_vf(to_pci_dev(d)) : 0))
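
These wrappers let code that only holds a generic struct device query
PCI and SR-IOV properties. A one-line sketch (netdev is an illustrative
net_device whose parent is the PCI function), matching how the
rtnetlink VF changes further below use it:

	/* Sketch: number of VFs behind a netdev's parent PCI device. */
	int num_vfs = dev_num_vf(netdev->dev.parent);
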
/* Generic PCI functions exported to card drivers */
- #ifdef CONFIG_PCI_LEGACY
- struct pci_dev __deprecated *pci_find_device(unsigned int vendor,
- unsigned int device,
- struct pci_dev *from);
- #endif /* CONFIG_PCI_LEGACY */
-
enum pci_lost_interrupt_reason {
PCI_LOST_IRQ_NO_INFORMATION = 0,
PCI_LOST_IRQ_DISABLE_MSI,
pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state);
bool pci_pme_capable(struct pci_dev *dev, pci_power_t state);
void pci_pme_active(struct pci_dev *dev, bool enable);
- int pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable);
+ int __pci_enable_wake(struct pci_dev *dev, pci_power_t state,
+ bool runtime, bool enable);
int pci_wake_from_d3(struct pci_dev *dev, bool enable);
pci_power_t pci_target_state(struct pci_dev *dev);
int pci_prepare_to_sleep(struct pci_dev *dev);
int pci_back_from_sleep(struct pci_dev *dev);
+ bool pci_dev_run_wake(struct pci_dev *dev);
+
+ static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
+ bool enable)
+ {
+ return __pci_enable_wake(dev, state, false, enable);
+ }
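
The runtime flag lets runtime PM arm wake-up events without disturbing
the system-sleep settings; system sleep keeps using the pci_enable_wake()
wrapper above. A hedged sketch of a driver's runtime-suspend path
(foo_runtime_suspend and its driver are hypothetical):

	/* Sketch: arm wake-up for runtime suspend, not system sleep. */
	static int foo_runtime_suspend(struct pci_dev *pdev)
	{
		if (pci_dev_run_wake(pdev))
			__pci_enable_wake(pdev, PCI_D3hot, true, true);
		pci_set_power_state(pdev, PCI_D3hot);
		return 0;
	}
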
/* For use by arch with custom probe code */
void set_pcie_port_type(struct pci_dev *pdev);
void pci_bus_size_bridges(struct pci_bus *bus);
int pci_claim_resource(struct pci_dev *, int);
void pci_assign_unassigned_resources(void);
+ void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge);
void pdev_enable_device(struct pci_dev *);
void pdev_sort_resources(struct pci_dev *, struct resource_list *);
int pci_enable_resources(struct pci_dev *, int mask);
void pci_release_selected_regions(struct pci_dev *, int);
/* drivers/pci/bus.c */
+ void pci_bus_add_resource(struct pci_bus *bus, struct resource *res, unsigned int flags);
+ struct resource *pci_bus_resource_n(const struct pci_bus *bus, int n);
+ void pci_bus_remove_resources(struct pci_bus *bus);
+
+ #define pci_bus_for_each_resource(bus, res, i) \
+ for (i = 0; \
+ (res = pci_bus_resource_n(bus, i)) || i < PCI_BRIDGE_RESOURCE_NUM; \
+ i++)
+
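A minimal sketch of the iterator in use; it walks both the fixed
bridge-window table and the list-based extras, and yields NULL for
empty table slots (show_bus_windows is an illustrative name):

	/* Sketch: print every address-space window routed to a bus. */
	static void show_bus_windows(struct pci_bus *bus)
	{
		struct resource *res;
		int i;

		pci_bus_for_each_resource(bus, res, i) {
			if (!res)
				continue; /* empty slot in the fixed table */
			dev_info(&bus->dev, "window %d: %pR\n", i, res);
		}
	}
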
int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
struct resource *res, resource_size_t size,
resource_size_t align, resource_size_t min,
unsigned int type_mask,
- void (*alignf)(void *, struct resource *,
- resource_size_t, resource_size_t),
+ resource_size_t (*alignf)(void *,
+ const struct resource *,
+ resource_size_t,
+ resource_size_t),
void *alignf_data);
void pci_enable_bridges(struct pci_bus *bus);
}
#endif /* CONFIG_PCI_DOMAINS */
+ /* some architectures require additional setup to direct VGA traffic */
+ typedef int (*arch_set_vga_state_t)(struct pci_dev *pdev, bool decode,
+ unsigned int command_bits, bool change_bridge);
+ extern void pci_register_set_vga_state(arch_set_vga_state_t func);
+
#else /* CONFIG_PCI is not enabled */
/*
_PCI_NOP_ALL(read, *)
_PCI_NOP_ALL(write,)
- static inline struct pci_dev *pci_find_device(unsigned int vendor,
- unsigned int device,
- struct pci_dev *from)
- {
- return NULL;
- }
-
static inline struct pci_dev *pci_get_device(unsigned int vendor,
unsigned int device,
struct pci_dev *from)
unsigned int devfn)
{ return NULL; }
+#define dev_is_pci(d) (false)
+#define dev_is_pf(d) (false)
+#define dev_num_vf(d) (0)
#endif /* CONFIG_PCI */
/* Include architecture-dependent settings and functions */
DECLARE_PCI_FIXUP_SECTION(.pci_fixup_suspend, \
suspend##vendor##device##hook, vendor, device, hook)
-
+ #ifdef CONFIG_PCI_QUIRKS
void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev);
+ #else
+ static inline void pci_fixup_device(enum pci_fixup_pass pass,
+ struct pci_dev *dev) {}
+ #endif
void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
void pcim_iounmap(struct pci_dev *pdev, void __iomem *addr);
extern int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
extern void pci_disable_sriov(struct pci_dev *dev);
extern irqreturn_t pci_sriov_migration(struct pci_dev *dev);
+extern int pci_num_vf(struct pci_dev *dev);
#else
static inline int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn)
{
{
return IRQ_NONE;
}
+static inline int pci_num_vf(struct pci_dev *dev)
+{
+ return 0;
+}
#endif
#if defined(CONFIG_HOTPLUG_PCI) || defined(CONFIG_HOTPLUG_PCI_MODULE)
void pci_request_acs(void);
+
+#define PCI_VPD_LRDT 0x80 /* Large Resource Data Type */
+#define PCI_VPD_LRDT_ID(x) (x | PCI_VPD_LRDT)
+
+/* Large Resource Data Type Tag Item Names */
+#define PCI_VPD_LTIN_ID_STRING 0x02 /* Identifier String */
+#define PCI_VPD_LTIN_RO_DATA 0x10 /* Read-Only Data */
+#define PCI_VPD_LTIN_RW_DATA 0x11 /* Read-Write Data */
+
+#define PCI_VPD_LRDT_ID_STRING PCI_VPD_LRDT_ID(PCI_VPD_LTIN_ID_STRING)
+#define PCI_VPD_LRDT_RO_DATA PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RO_DATA)
+#define PCI_VPD_LRDT_RW_DATA PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RW_DATA)
+
+/* Small Resource Data Type Tag Item Names */
+#define PCI_VPD_STIN_END 0x78 /* End */
+
+#define PCI_VPD_SRDT_END PCI_VPD_STIN_END
+
+#define PCI_VPD_SRDT_TIN_MASK 0x78
+#define PCI_VPD_SRDT_LEN_MASK 0x07
+
+#define PCI_VPD_LRDT_TAG_SIZE 3
+#define PCI_VPD_SRDT_TAG_SIZE 1
+
+#define PCI_VPD_INFO_FLD_HDR_SIZE 3
+
+#define PCI_VPD_RO_KEYWORD_PARTNO "PN"
+#define PCI_VPD_RO_KEYWORD_MFR_ID "MN"
+#define PCI_VPD_RO_KEYWORD_VENDOR0 "V0"
+
+/**
+ * pci_vpd_lrdt_size - Extracts the Large Resource Data Type length
+ * @lrdt: Pointer to the beginning of the Large Resource Data Type tag
+ *
+ * Returns the extracted Large Resource Data Type length.
+ */
+static inline u16 pci_vpd_lrdt_size(const u8 *lrdt)
+{
+ return (u16)lrdt[1] + ((u16)lrdt[2] << 8);
+}
+
+/**
+ * pci_vpd_srdt_size - Extracts the Small Resource Data Type length
+ * @srdt: Pointer to the beginning of the Small Resource Data Type tag
+ *
+ * Returns the extracted Small Resource Data Type length.
+ */
+static inline u8 pci_vpd_srdt_size(const u8 *srdt)
+{
+ return (*srdt) & PCI_VPD_SRDT_LEN_MASK;
+}
+
+/**
+ * pci_vpd_info_field_size - Extracts the information field length
+ * @info_field: Pointer to the beginning of an information field header
+ *
+ * Returns the extracted information field length.
+ */
+static inline u8 pci_vpd_info_field_size(const u8 *info_field)
+{
+ return info_field[2];
+}
+
+/**
+ * pci_vpd_find_tag - Locates the Resource Data Type tag provided
+ * @buf: Pointer to buffered vpd data
+ * @off: The offset into the buffer at which to begin the search
+ * @len: The length of the vpd buffer
+ * @rdt: The Resource Data Type to search for
+ *
+ * Returns the index where the Resource Data Type was found or
+ * -ENOENT otherwise.
+ */
+int pci_vpd_find_tag(const u8 *buf, unsigned int off, unsigned int len, u8 rdt);
+
+/**
+ * pci_vpd_find_info_keyword - Locates an information field keyword in the VPD
+ * @buf: Pointer to buffered vpd data
+ * @off: The offset into the buffer at which to begin the search
+ * @len: The length of the buffer area, relative to off, in which to search
+ * @kw: The keyword to search for
+ *
+ * Returns the index where the information field keyword was found or
+ * -ENOENT otherwise.
+ */
+int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off,
+ unsigned int len, const char *kw);
+
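Taken together these helpers support the usual two-step VPD walk: locate
the read-only data tag, then search it for a keyword. A minimal sketch,
assuming the VPD image was already read into buf (e.g. via
pci_read_vpd()) and with error handling abbreviated:

	/* Sketch: extract the Part Number from a buffered VPD image. */
	static int vpd_part_number(const u8 *buf, unsigned int vpd_len,
				   char *pn, size_t pn_size)
	{
		unsigned int ro_len, fld_len;
		int tag, fld;

		tag = pci_vpd_find_tag(buf, 0, vpd_len, PCI_VPD_LRDT_RO_DATA);
		if (tag < 0)
			return tag;
		ro_len = pci_vpd_lrdt_size(&buf[tag]);
		tag += PCI_VPD_LRDT_TAG_SIZE;

		fld = pci_vpd_find_info_keyword(buf, tag, ro_len,
						PCI_VPD_RO_KEYWORD_PARTNO);
		if (fld < 0)
			return fld;
		fld_len = pci_vpd_info_field_size(&buf[fld]);
		fld += PCI_VPD_INFO_FLD_HDR_SIZE;
		if (fld_len >= pn_size || fld + fld_len > vpd_len)
			return -EINVAL;

		memcpy(pn, &buf[fld], fld_len);
		pn[fld_len] = '\0';
		return 0;
	}
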
#endif /* __KERNEL__ */
#endif /* LINUX_PCI_H */
* primitives such as list_add_rcu() as long as it's guarded by rcu_read_lock().
*/
#define list_entry_rcu(ptr, type, member) \
- container_of(rcu_dereference(ptr), type, member)
+ container_of(rcu_dereference_raw(ptr), type, member)
/**
* list_first_entry_rcu - get the first element from a list
list_entry_rcu((ptr)->next, type, member)
#define __list_for_each_rcu(pos, head) \
- for (pos = rcu_dereference((head)->next); \
+ for (pos = rcu_dereference_raw((head)->next); \
pos != (head); \
- pos = rcu_dereference(pos->next))
+ pos = rcu_dereference_raw(pos->next))
/**
* list_for_each_entry_rcu - iterate over rcu list of given type
* as long as the traversal is guarded by rcu_read_lock().
*/
#define list_for_each_continue_rcu(pos, head) \
- for ((pos) = rcu_dereference((pos)->next); \
+ for ((pos) = rcu_dereference_raw((pos)->next); \
prefetch((pos)->next), (pos) != (head); \
- (pos) = rcu_dereference((pos)->next))
+ (pos) = rcu_dereference_raw((pos)->next))
/**
* list_for_each_entry_continue_rcu - continue iteration over list of given type
n->next->pprev = &n->next;
}
+#define __hlist_for_each_rcu(pos, head) \
+ for (pos = rcu_dereference((head)->first); \
+ pos && ({ prefetch(pos->next); 1; }); \
+ pos = rcu_dereference(pos->next))
+
/**
* hlist_for_each_entry_rcu - iterate over rcu list of given type
* @tpos: the type * to use as a loop cursor.
* as long as the traversal is guarded by rcu_read_lock().
*/
#define hlist_for_each_entry_rcu(tpos, pos, head, member) \
- for (pos = rcu_dereference((head)->first); \
+ for (pos = rcu_dereference_raw((head)->first); \
pos && ({ prefetch(pos->next); 1; }) && \
({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; }); \
- pos = rcu_dereference(pos->next))
+ pos = rcu_dereference_raw(pos->next))
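
For reference, a minimal sketch of the read-side pattern these macros
serve; obj, obj_list, the node member and handle() are illustrative,
and the walk must be guarded by rcu_read_lock():

	/* Sketch: RCU-safe read-side walk of an hlist. */
	struct obj *o;
	struct hlist_node *n;

	rcu_read_lock();
	hlist_for_each_entry_rcu(o, n, &obj_list, node)
		handle(o);
	rcu_read_unlock();
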
#endif /* __KERNEL__ */
#endif
#define RTAX_FEATURES RTAX_FEATURES
RTAX_RTO_MIN,
#define RTAX_RTO_MIN RTAX_RTO_MIN
+ RTAX_INITRWND,
+#define RTAX_INITRWND RTAX_INITRWND
__RTAX_MAX
};
extern void rtnl_unlock(void);
extern int rtnl_trylock(void);
extern int rtnl_is_locked(void);
+ #ifdef CONFIG_PROVE_LOCKING
+ extern int lockdep_rtnl_is_held(void);
+ #endif /* #ifdef CONFIG_PROVE_LOCKING */
extern void rtnetlink_init(void);
extern void __rtnl_unlock(void);
}
EXPORT_SYMBOL(dev_load);
-/**
- * dev_open - prepare an interface for use.
- * @dev: device to open
- *
- * Takes a device from down to up state. The device's private open
- * function is invoked and then the multicast lists are loaded. Finally
- * the device is moved into the up state and a %NETDEV_UP message is
- * sent to the netdev notifier chain.
- *
- * Calling this function on an active interface is a nop. On a failure
- * a negative errno code is returned.
- */
-int dev_open(struct net_device *dev)
+static int __dev_open(struct net_device *dev)
{
const struct net_device_ops *ops = dev->netdev_ops;
int ret;
ASSERT_RTNL();
- /*
- * Is it already up?
- */
-
- if (dev->flags & IFF_UP)
- return 0;
-
/*
* Is it even present?
*/
* Wakeup transmit queue engine
*/
dev_activate(dev);
-
- /*
- * ... and announce new interface.
- */
- call_netdevice_notifiers(NETDEV_UP, dev);
}
return ret;
}
-EXPORT_SYMBOL(dev_open);
/**
- * dev_close - shutdown an interface.
- * @dev: device to shutdown
+ * dev_open - prepare an interface for use.
+ * @dev: device to open
*
- * This function moves an active device into down state. A
- * %NETDEV_GOING_DOWN is sent to the netdev notifier chain. The device
- * is then deactivated and finally a %NETDEV_DOWN is sent to the notifier
- * chain.
+ * Takes a device from down to up state. The device's private open
+ * function is invoked and then the multicast lists are loaded. Finally
+ * the device is moved into the up state and a %NETDEV_UP message is
+ * sent to the netdev notifier chain.
+ *
+ * Calling this function on an active interface is a nop. On a failure
+ * a negative errno code is returned.
*/
-int dev_close(struct net_device *dev)
+int dev_open(struct net_device *dev)
+{
+ int ret;
+
+ /*
+ * Is it already up?
+ */
+ if (dev->flags & IFF_UP)
+ return 0;
+
+ /*
+ * Open device
+ */
+ ret = __dev_open(dev);
+ if (ret < 0)
+ return ret;
+
+ /*
+ * ... and announce new interface.
+ */
+ rtmsg_ifinfo(RTM_NEWLINK, dev, IFF_UP|IFF_RUNNING);
+ call_netdevice_notifiers(NETDEV_UP, dev);
+
+ return ret;
+}
+EXPORT_SYMBOL(dev_open);
+
+static int __dev_close(struct net_device *dev)
{
const struct net_device_ops *ops = dev->netdev_ops;
- ASSERT_RTNL();
+ ASSERT_RTNL();
might_sleep();
- if (!(dev->flags & IFF_UP))
- return 0;
-
/*
* Tell people we are going down, so that they can
	 * prepare for shutdown while the device is still operating.
dev->flags &= ~IFF_UP;
/*
- * Tell people we are down
+ * Shutdown NET_DMA
*/
- call_netdevice_notifiers(NETDEV_DOWN, dev);
+ net_dmaengine_put();
+
+ return 0;
+}
+
+/**
+ * dev_close - shutdown an interface.
+ * @dev: device to shutdown
+ *
+ * This function moves an active device into down state. A
+ * %NETDEV_GOING_DOWN is sent to the netdev notifier chain. The device
+ * is then deactivated and finally a %NETDEV_DOWN is sent to the notifier
+ * chain.
+ */
+int dev_close(struct net_device *dev)
+{
+ if (!(dev->flags & IFF_UP))
+ return 0;
+
+ __dev_close(dev);
/*
- * Shutdown NET_DMA
+ * Tell people we are down
*/
- net_dmaengine_put();
+ rtmsg_ifinfo(RTM_NEWLINK, dev, IFF_UP|IFF_RUNNING);
+ call_netdevice_notifiers(NETDEV_DOWN, dev);
return 0;
}
if (skb->len > (dev->mtu + dev->hard_header_len))
return NET_RX_DROP;
- skb_dst_drop(skb);
+ skb_set_dev(skb, dev);
skb->tstamp.tv64 = 0;
skb->pkt_type = PACKET_HOST;
skb->protocol = eth_type_trans(skb, dev);
- skb->mark = 0;
- secpath_reset(skb);
- nf_reset(skb);
return netif_rx(skb);
}
EXPORT_SYMBOL_GPL(dev_forward_skb);
return false;
}
+/**
+ * skb_set_dev - assign a new device to a buffer
+ * @skb: buffer for the new device
+ * @dev: network device
+ *
+ * If an skb is owned by a device already, we have to reset
+ * all data private to the namespace that device belongs to
+ * before assigning it to a new device.
+ */
+#ifdef CONFIG_NET_NS
+void skb_set_dev(struct sk_buff *skb, struct net_device *dev)
+{
+ skb_dst_drop(skb);
+ if (skb->dev && !net_eq(dev_net(skb->dev), dev_net(dev))) {
+ secpath_reset(skb);
+ nf_reset(skb);
+ skb_init_secmark(skb);
+ skb->mark = 0;
+ skb->priority = 0;
+ skb->nf_trace = 0;
+ skb->ipvs_property = 0;
+#ifdef CONFIG_NET_SCHED
+ skb->tc_index = 0;
+#endif
+ }
+ skb->dev = dev;
+}
+EXPORT_SYMBOL(skb_set_dev);
+#endif /* CONFIG_NET_NS */
+
/*
* Invalidate hardware checksum when packet is to be mangled, and
* complete checksum manually on outgoing path.
skb->next = nskb->next;
nskb->next = NULL;
+
+ /*
+	 * If the device doesn't need nskb->dst, release it right now
+	 * while it's hot in this CPU cache
+ */
+ if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
+ skb_dst_drop(nskb);
+
rc = ops->ndo_start_xmit(nskb, dev);
if (unlikely(rc != NETDEV_TX_OK)) {
if (rc & ~NETDEV_TX_MASK)
return rc;
}
+/*
+ * Returns true if either:
+ * 1. skb has frag_list and the device doesn't support FRAGLIST, or
+ * 2. skb is fragmented and the device does not support SG, or if
+ * at least one of fragments is in highmem and device does not
+ * support DMA from it.
+ */
+static inline int skb_needs_linearize(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ return (skb_has_frags(skb) && !(dev->features & NETIF_F_FRAGLIST)) ||
+ (skb_shinfo(skb)->nr_frags && (!(dev->features & NETIF_F_SG) ||
+ illegal_highdma(dev, skb)));
+}
+
/**
* dev_queue_xmit - transmit a buffer
* @skb: buffer to transmit
if (netif_needs_gso(dev, skb))
goto gso;
- if (skb_has_frags(skb) &&
- !(dev->features & NETIF_F_FRAGLIST) &&
- __skb_linearize(skb))
- goto out_kfree_skb;
-
- /* Fragmented skb is linearized if device does not support SG,
- * or if at least one of fragments is in highmem and device
- * does not support DMA from it.
- */
- if (skb_shinfo(skb)->nr_frags &&
- (!(dev->features & NETIF_F_SG) || illegal_highdma(dev, skb)) &&
- __skb_linearize(skb))
+ /* Convert a paged skb to linear, if required */
+ if (skb_needs_linearize(skb, dev) && __skb_linearize(skb))
goto out_kfree_skb;
/* If packet is not checksummed and device does not support
rcu_read_lock_bh();
txq = dev_pick_tx(dev, skb);
- q = rcu_dereference(txq->qdisc);
+ q = rcu_dereference_bh(txq->qdisc);
#ifdef CONFIG_NET_CLS_ACT
skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_EGRESS);
struct packet_type *ptype, *pt_prev;
struct net_device *orig_dev;
struct net_device *null_or_orig;
+ struct net_device *null_or_bond;
int ret = NET_RX_DROP;
__be16 type;
if (!skb)
goto out;
+ /*
+ * Make sure frames received on VLAN interfaces stacked on
+ * bonding interfaces still make their way to any base bonding
+ * device that may have registered for a specific ptype. The
+ * handler may have to adjust skb->dev and orig_dev.
+ */
+ null_or_bond = NULL;
+ if ((skb->dev->priv_flags & IFF_802_1Q_VLAN) &&
+ (vlan_dev_real_dev(skb->dev)->priv_flags & IFF_BONDING)) {
+ null_or_bond = vlan_dev_real_dev(skb->dev);
+ }
+
type = skb->protocol;
list_for_each_entry_rcu(ptype,
&ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
- if (ptype->type == type &&
- (ptype->dev == null_or_orig || ptype->dev == skb->dev ||
- ptype->dev == orig_dev)) {
+ if (ptype->type == type && (ptype->dev == null_or_orig ||
+ ptype->dev == skb->dev || ptype->dev == orig_dev ||
+ ptype->dev == null_or_bond)) {
if (pt_prev)
ret = deliver_skb(skb, pt_prev, orig_dev);
pt_prev = ptype;
return netif_receive_skb(skb);
}
-void napi_gro_flush(struct napi_struct *napi)
+static void napi_gro_flush(struct napi_struct *napi)
{
struct sk_buff *skb, *next;
napi->gro_count = 0;
napi->gro_list = NULL;
}
-EXPORT_SYMBOL(napi_gro_flush);
enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
{
* entries to the tail of this list, and only ->poll()
* calls can remove this head entry from the list.
*/
- n = list_entry(list->next, struct napi_struct, poll_list);
+ n = list_first_entry(list, struct napi_struct, poll_list);
have = netpoll_poll_lock(n);
{
const struct net_device_stats *stats = dev_get_stats(dev);
- seq_printf(seq, "%6s:%8lu %7lu %4lu %4lu %4lu %5lu %10lu %9lu "
+ seq_printf(seq, "%6s: %7lu %7lu %4lu %4lu %4lu %5lu %10lu %9lu "
"%8lu %7lu %4lu %4lu %4lu %5lu %7lu %10lu\n",
dev->name, stats->rx_bytes, stats->rx_packets,
stats->rx_errors,
/* Unicast addresses changes may only happen under the rtnl,
* therefore calling __dev_set_promiscuity here is safe.
*/
- if (dev->uc.count > 0 && !dev->uc_promisc) {
+ if (!netdev_uc_empty(dev) && !dev->uc_promisc) {
__dev_set_promiscuity(dev, 1);
dev->uc_promisc = 1;
- } else if (dev->uc.count == 0 && dev->uc_promisc) {
+ } else if (netdev_uc_empty(dev) && dev->uc_promisc) {
__dev_set_promiscuity(dev, -1);
dev->uc_promisc = 0;
}
netif_addr_lock_bh(dev);
__dev_addr_discard(&dev->mc_list);
- dev->mc_count = 0;
+ netdev_mc_count(dev) = 0;
netif_addr_unlock_bh(dev);
}
}
EXPORT_SYMBOL(dev_get_flags);
-/**
- * dev_change_flags - change device settings
- * @dev: device
- * @flags: device state flags
- *
- * Change settings on device based state flags. The flags are
- * in the userspace exported format.
- */
-int dev_change_flags(struct net_device *dev, unsigned flags)
+int __dev_change_flags(struct net_device *dev, unsigned int flags)
{
- int ret, changes;
int old_flags = dev->flags;
+ int ret;
ASSERT_RTNL();
ret = 0;
if ((old_flags ^ flags) & IFF_UP) { /* Bit is different ? */
- ret = ((old_flags & IFF_UP) ? dev_close : dev_open)(dev);
+ ret = ((old_flags & IFF_UP) ? __dev_close : __dev_open)(dev);
if (!ret)
dev_set_rx_mode(dev);
}
- if (dev->flags & IFF_UP &&
- ((old_flags ^ dev->flags) & ~(IFF_UP | IFF_PROMISC | IFF_ALLMULTI |
- IFF_VOLATILE)))
- call_netdevice_notifiers(NETDEV_CHANGE, dev);
-
if ((flags ^ dev->gflags) & IFF_PROMISC) {
int inc = (flags & IFF_PROMISC) ? 1 : -1;
dev_set_allmulti(dev, inc);
}
- /* Exclude state transition flags, already notified */
- changes = (old_flags ^ dev->flags) & ~(IFF_UP | IFF_RUNNING);
+ return ret;
+}
+
+void __dev_notify_flags(struct net_device *dev, unsigned int old_flags)
+{
+ unsigned int changes = dev->flags ^ old_flags;
+
+ if (changes & IFF_UP) {
+ if (dev->flags & IFF_UP)
+ call_netdevice_notifiers(NETDEV_UP, dev);
+ else
+ call_netdevice_notifiers(NETDEV_DOWN, dev);
+ }
+
+ if (dev->flags & IFF_UP &&
+ (changes & ~(IFF_UP | IFF_PROMISC | IFF_ALLMULTI | IFF_VOLATILE)))
+ call_netdevice_notifiers(NETDEV_CHANGE, dev);
+}
+
+/**
+ * dev_change_flags - change device settings
+ * @dev: device
+ * @flags: device state flags
+ *
+ * Change settings on a device based on state flags. The flags are
+ * in the userspace exported format.
+ */
+int dev_change_flags(struct net_device *dev, unsigned flags)
+{
+ int ret, changes;
+ int old_flags = dev->flags;
+
+ ret = __dev_change_flags(dev, flags);
+ if (ret < 0)
+ return ret;
+
+ changes = old_flags ^ dev->flags;
if (changes)
rtmsg_ifinfo(RTM_NEWLINK, dev, changes);
+ __dev_notify_flags(dev, old_flags);
return ret;
}
EXPORT_SYMBOL(dev_change_flags);
*/
call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
+ if (!dev->rtnl_link_ops ||
+ dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
+ rtmsg_ifinfo(RTM_DELLINK, dev, ~0U);
+
/*
* Flush the unicast and multicast chains
*/
}
/* Process any work delayed until the end of the batch */
- dev = list_entry(head->next, struct net_device, unreg_list);
+ dev = list_first_entry(head, struct net_device, unreg_list);
call_netdevice_notifiers(NETDEV_UNREGISTER_BATCH, dev);
synchronize_net();
* Prevent userspace races by waiting until the network
* device is fully setup before sending notifications.
*/
- rtmsg_ifinfo(RTM_NEWLINK, dev, ~0U);
+ if (!dev->rtnl_link_ops ||
+ dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
+ rtmsg_ifinfo(RTM_NEWLINK, dev, ~0U);
out:
return ret;
while (!list_empty(&list)) {
struct net_device *dev
- = list_entry(list.next, struct net_device, todo_list);
+ = list_first_entry(&list, struct net_device, todo_list);
list_del(&dev->todo_list);
if (unlikely(dev->reg_state != NETREG_UNREGISTERING)) {
netdev_init_queues(dev);
+ INIT_LIST_HEAD(&dev->ethtool_ntuple_list.list);
+ dev->ethtool_ntuple_list.count = 0;
INIT_LIST_HEAD(&dev->napi_list);
INIT_LIST_HEAD(&dev->unreg_list);
INIT_LIST_HEAD(&dev->link_watch_list);
/* Flush device addresses */
dev_addr_flush(dev);
+ /* Clear ethtool n-tuple list */
+ ethtool_ntuple_flush(dev);
+
list_for_each_entry_safe(p, n, &dev->napi_list, dev_list)
netif_napi_del(p);
return err;
rcu_read_lock_bh();
- filter = rcu_dereference(sk->sk_filter);
+ filter = rcu_dereference_bh(sk->sk_filter);
if (filter) {
unsigned int pkt_len = sk_run_filter(skb, filter->insns,
filter->len);
}
rcu_read_lock_bh();
- old_fp = rcu_dereference(sk->sk_filter);
+ old_fp = rcu_dereference_bh(sk->sk_filter);
rcu_assign_pointer(sk->sk_filter, fp);
rcu_read_unlock_bh();
sk_filter_delayed_uncharge(sk, old_fp);
return 0;
}
+EXPORT_SYMBOL_GPL(sk_attach_filter);
int sk_detach_filter(struct sock *sk)
{
struct sk_filter *filter;
rcu_read_lock_bh();
- filter = rcu_dereference(sk->sk_filter);
+ filter = rcu_dereference_bh(sk->sk_filter);
if (filter) {
rcu_assign_pointer(sk->sk_filter, NULL);
sk_filter_delayed_uncharge(sk, filter);
rcu_read_unlock_bh();
return ret;
}
+EXPORT_SYMBOL_GPL(sk_detach_filter);
#include <linux/security.h>
#include <linux/mutex.h>
#include <linux/if_addr.h>
+#include <linux/pci.h>
#include <asm/uaccess.h>
#include <asm/system.h>
}
EXPORT_SYMBOL(rtnl_is_locked);
+ #ifdef CONFIG_PROVE_LOCKING
+ int lockdep_rtnl_is_held(void)
+ {
+ return lockdep_is_held(&rtnl_mutex);
+ }
+ EXPORT_SYMBOL(lockdep_rtnl_is_held);
+ #endif /* #ifdef CONFIG_PROVE_LOCKING */
+
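With CONFIG_PROVE_LOCKING this allows RCU-lockdep to accept pointers
that are published under the RTNL as well as under rcu_read_lock().
A minimal sketch of the intended pattern (foo_cfg_ptr is illustrative);
the whole condition compiles away when PROVE_RCU is disabled:

	/* Sketch: dereference a pointer protected by RCU *or* the RTNL. */
	cfg = rcu_dereference_check(foo_cfg_ptr,
				    rcu_read_lock_held() ||
				    lockdep_rtnl_is_held());
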
static struct rtnl_link *rtnl_msg_handlers[NPROTO];
static inline int rtm_msgindex(int msgtype)
}
}
+static unsigned int rtnl_dev_combine_flags(const struct net_device *dev,
+ const struct ifinfomsg *ifm)
+{
+ unsigned int flags = ifm->ifi_flags;
+
+ /* bugwards compatibility: ifi_change == 0 is treated as ~0 */
+ if (ifm->ifi_change)
+ flags = (flags & ifm->ifi_change) |
+ (dev->flags & ~ifm->ifi_change);
+
+ return flags;
+}
+
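A short worked example of the combining rule (flag values illustrative):

	/*
	 * With dev->flags = IFF_UP | IFF_BROADCAST, a request carrying
	 * ifi_flags = 0 and ifi_change = IFF_UP clears only IFF_UP:
	 *
	 *   flags = (0 & IFF_UP) | ((IFF_UP | IFF_BROADCAST) & ~IFF_UP)
	 *         = IFF_BROADCAST
	 *
	 * ifi_change == 0 keeps the historical "change everything"
	 * behaviour.
	 */
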
static void copy_rtnl_link_stats(struct rtnl_link_stats *a,
const struct net_device_stats *b)
{
a->tx_compressed = b->tx_compressed;
};
+static inline int rtnl_vfinfo_size(const struct net_device *dev)
+{
+ if (dev->dev.parent && dev_is_pci(dev->dev.parent))
+ return dev_num_vf(dev->dev.parent) *
+ sizeof(struct ifla_vf_info);
+ else
+ return 0;
+}
+
static inline size_t if_nlmsg_size(const struct net_device *dev)
{
return NLMSG_ALIGN(sizeof(struct ifinfomsg))
+ nla_total_size(4) /* IFLA_MASTER */
+ nla_total_size(1) /* IFLA_OPERSTATE */
+ nla_total_size(1) /* IFLA_LINKMODE */
+ + nla_total_size(4) /* IFLA_NUM_VF */
+ + nla_total_size(rtnl_vfinfo_size(dev)) /* IFLA_VFINFO */
+ rtnl_link_get_size(dev); /* IFLA_LINKINFO */
}
stats = dev_get_stats(dev);
copy_rtnl_link_stats(nla_data(attr), stats);
+ if (dev->netdev_ops->ndo_get_vf_config && dev->dev.parent) {
+ int i;
+ struct ifla_vf_info ivi;
+
+ NLA_PUT_U32(skb, IFLA_NUM_VF, dev_num_vf(dev->dev.parent));
+ for (i = 0; i < dev_num_vf(dev->dev.parent); i++) {
+ if (dev->netdev_ops->ndo_get_vf_config(dev, i, &ivi))
+ break;
+ NLA_PUT(skb, IFLA_VFINFO, sizeof(ivi), &ivi);
+ }
+ }
if (dev->rtnl_link_ops) {
if (rtnl_link_fill(skb, dev) < 0)
goto nla_put_failure;
[IFLA_LINKINFO] = { .type = NLA_NESTED },
[IFLA_NET_NS_PID] = { .type = NLA_U32 },
[IFLA_IFALIAS] = { .type = NLA_STRING, .len = IFALIASZ-1 },
+ [IFLA_VF_MAC] = { .type = NLA_BINARY,
+ .len = sizeof(struct ifla_vf_mac) },
+ [IFLA_VF_VLAN] = { .type = NLA_BINARY,
+ .len = sizeof(struct ifla_vf_vlan) },
+ [IFLA_VF_TX_RATE] = { .type = NLA_BINARY,
+ .len = sizeof(struct ifla_vf_tx_rate) },
};
EXPORT_SYMBOL(ifla_policy);
}
if (ifm->ifi_flags || ifm->ifi_change) {
- unsigned int flags = ifm->ifi_flags;
-
- /* bugwards compatibility: ifi_change == 0 is treated as ~0 */
- if (ifm->ifi_change)
- flags = (flags & ifm->ifi_change) |
- (dev->flags & ~ifm->ifi_change);
- err = dev_change_flags(dev, flags);
+ err = dev_change_flags(dev, rtnl_dev_combine_flags(dev, ifm));
if (err < 0)
goto errout;
}
write_unlock_bh(&dev_base_lock);
}
+ if (tb[IFLA_VF_MAC]) {
+ struct ifla_vf_mac *ivm;
+ ivm = nla_data(tb[IFLA_VF_MAC]);
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_mac)
+ err = ops->ndo_set_vf_mac(dev, ivm->vf, ivm->mac);
+ if (err < 0)
+ goto errout;
+ modified = 1;
+ }
+
+ if (tb[IFLA_VF_VLAN]) {
+ struct ifla_vf_vlan *ivv;
+ ivv = nla_data(tb[IFLA_VF_VLAN]);
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_vlan)
+ err = ops->ndo_set_vf_vlan(dev, ivv->vf,
+ ivv->vlan,
+ ivv->qos);
+ if (err < 0)
+ goto errout;
+ modified = 1;
+ }
+
+ if (tb[IFLA_VF_TX_RATE]) {
+ struct ifla_vf_tx_rate *ivt;
+ ivt = nla_data(tb[IFLA_VF_TX_RATE]);
+ err = -EOPNOTSUPP;
+ if (ops->ndo_set_vf_tx_rate)
+ err = ops->ndo_set_vf_tx_rate(dev, ivt->vf, ivt->rate);
+ if (err < 0)
+ goto errout;
+ modified = 1;
+ }
err = 0;
errout:
return 0;
}
+int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm)
+{
+ unsigned int old_flags;
+ int err;
+
+ old_flags = dev->flags;
+ if (ifm && (ifm->ifi_flags || ifm->ifi_change)) {
+ err = __dev_change_flags(dev, rtnl_dev_combine_flags(dev, ifm));
+ if (err < 0)
+ return err;
+ }
+
+ dev->rtnl_link_state = RTNL_LINK_INITIALIZED;
+ rtmsg_ifinfo(RTM_NEWLINK, dev, ~0U);
+
+ __dev_notify_flags(dev, old_flags);
+ return 0;
+}
+EXPORT_SYMBOL(rtnl_configure_link);
+
struct net_device *rtnl_create_link(struct net *src_net, struct net *net,
char *ifname, const struct rtnl_link_ops *ops, struct nlattr *tb[])
{
dev_net_set(dev, net);
dev->rtnl_link_ops = ops;
+ dev->rtnl_link_state = RTNL_LINK_INITIALIZING;
dev->real_num_tx_queues = real_num_queues;
if (strchr(dev->name, '%')) {
if (!(nlh->nlmsg_flags & NLM_F_CREATE))
return -ENODEV;
- if (ifm->ifi_index || ifm->ifi_flags || ifm->ifi_change)
+ if (ifm->ifi_index)
return -EOPNOTSUPP;
if (tb[IFLA_MAP] || tb[IFLA_MASTER] || tb[IFLA_PROTINFO])
return -EOPNOTSUPP;
err = ops->newlink(net, dev, tb, data);
else
err = register_netdevice(dev);
- if (err < 0 && !IS_ERR(dev))
+ if (err < 0 && !IS_ERR(dev)) {
free_netdev(dev);
+ goto out;
+ }
+ err = rtnl_configure_link(dev, ifm);
+ if (err < 0)
+ unregister_netdevice(dev);
+out:
put_net(dest_net);
return err;
}
struct net_device *dev = ptr;
switch (event) {
- case NETDEV_UNREGISTER:
- rtmsg_ifinfo(RTM_DELLINK, dev, ~0U);
- break;
case NETDEV_UP:
case NETDEV_DOWN:
- rtmsg_ifinfo(RTM_NEWLINK, dev, IFF_UP|IFF_RUNNING);
- break;
+ case NETDEV_PRE_UP:
case NETDEV_POST_INIT:
case NETDEV_REGISTER:
case NETDEV_CHANGE:
case NETDEV_GOING_DOWN:
+ case NETDEV_UNREGISTER:
case NETDEV_UNREGISTER_BATCH:
break;
default:
};
-static int rtnetlink_net_init(struct net *net)
+static int __net_init rtnetlink_net_init(struct net *net)
{
struct sock *sk;
sk = netlink_kernel_create(net, NETLINK_ROUTE, RTNLGRP_MAX,
return 0;
}
-static void rtnetlink_net_exit(struct net *net)
+static void __net_exit rtnetlink_net_exit(struct net *net)
{
netlink_kernel_release(net->rtnl);
net->rtnl = NULL;
struct timeval tm;
} v;
- unsigned int lv = sizeof(int);
+ int lv = sizeof(int);
int len;
if (get_user(len, optlen))
if (sk->sk_destruct)
sk->sk_destruct(sk);
- filter = rcu_dereference(sk->sk_filter);
+ filter = rcu_dereference_check(sk->sk_filter,
+ atomic_read(&sk->sk_wmem_alloc) == 0);
if (filter) {
sk_filter_uncharge(sk, filter);
rcu_assign_pointer(sk->sk_filter, NULL);
}
EXPORT_SYMBOL_GPL(sock_prot_inuse_get);
-static int sock_inuse_init_net(struct net *net)
+static int __net_init sock_inuse_init_net(struct net *net)
{
net->core.inuse = alloc_percpu(struct prot_inuse);
return net->core.inuse ? 0 : -ENOMEM;
}
-static void sock_inuse_exit_net(struct net *net)
+static void __net_exit sock_inuse_exit_net(struct net *net)
{
free_percpu(net->core.inuse);
}
}
if (prot->rsk_prot != NULL) {
- static const char mask[] = "request_sock_%s";
-
- prot->rsk_prot->slab_name = kmalloc(strlen(prot->name) + sizeof(mask) - 1, GFP_KERNEL);
+ prot->rsk_prot->slab_name = kasprintf(GFP_KERNEL, "request_sock_%s", prot->name);
if (prot->rsk_prot->slab_name == NULL)
goto out_free_sock_slab;
- sprintf(prot->rsk_prot->slab_name, mask, prot->name);
prot->rsk_prot->slab = kmem_cache_create(prot->rsk_prot->slab_name,
prot->rsk_prot->obj_size, 0,
SLAB_HWCACHE_ALIGN, NULL);
}
if (prot->twsk_prot != NULL) {
- static const char mask[] = "tw_sock_%s";
-
- prot->twsk_prot->twsk_slab_name = kmalloc(strlen(prot->name) + sizeof(mask) - 1, GFP_KERNEL);
+ prot->twsk_prot->twsk_slab_name = kasprintf(GFP_KERNEL, "tw_sock_%s", prot->name);
if (prot->twsk_prot->twsk_slab_name == NULL)
goto out_free_request_sock_slab;
- sprintf(prot->twsk_prot->twsk_slab_name, mask, prot->name);
prot->twsk_prot->twsk_slab =
kmem_cache_create(prot->twsk_prot->twsk_slab_name,
prot->twsk_prot->twsk_obj_size,
if (!rt_hash_table[st->bucket].chain)
continue;
rcu_read_lock_bh();
- r = rcu_dereference(rt_hash_table[st->bucket].chain);
+ r = rcu_dereference_bh(rt_hash_table[st->bucket].chain);
while (r) {
if (dev_net(r->u.dst.dev) == seq_file_net(seq) &&
r->rt_genid == st->genid)
return r;
- r = rcu_dereference(r->u.dst.rt_next);
+ r = rcu_dereference_bh(r->u.dst.rt_next);
}
rcu_read_unlock_bh();
}
rcu_read_lock_bh();
r = rt_hash_table[st->bucket].chain;
}
- return rcu_dereference(r);
+ return rcu_dereference_bh(r);
}
static struct rtable *rt_cache_get_next(struct seq_file *seq,
if (skb->protocol != htons(ETH_P_IP)) {
/* Not IP (i.e. ARP). Do not create route, if it is
* invalid for proxy arp. DNAT routes are always valid.
+ *
+		 * The proxy arp feature has been extended to allow ARP
+		 * replies back out the same interface, to support
+		 * Private VLAN switch technologies. See arp.c.
*/
- if (out_dev == in_dev) {
+ if (out_dev == in_dev &&
+ IN_DEV_PROXY_ARP_PVLAN(in_dev) == 0) {
err = -EINVAL;
goto cleanup;
}
hash = rt_hash(flp->fl4_dst, flp->fl4_src, flp->oif, rt_genid(net));
rcu_read_lock_bh();
- for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
- rth = rcu_dereference(rth->u.dst.rt_next)) {
+ for (rth = rcu_dereference_bh(rt_hash_table[hash].chain); rth;
+ rth = rcu_dereference_bh(rth->u.dst.rt_next)) {
if (rth->fl.fl4_dst == flp->fl4_dst &&
rth->fl.fl4_src == flp->fl4_src &&
rth->fl.iif == 0 &&
if (!rt_hash_table[h].chain)
continue;
rcu_read_lock_bh();
- for (rt = rcu_dereference(rt_hash_table[h].chain), idx = 0; rt;
- rt = rcu_dereference(rt->u.dst.rt_next), idx++) {
+ for (rt = rcu_dereference_bh(rt_hash_table[h].chain), idx = 0; rt;
+ rt = rcu_dereference_bh(rt->u.dst.rt_next), idx++) {
if (!net_eq(dev_net(rt->u.dst.dev), net) || idx < s_idx)
continue;
if (rt_is_expired(rt))
#ifdef CONFIG_NET_CLS_ROUTE
-struct ip_rt_acct *ip_rt_acct __read_mostly;
+struct ip_rt_acct __percpu *ip_rt_acct __read_mostly;
#endif /* CONFIG_NET_CLS_ROUTE */
static __initdata unsigned long rhash_entries;
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/if_vlan.h>
+#include <linux/virtio_net.h>
#ifdef CONFIG_INET
#include <net/inet_common.h>
unsigned char mr_address[MAX_ADDR_LEN];
};
-#ifdef CONFIG_PACKET_MMAP
static int packet_set_ring(struct sock *sk, struct tpacket_req *req,
int closing, int tx_ring);
struct packet_sock;
static int tpacket_snd(struct packet_sock *po, struct msghdr *msg);
-#endif
static void packet_flush_mclist(struct sock *sk);
/* struct sock has to be the first member of packet_sock */
struct sock sk;
struct tpacket_stats stats;
-#ifdef CONFIG_PACKET_MMAP
struct packet_ring_buffer rx_ring;
struct packet_ring_buffer tx_ring;
int copy_thresh;
-#endif
spinlock_t bind_lock;
struct mutex pg_vec_lock;
unsigned int running:1, /* prot_hook is attached*/
auxdata:1,
- origdev:1;
+ origdev:1,
+ has_vnet_hdr:1;
int ifindex; /* bound device */
__be16 num;
struct packet_mclist *mclist;
-#ifdef CONFIG_PACKET_MMAP
atomic_t mapped;
enum tpacket_versions tp_version;
unsigned int tp_hdrlen;
unsigned int tp_reserve;
unsigned int tp_loss:1;
-#endif
struct packet_type prot_hook ____cacheline_aligned_in_smp;
};
#define PACKET_SKB_CB(__skb) ((struct packet_skb_cb *)((__skb)->cb))
-#ifdef CONFIG_PACKET_MMAP
-
static void __packet_set_status(struct packet_sock *po, void *frame, int status)
{
union {
buff->head = buff->head != buff->frame_max ? buff->head+1 : 0;
}
-#endif
-
static inline struct packet_sock *pkt_sk(struct sock *sk)
{
return (struct packet_sock *)sk;
struct sk_filter *filter;
rcu_read_lock_bh();
- filter = rcu_dereference(sk->sk_filter);
+ filter = rcu_dereference_bh(sk->sk_filter);
if (filter != NULL)
res = sk_run_filter(skb, filter->insns, filter->len);
rcu_read_unlock_bh();
return 0;
}
-#ifdef CONFIG_PACKET_MMAP
static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
struct packet_type *pt, struct net_device *orig_dev)
{
mutex_unlock(&po->pg_vec_lock);
return err;
}
-#endif
+
+static inline struct sk_buff *packet_alloc_skb(struct sock *sk, size_t prepad,
+ size_t reserve, size_t len,
+ size_t linear, int noblock,
+ int *err)
+{
+ struct sk_buff *skb;
+
+ /* Under a page? Don't bother with paged skb. */
+ if (prepad + len < PAGE_SIZE || !linear)
+ linear = len;
+
+ skb = sock_alloc_send_pskb(sk, prepad + linear, len - linear, noblock,
+ err);
+ if (!skb)
+ return NULL;
+
+ skb_reserve(skb, reserve);
+ skb_put(skb, linear);
+ skb->data_len = len - linear;
+ skb->len += len - linear;
+
+ return skb;
+}
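
A short worked example of the resulting layout (sizes illustrative):
a 64 kB GSO send with vnet_hdr.hdr_len = 1500 gets a roughly 1500-byte
linear head for the protocol headers with the remaining payload in
paged fragments, while a write smaller than a page stays entirely
linear.
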
static int packet_snd(struct socket *sock,
struct msghdr *msg, size_t len)
__be16 proto;
unsigned char *addr;
int ifindex, err, reserve = 0;
+ struct virtio_net_hdr vnet_hdr = { 0 };
+ int offset = 0;
+ int vnet_hdr_len;
+ struct packet_sock *po = pkt_sk(sk);
+ unsigned short gso_type = 0;
/*
* Get and verify the address.
*/
if (saddr == NULL) {
- struct packet_sock *po = pkt_sk(sk);
-
ifindex = po->ifindex;
proto = po->num;
addr = NULL;
if (!(dev->flags & IFF_UP))
goto out_unlock;
+ if (po->has_vnet_hdr) {
+ vnet_hdr_len = sizeof(vnet_hdr);
+
+ err = -EINVAL;
+ if (len < vnet_hdr_len)
+ goto out_unlock;
+
+ len -= vnet_hdr_len;
+
+ err = memcpy_fromiovec((void *)&vnet_hdr, msg->msg_iov,
+ vnet_hdr_len);
+ if (err < 0)
+ goto out_unlock;
+
+ if ((vnet_hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) &&
+ (vnet_hdr.csum_start + vnet_hdr.csum_offset + 2 >
+ vnet_hdr.hdr_len))
+ vnet_hdr.hdr_len = vnet_hdr.csum_start +
+ vnet_hdr.csum_offset + 2;
+
+ err = -EINVAL;
+ if (vnet_hdr.hdr_len > len)
+ goto out_unlock;
+
+ if (vnet_hdr.gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ switch (vnet_hdr.gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+ case VIRTIO_NET_HDR_GSO_TCPV4:
+ gso_type = SKB_GSO_TCPV4;
+ break;
+ case VIRTIO_NET_HDR_GSO_TCPV6:
+ gso_type = SKB_GSO_TCPV6;
+ break;
+ case VIRTIO_NET_HDR_GSO_UDP:
+ gso_type = SKB_GSO_UDP;
+ break;
+ default:
+ goto out_unlock;
+ }
+
+ if (vnet_hdr.gso_type & VIRTIO_NET_HDR_GSO_ECN)
+ gso_type |= SKB_GSO_TCP_ECN;
+
+ if (vnet_hdr.gso_size == 0)
+ goto out_unlock;
+
+ }
+ }
+
err = -EMSGSIZE;
- if (len > dev->mtu+reserve)
+ if (!gso_type && (len > dev->mtu+reserve))
goto out_unlock;
- skb = sock_alloc_send_skb(sk, len + LL_ALLOCATED_SPACE(dev),
- msg->msg_flags & MSG_DONTWAIT, &err);
+ err = -ENOBUFS;
+ skb = packet_alloc_skb(sk, LL_ALLOCATED_SPACE(dev),
+ LL_RESERVED_SPACE(dev), len, vnet_hdr.hdr_len,
+ msg->msg_flags & MSG_DONTWAIT, &err);
if (skb == NULL)
goto out_unlock;
- skb_reserve(skb, LL_RESERVED_SPACE(dev));
- skb_reset_network_header(skb);
+ skb_set_network_header(skb, reserve);
err = -EINVAL;
if (sock->type == SOCK_DGRAM &&
- dev_hard_header(skb, dev, ntohs(proto), addr, NULL, len) < 0)
+ (offset = dev_hard_header(skb, dev, ntohs(proto), addr, NULL, len)) < 0)
goto out_free;
/* Returns -EFAULT on error */
- err = memcpy_fromiovec(skb_put(skb, len), msg->msg_iov, len);
+ err = skb_copy_datagram_from_iovec(skb, offset, msg->msg_iov, 0, len);
if (err)
goto out_free;
skb->priority = sk->sk_priority;
skb->mark = sk->sk_mark;
+ if (po->has_vnet_hdr) {
+ if (vnet_hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
+ if (!skb_partial_csum_set(skb, vnet_hdr.csum_start,
+ vnet_hdr.csum_offset)) {
+ err = -EINVAL;
+ goto out_free;
+ }
+ }
+
+ skb_shinfo(skb)->gso_size = vnet_hdr.gso_size;
+ skb_shinfo(skb)->gso_type = gso_type;
+
+ /* Header must be checked, and gso_segs computed. */
+ skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
+ skb_shinfo(skb)->gso_segs = 0;
+
+ len += vnet_hdr_len;
+ }
+
/*
* Now send it
*/
static int packet_sendmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len)
{
-#ifdef CONFIG_PACKET_MMAP
struct sock *sk = sock->sk;
struct packet_sock *po = pkt_sk(sk);
if (po->tx_ring.pg_vec)
return tpacket_snd(po, msg);
else
-#endif
return packet_snd(sock, msg, len);
}
struct sock *sk = sock->sk;
struct packet_sock *po;
struct net *net;
-#ifdef CONFIG_PACKET_MMAP
struct tpacket_req req;
-#endif
if (!sk)
return 0;
net = sock_net(sk);
po = pkt_sk(sk);
- write_lock_bh(&net->packet.sklist_lock);
- sk_del_node_init(sk);
+ spin_lock_bh(&net->packet.sklist_lock);
+ sk_del_node_init_rcu(sk);
sock_prot_inuse_add(net, sk->sk_prot, -1);
- write_unlock_bh(&net->packet.sklist_lock);
-
- /*
- * Unhook packet receive handler.
- */
+ spin_unlock_bh(&net->packet.sklist_lock);
+ spin_lock(&po->bind_lock);
if (po->running) {
/*
- * Remove the protocol hook
+ * Remove from protocol table
*/
- dev_remove_pack(&po->prot_hook);
po->running = 0;
po->num = 0;
+ __dev_remove_pack(&po->prot_hook);
__sock_put(sk);
}
+ spin_unlock(&po->bind_lock);
packet_flush_mclist(sk);
-#ifdef CONFIG_PACKET_MMAP
memset(&req, 0, sizeof(req));
if (po->rx_ring.pg_vec)
if (po->tx_ring.pg_vec)
packet_set_ring(sk, &req, 1, 1);
-#endif
+ synchronize_net();
/*
* Now the socket is dead. No more input will appear.
*/
-
sock_orphan(sk);
sock->sk = NULL;
po->running = 1;
}
- write_lock_bh(&net->packet.sklist_lock);
- sk_add_node(sk, &net->packet.sklist);
+ spin_lock_bh(&net->packet.sklist_lock);
+ sk_add_node_rcu(sk, &net->packet.sklist);
sock_prot_inuse_add(net, &packet_proto, 1);
- write_unlock_bh(&net->packet.sklist_lock);
+ spin_unlock_bh(&net->packet.sklist_lock);
+
return 0;
out:
return err;
struct sk_buff *skb;
int copied, err;
struct sockaddr_ll *sll;
+ int vnet_hdr_len = 0;
err = -EINVAL;
if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC|MSG_CMSG_COMPAT))
if (skb == NULL)
goto out;
+ if (pkt_sk(sk)->has_vnet_hdr) {
+ struct virtio_net_hdr vnet_hdr = { 0 };
+
+ err = -EINVAL;
+ vnet_hdr_len = sizeof(vnet_hdr);
+		if (len < vnet_hdr_len)
+			goto out_free;
+
+		len -= vnet_hdr_len;
+
+ if (skb_is_gso(skb)) {
+ struct skb_shared_info *sinfo = skb_shinfo(skb);
+
+ /* This is a hint as to how much should be linear. */
+ vnet_hdr.hdr_len = skb_headlen(skb);
+ vnet_hdr.gso_size = sinfo->gso_size;
+ if (sinfo->gso_type & SKB_GSO_TCPV4)
+ vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
+ else if (sinfo->gso_type & SKB_GSO_TCPV6)
+ vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
+ else if (sinfo->gso_type & SKB_GSO_UDP)
+ vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_UDP;
+ else if (sinfo->gso_type & SKB_GSO_FCOE)
+ goto out_free;
+ else
+ BUG();
+ if (sinfo->gso_type & SKB_GSO_TCP_ECN)
+ vnet_hdr.gso_type |= VIRTIO_NET_HDR_GSO_ECN;
+ } else
+ vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE;
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ vnet_hdr.flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+ vnet_hdr.csum_start = skb->csum_start -
+ skb_headroom(skb);
+ vnet_hdr.csum_offset = skb->csum_offset;
+ } /* else everything is zero */
+
+ err = memcpy_toiovec(msg->msg_iov, (void *)&vnet_hdr,
+ vnet_hdr_len);
+ if (err < 0)
+ goto out_free;
+ }
+
/*
* If the address length field is there to be filled in, we fill
* it in now.
* Free or return the buffer as appropriate. Again this
* hides all the races and re-entrancy issues from us.
*/
- err = (flags&MSG_TRUNC) ? skb->len : copied;
+ err = vnet_hdr_len + ((flags&MSG_TRUNC) ? skb->len : copied);
out_free:
skb_free_datagram(sk, skb);
goto done;
err = -EINVAL;
- if (mreq->mr_alen > dev->addr_len)
+ if (mreq->mr_alen != dev->addr_len)
goto done;
err = -ENOBUFS;
return ret;
}
-#ifdef CONFIG_PACKET_MMAP
case PACKET_RX_RING:
case PACKET_TX_RING:
{
if (optlen < sizeof(req))
return -EINVAL;
+ if (pkt_sk(sk)->has_vnet_hdr)
+ return -EINVAL;
if (copy_from_user(&req, optval, sizeof(req)))
return -EFAULT;
return packet_set_ring(sk, &req, 0, optname == PACKET_TX_RING);
po->tp_loss = !!val;
return 0;
}
-#endif
case PACKET_AUXDATA:
{
int val;
po->origdev = !!val;
return 0;
}
+ case PACKET_VNET_HDR:
+ {
+ int val;
+
+ if (sock->type != SOCK_RAW)
+ return -EINVAL;
+ if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
+ return -EBUSY;
+ if (optlen < sizeof(val))
+ return -EINVAL;
+ if (copy_from_user(&val, optval, sizeof(val)))
+ return -EFAULT;
+
+ po->has_vnet_hdr = !!val;
+ return 0;
+ }
default:
return -ENOPROTOOPT;
}
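From userspace the new option is a single setsockopt() on a SOCK_RAW
packet socket; afterwards every send and receive on the socket is
prefixed by a struct virtio_net_hdr. A hedged sketch (assumes the
usual <sys/socket.h>, <linux/if_packet.h> and <linux/if_ether.h>
headers; error handling abbreviated):

	/* Sketch (userspace): enable the virtio_net_hdr prefix. */
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int val = 1;

	if (setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR,
		       &val, sizeof(val)) < 0)
		perror("PACKET_VNET_HDR");
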
data = &val;
break;
-#ifdef CONFIG_PACKET_MMAP
+ case PACKET_VNET_HDR:
+ if (len > sizeof(int))
+ len = sizeof(int);
+ val = po->has_vnet_hdr;
+
+ data = &val;
+ break;
case PACKET_VERSION:
if (len > sizeof(int))
len = sizeof(int);
val = po->tp_loss;
data = &val;
break;
-#endif
default:
return -ENOPROTOOPT;
}
struct net_device *dev = data;
struct net *net = dev_net(dev);
- read_lock(&net->packet.sklist_lock);
- sk_for_each(sk, node, &net->packet.sklist) {
+ rcu_read_lock();
+ sk_for_each_rcu(sk, node, &net->packet.sklist) {
struct packet_sock *po = pkt_sk(sk);
switch (msg) {
}
break;
case NETDEV_UP:
- spin_lock(&po->bind_lock);
- if (dev->ifindex == po->ifindex && po->num &&
- !po->running) {
- dev_add_pack(&po->prot_hook);
- sock_hold(sk);
- po->running = 1;
+ if (dev->ifindex == po->ifindex) {
+ spin_lock(&po->bind_lock);
+ if (po->num && !po->running) {
+ dev_add_pack(&po->prot_hook);
+ sock_hold(sk);
+ po->running = 1;
+ }
+ spin_unlock(&po->bind_lock);
}
- spin_unlock(&po->bind_lock);
break;
}
}
- read_unlock(&net->packet.sklist_lock);
+ rcu_read_unlock();
return NOTIFY_DONE;
}
return 0;
}
-#ifndef CONFIG_PACKET_MMAP
-#define packet_mmap sock_no_mmap
-#define packet_poll datagram_poll
-#else
-
static unsigned int packet_poll(struct file *file, struct socket *sock,
poll_table *wait)
{
mutex_unlock(&po->pg_vec_lock);
return err;
}
-#endif
-
static const struct proto_ops packet_ops_spkt = {
.family = PF_PACKET,
};
#ifdef CONFIG_PROC_FS
-static inline struct sock *packet_seq_idx(struct net *net, loff_t off)
-{
- struct sock *s;
- struct hlist_node *node;
-
- sk_for_each(s, node, &net->packet.sklist) {
- if (!off--)
- return s;
- }
- return NULL;
-}
static void *packet_seq_start(struct seq_file *seq, loff_t *pos)
- __acquires(seq_file_net(seq)->packet.sklist_lock)
+ __acquires(RCU)
{
struct net *net = seq_file_net(seq);
- read_lock(&net->packet.sklist_lock);
- return *pos ? packet_seq_idx(net, *pos - 1) : SEQ_START_TOKEN;
+
+ rcu_read_lock();
+ return seq_hlist_start_head_rcu(&net->packet.sklist, *pos);
}
static void *packet_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
struct net *net = seq_file_net(seq);
- ++*pos;
- return (v == SEQ_START_TOKEN)
- ? sk_head(&net->packet.sklist)
- : sk_next((struct sock *)v) ;
+ return seq_hlist_next_rcu(v, &net->packet.sklist, pos);
}
static void packet_seq_stop(struct seq_file *seq, void *v)
- __releases(seq_file_net(seq)->packet.sklist_lock)
+ __releases(RCU)
{
- struct net *net = seq_file_net(seq);
- read_unlock(&net->packet.sklist_lock);
+ rcu_read_unlock();
}
static int packet_seq_show(struct seq_file *seq, void *v)
if (v == SEQ_START_TOKEN)
seq_puts(seq, "sk RefCnt Type Proto Iface R Rmem User Inode\n");
else {
- struct sock *s = v;
+ struct sock *s = sk_entry(v);
const struct packet_sock *po = pkt_sk(s);
seq_printf(seq,
#endif
-static int packet_net_init(struct net *net)
+static int __net_init packet_net_init(struct net *net)
{
- rwlock_init(&net->packet.sklist_lock);
+ spin_lock_init(&net->packet.sklist_lock);
INIT_HLIST_HEAD(&net->packet.sklist);
if (!proc_net_fops_create(net, "packet", 0, &packet_seq_fops))
return 0;
}
-static void packet_net_exit(struct net *net)
+static void __net_exit packet_net_exit(struct net *net)
{
proc_net_remove(net, "packet");
}