rtnet-developers Mailing List for RTnet - Real-Time Networking for Linux
From: Jan K. <jan...@si...> - 2012-09-12 18:44:44
Yes, this was way too long since the last release. Too many changes piled up, too many people were forced to use a git snapshot. Will try to do better next time. Here are the changes of this release:

- added rt_e1000e
- added rt_fec
- support for kernel up to 3.2 and Xenomai 2.6
- support for premapping rtskbs (for compatibility with IOMMUs, enabled in rt_igb and rt_e1000e)
- rework of rtnetproxy
- RT-TCP fixes
- MSI-X support for rt_igb
- fixes, extended hardware support and cleanups for rt_r8169 (which is no longer considered experimental)
- added REBIND_RT_NICS feature to rtnet start script (replaces "cards" driver parameter)
- updated RTAI examples
- copyright clarification for userspace-relevant headers
- various smaller fixes and cleanups

Many thanks to all the contributors!

Jan

--
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux
From: Jan K. <jan...@si...> - 2012-08-28 16:44:39
On 2012-08-26 17:09, Wolfgang Grandegger wrote:
> From: Wolfgang Grandegger <wg...@de...>
>
> This patch adds a driver for the FEC Ethernet controllers available
> on various SOCs of PowerPC and ARM processors from Freescale, e.g.
> MPC8xx and i.MXn, n=[25,27,28,31,35,51,53,6Q,...].
>
> It's derived from the files "drivers/net/ethernet/freescale/fec.[ch]"
> of the mainline Linux 3.5 kernel (git tag v3.5-709-ga6be1fc).
>
> To simplify the backport of Linux patches for the FEC driver, code
> changes are minimized. Therefore existing coding style issues have
> also not been addressed.
>
> Due to heavy changes to the clock, platform, and device-tree support,
> version-dependent switches are necessary (using LINUX_VERSION_CODE)
> to support recent kernel versions >= 3.0.
>
> The driver has been tested with v3.0.15 on i.MX53 and i.MX6Q
> boards.
>
> Signed-off-by: Wolfgang Grandegger <wg...@de...>
> ---
>  configure.ac           |   11 +
>  drivers/GNUmakefile.am |   18 +
>  drivers/rt_fec.c       | 1940 ++++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/rt_fec.h       |  153 ++++
>  4 files changed, 2122 insertions(+)
>  create mode 100644 drivers/rt_fec.c
>  create mode 100644 drivers/rt_fec.h

Thanks, merged.

Jan

--
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux
From: Wolfgang G. <wg...@gr...> - 2012-08-26 15:21:41
From: Wolfgang Grandegger <wg...@de...>
This patch adds a driver for the FEC Ethernet controllers available
on various SOCs of PowerPC and ARM processors from Freescale, e.g.
MPC8xx and i.MXn, n=[25,27,28,31,35,51,53,6Q,...].
It's derived from the files "drivers/net/ethernet/freescale/fec.[ch]"
of the mainline Linux 3.5 kernel (git tag v3.5-709-ga6be1fc).
To simplify the backport of Linux patches for the FEC driver, code
changes are minimized. Therefore existing coding style issues have
also not been addressed.
Due to heavy changes to the clock, platform, and device-tree support,
version-dependent switches are necessary (using LINUX_VERSION_CODE)
to support recent kernel versions >= 3.0.
The driver has been tested with v3.0.15 on i.MX53 and i.MX6Q
boards.
Signed-off-by: Wolfgang Grandegger <wg...@de...>
---
configure.ac | 11 +
drivers/GNUmakefile.am | 18 +
drivers/rt_fec.c | 1940 ++++++++++++++++++++++++++++++++++++++++++++++++
drivers/rt_fec.h | 153 ++++
4 files changed, 2122 insertions(+)
create mode 100644 drivers/rt_fec.c
create mode 100644 drivers/rt_fec.h
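
The version-dependent switching described above reduces to compile-time guards on LINUX_VERSION_CODE. A minimal sketch of the idiom, as it applies to the clock handling (illustrative only; the struct name here is hypothetical, the real fields live in fec_enet_private in rt_fec.c below):

#include <linux/clk.h>
#include <linux/version.h>

/* Kernels >= 3.5 split the FEC clock into separate "ipg" and "ahb"
 * clocks, while older kernels expose a single "fec_clk"; the driver
 * selects the matching fields at compile time.
 */
struct fec_clk_compat {
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
	struct clk *clk_ipg;
	struct clk *clk_ahb;
#else
	struct clk *clk;
#endif
};

The driver itself is opt-in at build time: the configure.ac hunk below adds an --enable-fec switch that is off by default.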
diff --git a/configure.ac b/configure.ac
index 88462d2..abedca3 100644
--- a/configure.ac
+++ b/configure.ac
@@ -921,6 +921,17 @@ AC_MSG_RESULT([${CONFIG_RTNET_DRV_MPC52XX_FEC:-n}])
AM_CONDITIONAL(CONFIG_RTNET_DRV_MPC52XX_FEC,[test "$CONFIG_RTNET_DRV_MPC52XX_FEC" = "y"])
+AC_MSG_CHECKING([whether to build fec enet driver])
+AC_ARG_ENABLE(fec,
+ AS_HELP_STRING([--enable-fec], [build fec driver]),
+ [case "$enableval" in
+ y | yes) CONFIG_RTNET_DRV_FEC=y ;;
+ *) CONFIG_RTNET_DRV_FEC=n ;;
+ esac])
+AC_MSG_RESULT([${CONFIG_RTNET_DRV_FEC:-n}])
+AM_CONDITIONAL(CONFIG_RTNET_DRV_FEC,[test "$CONFIG_RTNET_DRV_FEC" = "y"])
+
+
AC_MSG_CHECKING([whether to build SMSC LAN91C111 driver])
AC_ARG_ENABLE(smc91111,
AS_HELP_STRING([--enable-smc91111], [build SMSC LAN91C111 driver]),
diff --git a/drivers/GNUmakefile.am b/drivers/GNUmakefile.am
index 9ebb290..5f66e68 100644
--- a/drivers/GNUmakefile.am
+++ b/drivers/GNUmakefile.am
@@ -34,6 +34,7 @@ EXTRA_LIBRARIES = \
libkernel_mpc8260_fcc_enet.a \
libkernel_mpc8xx_enet.a \
libkernel_mpc8xx_fec.a \
+ libkernel_fec.a \
libkernel_natsemi.a \
libkernel_pcnet32.a \
libkernel_smc91111.a \
@@ -108,6 +109,14 @@ libkernel_mpc8xx_fec_a_CPPFLAGS = \
libkernel_mpc8xx_fec_a_SOURCES = \
rt_mpc8xx_fec.c
+libkernel_fec_a_CPPFLAGS = \
+ $(RTEXT_KMOD_CFLAGS) \
+ -I$(top_srcdir)/stack/include \
+ -I$(top_builddir)/stack/include
+
+libkernel_fec_a_SOURCES = \
+ rt_fec.c
+
libkernel_natsemi_a_CPPFLAGS = \
$(RTEXT_KMOD_CFLAGS) \
-I$(top_srcdir)/stack/include \
@@ -185,6 +194,10 @@ if CONFIG_RTNET_DRV_FEC_ENET
OBJS += rt_mpc8xx_fec$(modext)
endif
+if CONFIG_RTNET_DRV_FEC
+OBJS += rt_fec$(modext)
+endif
+
if CONFIG_RTNET_DRV_NATSEMI
OBJS += rt_natsemi$(modext)
endif
@@ -229,6 +242,9 @@ rt_mpc8xx_enet.o: libkernel_mpc8xx_enet.a
rt_mpc8xx_fec.o: libkernel_mpc8xx_fec.a
$(LD) --whole-archive $< -r -o $@
+rt_fec.o: libkernel_fec.a
+ $(LD) --whole-archive $< -r -o $@
+
rt_natsemi.o: libkernel_natsemi.a
$(LD) --whole-archive $< -r -o $@
@@ -258,6 +274,7 @@ all-local.ko: $(libkernel_8139too_a_SOURCES) \
$(libkernel_mpc8260_fcc_enet_a_SOURCES) \
$(libkernel_mpc8xx_enet_a_SOURCES) \
$(libkernel_mpc8xx_fec_a_SOURCES) \
+ $(libkernel_fec_a_SOURCES) \
$(libkernel_natsemi_a_SOURCES) \
$(libkernel_pcnet32_a_SOURCES) \
$(libkernel_smc91111_a_SOURCES) \
@@ -280,6 +297,7 @@ clean-local: $(libkernel_8139too_a_SOURCES) \
$(libkernel_mpc8260_fcc_enet_a_SOURCES) \
$(libkernel_mpc8xx_enet_a_SOURCES) \
$(libkernel_mpc8xx_fec_a_SOURCES) \
+ $(libkernel_fec_a_SOURCES) \
$(libkernel_natsemi_a_SOURCES) \
$(libkernel_pcnet32_a_SOURCES) \
$(libkernel_smc91111_a_SOURCES) \
diff --git a/drivers/rt_fec.c b/drivers/rt_fec.c
new file mode 100644
index 0000000..a5a1022
--- /dev/null
+++ b/drivers/rt_fec.c
@@ -0,0 +1,1940 @@
+/*
+ * Fast Ethernet Controller (FEC) driver for Motorola MPC8xx.
+ * Copyright (c) 1997 Dan Malek (dm...@jl...)
+ *
+ * Right now, I am very wasteful with the buffers. I allocate memory
+ * pages and then divide them into 2K frame buffers. This way I know I
+ * have buffers large enough to hold one frame within one buffer descriptor.
+ * Once I get this working, I will use 64 or 128 byte CPM buffers, which
+ * will be much more memory efficient and will easily handle lots of
+ * small packets.
+ *
+ * Much better multiple PHY support by Magnus Damm.
+ * Copyright (c) 2000 Ericsson Radio Systems AB.
+ *
+ * Support for FEC controller of ColdFire processors.
+ * Copyright (c) 2001-2005 Greg Ungerer (ge...@sn...)
+ *
+ * Bug fixes and cleanup by Philippe De Muyter (ph...@ma...)
+ * Copyright (c) 2004-2006 Macq Electronique SA.
+ *
+ * Copyright (C) 2010-2011 Freescale Semiconductor, Inc.
+ *
+ * Ported from v3.5 Linux drivers/net/ethernet/freescale/fec.[ch]
+ * (git tag v3.5-709-ga6be1fc)
+ *
+ * Copyright (c) 2012 Wolfgang Grandegger <wg...@de...>
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <linux/bitops.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/clk.h>
+#include <linux/platform_device.h>
+#include <linux/phy.h>
+#include <linux/fec.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_gpio.h>
+#include <linux/of_net.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+#include <linux/pinctrl/consumer.h>
+#endif
+
+#include <asm/cacheflush.h>
+
+#ifndef CONFIG_ARM
+#include <asm/coldfire.h>
+#include <asm/mcfsim.h>
+#endif
+
+/* RTnet */
+#include <rtnet_port.h>
+#include <rtskb.h>
+
+/* RTnet */
+#include "rt_fec.h"
+
+MODULE_AUTHOR("Maintainer: Wolfgang Grandegger <wg...@de...>");
+MODULE_DESCRIPTION("RTnet driver for the FEC Ethernet");
+MODULE_LICENSE("GPL");
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(3,3,0)
+#define clk_prepare_enable(clk) do { clk_enable(clk); } while(0)
+#define clk_disable_unprepare(clk) do { clk_disable(clk); } while(0)
+#endif
+
+#if defined(CONFIG_ARM)
+#define FEC_ALIGNMENT 0xf
+#else
+#define FEC_ALIGNMENT 0x3
+#endif
+
+#define DRIVER_NAME "rt_fec"
+
+/* Controller is ENET-MAC */
+#define FEC_QUIRK_ENET_MAC (1 << 0)
+/* Controller needs driver to swap frame */
+#define FEC_QUIRK_SWAP_FRAME (1 << 1)
+/* Controller uses gasket */
+#define FEC_QUIRK_USE_GASKET (1 << 2)
+/* Controller has GBIT support */
+#define FEC_QUIRK_HAS_GBIT (1 << 3)
+
+static struct platform_device_id fec_devtype[] = {
+ {
+ .name = "fec",
+/* For legacy, non-devicetree-based support */
+#if defined(CONFIG_SOC_IMX6Q)
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT,
+#elif defined(CONFIG_SOC_IMX28)
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME,
+#elif defined(CONFIG_SOC_IMX25)
+ .driver_data = FEC_QUIRK_USE_GASKET,
+#else
+ /* keep it for coldfire */
+ .driver_data = 0,
+#endif
+ }, {
+ .name = "imx25-fec",
+ .driver_data = FEC_QUIRK_USE_GASKET,
+ }, {
+ .name = "imx27-fec",
+ .driver_data = 0,
+ }, {
+ .name = "imx28-fec",
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME,
+ }, {
+ .name = "imx6q-fec",
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT,
+ }, {
+ /* sentinel */
+ }
+};
+MODULE_DEVICE_TABLE(platform, fec_devtype);
+
+enum imx_fec_type {
+ IMX25_FEC = 1, /* runs on i.mx25/50/53 */
+ IMX27_FEC, /* runs on i.mx27/35/51 */
+ IMX28_FEC,
+ IMX6Q_FEC,
+};
+
+static const struct of_device_id fec_dt_ids[] = {
+ { .compatible = "fsl,imx25-fec", .data = &fec_devtype[IMX25_FEC], },
+ { .compatible = "fsl,imx27-fec", .data = &fec_devtype[IMX27_FEC], },
+ { .compatible = "fsl,imx28-fec", .data = &fec_devtype[IMX28_FEC], },
+ { .compatible = "fsl,imx6q-fec", .data = &fec_devtype[IMX6Q_FEC], },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, fec_dt_ids);
+
+static unsigned char macaddr[ETH_ALEN];
+module_param_array(macaddr, byte, NULL, 0);
+MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
+
+#if defined(CONFIG_M5272)
+/*
+ * Some hardware gets it MAC address out of local flash memory.
+ * if this is non-zero then assume it is the address to get MAC from.
+ */
+#if defined(CONFIG_NETtel)
+#define FEC_FLASHMAC 0xf0006006
+#elif defined(CONFIG_GILBARCONAP) || defined(CONFIG_SCALES)
+#define FEC_FLASHMAC 0xf0006000
+#elif defined(CONFIG_CANCam)
+#define FEC_FLASHMAC 0xf0020000
+#elif defined (CONFIG_M5272C3)
+#define FEC_FLASHMAC (0xffe04000 + 4)
+#elif defined(CONFIG_MOD5272)
+#define FEC_FLASHMAC 0xffc0406b
+#else
+#define FEC_FLASHMAC 0
+#endif
+#endif /* CONFIG_M5272 */
+
+/* The number of Tx and Rx buffers. These are allocated from the page
+ * pool. The code may assume these are a power of two, so it is best
+ * to keep them that size.
+ * We don't need to allocate pages for the transmitter. We just use
+ * the skbuffer directly.
+ */
+#define FEC_ENET_RX_PAGES 8
+#define FEC_ENET_RX_FRSIZE RTSKB_SIZE /* Maximum size for RTnet */
+#define FEC_ENET_RX_FRPPG (PAGE_SIZE / FEC_ENET_RX_FRSIZE)
+#define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)
+#define FEC_ENET_TX_FRSIZE 2048
+#define FEC_ENET_TX_FRPPG (PAGE_SIZE / FEC_ENET_TX_FRSIZE)
+#define TX_RING_SIZE 16 /* Must be power of two */
+#define TX_RING_MOD_MASK 15 /* for this to work */
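+/* Example (illustrative; RTSKB_SIZE depends on the RTnet build): with
+ * 4 KiB pages and an RTSKB_SIZE of roughly 1.5-2 KiB, FEC_ENET_RX_FRPPG
+ * evaluates to 2, giving RX_RING_SIZE = 16 receive descriptors (and
+ * twice that for the default rx_pool_size below).
+ */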
+
+#if (((RX_RING_SIZE + TX_RING_SIZE) * 8) > PAGE_SIZE)
+#error "FEC: descriptor ring size constants too large"
+#endif
+
+/* Interrupt events/masks. */
+#define FEC_ENET_HBERR ((uint)0x80000000) /* Heartbeat error */
+#define FEC_ENET_BABR ((uint)0x40000000) /* Babbling receiver */
+#define FEC_ENET_BABT ((uint)0x20000000) /* Babbling transmitter */
+#define FEC_ENET_GRA ((uint)0x10000000) /* Graceful stop complete */
+#define FEC_ENET_TXF ((uint)0x08000000) /* Full frame transmitted */
+#define FEC_ENET_TXB ((uint)0x04000000) /* A buffer was transmitted */
+#define FEC_ENET_RXF ((uint)0x02000000) /* Full frame received */
+#define FEC_ENET_RXB ((uint)0x01000000) /* A buffer was received */
+#define FEC_ENET_MII ((uint)0x00800000) /* MII interrupt */
+#define FEC_ENET_EBERR ((uint)0x00400000) /* SDMA bus error */
+
+#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII)
+
+/* The FEC stores dest/src/type, data, and checksum for receive packets.
+ */
+#define PKT_MAXBUF_SIZE 1518
+#define PKT_MINBUF_SIZE 64
+#define PKT_MAXBLR_SIZE 1520
+
+/* This device has up to three irqs on some platforms */
+#define FEC_IRQ_NUM 3
+
+/*
+ * The 5270/5271/5280/5282/532x RX control register also contains maximum frame
+ * size bits. Other FEC hardware does not, so we need to take that into
+ * account when setting it.
+ */
+#if defined(CONFIG_M523x) || defined(CONFIG_M527x) || defined(CONFIG_M528x) || \
+ defined(CONFIG_M520x) || defined(CONFIG_M532x) || defined(CONFIG_ARM)
+#define OPT_FRAME_SIZE (PKT_MAXBUF_SIZE << 16)
+#else
+#define OPT_FRAME_SIZE 0
+#endif
+
+static unsigned int rx_pool_size = 2 * RX_RING_SIZE;
+module_param(rx_pool_size, int, 0444);
+MODULE_PARM_DESC(rx_pool_size, "Receive buffer pool size");
+
+#ifndef rtnetdev_priv
+#define rtnetdev_priv(ndev) (ndev)->priv
+#endif
+
+/* The FEC buffer descriptors track the ring buffers. The rx_bd_base and
+ * tx_bd_base always point to the base of the buffer descriptors. The
+ * cur_rx and cur_tx point to the currently available buffer.
+ * The dirty_tx tracks the current buffer that is being sent by the
+ * controller. The cur_tx and dirty_tx are equal under both completely
+ * empty and completely full conditions. The empty/ready indicator in
+ * the buffer descriptor determines the actual condition.
+ */
+struct fec_enet_private {
+ /* Hardware registers of the FEC device */
+ void __iomem *hwp;
+
+ struct net_device *netdev; /* linux netdev needed for phy handling */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ struct clk *clk_ipg;
+ struct clk *clk_ahb;
+#else
+ struct clk *clk;
+#endif
+
+ /* The saved address of a sent-in-place packet/buffer, for skfree(). */
+ unsigned char *tx_bounce[TX_RING_SIZE];
+ struct rtskb *tx_skbuff[TX_RING_SIZE];
+ struct rtskb *rx_skbuff[RX_RING_SIZE];
+ ushort skb_cur;
+ ushort skb_dirty;
+
+ /* CPM dual port RAM relative addresses */
+ dma_addr_t bd_dma;
+ /* Address of Rx and Tx buffers */
+ struct bufdesc *rx_bd_base;
+ struct bufdesc *tx_bd_base;
+ /* The next free ring entry */
+ struct bufdesc *cur_rx, *cur_tx;
+ /* The ring entries to be free()ed */
+ struct bufdesc *dirty_tx;
+
+ uint tx_full;
+ /* hold while accessing the HW like ringbuffer for tx/rx but not MAC */
+ rtdm_lock_t hw_lock;
+
+ struct platform_device *pdev;
+
+ int opened;
+ int dev_id;
+
+ /* Phylib and MDIO interface */
+ struct mii_bus *mii_bus;
+ struct phy_device *phy_dev;
+ int mii_timeout;
+ uint phy_speed;
+ phy_interface_t phy_interface;
+ int link;
+ int full_duplex;
+ struct completion mdio_done;
+ int irq[FEC_IRQ_NUM];
+
+ /* RTnet */
+ struct device *dev;
+ rtdm_irq_t irq_handle[3];
+ rtdm_nrtsig_t mdio_done_sig;
+ struct rtskb_queue skb_pool;
+ struct net_device_stats stats;
+};
+
+/* For phy handling */
+struct fec_enet_netdev_priv {
+ struct rtnet_device *rtdev;
+};
+
+/* FEC MII MMFR bits definition */
+#define FEC_MMFR_ST (1 << 30)
+#define FEC_MMFR_OP_READ (2 << 28)
+#define FEC_MMFR_OP_WRITE (1 << 28)
+#define FEC_MMFR_PA(v) ((v & 0x1f) << 23)
+#define FEC_MMFR_RA(v) ((v & 0x1f) << 18)
+#define FEC_MMFR_TA (2 << 16)
+#define FEC_MMFR_DATA(v) (v & 0xffff)
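+/* Example (illustrative): a read of PHY address 1, register 2 encodes as
+ * FEC_MMFR_ST | FEC_MMFR_OP_READ | FEC_MMFR_PA(1) | FEC_MMFR_RA(2) |
+ * FEC_MMFR_TA == 0x608a0000, as written to FEC_MII_DATA in
+ * fec_enet_mdio_read() below.
+ */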
+
+#define FEC_MII_TIMEOUT 30000 /* us */
+
+/* Transmitter timeout */
+#define TX_TIMEOUT (2 * HZ)
+
+static int mii_cnt;
+
+static void *swap_buffer(void *bufaddr, int len)
+{
+ int i;
+ unsigned int *buf = bufaddr;
+
+ for (i = 0; i < (len + 3) / 4; i++, buf++)
+ *buf = cpu_to_be32(*buf);
+
+ return bufaddr;
+}
+
+static int
+fec_enet_start_xmit(struct rtskb *skb, struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ struct bufdesc *bdp;
+ void *bufaddr;
+ unsigned short status;
+ unsigned long context;
+
+ if (!fep->link) {
+ /* Link is down or autonegotiation is in progress. */
+ printk("%s: tx link down!.\n", ndev->name);
+ rtnetif_stop_queue(ndev);
+ return 1; /* RTnet: will call kfree_rtskb() */
+ }
+
+ rtdm_lock_get_irqsave(&fep->hw_lock, context);
+
+ /* RTnet */
+ if (skb->xmit_stamp)
+ *skb->xmit_stamp = cpu_to_be64(rtdm_clock_read() +
+ *skb->xmit_stamp);
+
+ /* Fill in a Tx ring entry */
+ bdp = fep->cur_tx;
+
+ status = bdp->cbd_sc;
+
+ if (status & BD_ENET_TX_READY) {
+ /* Ooops. All transmit buffers are full. Bail out.
+ * This should not happen, since ndev->tbusy should be set.
+ */
+ printk("%s: tx queue full!.\n", ndev->name);
+ rtdm_lock_put_irqrestore(&fep->hw_lock, context);
+ return 1; /* RTnet: will call kfree_rtskb() */
+ }
+
+ /* Clear all of the status flags */
+ status &= ~BD_ENET_TX_STATS;
+
+ /* Set buffer length and buffer pointer */
+ bufaddr = skb->data;
+ bdp->cbd_datlen = skb->len;
+
+ /*
+ * On some FEC implementations data must be aligned on
+ * 4-byte boundaries. Use bounce buffers to copy data
+ * and get it aligned. Ugh.
+ */
+ if (((unsigned long) bufaddr) & FEC_ALIGNMENT) {
+ unsigned int index;
+ index = bdp - fep->tx_bd_base;
+ memcpy(fep->tx_bounce[index], skb->data, skb->len);
+ bufaddr = fep->tx_bounce[index];
+ }
+
+ /*
+ * Some designs made an incorrect assumption about the endian mode of
+ * the system they run on. As a result, the driver has to swap every
+ * frame going to and coming from the controller.
+ */
+ if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
+ swap_buffer(bufaddr, skb->len);
+
+ /* Save skb pointer */
+ fep->tx_skbuff[fep->skb_cur] = skb;
+
+ fep->stats.tx_bytes += skb->len;
+ fep->skb_cur = (fep->skb_cur+1) & TX_RING_MOD_MASK;
+
+ /* Push the data cache so the CPM does not get stale memory
+ * data.
+ */
+ bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, bufaddr,
+ FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
+
+ /* Send it on its way. Tell FEC it's ready, interrupt when done,
+ * it's the last BD of the frame, and to put the CRC on the end.
+ */
+ status |= (BD_ENET_TX_READY | BD_ENET_TX_INTR
+ | BD_ENET_TX_LAST | BD_ENET_TX_TC);
+ bdp->cbd_sc = status;
+
+ /* Trigger transmission start */
+ writel(0, fep->hwp + FEC_X_DES_ACTIVE);
+
+ /* If this was the last BD in the ring, start at the beginning again. */
+ if (status & BD_ENET_TX_WRAP)
+ bdp = fep->tx_bd_base;
+ else
+ bdp++;
+
+ if (bdp == fep->dirty_tx) {
+ fep->tx_full = 1;
+ rtnetif_stop_queue(ndev);
+ }
+
+ fep->cur_tx = bdp;
+
+ rtdm_lock_put_irqrestore(&fep->hw_lock, context);
+
+ return NETDEV_TX_OK;
+}
+
+/* This function is called to start or restart the FEC during a link
+ * change. This only happens when switching between half and full
+ * duplex.
+ */
+static void
+fec_restart(struct rtnet_device *ndev, int duplex)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ int i;
+ u32 temp_mac[2];
+ u32 rcntl = OPT_FRAME_SIZE | 0x04;
+ u32 ecntl = 0x2; /* ETHEREN */
+
+ /* Whack a reset. We should wait for this. */
+ writel(1, fep->hwp + FEC_ECNTRL);
+ udelay(10);
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+ * so need to reconfigure it.
+ */
+ if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) {
+ memcpy(&temp_mac, ndev->dev_addr, ETH_ALEN);
+ writel(cpu_to_be32(temp_mac[0]), fep->hwp + FEC_ADDR_LOW);
+ writel(cpu_to_be32(temp_mac[1]), fep->hwp + FEC_ADDR_HIGH);
+ }
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffc00000, fep->hwp + FEC_IEVENT);
+
+ /* Reset all multicast. */
+ writel(0, fep->hwp + FEC_GRP_HASH_TABLE_HIGH);
+ writel(0, fep->hwp + FEC_GRP_HASH_TABLE_LOW);
+#ifndef CONFIG_M5272
+ writel(0, fep->hwp + FEC_HASH_TABLE_HIGH);
+ writel(0, fep->hwp + FEC_HASH_TABLE_LOW);
+#endif
+
+ /* Set maximum receive buffer size. */
+ writel(PKT_MAXBLR_SIZE, fep->hwp + FEC_R_BUFF_SIZE);
+
+ /* Set receive and transmit descriptor base. */
+ writel(fep->bd_dma, fep->hwp + FEC_R_DES_START);
+ writel((unsigned long)fep->bd_dma + sizeof(struct bufdesc) * RX_RING_SIZE,
+ fep->hwp + FEC_X_DES_START);
+
+ fep->dirty_tx = fep->cur_tx = fep->tx_bd_base;
+ fep->cur_rx = fep->rx_bd_base;
+
+ /* Reset SKB transmit buffers. */
+ fep->skb_cur = fep->skb_dirty = 0;
+ for (i = 0; i <= TX_RING_MOD_MASK; i++) {
+ if (fep->tx_skbuff[i]) {
+ dev_kfree_rtskb(fep->tx_skbuff[i]);
+ fep->tx_skbuff[i] = NULL;
+ }
+ }
+
+ /* Enable MII mode */
+ if (duplex) {
+ /* FD enable */
+ writel(0x04, fep->hwp + FEC_X_CNTRL);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ writel(0x0, fep->hwp + FEC_X_CNTRL);
+ }
+
+ fep->full_duplex = duplex;
+
+ /* Set MII speed */
+ writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ if (fep->phy_interface == PHY_INTERFACE_MODE_RGMII)
+ rcntl |= (1 << 6);
+ else if (fep->phy_interface == PHY_INTERFACE_MODE_RMII)
+ rcntl |= (1 << 8);
+ else
+ rcntl &= ~(1 << 8);
+
+ /* 1G, 100M or 10M */
+ if (fep->phy_dev) {
+ if (fep->phy_dev->speed == SPEED_1000)
+ ecntl |= (1 << 5);
+ else if (fep->phy_dev->speed == SPEED_100)
+ rcntl &= ~(1 << 9);
+ else
+ rcntl |= (1 << 9);
+ }
+ } else {
+#ifdef FEC_MIIGSK_ENR
+ if (id_entry->driver_data & FEC_QUIRK_USE_GASKET) {
+ u32 cfgr;
+ /* disable the gasket and wait */
+ writel(0, fep->hwp + FEC_MIIGSK_ENR);
+ while (readl(fep->hwp + FEC_MIIGSK_ENR) & 4)
+ udelay(1);
+
+ /*
+ * configure the gasket:
+ * RMII, 50 MHz, no loopback, no echo
+ * MII, 25 MHz, no loopback, no echo
+ */
+ cfgr = (fep->phy_interface == PHY_INTERFACE_MODE_RMII)
+ ? BM_MIIGSK_CFGR_RMII : BM_MIIGSK_CFGR_MII;
+ if (fep->phy_dev && fep->phy_dev->speed == SPEED_10)
+ cfgr |= BM_MIIGSK_CFGR_FRCONT_10M;
+ writel(cfgr, fep->hwp + FEC_MIIGSK_CFGR);
+
+ /* re-enable the gasket */
+ writel(2, fep->hwp + FEC_MIIGSK_ENR);
+ }
+#endif
+ }
+ writel(rcntl, fep->hwp + FEC_R_CNTRL);
+
+ if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) {
+ /* enable ENET endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENET store and forward mode */
+ writel(1 << 8, fep->hwp + FEC_X_WMRK);
+ }
+
+ /* And last, enable the transmit and receive processing */
+ writel(ecntl, fep->hwp + FEC_ECNTRL);
+ writel(0, fep->hwp + FEC_R_DES_ACTIVE);
+
+ /* Enable interrupts we wish to service */
+ writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
+}
+
+static void
+fec_stop(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & (1 << 8);
+
+ /* We cannot expect a graceful transmit stop without link !!! */
+ if (fep->link) {
+ writel(1, fep->hwp + FEC_X_CNTRL); /* Graceful transmit stop */
+ udelay(10);
+ if (!(readl(fep->hwp + FEC_IEVENT) & FEC_ENET_GRA))
+ printk("fec_stop : Graceful transmit stop did not complete !\n");
+ }
+
+ /* Whack a reset. We should wait for this. */
+ writel(1, fep->hwp + FEC_ECNTRL);
+ udelay(10);
+ writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
+ writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
+
+ /* We have to keep ENET enabled to have MII interrupt stay working */
+ if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) {
+ writel(2, fep->hwp + FEC_ECNTRL);
+ writel(rmii_mode, fep->hwp + FEC_R_CNTRL);
+ }
+}
+
+static void
+fec_enet_tx(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep;
+ struct bufdesc *bdp;
+ unsigned short status;
+ struct rtskb *skb;
+
+ fep = rtnetdev_priv(ndev);
+ rtdm_lock_get(&fep->hw_lock);
+ bdp = fep->dirty_tx;
+
+ while (((status = bdp->cbd_sc) & BD_ENET_TX_READY) == 0) {
+ if (bdp == fep->cur_tx && fep->tx_full == 0)
+ break;
+
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+ FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE);
+ bdp->cbd_bufaddr = 0;
+
+ skb = fep->tx_skbuff[fep->skb_dirty];
+ /* Check for errors. */
+ if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
+ BD_ENET_TX_RL | BD_ENET_TX_UN |
+ BD_ENET_TX_CSL)) {
+ fep->stats.tx_errors++;
+ if (status & BD_ENET_TX_HB) /* No heartbeat */
+ fep->stats.tx_heartbeat_errors++;
+ if (status & BD_ENET_TX_LC) /* Late collision */
+ fep->stats.tx_window_errors++;
+ if (status & BD_ENET_TX_RL) /* Retrans limit */
+ fep->stats.tx_aborted_errors++;
+ if (status & BD_ENET_TX_UN) /* Underrun */
+ fep->stats.tx_fifo_errors++;
+ if (status & BD_ENET_TX_CSL) /* Carrier lost */
+ fep->stats.tx_carrier_errors++;
+ } else {
+ fep->stats.tx_packets++;
+ }
+
+ if (status & BD_ENET_TX_READY)
+ printk("HEY! Enet xmit interrupt and TX_READY.\n");
+
+ /* Deferred means some collisions occurred during transmit,
+ * but we eventually sent the packet OK.
+ */
+ if (status & BD_ENET_TX_DEF)
+ fep->stats.collisions++;
+
+ /* Free the sk buffer associated with this last transmit */
+ dev_kfree_rtskb(skb); /* RTnet */
+ fep->tx_skbuff[fep->skb_dirty] = NULL;
+ fep->skb_dirty = (fep->skb_dirty + 1) & TX_RING_MOD_MASK;
+
+ /* Update pointer to next buffer descriptor to be transmitted */
+ if (status & BD_ENET_TX_WRAP)
+ bdp = fep->tx_bd_base;
+ else
+ bdp++;
+
+ /* Since we have freed up a buffer, the ring is no longer full
+ */
+ if (fep->tx_full) {
+ fep->tx_full = 0;
+ if (rtnetif_queue_stopped(ndev))
+ rtnetif_wake_queue(ndev);
+ }
+ }
+ fep->dirty_tx = bdp;
+ rtdm_lock_put(&fep->hw_lock);
+}
+
+
+/* During a receive, the cur_rx points to the current incoming buffer.
+ * When we update through the ring, if the next incoming buffer has
+ * not been given to the system, we just set the empty indicator,
+ * effectively tossing the packet.
+ */
+static void
+fec_enet_rx(struct rtnet_device *ndev, int *packets, nanosecs_abs_t *time_stamp)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ struct bufdesc *bdp;
+ unsigned short status;
+ struct rtskb *skb;
+ ushort pkt_len;
+ __u8 *data;
+
+#ifdef CONFIG_M532x
+ flush_cache_all();
+#endif
+ rtdm_lock_get(&fep->hw_lock);
+
+ /* First, grab all of the stats for the incoming packet.
+ * These get messed up if we get called due to a busy condition.
+ */
+ bdp = fep->cur_rx;
+
+ while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) {
+
+ /* Since we have allocated space to hold a complete frame,
+ * the last indicator should be set.
+ */
+ if ((status & BD_ENET_RX_LAST) == 0)
+ printk("FEC ENET: rcv is not +last\n");
+
+ if (!fep->opened)
+ goto rx_processing_done;
+
+ /* Check for errors. */
+ if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_NO |
+ BD_ENET_RX_CR | BD_ENET_RX_OV)) {
+ fep->stats.rx_errors++;
+ if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH)) {
+ /* Frame too long or too short. */
+ fep->stats.rx_length_errors++;
+ }
+ if (status & BD_ENET_RX_NO) /* Frame alignment */
+ fep->stats.rx_frame_errors++;
+ if (status & BD_ENET_RX_CR) /* CRC Error */
+ fep->stats.rx_crc_errors++;
+ if (status & BD_ENET_RX_OV) /* FIFO overrun */
+ fep->stats.rx_fifo_errors++;
+ }
+
+ /* Report late collisions as a frame error.
+ * On this error, the BD is closed, but we don't know what we
+ * have in the buffer. So, just drop this frame on the floor.
+ */
+ if (status & BD_ENET_RX_CL) {
+ fep->stats.rx_errors++;
+ fep->stats.rx_frame_errors++;
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ fep->stats.rx_packets++;
+ pkt_len = bdp->cbd_datlen;
+ fep->stats.rx_bytes += pkt_len;
+ data = (__u8*)__va(bdp->cbd_bufaddr);
+
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+ FEC_ENET_TX_FRSIZE, DMA_FROM_DEVICE);
+
+ if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME)
+ swap_buffer(data, pkt_len);
+
+ /* This does 16 byte alignment, exactly what we need.
+ * The packet length includes FCS, but we don't want to
+ * include that when passing upstream as it messes up
+ * bridging applications.
+ */
+ skb = dev_alloc_rtskb(pkt_len - 4 + NET_IP_ALIGN,
+ &fep->skb_pool); /* RTnet */
+
+ if (unlikely(!skb)) {
+ printk("%s: Memory squeeze, dropping packet.\n",
+ ndev->name);
+ fep->stats.rx_dropped++;
+ } else {
+ rtskb_reserve(skb, NET_IP_ALIGN);
+ rtskb_put(skb, pkt_len - 4); /* Make room */
+ memcpy(skb->data, data, pkt_len - 4);
+ skb->protocol = rt_eth_type_trans(skb, ndev);
+ skb->rtdev = ndev;
+ skb->time_stamp = *time_stamp;
+ rtnetif_rx(skb);
+ (*packets)++; /* RTnet */
+ }
+
+ bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, data,
+ FEC_ENET_TX_FRSIZE, DMA_FROM_DEVICE);
+rx_processing_done:
+ /* Clear the status flags for this buffer */
+ status &= ~BD_ENET_RX_STATS;
+
+ /* Mark the buffer empty */
+ status |= BD_ENET_RX_EMPTY;
+ bdp->cbd_sc = status;
+
+ /* Update BD pointer to next entry */
+ if (status & BD_ENET_RX_WRAP)
+ bdp = fep->rx_bd_base;
+ else
+ bdp++;
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames. On a heavily loaded network, we should be
+ * able to keep up at the expense of system resources.
+ */
+ writel(0, fep->hwp + FEC_R_DES_ACTIVE);
+ }
+ fep->cur_rx = bdp;
+
+ rtdm_lock_put(&fep->hw_lock);
+}
+
+static int
+fec_enet_interrupt(rtdm_irq_t *irq_handle)
+{
+ struct rtnet_device *ndev =
+ rtdm_irq_get_arg(irq_handle, struct rtnet_device); /* RTnet */
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ uint int_events;
+ irqreturn_t ret = RTDM_IRQ_NONE;
+ /* RTnet */
+ nanosecs_abs_t time_stamp = rtdm_clock_read();
+ int packets = 0;
+
+ do {
+ int_events = readl(fep->hwp + FEC_IEVENT);
+ writel(int_events, fep->hwp + FEC_IEVENT);
+
+ if (int_events & FEC_ENET_RXF) {
+ ret = RTDM_IRQ_HANDLED;
+ fec_enet_rx(ndev, &packets, &time_stamp);
+ }
+
+ /* Transmit OK, or non-fatal error. Update the buffer
+ * descriptors. FEC handles all errors, we just discover
+ * them as part of the transmit process.
+ */
+ if (int_events & FEC_ENET_TXF) {
+ ret = RTDM_IRQ_HANDLED;
+ fec_enet_tx(ndev);
+ }
+
+ if (int_events & FEC_ENET_MII) {
+ ret = RTDM_IRQ_HANDLED;
+ rtdm_nrtsig_pend(&fep->mdio_done_sig);
+ }
+ } while (int_events);
+
+ if (packets > 0)
+ rt_mark_stack_mgr(ndev);
+
+ return ret;
+}
+
+
+
+/* ------------------------------------------------------------------------- */
+static void __inline__ fec_get_mac(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
+ unsigned char *iap, tmpaddr[ETH_ALEN];
+
+ /*
+ * try to get mac address in following order:
+ *
+ * 1) module parameter via kernel command line in form
+ * fec.macaddr=0x00,0x04,0x9f,0x01,0x30,0xe0
+ */
+ iap = macaddr;
+
+#ifdef CONFIG_OF
+ /*
+ * 2) from device tree data
+ */
+ if (!is_valid_ether_addr(iap)) {
+ struct device_node *np = fep->pdev->dev.of_node;
+ if (np) {
+ const char *mac = of_get_mac_address(np);
+ if (mac)
+ iap = (unsigned char *) mac;
+ }
+ }
+#endif
+
+ /*
+ * 3) from flash or fuse (via platform data)
+ */
+ if (!is_valid_ether_addr(iap)) {
+#ifdef CONFIG_M5272
+ if (FEC_FLASHMAC)
+ iap = (unsigned char *)FEC_FLASHMAC;
+#else
+ if (pdata)
+ iap = (unsigned char *)&pdata->mac;
+#endif
+ }
+
+ /*
+ * 4) FEC mac registers set by bootloader
+ */
+ if (!is_valid_ether_addr(iap)) {
+ *((unsigned long *) &tmpaddr[0]) =
+ be32_to_cpu(readl(fep->hwp + FEC_ADDR_LOW));
+ *((unsigned short *) &tmpaddr[4]) =
+ be16_to_cpu(readl(fep->hwp + FEC_ADDR_HIGH) >> 16);
+ iap = &tmpaddr[0];
+ }
+
+ memcpy(ndev->dev_addr, iap, ETH_ALEN);
+
+ /* Adjust MAC if using macaddr */
+ if (iap == macaddr)
+ ndev->dev_addr[ETH_ALEN-1] = macaddr[ETH_ALEN-1] + fep->dev_id;
+}
+
+/* ------------------------------------------------------------------------- */
+
+/*
+ * Phy section
+ */
+static void fec_enet_mdio_done(rtdm_nrtsig_t nrt_sig, void* data)
+{
+ struct fec_enet_private *fep = data;
+
+ complete(&fep->mdio_done);
+}
+
+static void fec_enet_adjust_link(struct net_device *netdev)
+{
+ struct fec_enet_netdev_priv *npriv = netdev_priv(netdev);
+ struct rtnet_device *ndev = npriv->rtdev;
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct phy_device *phy_dev = fep->phy_dev;
+ unsigned long context;
+
+ int status_change = 0;
+
+ rtdm_lock_get_irqsave(&fep->hw_lock, context);
+
+ /* Prevent a state halted on mii error */
+ if (fep->mii_timeout && phy_dev->state == PHY_HALTED) {
+ phy_dev->state = PHY_RESUMING;
+ goto spin_unlock;
+ }
+
+ /* Duplex link change */
+ if (phy_dev->link) {
+ if (fep->full_duplex != phy_dev->duplex) {
+ fec_restart(ndev, phy_dev->duplex);
+ /* prevent unnecessary second fec_restart() below */
+ fep->link = phy_dev->link;
+ status_change = 1;
+ }
+ }
+
+ /* Link on or off change */
+ if (phy_dev->link != fep->link) {
+ fep->link = phy_dev->link;
+ if (phy_dev->link)
+ fec_restart(ndev, phy_dev->duplex);
+ else
+ fec_stop(ndev);
+ status_change = 1;
+ }
+
+spin_unlock:
+ rtdm_lock_put_irqrestore(&fep->hw_lock, context);
+
+ if (status_change)
+ phy_print_status(phy_dev);
+}
+
+static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+{
+ struct fec_enet_private *fep = bus->priv;
+ unsigned long time_left;
+
+ fep->mii_timeout = 0;
+ init_completion(&fep->mdio_done);
+
+ /* start a read op */
+ writel(FEC_MMFR_ST | FEC_MMFR_OP_READ |
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(regnum) |
+ FEC_MMFR_TA, fep->hwp + FEC_MII_DATA);
+
+ /* wait for end of transfer */
+ time_left = wait_for_completion_timeout(&fep->mdio_done,
+ usecs_to_jiffies(FEC_MII_TIMEOUT));
+ if (time_left == 0) {
+ fep->mii_timeout = 1;
+ printk(KERN_ERR "FEC: MDIO read timeout\n");
+ return -ETIMEDOUT;
+ }
+
+ /* return value */
+ return FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA));
+}
+
+static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
+ u16 value)
+{
+ struct fec_enet_private *fep = bus->priv;
+ unsigned long time_left;
+
+ fep->mii_timeout = 0;
+ init_completion(&fep->mdio_done);
+
+ /* start a write op */
+ writel(FEC_MMFR_ST | FEC_MMFR_OP_WRITE |
+ FEC_MMFR_PA(mii_id) | FEC_MMFR_RA(regnum) |
+ FEC_MMFR_TA | FEC_MMFR_DATA(value),
+ fep->hwp + FEC_MII_DATA);
+
+ /* wait for end of transfer */
+ time_left = wait_for_completion_timeout(&fep->mdio_done,
+ usecs_to_jiffies(FEC_MII_TIMEOUT));
+ if (time_left == 0) {
+ fep->mii_timeout = 1;
+ printk(KERN_ERR "FEC: MDIO write timeout\n");
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static int fec_enet_mdio_reset(struct mii_bus *bus)
+{
+ return 0;
+}
+
+static int fec_enet_mii_probe(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ struct phy_device *phy_dev = NULL;
+ char mdio_bus_id[MII_BUS_ID_SIZE];
+ char phy_name[MII_BUS_ID_SIZE + 3];
+ int phy_id;
+ int dev_id = fep->dev_id;
+
+ fep->phy_dev = NULL;
+
+ /* check for attached phy */
+ for (phy_id = 0; (phy_id < PHY_MAX_ADDR); phy_id++) {
+ if ((fep->mii_bus->phy_mask & (1 << phy_id)))
+ continue;
+ if (fep->mii_bus->phy_map[phy_id] == NULL)
+ continue;
+ if (fep->mii_bus->phy_map[phy_id]->phy_id == 0)
+ continue;
+ if (dev_id--)
+ continue;
+ strncpy(mdio_bus_id, fep->mii_bus->id, MII_BUS_ID_SIZE);
+ break;
+ }
+
+ if (phy_id >= PHY_MAX_ADDR) {
+ printk(KERN_INFO
+ "%s: no PHY, assuming direct connection to switch\n",
+ ndev->name);
+ strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE);
+ phy_id = 0;
+ }
+
+ snprintf(phy_name, sizeof(phy_name), PHY_ID_FMT, mdio_bus_id, phy_id);
+ /* attach the mac to the phy using the dummy linux netdev */
+ phy_dev = phy_connect(fep->netdev, phy_name, &fec_enet_adjust_link, 0,
+ fep->phy_interface);
+ if (IS_ERR(phy_dev)) {
+ printk(KERN_ERR "%s: could not attach to PHY\n", ndev->name);
+ return PTR_ERR(phy_dev);
+ }
+
+ /* mask with MAC supported features */
+ if (id_entry->driver_data & FEC_QUIRK_HAS_GBIT)
+ phy_dev->supported &= PHY_GBIT_FEATURES;
+ else
+ phy_dev->supported &= PHY_BASIC_FEATURES;
+
+ phy_dev->advertising = phy_dev->supported;
+
+ fep->phy_dev = phy_dev;
+ fep->link = 0;
+ fep->full_duplex = 0;
+
+ printk(KERN_INFO
+ "%s: Freescale FEC PHY driver [%s] (mii_bus:phy_addr=%s, irq=%d)\n",
+ ndev->name,
+ fep->phy_dev->drv->name, dev_name(&fep->phy_dev->dev),
+ fep->phy_dev->irq);
+
+ return 0;
+}
+
+static int fec_enet_mii_init(struct platform_device *pdev)
+{
+ static struct mii_bus *fec0_mii_bus;
+ struct rtnet_device *ndev = platform_get_drvdata(pdev);
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ const struct platform_device_id *id_entry =
+ platform_get_device_id(fep->pdev);
+ int err = -ENXIO, i;
+
+ /*
+ * The dual fec interfaces are not equivalent with enet-mac.
+ * Here are the differences:
+ *
+ * - fec0 supports MII & RMII modes while fec1 only supports RMII
+ * - fec0 acts as the 1588 time master while fec1 is slave
+ * - external phys can only be configured by fec0
+ *
+ * That is to say fec1 can not work independently. It only works
+ * when fec0 is working. The reason behind this design is that the
+ * second interface is added primarily for Switch mode.
+ *
+ * Because of the last point above, both phys are attached on fec0
+ * mdio interface in board design, and need to be configured by
+ * fec0 mii_bus.
+ */
+ if ((id_entry->driver_data & FEC_QUIRK_ENET_MAC) && fep->dev_id > 0) {
+ /* fec1 uses fec0 mii_bus */
+ if (mii_cnt && fec0_mii_bus) {
+ fep->mii_bus = fec0_mii_bus;
+ mii_cnt++;
+ return 0;
+ }
+ return -ENOENT;
+ }
+
+ fep->mii_timeout = 0;
+
+ /*
+ * Set MII speed to 2.5 MHz (= clk_get_rate() / 2 * phy_speed)
+ *
+ * The formula for FEC MDC is 'ref_freq / (MII_SPEED x 2)' while
+ * for ENET-MAC is 'ref_freq / ((MII_SPEED + 1) x 2)'. The i.MX28
+ * Reference Manual has an error on this, which is fixed in the
+ * i.MX6Q documentation.
+ */
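+	/*
+	 * Worked example (illustrative clock rate): for a 66 MHz reference
+	 * clock, DIV_ROUND_UP(66000000, 5000000) = 14, so MDC becomes
+	 * 66 MHz / (14 * 2) ~= 2.36 MHz, just below the 2.5 MHz limit.
+	 * The ENET-MAC case decrements first so that (MII_SPEED + 1) = 14.
+	 */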
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ fep->phy_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ahb), 5000000);
+#else
+ fep->phy_speed = DIV_ROUND_UP(clk_get_rate(fep->clk), 5000000);
+#endif
+ if (id_entry->driver_data & FEC_QUIRK_ENET_MAC)
+ fep->phy_speed--;
+ fep->phy_speed <<= 1;
+ writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
+
+ fep->mii_bus = mdiobus_alloc();
+ if (fep->mii_bus == NULL) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ fep->mii_bus->name = "fec_enet_mii_bus";
+ fep->mii_bus->read = fec_enet_mdio_read;
+ fep->mii_bus->write = fec_enet_mdio_write;
+ fep->mii_bus->reset = fec_enet_mdio_reset;
+ snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
+ pdev->name, fep->dev_id + 1);
+ fep->mii_bus->priv = fep;
+ fep->mii_bus->parent = &pdev->dev;
+
+ fep->mii_bus->irq = kmalloc(sizeof(int) * PHY_MAX_ADDR, GFP_KERNEL);
+ if (!fep->mii_bus->irq) {
+ err = -ENOMEM;
+ goto err_out_free_mdiobus;
+ }
+
+ for (i = 0; i < PHY_MAX_ADDR; i++)
+ fep->mii_bus->irq[i] = PHY_POLL;
+
+ if (rtdm_nrtsig_init(&fep->mdio_done_sig, fec_enet_mdio_done, fep)) {
+ printk("%s: rtdm_nrtsig_init failed\n", __func__);
+ goto err_out_free_mdio_irq;
+ }
+
+ if (mdiobus_register(fep->mii_bus))
+ goto err_out_destroy_nrt;
+
+ mii_cnt++;
+
+ /* save fec0 mii_bus */
+ if (id_entry->driver_data & FEC_QUIRK_ENET_MAC)
+ fec0_mii_bus = fep->mii_bus;
+
+ return 0;
+
+err_out_destroy_nrt:
+ rtdm_nrtsig_destroy(&fep->mdio_done_sig);
+err_out_free_mdio_irq:
+ kfree(fep->mii_bus->irq);
+err_out_free_mdiobus:
+ mdiobus_free(fep->mii_bus);
+err_out:
+ return err;
+}
+
+static void fec_enet_mii_remove(struct fec_enet_private *fep)
+{
+ if (--mii_cnt == 0) {
+ mdiobus_unregister(fep->mii_bus);
+ kfree(fep->mii_bus->irq);
+ mdiobus_free(fep->mii_bus);
+ }
+ rtdm_nrtsig_destroy(&fep->mdio_done_sig);
+}
+
+static int
+fec_enet_ioctl(struct rtnet_device *ndev, unsigned int request, void *arg)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct phy_device *phydev = fep->phy_dev;
+ struct ifreq *ifr = arg;
+ struct ethtool_value *value;
+ struct ethtool_cmd cmd;
+ int err = 0;
+
+ if (!rtnetif_running(ndev))
+ return -EINVAL;
+
+ if (!phydev)
+ return -ENODEV;
+
+ switch (request) {
+ case SIOCETHTOOL:
+ value = (struct ethtool_value *)ifr->ifr_data;
+ switch (value->cmd) {
+ case ETHTOOL_GLINK:
+ value->data = fep->link;
+ if (copy_to_user(&value->data, &fep->link,
+ sizeof(value->data)))
+ err = -EFAULT;
+ break;
+ case ETHTOOL_GSET:
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.cmd = ETHTOOL_GSET;
+ err = phy_ethtool_gset(phydev, &cmd);
+ if (err)
+ break;
+ if (copy_to_user(ifr->ifr_data, &cmd, sizeof(cmd)))
+ err = -EFAULT;
+ break;
+ case ETHTOOL_SSET:
+ if (copy_from_user(&cmd, ifr->ifr_data, sizeof(cmd)))
+ err = -EFAULT;
+ else
+ err = phy_ethtool_sset(phydev, &cmd);
+ break;
+ }
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static void fec_enet_free_buffers(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ int i;
+ struct rtskb *skb;
+ struct bufdesc *bdp;
+
+ bdp = fep->rx_bd_base;
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ skb = fep->rx_skbuff[i];
+
+ if (bdp->cbd_bufaddr)
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+ FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ if (skb)
+ dev_kfree_rtskb(skb); /* RTnet */
+ bdp++;
+ }
+
+ bdp = fep->tx_bd_base;
+ for (i = 0; i < TX_RING_SIZE; i++)
+ kfree(fep->tx_bounce[i]);
+}
+
+static int fec_enet_alloc_buffers(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ int i;
+ struct rtskb *skb;
+ struct bufdesc *bdp;
+
+ bdp = fep->rx_bd_base;
+ for (i = 0; i < RX_RING_SIZE; i++) {
+ skb = dev_alloc_rtskb(FEC_ENET_RX_FRSIZE,
+ &fep->skb_pool); /* RTnet */
+ if (!skb) {
+ fec_enet_free_buffers(ndev);
+ return -ENOMEM;
+ }
+ fep->rx_skbuff[i] = skb;
+
+ bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, skb->data,
+ FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);
+ bdp->cbd_sc = BD_ENET_RX_EMPTY;
+ bdp++;
+ }
+
+ /* Set the last buffer to wrap. */
+ bdp--;
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ bdp = fep->tx_bd_base;
+ for (i = 0; i < TX_RING_SIZE; i++) {
+ fep->tx_bounce[i] = kmalloc(FEC_ENET_TX_FRSIZE, GFP_KERNEL);
+
+ bdp->cbd_sc = 0;
+ bdp->cbd_bufaddr = 0;
+ bdp++;
+ }
+
+ /* Set the last buffer to wrap. */
+ bdp--;
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ return 0;
+}
+
+static int
+fec_enet_open(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ int ret;
+
+ /* I should reset the ring buffers here, but I don't yet know
+ * a simple way to do that.
+ */
+
+ ret = fec_enet_alloc_buffers(ndev);
+ if (ret)
+ return ret;
+
+ /* RTnet */
+ rt_stack_connect(ndev, &STACK_manager);
+
+ /* Probe and connect to PHY when open the interface */
+ ret = fec_enet_mii_probe(ndev);
+ if (ret) {
+ fec_enet_free_buffers(ndev);
+ return ret;
+ }
+ phy_start(fep->phy_dev);
+ rtnetif_start_queue(ndev);
+ fep->opened = 1;
+ return 0;
+}
+
+static int
+fec_enet_close(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+
+ /* Don't know what to do yet. */
+ fep->opened = 0;
+ rtnetif_stop_queue(ndev);
+ fec_stop(ndev);
+
+ if (fep->phy_dev) {
+ phy_stop(fep->phy_dev);
+ phy_disconnect(fep->phy_dev);
+ }
+
+ fec_enet_free_buffers(ndev);
+
+ /* RTnet */
+ rt_stack_disconnect(ndev);
+
+ return 0;
+}
+
+#ifdef CONFIG_RTNET_MULTICAST
+/* Set or clear the multicast filter for this adaptor.
+ * Skeleton taken from sunlance driver.
+ * The CPM Ethernet implementation allows Multicast as well as individual
+ * MAC address filtering. Some of the drivers check to make sure it is
+ * a group multicast address, and discard those that are not. I guess I
+ * will do the same for now, but just remove the test if you want
+ * individual filtering as well (do the upper net layers want or support
+ * this kind of feature?).
+ */
+
+#define HASH_BITS 6 /* #bits in hash */
+#define CRC32_POLY 0xEDB88320
+
+static void set_multicast_list(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct netdev_hw_addr *ha;
+ unsigned int i, bit, data, crc, tmp;
+ unsigned char hash;
+
+ if (ndev->flags & IFF_PROMISC) {
+ tmp = readl(fep->hwp + FEC_R_CNTRL);
+ tmp |= 0x8;
+ writel(tmp, fep->hwp + FEC_R_CNTRL);
+ return;
+ }
+
+ tmp = readl(fep->hwp + FEC_R_CNTRL);
+ tmp &= ~0x8;
+ writel(tmp, fep->hwp + FEC_R_CNTRL);
+
+ if (ndev->flags & IFF_ALLMULTI) {
+ /* Catch all multicast addresses, so set the
+ * filter to all 1's
+ */
+ writel(0xffffffff, fep->hwp + FEC_GRP_HASH_TABLE_HIGH);
+ writel(0xffffffff, fep->hwp + FEC_GRP_HASH_TABLE_LOW);
+
+ return;
+ }
+
+ /* Clear filter and add the addresses in hash register
+ */
+ writel(0, fep->hwp + FEC_GRP_HASH_TABLE_HIGH);
+ writel(0, fep->hwp + FEC_GRP_HASH_TABLE_LOW);
+
+ rtnetdev_for_each_mc_addr(ha, ndev) {
+ /* calculate crc32 value of mac address */
+ crc = 0xffffffff;
+
+ for (i = 0; i < ndev->addr_len; i++) {
+ data = ha->addr[i];
+ for (bit = 0; bit < 8; bit++, data >>= 1) {
+ crc = (crc >> 1) ^
+ (((crc ^ data) & 1) ? CRC32_POLY : 0);
+ }
+ }
+
+ /* only the upper 6 bits (HASH_BITS) are used,
+ * which point to a specific bit in the hash registers
+ */
+ hash = (crc >> (32 - HASH_BITS)) & 0x3f;
+
+ if (hash > 31) {
+ tmp = readl(fep->hwp + FEC_GRP_HASH_TABLE_HIGH);
+ tmp |= 1 << (hash - 32);
+ writel(tmp, fep->hwp + FEC_GRP_HASH_TABLE_HIGH);
+ } else {
+ tmp = readl(fep->hwp + FEC_GRP_HASH_TABLE_LOW);
+ tmp |= 1 << hash;
+ writel(tmp, fep->hwp + FEC_GRP_HASH_TABLE_LOW);
+ }
+ }
+}
+#endif /* CONFIG_RTNET_MULTICAST */
+
+#ifdef ORIGINAL_CODE
+/* Set a MAC change in hardware. */
+static int
+fec_set_mac_address(struct rtnet_device *ndev, void *p)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(ndev->dev_addr, addr->sa_data, ndev->addr_len);
+
+ writel(ndev->dev_addr[3] | (ndev->dev_addr[2] << 8) |
+ (ndev->dev_addr[1] << 16) | (ndev->dev_addr[0] << 24),
+ fep->hwp + FEC_ADDR_LOW);
+ writel((ndev->dev_addr[5] << 16) | (ndev->dev_addr[4] << 24),
+ fep->hwp + FEC_ADDR_HIGH);
+ return 0;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * fec_poll_controller: FEC Poll controller function
+ * @dev: The FEC network adapter
+ *
+ * Polled functionality used by netconsole and others in non interrupt mode
+ *
+ */
+void fec_poll_controller(struct rtnet_device *dev)
+{
+ int i;
+ struct fec_enet_private *fep = rtnetdev_priv(dev);
+
+ for (i = 0; i < FEC_IRQ_NUM; i++) {
+ if (fep->irq[i] > 0) {
+ disable_irq(fep->irq[i]);
+ fec_enet_interrupt(fep->irq[i], dev);
+ enable_irq(fep->irq[i]);
+ }
+ }
+}
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+static const struct rtnet_device_ops fec_netdev_ops = {
+ .ndo_open = fec_enet_open,
+ .ndo_stop = fec_enet_close,
+ .ndo_start_xmit = fec_enet_start_xmit,
+ .ndo_set_rx_mode = set_multicast_list,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_tx_timeout = fec_timeout,
+ .ndo_set_mac_address = fec_set_mac_address,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = fec_poll_controller,
+#endif
+};
+#endif
+
+/* RTnet: get statistics */
+static struct net_device_stats *fec_get_stats(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ return &fep->stats;
+}
+
+ /*
+ * XXX: We need to clean up on failure exits here.
+ *
+ */
+static int fec_enet_init(struct rtnet_device *ndev)
+{
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct bufdesc *cbd_base;
+ struct bufdesc *bdp;
+ int i;
+
+ /* Allocate memory for buffer descriptors. */
+ cbd_base = dma_alloc_coherent(NULL, PAGE_SIZE, &fep->bd_dma,
+ GFP_KERNEL);
+ if (!cbd_base) {
+ printk("FEC: allocate descriptor memory failed?\n");
+ return -ENOMEM;
+ }
+
+ rtdm_lock_init(&fep->hw_lock);
+
+ /* Get the Ethernet address */
+ fec_get_mac(ndev);
+
+ /* Set receive and transmit descriptor base. */
+ fep->rx_bd_base = cbd_base;
+ fep->tx_bd_base = cbd_base + RX_RING_SIZE;
+
+ /* RTnet: specific entries in the device structure */
+ ndev->open = fec_enet_open;
+ ndev->stop = fec_enet_close;
+ ndev->hard_start_xmit = fec_enet_start_xmit;
+ ndev->get_stats = fec_get_stats;
+ ndev->do_ioctl = fec_enet_ioctl;
+#ifdef CONFIG_RTNET_MULTICAST
+ ndev->set_multicast_list = &set_multicast_list;
+#endif
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = fep->rx_bd_base;
+ for (i = 0; i < RX_RING_SIZE; i++) {
+
+ /* Initialize the BD for every fragment in the page. */
+ bdp->cbd_sc = 0;
+ bdp++;
+ }
+
+ /* Set the last buffer to wrap */
+ bdp--;
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ /* ...and the same for transmit */
+ bdp = fep->tx_bd_base;
+ for (i = 0; i < TX_RING_SIZE; i++) {
+
+ /* Initialize the BD for every fragment in the page. */
+ bdp->cbd_sc = 0;
+ bdp->cbd_bufaddr = 0;
+ bdp++;
+ }
+
+ /* Set the last buffer to wrap */
+ bdp--;
+ bdp->cbd_sc |= BD_SC_WRAP;
+
+ fec_restart(ndev, 0);
+
+ return 0;
+}
+
+#ifdef CONFIG_OF
+static int __devinit fec_get_phy_mode_dt(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+
+ if (np)
+ return of_get_phy_mode(np);
+
+ return -ENODEV;
+}
+
+static void __devinit fec_reset_phy(struct platform_device *pdev)
+{
+ int err, phy_reset;
+ struct device_node *np = pdev->dev.of_node;
+
+ if (!np)
+ return;
+
+ phy_reset = of_get_named_gpio(np, "phy-reset-gpios", 0);
+ err = gpio_request_one(phy_reset, GPIOF_OUT_INIT_LOW, "phy-reset");
+ if (err) {
+ pr_debug("FEC: failed to get gpio phy-reset: %d\n", err);
+ return;
+ }
+ msleep(1);
+ gpio_set_value(phy_reset, 1);
+}
+#else /* CONFIG_OF */
+static inline int fec_get_phy_mode_dt(struct platform_device *pdev)
+{
+ return -ENODEV;
+}
+
+static inline void fec_reset_phy(struct platform_device *pdev)
+{
+ /*
+ * In case of platform probe, the reset has been done
+ * by machine code.
+ */
+}
+#endif /* CONFIG_OF */
+
+static int __devinit
+fec_probe(struct platform_device *pdev)
+{
+ struct fec_enet_netdev_priv *npriv;
+ struct fec_enet_private *fep;
+ struct fec_platform_data *pdata;
+ struct rtnet_device *ndev;
+ int i, irq, ret = 0;
+ struct resource *r;
+ const struct of_device_id *of_id;
+ static int dev_id;
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ struct pinctrl *pinctrl;
+#endif
+
+ of_id = of_match_device(fec_dt_ids, &pdev->dev);
+ if (of_id)
+ pdev->id_entry = of_id->data;
+
+ r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!r)
+ return -ENXIO;
+
+ r = request_mem_region(r->start, resource_size(r), pdev->name);
+ if (!r)
+ return -EBUSY;
+
+ /* Init network device */
+ ndev = rt_alloc_etherdev(sizeof(struct fec_enet_private));
+ if (!ndev) {
+ ret = -ENOMEM;
+ goto failed_alloc_etherdev;
+ }
+
+ /* RTnet */
+ rtdev_alloc_name(ndev, "rteth%d");
+ rt_rtdev_connect(ndev, &RTDEV_manager);
+ RTNET_SET_MODULE_OWNER(ndev);
+ ndev->vers = RTDEV_VERS_2_0;
+
+ /* setup board info structure */
+ fep = rtnetdev_priv(ndev);
+ memset(fep, 0, sizeof(*fep));
+
+ /* RTnet: allocate dummy linux netdev structure for phy handling */
+ fep->netdev = alloc_etherdev(sizeof(struct fec_enet_netdev_priv));
+ if (!fep->netdev)
+ goto failed_alloc_netdev;
+ SET_NETDEV_DEV(fep->netdev, &pdev->dev);
+ npriv = netdev_priv(fep->netdev);
+ npriv->rtdev = ndev;
+
+ fep->hwp = ioremap(r->start, resource_size(r));
+ fep->pdev = pdev;
+ fep->dev_id = dev_id++;
+
+ if (!fep->hwp) {
+ ret = -ENOMEM;
+ goto failed_ioremap;
+ }
+
+ platform_set_drvdata(pdev, ndev);
+
+ ret = fec_get_phy_mode_dt(pdev);
+ if (ret < 0) {
+ pdata = pdev->dev.platform_data;
+ if (pdata)
+ fep->phy_interface = pdata->phy;
+ else
+ fep->phy_interface = PHY_INTERFACE_MODE_MII;
+ } else {
+ fep->phy_interface = ret;
+ }
+
+ fec_reset_phy(pdev);
+
+ for (i = 0; i < FEC_IRQ_NUM; i++) {
+ irq = platform_get_irq(pdev, i);
+ if (irq < 0) {
+ if (i)
+ break;
+ ret = irq;
+ goto failed_irq;
+ }
+ ret = rtdm_irq_request(&fep->irq_handle[i], irq,
+ fec_enet_interrupt, 0, ndev->name, ndev);
+ if (ret) {
+ while (--i >= 0) {
+ irq = platform_get_irq(pdev, i);
+ rtdm_irq_free(&fep->irq_handle[i]);
+ }
+ goto failed_irq;
+ }
+ }
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ pinctrl = devm_pinctrl_get_select_default(&pdev->dev);
+ if (IS_ERR(pinctrl)) {
+ ret = PTR_ERR(pinctrl);
+ goto failed_pin;
+ }
+
+ fep->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
+ if (IS_ERR(fep->clk_ipg)) {
+ ret = PTR_ERR(fep->clk_ipg);
+ goto failed_clk;
+ }
+
+ fep->clk_ahb = devm_clk_get(&pdev->dev, "ahb");
+ if (IS_ERR(fep->clk_ahb)) {
+ ret = PTR_ERR(fep->clk_ahb);
+ goto failed_clk;
+ }
+
+ clk_prepare_enable(fep->clk_ahb);
+ clk_prepare_enable(fep->clk_ipg);
+#else
+ fep->clk = clk_get(&pdev->dev, "fec_clk");
+ if (IS_ERR(fep->clk)) {
+ ret = PTR_ERR(fep->clk);
+ goto failed_clk;
+ }
+
+ clk_prepare_enable(fep->clk);
+#endif
+
+ ret = fec_enet_init(ndev);
+ if (ret)
+ goto failed_init;
+
+ ret = fec_enet_mii_init(pdev);
+ if (ret)
+ goto failed_mii_init;
+
+ /* Carrier starts down, phylib will bring it up */
+ rtnetif_carrier_off(ndev);
+
+ /* RTnet: setup the RTnet socket buffer */
+ if (rtskb_pool_init(&fep->skb_pool, rx_pool_size) < rx_pool_size) {
+ printk("[RTNet] Not enough memory\n");
+ ret = -ENOMEM;
+ goto failed_init;
+ }
+
+ /* RTnet: register the network interface */
+ ret = rt_register_rtnetdev(ndev);
+ if (ret)
+ goto failed_register;
+
+ return 0;
+
+failed_register:
+ rtskb_pool_release(&fep->skb_pool);
+ fec_enet_mii_remove(fep);
+failed_mii_init:
+failed_init:
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ clk_disable_unprepare(fep->clk_ahb);
+ clk_disable_unprepare(fep->clk_ipg);
+#else
+ clk_disable_unprepare(fep->clk);
+#endif
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+failed_pin:
+#endif
+failed_clk:
+ for (i = 0; i < FEC_IRQ_NUM; i++) {
+ irq = platform_get_irq(pdev, i);
+ if (irq > 0)
+ rtdm_irq_free(&fep->irq_handle[i]);
+ }
+failed_irq:
+ iounmap(fep->hwp);
+failed_ioremap:
+ free_netdev(fep->netdev);
+failed_alloc_netdev:
+ rtdev_free(ndev); /* RTnet */
+failed_alloc_etherdev:
+ release_mem_region(r->start, resource_size(r));
+
+ return ret;
+}
+
+static int __devexit
+fec_drv_remove(struct platform_device *pdev)
+{
+ struct rtnet_device *ndev = platform_get_drvdata(pdev);
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+ struct resource *r;
+ int i;
+
+ /* RTnet */
+ rt_unregister_rtnetdev(ndev);
+ rt_rtdev_disconnect(ndev);
+
+ fec_enet_mii_remove(fep);
+ for (i = 0; i < FEC_IRQ_NUM; i++) {
+ int irq = platform_get_irq(pdev, i);
+ if (irq > 0)
+ rtdm_irq_free(&fep->irq_handle[i]);
+ }
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ clk_disable_unprepare(fep->clk_ahb);
+ clk_disable_unprepare(fep->clk_ipg);
+#else
+ clk_disable_unprepare(fep->clk);
+#endif
+ iounmap(fep->hwp);
+
+ /* RTnet */
+ free_netdev(fep->netdev);
+ rtskb_pool_release(&fep->skb_pool);
+ rtdev_free(ndev);
+
+ r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ BUG_ON(!r);
+ release_mem_region(r->start, resource_size(r));
+
+ platform_set_drvdata(pdev, NULL);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int
+fec_suspend(struct device *dev)
+{
+ struct rtnet_device *ndev = dev_get_drvdata(dev);
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+
+ if (rtnetif_running(ndev)) {
+ fec_stop(ndev);
+ rtnetif_device_detach(ndev);
+ }
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ clk_disable_unprepare(fep->clk_ahb);
+ clk_disable_unprepare(fep->clk_ipg);
+#else
+ clk_disable_unprepare(fep->clk);
+#endif
+ return 0;
+}
+
+static int
+fec_resume(struct device *dev)
+{
+ struct rtnet_device *ndev = dev_get_drvdata(dev);
+ struct fec_enet_private *fep = rtnetdev_priv(ndev);
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,5,0)
+ clk_prepare_enable(fep->clk_ahb);
+ clk_prepare_enable(fep->clk_ipg);
+#else
+ clk_prepare_enable(fep->clk);
+#endif
+ if (rtnetif_running(ndev)) {
+ fec_restart(ndev, fep->full_duplex);
+ rtnetif_device_attach(ndev);
+ }
+
+ return 0;
+}
+
+static const struct dev_pm_ops fec_pm_ops = {
+ .suspend = fec_suspend,
+ .resume = fec_resume,
+ .freeze = fec_suspend,
+ .thaw = fec_resume,
+ .poweroff = fec_suspend,
+ .restore = fec_resume,
+};
+#endif
+
+static struct platform_driver fec_driver = {
+ .driver = {
+ .name = DRIVER_NAME,
+ .owner = THIS_MODULE,
+#ifdef CONFIG_PM
+ .pm = &fec_pm_ops,
+#endif
+ .of_match_table = fec_dt_ids,
+ },
+ .id_table = fec_devtype,
+ .probe = fec_probe,
+ .remove = __devexit_p(fec_drv_remove),
+};
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(3,2,0)
+static int __init
+fec_enet_module_init(void)
+{
+ printk(KERN_INFO "RT FEC Ethernet Driver\n");
+
+ return platform_driver_register(&fec_driver);
+}
+
+static void __exit
+fec_enet_cleanup(void)
+{
+ platform_driver_unregister(&fec_driver);
+}
+
+module_exit(fec_enet_cleanup);
+module_init(fec_enet_module_init);
+#else
+module_platform_driver(fec_driver);
+#endif
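The version switch above is needed because the module_platform_driver() helper macro
only appeared in Linux 3.2; for older kernels the driver spells out what the macro
would otherwise generate, roughly:

/* Approximate expansion of module_platform_driver(fec_driver): */
static int __init fec_driver_init(void)
{
	return platform_driver_register(&fec_driver);
}
module_init(fec_driver_init);

static void __exit fec_driver_exit(void)
{
	platform_driver_unregister(&fec_driver);
}
module_exit(fec_driver_exit);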
diff --git a/drivers/rt_fec.h b/drivers/rt_fec.h
new file mode 100644
index 0000000..2982777
--- /dev/null
+++ b/drivers/rt_fec.h
@@ -0,0 +1,153 @@
+/****************************************************************************/
+
+/*
+ * fec.h -- Fast Ethernet Controller for Motorola ColdFire SoC
+ * processors.
+ *
+ * (C) Copyright 2000-2005, Greg Ungerer (ge...@sn...)
+ * (C) Copyright 2000-2001, Lineo (www.lineo.com)
+ */
+
+/****************************************************************************/
+#ifndef RT_FEC_H
+#define RT_FEC_H
+/****************************************************************************/
+
+#if defined(CONFIG_M523x) || defined(CONFIG_M527x) || defined(CONFIG_M528x) || \
+ defined(CONFIG_M520x) || defined(CONFIG_M532x) || \
+ defined(CONFIG_ARCH_MXC) || defined(CONFIG_SOC_IMX28)
+/*
+ * Just figures, Motorola would have to change the offsets for
+ * registers in the same peripheral device on different models
+ * of the ColdFire!
+ */
+#define FEC_IEVENT 0x004 /* Interrupt event reg */
+#define FEC_IMASK 0x008 /* Interrupt mask reg */
+#define FEC_R_DES_ACTIVE 0x010 /* Receive descriptor reg */
+#define FEC_X_DES_ACTIVE 0x014 /* Transmit descriptor reg */
+#define FEC_ECNTRL 0x024 /* Ethernet control reg */
+#define FEC_MII_DATA 0x040 /* MII manage frame reg */
+#define FEC_MII_SPEED 0x044 /* MII speed control reg */
+#define FEC_MIB_CTRLSTAT 0x064 /* MIB control/status reg */
+#define FEC_R_CNTRL 0x084 /* Receive control reg */
+#define FEC_X_CNTRL 0x0c4 /* Transmit Control reg */
+#define FEC_ADDR_LOW 0x0e4 /* Low 32bits MAC address */
+#define FEC_ADDR_HIGH 0x0e8 /* High 16bits MAC address */
+#define FEC_OPD 0x0ec /* Opcode + Pause duration */
+#define FEC_HASH_TABLE_HIGH 0x118 /* High 32bits hash table */
+#define FEC_HASH_TABLE_LOW 0x11c /* Low 32bits hash table */
+#define FEC_GRP_HASH_TABLE_HIGH 0x120 /* High 32bits hash table */
+#define FEC_GRP_HASH_TABLE_LOW 0x124 /* Low 32bits hash table */
+#define FEC_X_WMRK 0x144 /* FIFO transmit water mark */
+#define FEC_R_BOUND 0x14c /* FIFO receive bound reg */
+#define FEC_R_FSTART 0x150 /* FIFO receive start reg */
+#define FEC_R_DES_START 0x180 /* Receive descriptor ring */
+#define FEC_X_DES_START 0x184 /* Transmit descriptor ring */
+#define FEC_R_BUFF_SIZE 0x188 /* Maximum receive buff size */
+#define FEC_TACC 0x1c0 /* Transmit accelerator reg */
+#define FEC_MIIGSK_CFGR 0x300 /* MIIGSK Configuration reg */
+#define FEC_MIIGSK_ENR 0x308 /* MIIGSK Enable reg */
+
+#define BM_MIIGSK_CFGR_MII 0x00
+#define BM_MIIGSK_CFGR_RMII 0x01
+#define BM_MIIGSK_CFGR_FRCONT_10M 0x40
+
+#else
+
+#define FEC_ECNTRL 0x000 /* Ethernet control reg */
+#define FEC_IEVENT 0x004 /* Interrupt event reg */
+#define FEC_IMASK 0x008 /* Interrupt mask reg */
+#define FEC_IVEC 0x00c /* Interrupt vec status reg */
+#define FEC_R_DES_ACTIVE 0x010 /* Receive descriptor reg */
+#define FEC_X_DES_ACTIVE 0x014 /* Transmit descriptor reg */
+#define FEC_MII_DATA 0x040 /* MII manage frame reg */
+#define FEC_MII_SPEED 0x044 /* MII speed control reg */
+#define FEC_R_BOUND 0x08c /* FIFO receive bound reg */
+#define FEC_R_FSTART 0x090 /* FIFO receive start reg */
+#define FEC_X_WMRK 0x0a4 /* FIFO transmit water mark */
+#define FEC_X_FSTART 0x0ac /* FIFO transmit start reg */
+#define FEC_R_CNTRL 0x104 /* Receive control reg */
+#define FEC_MAX_FRM_LEN 0x108 /* Maximum frame length reg */
+#define FEC_X_CNTRL 0x144 /* Transmit Control reg */
+#define FEC_ADDR_LOW 0x3c0 /* Low 32bits MAC address */
+#define FEC_ADDR_HIGH 0x3c4 /* High 16bits MAC address */
+#define FEC_GRP_HASH_TABLE_HIGH 0x3c8 /* High 32bits hash table */
+#define FEC_GRP_HASH_TABLE_LOW 0x3cc /* Low 32bits hash table */
+#define FEC_R_DES_START 0x3d0 /* Receive descriptor ring */
+#define FEC_X_DES_START 0x3d4 /* Transmit descriptor ring */
+#define FEC_R_BUFF_SIZE 0x3d8 /* Maximum receive buff size */
+#define FEC_FIFO_RAM 0x400 /* FIFO RAM buffer */
+
+#endif /* ColdFire M52xx / i.MX layout; the #else branch is the CONFIG_M5272 layout */
+
+
+/*
+ * Define the buffer descriptor structure.
+ */
+#if defined(CONFIG_ARCH_MXC) || defined(CONFIG_SOC_IMX28)
+struct bufdesc {
+ unsigned short cbd_datlen; /* Data length */
+ unsigned short cbd_sc; /* Control and status info */
+ unsigned long cbd_bufaddr; /* Buffer address */
+};
+#else
+struct bufdesc {
+ unsigned short cbd_sc; /* Control and status info */
+ unsigned short cbd_datlen; /* Data length */
+ unsigned long cbd_bufaddr; /* Buffer address */
+};
+#endif
+
+/*
+ * The following definitions courtesy of commproc.h, which were
+ * Copyright (c) 1997 Dan Malek (dm...@jl...).
+ */
+#define BD_SC_EMPTY ((ushort)0x8000) /* Receive is empty */
+#define BD_SC_READY ((ushort)0x8000) /* Transmit is ready */
+#define BD_SC_WRAP ((ushort)0x2000) ...
[truncated message content] |
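As a reading aid, the probe path shown above follows the bring-up sequence that
RTnet NIC drivers generally share: allocate the rtnet_device, request the IRQs
through RTDM, pre-fill a fixed rtskb pool (the real-time hot path must never fall
back to the Linux allocator), and register with the stack only once everything
else is in place. A condensed sketch of that skeleton follows; the my_* names and
MY_POOL_SIZE are placeholders, not rt_fec code:

struct my_priv {				/* hypothetical private data */
	rtdm_irq_t irq_handle;
	struct rtskb_queue skb_pool;
};

static int my_irq_handler(rtdm_irq_t *irq_handle);	/* rtdm_irq_handler_t, omitted */

static int my_probe(struct platform_device *pdev)
{
	struct rtnet_device *ndev;
	struct my_priv *priv;
	int ret;

	ndev = rt_alloc_etherdev(sizeof(struct my_priv));
	if (!ndev)
		return -ENOMEM;
	rtdev_alloc_name(ndev, "rteth%d");
	rt_rtdev_connect(ndev, &RTDEV_manager);
	priv = rtnetdev_priv(ndev);

	ret = rtdm_irq_request(&priv->irq_handle, platform_get_irq(pdev, 0),
			       my_irq_handler, 0, ndev->name, ndev);
	if (ret)
		goto err_free;

	rtnetif_carrier_off(ndev);	/* the PHY reports the link later */

	/* fixed pool: rtskbs are recycled in the hot path, never kmalloc'ed */
	if (rtskb_pool_init(&priv->skb_pool, MY_POOL_SIZE) < MY_POOL_SIZE) {
		ret = -ENOMEM;
		goto err_pool;
	}

	ret = rt_register_rtnetdev(ndev);
	if (ret)
		goto err_pool;

	return 0;

err_pool:
	rtskb_pool_release(&priv->skb_pool);	/* releases a partial pool, too */
	rtdm_irq_free(&priv->irq_handle);
err_free:
	rt_rtdev_disconnect(ndev);
	rtdev_free(ndev);
	return ret;
}

The unwind order matters: rt_register_rtnetdev() comes last so that no application
can open the device while its pool or IRQ handling is still half-initialized.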
|
From: Jan K. <jan...@we...> - 2012-07-17 08:02:39
|
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 2011-01-17 14:39, Jan Kiszka wrote:
> Hi all,
>
> recently some RTnet user was worried again about using RTnet
> together with non-GPL compatible applications (typically
> commercially licensed). I pointed out what we always stated here:
>
> There is no reason why using RTnet from a non-GPL user space
> application should be denied. Also, I do not recall any case where
> some copyright holder stated this more restrictively (originally,
> at least I considered even certain in-kernel setups OK, but I was
> still young at that time ;) ).
>
> However, when looking at the two headers you may need in your user
> space application, rtnet.h and sometimes also rtmac.h, they do not
> imply this. Rather, they are explicitly licensed under GPL v2+. And
> there is no other exceptional notice in the RTnet source tree.
>
> In order to finally and clearly express the intended usage model,
> I picked up the license exception for headers that you can find in
> the Xenomai project, even passed it through the legal department of
> my employer and added it tentatively to the mentioned headers.
> Here is the diff of rtnet.h, rtmac.h would be extended likewise:
>
> diff --git a/stack/include/rtnet.h b/stack/include/rtnet.h
> index 9a325bc..4afa270 100644
> --- a/stack/include/rtnet.h
> +++ b/stack/include/rtnet.h
> @@ -21,6 +21,24 @@
>   * along with this program; if not, write to the Free Software
>   * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
>   *
> + * As a special exception to the GNU General Public license, the RTnet
> + * project allows you to use this header file in unmodified form to produce
> + * application programs executing in user-space which use RTnet services by
> + * normal system calls. The resulting executable will not be covered by the
> + * GNU General Public License merely as a result of this header file use.
> + * Instead, this header file use will be considered normal use of RTnet and
> + * not a "derived work" in the sense of the GNU General Public License.
> + *
> + * This exception does not apply when the application code is built as a
> + * static or dynamically loadable portion of the Linux kernel nor does the
> + * exception override other reasons justifying application of the GNU General
> + * Public License.
> + *
> + * This exception applies only to the code released by the RTnet project
> + * under the name RTnet and bearing this exception notice. If you copy code
> + * from other sources into a copy of RTnet, the exception does not apply to
> + * the code that you add in this way.
> + *
>   */
>
>  #ifndef __RTNET_H_
>
>
> I still need to dig through the available logs, identifying the
> potentially affected copyright holders. So this is also a public
> call to all those who think they own some bits of the affected
> headers or user space API. Please state if you agree or disagree
> with the clarification.
>
> There is no schedule yet when to apply the change. Once the
> copyright owner research is done, I will define a grace period
> after which the change will be applied to give people I'm unable to
> contact another chance to reply.
>
> That said, no one already running RTnet alongside a commercial
> application needs to be concerned. I'm not going to send any lawyer
> after you, even if you are using it in a kernel application.
> However, but that's nothing new, you should still reconsider your
> license situation in that case as there are a few more Linux kernel
> copyright holders and you may use more interfaces than just RTnet.

To finally follow up on this topic: I've finished investigating the
copyright situation of both headers and could confirm that I was
actually the only contributor to them. So I now changed the license
terms as announced and pushed the result yesterday. It will be part of
the next release 0.9.13 that should be rolled out these days.

If you feel uncomfortable using older versions of these headers but are
otherwise bound to a particular release, it is also possible to locally
import the new versions into older RTnet releases.

Jan
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.16 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAlAFHBMACgkQitSsb3rl5xTG4gCfcCiR5BRe8jShZF0M3cHt8m3T
4UIAnjZQJSirQFBL66KtfyPcvt9BfsJE
=aeKV
-----END PGP SIGNATURE-----
|
From: Jan K. <jan...@si...> - 2012-02-09 18:13:46
|
On 2012-01-19 16:28, Petr Cervenka wrote:
>
> From: Petr Cervenka <gr...@ce...>
>
> This patch adds missing braces, which were lost during the porting of the
> r8169 driver to RTnet. Without them, the lock protecting the CRC error
> counter was acquired only conditionally but released unconditionally,
> which could lead to a deadlock when corrupted packets were received.
> The patch also disables unnecessary debug messages.
>
> Signed-off-by: Petr Cervenka <gr...@ce...>
> ---
> Changes:
> * commented out RTL8169_DEBUG
> * added missing brackets
>
> ---

Thanks, applied both (finally...)

Jan

--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
|
From: Petr C. <gr...@ce...> - 2012-01-19 15:29:07
|
From: Petr Cervenka <gr...@ce...>
This patch adds missing braces, which were lost during the porting of the r8169 driver to RTnet. Without them, the lock protecting the CRC error counter was acquired only conditionally but released unconditionally, which could lead to a deadlock when corrupted packets were received. The patch also disables unnecessary debug messages.
Signed-off-by: Petr Cervenka <gr...@ce...>
---
Changes:
* commented out RTL8169_DEBUG
* added missing brackets
---
diff --git a/drivers/experimental/rt_r8169.c b/drivers/experimental/rt_r8169.c
index 484a658..b716c53 100644
--- a/drivers/experimental/rt_r8169.c
+++ b/drivers/experimental/rt_r8169.c
@@ -85,7 +85,7 @@ RTL8169_VERSION "2.2" <2004/08/09>
#define RTL8169_DRIVER_NAME MODULENAME " RTnet Gigabit Ethernet driver " RTL8169_VERSION
#define PFX MODULENAME ": "
-#define RTL8169_DEBUG
+//#define RTL8169_DEBUG
#undef RTL8169_JUMBO_FRAME_SUPPORT /*** RTnet: do not enable! ***/
#undef RTL8169_HW_FLOW_CONTROL_SUPPORT
@@ -1711,11 +1711,12 @@ static void rtl8169_rx_interrupt (struct rtnet_device *rtdev, struct rtl8169_pri
priv->stats.rx_errors++;
if ( le32_to_cpu(rxdesc->status) & (RxRWT|RxRUNT) )
priv->stats.rx_length_errors++;
- if ( le32_to_cpu(rxdesc->status) & RxCRC)
+ if ( le32_to_cpu(rxdesc->status) & RxCRC) {
/* in the rt_via-rhine.c there's a lock around the incrementation... we'll do that also here <kk> */
rtdm_lock_get(&priv->lock); /*** RTnet ***/
priv->stats.rx_crc_errors++;
rtdm_lock_put(&priv->lock); /*** RTnet ***/
+ }
}
else{
pkt_size=(int)(le32_to_cpu(rxdesc->status) & 0x00001FFF)-4;
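To see why the missing braces matter: in C only the first statement after an
unbraced if is conditional, so before this fix the lock acquisition was guarded
by the CRC check while the counter update and the release were not. Illustration
only, simplified from the hunk above:

/* before: rtdm_lock_put() runs even when the lock was never taken */
if (status & RxCRC)
	rtdm_lock_get(&priv->lock);	/* conditional */
priv->stats.rx_crc_errors++;		/* always executed */
rtdm_lock_put(&priv->lock);		/* always executed */

/* after: the whole locked section is conditional */
if (status & RxCRC) {
	rtdm_lock_get(&priv->lock);
	priv->stats.rx_crc_errors++;
	rtdm_lock_put(&priv->lock);
}

Releasing a spinlock that was never acquired corrupts its state, which is how a
later, unrelated acquisition can end up deadlocking.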
|
|
From: Petr C. <gr...@ce...> - 2012-01-19 15:17:59
|
From: Petr Cervenka <gr...@ce...>
This patch significantly lowers the latency of the e1000e driver for specific hardware (e.g. 82572EI) when no module parameter is explicitly specified.
Signed-off-by: Petr Cervenka <gr...@ce...>
---
Change of default module parameter values of e1000e driver for lower latency:
* TxIntDelay: 0
* TxAbsDelay: 0
* InterruptThrottleRate : 0
---
diff --git a/drivers/e1000e/param.c b/drivers/e1000e/param.c
index 4dd9b63..91354fc 100644
--- a/drivers/e1000e/param.c
+++ b/drivers/e1000e/param.c
@@ -67,9 +67,11 @@ MODULE_PARM_DESC(copybreak,
* Tx interrupt delay needs to typically be set to something non-zero
*
* Valid Range: 0-65535
+ *
+ * Default Value: 0 for rtnet
*/
E1000_PARAM(TxIntDelay, "Transmit Interrupt Delay");
-#define DEFAULT_TIDV 8
+#define DEFAULT_TIDV 0
#define MAX_TXDELAY 0xFFFF
#define MIN_TXDELAY 0
@@ -77,9 +79,11 @@ E1000_PARAM(TxIntDelay, "Transmit Interrupt Delay");
* Transmit Absolute Interrupt Delay in units of 1.024 microseconds
*
* Valid Range: 0-65535
+ *
+ * Default Value: 0 for rtnet
*/
E1000_PARAM(TxAbsIntDelay, "Transmit Absolute Interrupt Delay");
-#define DEFAULT_TADV 32
+#define DEFAULT_TADV 0
#define MAX_TXABSDELAY 0xFFFF
#define MIN_TXABSDELAY 0
@@ -106,9 +110,11 @@ E1000_PARAM(RxAbsIntDelay, "Receive Absolute Interrupt Delay");
* Interrupt Throttle Rate (interrupts/sec)
*
* Valid Range: 100-100000 (0=off, 1=dynamic, 3=dynamic conservative)
+ *
+ * Default Value: 0 for rtnet
*/
E1000_PARAM(InterruptThrottleRate, "Interrupt Throttling Rate");
-#define DEFAULT_ITR 3
+#define DEFAULT_ITR 0
#define MAX_ITR 100000
#define MIN_ITR 100
@@ -380,7 +386,7 @@ void __devinit e1000e_check_options(struct e1000_adapter *adapter)
}
} else {
adapter->itr_setting = opt.def;
- adapter->itr = 20000;
+ adapter->itr = 0;
}
}
{ /* Interrupt Mode */
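For scale, using the units documented in the hunks above: the old TIDV=8 and
TADV=32 defaults let the adapter defer a transmit interrupt by roughly
8 x 1.024 µs ≈ 8 µs after the last packet and by up to 32 x 1.024 µs ≈ 33 µs
absolute, and the old throttle fallback of 20000 interrupts/s spaces interrupts
at least 1/20000 s = 50 µs apart. Setting all three values to 0 makes the
hardware raise an interrupt as soon as an event occurs, trading higher interrupt
load for bounded, minimal latency, which matches RTnet's priorities.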
|
|
From: Jesper C. <jb...@th...> - 2011-12-16 15:13:20
|
On 2011-12-15 11:17, Jan Kiszka wrote:
> On 2011-12-14 15:32, Jesper Christensen wrote:
>> From: Jesper B. Christensen <jb...@th...>
>>
>> Avoid having to re-solicit hw. addresses when configuring a ip-address
> 'in' or 'for'
>
>> the same subnet as the old ip-address. This will enable one to implement
>> fast ip-failover.
>>
>>
>> Signed-off-by: Jesper B. Christensen <jb...@th...>
>> ---
>> Changes v2:
>> * Move code to selective deletion path
>> * Split up assignment
>> * missing rtdev_dereference
>> ---
>>
>> diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
>> index 8b394be..4464608 100644
>> --- a/stack/ipv4/af_inet.c
>> +++ b/stack/ipv4/af_inet.c
>> @@ -239,7 +239,21 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
>> int i;
>>
>>
>> - rt_ip_route_del_all(rtdev); /* cleanup routing table */
>> + /* Only delete our own address if the new address is
>> + on the same subnet */
>> + if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip) {
> Sorry, missed some important things on first run:
>
> if up_cmd->args.up.ip_addr == 0 || rtdev->flags & IFF_LOOPBACK, all
> routes need to be removed as before.
>
> Also, better say "Delete only the loopback routes..." and move the
> comment into the corresponding block. That makes it clearer that the
> alternative is deleting all routes.
>
>> + rt_ip_route_del_host(rtdev->local_ip, rtdev);
> This should not be executed if local_ip was 0.
>
>> +
>> + /* Delete our loopback route for the device */
>> + tmp = rtdev_get_loopback();
>> + if (tmp != NULL) {
>> + rt_ip_route_del_host(rtdev->local_ip, tmp);
>> + rtdev_dereference(tmp);
>> + }
> Don't understand this anymore: Why do we need to remove the loopback
> route like this? The above should do the trick, doesn't it?
It doesn't seem like it does. rtifconfig rteth<x> down/up etc. leaves
the loopback route intact on my board. Maybe you could verify this?
>
>> + } else {
>> + rt_ip_route_del_all(rtdev); /* cleanup routing table */
>> + }
>> +
>>
>> if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
>> rtdev->local_ip = up_cmd->args.up.ip_addr;
>>
> Jan
>
|
|
From: Jan K. <jan...@si...> - 2011-12-15 10:18:03
|
On 2011-12-14 15:32, Jesper Christensen wrote:
> From: Jesper B. Christensen <jb...@th...>
>
> Avoid having to re-solicit hw. addresses when configuring a ip-address
'in' or 'for'
> the same subnet as the old ip-address. This will enable one to implement
> fast ip-failover.
>
>
> Signed-off-by: Jesper B. Christensen <jb...@th...>
> ---
> Changes v2:
> * Move code to selective deletion path
> * Split up assignment
> * missing rtdev_dereference
> ---
>
> diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
> index 8b394be..4464608 100644
> --- a/stack/ipv4/af_inet.c
> +++ b/stack/ipv4/af_inet.c
> @@ -239,7 +239,21 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
> int i;
>
>
> - rt_ip_route_del_all(rtdev); /* cleanup routing table */
> + /* Only delete our own address if the new address is
> + on the same subnet */
> + if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip) {
Sorry, missed some important things on first run:
if up_cmd->args.up.ip_addr == 0 || rtdev->flags & IFF_LOOPBACK, all
routes need to be removed as before.
Also, better say "Delete only the loopback routes..." and move the
comment into the corresponding block. That makes it clearer that the
alternative is deleting all routes.
> + rt_ip_route_del_host(rtdev->local_ip, rtdev);
This should not be executed if local_ip was 0.
> +
> + /* Delete our loopback route for the device */
> + tmp = rtdev_get_loopback();
> + if (tmp != NULL) {
> + rt_ip_route_del_host(rtdev->local_ip, tmp);
> + rtdev_dereference(tmp);
> + }
Don't understand this anymore: Why do we need to remove the loopback
route like this? The above should do the trick, doesn't it?
> + } else {
> + rt_ip_route_del_all(rtdev); /* cleanup routing table */
> + }
> +
>
> if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
> rtdev->local_ip = up_cmd->args.up.ip_addr;
>
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
|
|
From: Jesper C. <jb...@th...> - 2011-12-14 14:35:45
|
From: Jesper B. Christensen <jb...@th...>
Avoid having to re-solicit hw addresses when configuring an ip-address in
the same subnet as the old ip-address. This enables the implementation of
fast ip-failover.
Signed-off-by: Jesper B. Christensen <jb...@th...>
---
Changes v2:
* Move code to selective deletion path
* Split up assignment
* missing rtdev_dereference
---
diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
index 8b394be..4464608 100644
--- a/stack/ipv4/af_inet.c
+++ b/stack/ipv4/af_inet.c
@@ -239,7 +239,21 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
int i;
- rt_ip_route_del_all(rtdev); /* cleanup routing table */
+ /* Only delete our own address if the new address is
+ on the same subnet */
+ if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip) {
+ rt_ip_route_del_host(rtdev->local_ip, rtdev);
+
+ /* Delete our loopback route for the device */
+ tmp = rtdev_get_loopback();
+ if (tmp != NULL) {
+ rt_ip_route_del_host(rtdev->local_ip, tmp);
+ rtdev_dereference(tmp);
+ }
+ } else {
+ rt_ip_route_del_all(rtdev); /* cleanup routing table */
+ }
+
if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
rtdev->local_ip = up_cmd->args.up.ip_addr;
--
/Jesper
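Folding in the review feedback Jan gave on this v2 (visible earlier on this
page), a possible next iteration of the hunk could look like the sketch below.
This is not a merged version, just the reviewed conditions spelled out:

	/* sketch: all routes still go away for ip_addr == 0 and loopback devs */
	if (up_cmd->args.up.ip_addr != 0 &&
	    (rtdev->flags & IFF_LOOPBACK) == 0 &&
	    rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip) {
		/* Delete only our own routes, keeping the solicited
		   host routes of the unchanged subnet */
		if (rtdev->local_ip != 0) {
			rt_ip_route_del_host(rtdev->local_ip, rtdev);

			tmp = rtdev_get_loopback();
			if (tmp != NULL) {
				rt_ip_route_del_host(rtdev->local_ip, tmp);
				rtdev_dereference(tmp);
			}
		}
	} else {
		rt_ip_route_del_all(rtdev);	/* cleanup routing table */
	}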
|
|
From: Jan K. <jan...@si...> - 2011-12-14 14:18:50
|
On 2011-12-14 14:46, Jesper Christensen wrote:
> Sorry about that.
>
> The use case is fast failover where an active ip address (on the same
> subnet) can be brought up on an interface without having to re-solicit
> the hw addresses that were gathered while the interface was standby (had
> a standby ip address).

Sounds reasonable. Just fold the motivation into some v2 of your patch.

Jan

>
> /Jesper
>
>
> On 2011-12-14 14:24, Jan Kiszka wrote:
>> Please follow the format
>>
>> [PATCH] Subject line
>>
>> Patch description lines, specifically explaining the "why".
>>
>> Signed-off
>> (see also
>> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/SubmittingPatches;h=689e2371095cc5dfea9927120009341f369159aa;hb=f6f94e2ab1b33f0082ac22d71f66385a60d8157f#l297)
>>
>> The comments below apply if you can explain the use case to me. The
>> standard Linux networking stack seems to behave like current RTnet.
>>
>> On 2011-12-13 17:22, Jesper Christensen wrote:
>>>
>>> diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
>>> index 8b394be..18754cf 100644
>>> --- a/stack/ipv4/af_inet.c
>>> +++ b/stack/ipv4/af_inet.c
>>> @@ -239,7 +239,16 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
>>>      int i;
>>>
>>>
>>> -    rt_ip_route_del_all(rtdev);    /* cleanup routing table */
>>> +    /* Only delete our own address if the new address is
>>> +       on the same subnet */
>>> +    if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip)
>>> +        rt_ip_route_del_host(rtdev->local_ip, rtdev);
>>> +    else
>>> +        rt_ip_route_del_all(rtdev);    /* cleanup routing table */
>>> +
>>> +    /* Delete our loopback route for the device */
>> The following block should be moved into the selective deletion path
>> above (as del_all already does it for us).
>>
>>> +    if ((tmp = rtdev_get_loopback()) != NULL)
>> Please split up assignment and comparison into two lines (yes, that used
>> to be done differently in past here as well).
>>
>>> +        rt_ip_route_del_host(rtdev->local_ip, tmp);
>> rtdev_dereference(tmp) is missing.
>>
>>>
>>>      if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
>>>          rtdev->local_ip = up_cmd->args.up.ip_addr;
>> Thanks,
>> Jan
>>

--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
|
From: Jesper C. <jb...@th...> - 2011-12-14 14:05:40
|
Sorry about that.

The use case is fast failover where an active ip address (on the same
subnet) can be brought up on an interface without having to re-solicit
the hw addresses that were gathered while the interface was standby (had
a standby ip address).

/Jesper

On 2011-12-14 14:24, Jan Kiszka wrote:
> Please follow the format
>
> [PATCH] Subject line
>
> Patch description lines, specifically explaining the "why".
>
> Signed-off
> (see also
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/SubmittingPatches;h=689e2371095cc5dfea9927120009341f369159aa;hb=f6f94e2ab1b33f0082ac22d71f66385a60d8157f#l297)
>
> The comments below apply if you can explain the use case to me. The
> standard Linux networking stack seems to behave like current RTnet.
>
> On 2011-12-13 17:22, Jesper Christensen wrote:
>>
>> diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
>> index 8b394be..18754cf 100644
>> --- a/stack/ipv4/af_inet.c
>> +++ b/stack/ipv4/af_inet.c
>> @@ -239,7 +239,16 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
>>      int i;
>>
>>
>> -    rt_ip_route_del_all(rtdev);    /* cleanup routing table */
>> +    /* Only delete our own address if the new address is
>> +       on the same subnet */
>> +    if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip)
>> +        rt_ip_route_del_host(rtdev->local_ip, rtdev);
>> +    else
>> +        rt_ip_route_del_all(rtdev);    /* cleanup routing table */
>> +
>> +    /* Delete our loopback route for the device */
> The following block should be moved into the selective deletion path
> above (as del_all already does it for us).
>
>> +    if ((tmp = rtdev_get_loopback()) != NULL)
> Please split up assignment and comparison into two lines (yes, that used
> to be done differently in past here as well).
>
>> +        rt_ip_route_del_host(rtdev->local_ip, tmp);
> rtdev_dereference(tmp) is missing.
>
>>
>>      if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
>>          rtdev->local_ip = up_cmd->args.up.ip_addr;
> Thanks,
> Jan
>
|
From: Jan K. <jan...@si...> - 2011-12-14 13:25:00
|
Please follow the format

[PATCH] Subject line

Patch description lines, specifically explaining the "why".

Signed-off

(see also
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/SubmittingPatches;h=689e2371095cc5dfea9927120009341f369159aa;hb=f6f94e2ab1b33f0082ac22d71f66385a60d8157f#l297)

The comments below apply if you can explain the use case to me. The
standard Linux networking stack seems to behave like current RTnet.

On 2011-12-13 17:22, Jesper Christensen wrote:
>
> diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
> index 8b394be..18754cf 100644
> --- a/stack/ipv4/af_inet.c
> +++ b/stack/ipv4/af_inet.c
> @@ -239,7 +239,16 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
>      int i;
>
>
> -    rt_ip_route_del_all(rtdev);    /* cleanup routing table */
> +    /* Only delete our own address if the new address is
> +       on the same subnet */
> +    if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip)
> +        rt_ip_route_del_host(rtdev->local_ip, rtdev);
> +    else
> +        rt_ip_route_del_all(rtdev);    /* cleanup routing table */
> +
> +    /* Delete our loopback route for the device */

The following block should be moved into the selective deletion path
above (as del_all already does it for us).

> +    if ((tmp = rtdev_get_loopback()) != NULL)

Please split up assignment and comparison into two lines (yes, that used
to be done differently in past here as well).

> +        rt_ip_route_del_host(rtdev->local_ip, tmp);

rtdev_dereference(tmp) is missing.

>
>      if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
>          rtdev->local_ip = up_cmd->args.up.ip_addr;

Thanks,
Jan

--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
|
From: Jesper C. <jb...@th...> - 2011-12-13 16:41:48
|
diff --git a/stack/ipv4/af_inet.c b/stack/ipv4/af_inet.c
index 8b394be..18754cf 100644
--- a/stack/ipv4/af_inet.c
+++ b/stack/ipv4/af_inet.c
@@ -239,7 +239,16 @@ static void rt_ip_ifup(struct rtnet_device *rtdev,
int i;
- rt_ip_route_del_all(rtdev); /* cleanup routing table */
+ /* Only delete our own address if the new address is
+ on the same subnet */
+ if (rtdev->broadcast_ip == up_cmd->args.up.broadcast_ip)
+ rt_ip_route_del_host(rtdev->local_ip, rtdev);
+ else
+ rt_ip_route_del_all(rtdev); /* cleanup routing table */
+
+ /* Delete our loopback route for the device */
+ if ((tmp = rtdev_get_loopback()) != NULL)
+ rt_ip_route_del_host(rtdev->local_ip, tmp);
if (up_cmd->args.up.ip_addr != 0xFFFFFFFF) {
rtdev->local_ip = up_cmd->args.up.ip_addr;
--
1.7.5.4
--
/Jesper
|
|
From: Jan K. <jan...@we...> - 2011-11-23 18:01:48
|
On 2011-11-23 09:15, Anders Blomdell wrote:
> On 11/23/2011 12:07 PM, Petr Cervenka wrote:
>> Hello.
>>
>> I'm sending a patch which adds "cards" parameter to the e1000e driver
>> module.
> Aren't commands like:
>
> echo 0000:07:01.0 > /sys/bus/pci/drivers/<old_driver>/unbind
> echo 0000:07:01.0 > /sys/bus/pci/drivers/e1000e/bind
>
> enough?

Yes, and therefore I didn't add it anymore this time.

Jan
|
From: Anders B. <and...@co...> - 2011-11-23 11:15:45
|
On 11/23/2011 12:07 PM, Petr Cervenka wrote:
> Hello.
>
> I'm sending a patch which adds "cards" parameter to the e1000e driver
> module.

Aren't commands like:

echo 0000:07:01.0 > /sys/bus/pci/drivers/<old_driver>/unbind
echo 0000:07:01.0 > /sys/bus/pci/drivers/e1000e/bind

enough?

> Best regards
> Petr Cervenka

--
Anders Blomdell                  Email: and...@co...
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O. Box 118                     Fax:   +46 46 138118
SE-221 00 Lund, Sweden
|
From: Petr C. <gr...@ce...> - 2011-11-23 11:07:49
|
Hello.

I'm sending a patch which adds "cards" parameter to the e1000e driver
module.

Best regards
Petr Cervenka
|
From: Jan K. <jan...@we...> - 2011-11-18 13:03:14
|
On 2011-11-17 11:30, Wolfgang Grandegger wrote:
> From: Wolfgang Grandegger <wg...@de...>
>
> I finally found time to push my IBM EMAC RTnet driver. This series of
> patches adds support for the IBM EMAC controller on AMCC 4xx SOCs.
> For Linux 3.x, two general fixes are needed. It also requires the
> RTDM function rtdm_ratelimit(). I have already sent a corresponding
> patch to the Xenomai mailing list. Have a look at the README for
> further information.

Cool, thanks a lot. Will have a close look once I'm back in the
Northern hemisphere.

Cheers from Brazil,
Jan
|
From: Wolfgang G. <wg...@gr...> - 2011-11-17 13:47:33
|
From: Wolfgang Grandegger <wg...@de...>

I finally found time to push my IBM EMAC RTnet driver. This series of
patches adds support for the IBM EMAC controller on AMCC 4xx SOCs.
For Linux 3.x, two general fixes are needed. It also requires the
RTDM function rtdm_ratelimit(). I have already sent a corresponding
patch to the Xenomai mailing list. Have a look at the README for
further information.

Wolfgang Grandegger (5):
  configure: add support for Linux version 3.x
  Fix issues with RW_LOCK_UNLOCKED for Linux 3.x
  drivers/ibm_newemac: add README and Linux patches for 2.6.36.4 and 3.0.4
  drivers/ibm_newemac: add original files from Linux as first step
  drivers/ibm_newemac: add driver for the IBM EMAC on AMCC 4xx SOCs

 configure.ac                                       |   24 +-
 drivers/GNUmakefile.am                             |    4 +
 drivers/ibm_newemac/GNUmakefile.am                 |   49 +
 drivers/ibm_newemac/Makefile.kbuild                |   10 +
 drivers/ibm_newemac/README                         |   32 +
 drivers/ibm_newemac/core.c                         | 3281 ++++++++++++++++++++
 drivers/ibm_newemac/core.h                         |  472 +++
 .../ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch |  875 ++++++
 .../ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch    |  875 ++++++
 stack/rtmac/rtmac_disc.c                           |    4 +
 stack/rtnet_chrdev.c                               |    4 +
 11 files changed, 5624 insertions(+), 6 deletions(-)
 create mode 100644 drivers/ibm_newemac/GNUmakefile.am
 create mode 100644 drivers/ibm_newemac/Makefile.kbuild
 create mode 100644 drivers/ibm_newemac/README
 create mode 100644 drivers/ibm_newemac/core.c
 create mode 100644 drivers/ibm_newemac/core.h
 create mode 100644 drivers/ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch
 create mode 100644 drivers/ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch

--
1.7.4.1
|
From: Wolfgang G. <wg...@gr...> - 2011-11-17 13:47:32
|
From: Wolfgang Grandegger <wg...@de...>
Signed-off-by: Wolfgang Grandegger <wg...@de...>
---
configure.ac | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/configure.ac b/configure.ac
index b95a805..c50811e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -413,7 +413,7 @@ case "${RTEXT_LINUX_VERSION}" in
AC_MSG_ERROR([*** Unsupported kernel version $RTEXT_LINUX_VERSION - please upgrade at least to 2.4.19])
fi
;;
- 2.6.*)
+ 2.6.*|3.*)
;;
*)
AC_MSG_ERROR([*** Unsupported kernel version $RTEXT_LINUX_VERSION])
@@ -426,7 +426,7 @@ dnl ======================================================================
dnl import settings from RT-extensions
dnl ======================================================================
-# kbuild (linux 2.6) or not
+# kbuild (linux 2.6/3) or not
case "${RTEXT_LINUX_VERSION}" in
2.4.*)
unset CONFIG_KBUILD
@@ -440,7 +440,7 @@ case "${RTEXT_LINUX_VERSION}" in
unset CONFIG_KBUILD_LEGACY
MODULE_SYMVERS=Modules.symvers
;;
- 2.6.*)
+ 2.6.*|3.*)
CONFIG_KBUILD=y
unset CONFIG_KBUILD_LEGACY
MODULE_SYMVERS=Module.symvers
@@ -581,7 +581,7 @@ AC_SUBST(CROSS_COMPILE)
dnl ======================================================================
-dnl decide to build for 2.4 or 2.6 kernel
+dnl decide to build for 2.4 or 2.6/3 kernel
dnl ======================================================================
if test x$CONFIG_KBUILD = x; then
@@ -684,7 +684,7 @@ CPPFLAGS="${ac_save_CPPFLAGS} -D__KERNEL__ -I${RTEXT_LINUX_DIR}/include ${RTEXT_
# headers, depending on the RTAI version which has been
# identified. (rpm)
-if test "${CONFIG_KBUILD}" = "y"; then # building against linux-2.6
+if test "${CONFIG_KBUILD}" = "y"; then # building against linux-2.6/3
BS_CHECK_KHEADERS([rtdm/rtdm_driver.h],
[],
[AC_MSG_ERROR([*** header not found or working, please check RT-extension installation])],
@@ -1437,7 +1437,7 @@ AC_DEFINE_UNQUOTED(RTNET_RTDM_VER,
dnl ======================================================================
-dnl create links to Makefiles used by linux-2.6.x
+dnl create links to Makefiles used by linux 2.6.x or 3.x.x
dnl ======================================================================
if test x$CONFIG_KBUILD = xy; then
--
1.7.4.1
|
|
From: Wolfgang G. <wg...@gr...> - 2011-11-17 13:47:29
|
From: Wolfgang Grandegger <wg...@de...>
The driver requires a separate kernel patch implementing a common
real-time capable Memory Access Layer (MAL). Furthermore, the
PHY interface is also used directly from the Linux kernel.
Signed-off-by: Wolfgang Grandegger <wg...@de...>
---
configure.ac | 12 +
drivers/GNUmakefile.am | 4 +
drivers/ibm_newemac/GNUmakefile.am | 49 ++++
drivers/ibm_newemac/Makefile.kbuild | 10 +
drivers/ibm_newemac/core.c | 443 ++++++++++++++++++++++++-----------
drivers/ibm_newemac/core.h | 24 ++-
6 files changed, 399 insertions(+), 143 deletions(-)
create mode 100644 drivers/ibm_newemac/GNUmakefile.am
create mode 100644 drivers/ibm_newemac/Makefile.kbuild
diff --git a/configure.ac b/configure.ac
index c50811e..6afe868 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1049,6 +1049,16 @@ AC_ARG_ENABLE(igb,
AC_MSG_RESULT([${CONFIG_RTNET_DRV_IGB:-n}])
AM_CONDITIONAL(CONFIG_RTNET_DRV_IGB,[test "$CONFIG_RTNET_DRV_IGB" = "y"])
+AC_MSG_CHECKING([whether to build IBM EMAC (Gigabit) driver])
+AC_ARG_ENABLE(ibm-emac,
+ AS_HELP_STRING([--enable-ibm-emac], [build IBM EMAC driver]),
+ [case "$enableval" in
+ y | yes) CONFIG_RTNET_DRV_IBM_EMAC=y ;;
+ *) CONFIG_RTNET_DRV_IBM_EMAC=n ;;
+ esac])
+AC_MSG_RESULT([${CONFIG_RTNET_DRV_IBM_EMAC:-n}])
+AM_CONDITIONAL(CONFIG_RTNET_DRV_IBM_EMAC,[test "$CONFIG_RTNET_DRV_IBM_EMAC" = "y"])
+
dnl ======================================================================
dnl Stack parameters
dnl ======================================================================
@@ -1448,6 +1458,7 @@ if test x$CONFIG_KBUILD = xy; then
drivers/mpc52xx_fec \
drivers/tulip \
drivers/igb \
+ drivers/ibm_newemac \
drivers/experimental \
drivers/experimental/rt2500 \
drivers/experimental/e1000 \
@@ -1532,6 +1543,7 @@ AC_CONFIG_FILES([ \
drivers/mpc52xx_fec/GNUmakefile \
drivers/tulip/GNUmakefile \
drivers/igb/GNUmakefile \
+ drivers/ibm_newemac/GNUmakefile \
drivers/experimental/GNUmakefile \
drivers/experimental/rt2500/GNUmakefile \
drivers/experimental/e1000/GNUmakefile \
diff --git a/drivers/GNUmakefile.am b/drivers/GNUmakefile.am
index 9ebb290..669a2d6 100644
--- a/drivers/GNUmakefile.am
+++ b/drivers/GNUmakefile.am
@@ -20,6 +20,10 @@ if CONFIG_RTNET_DRV_IGB
OPTDIRS += igb
endif
+if CONFIG_RTNET_DRV_IBM_EMAC
+OPTDIRS += ibm_newemac
+endif
+
SUBDIRS = experimental $(OPTDIRS)
moduledir = $(DESTDIR)$(RTNET_MODULE_DIR)
diff --git a/drivers/ibm_newemac/GNUmakefile.am b/drivers/ibm_newemac/GNUmakefile.am
new file mode 100644
index 0000000..c23c773
--- /dev/null
+++ b/drivers/ibm_newemac/GNUmakefile.am
@@ -0,0 +1,49 @@
+moduledir = $(DESTDIR)$(RTNET_MODULE_DIR)
+modext = $(RTNET_MODULE_EXT)
+
+EXTRA_LIBRARIES = libkernel_ibm_emac.a
+
+libkernel_ibm_emac_a_CPPFLAGS = \
+ $(RTEXT_KMOD_CFLAGS) \
+ -I$(top_srcdir)/stack/include \
+ -I$(top_builddir)/stack/include
+
+libkernel_ibm_emac_a_SOURCES = \
+ core.h \
+ core.c
+
+# ethtool
+
+OBJS = rt_ibm_emac$(modext)
+
+rt_ibm_emac.o: libkernel_ibm_emac.a
+ $(LD) --whole-archive $< -r -o $@
+
+all-local: all-local$(modext)
+
+# 2.4 build
+all-local.o: $(OBJS)
+
+# 2.6 build
+all-local.ko: @RTNET_KBUILD_ENV@
+all-local.ko: $(libkernel_ibm_emac_a_SOURCES) FORCE
+ $(RTNET_KBUILD_CMD)
+
+install-exec-local: $(OBJS)
+ $(mkinstalldirs) $(moduledir)
+ $(INSTALL_DATA) $^ $(moduledir)
+
+uninstall-local:
+ for MOD in $(OBJS); do $(RM) $(moduledir)/$$MOD; done
+
+clean-local: $(libkernel_ibm_emac_a_SOURCES)
+ $(RTNET_KBUILD_CLEAN)
+
+distclean-local:
+ $(RTNET_KBUILD_DISTCLEAN)
+
+EXTRA_DIST = Makefile.kbuild
+
+DISTCLEANFILES = Makefile
+
+.PHONY: FORCE
diff --git a/drivers/ibm_newemac/Makefile.kbuild b/drivers/ibm_newemac/Makefile.kbuild
new file mode 100644
index 0000000..57b7bca
--- /dev/null
+++ b/drivers/ibm_newemac/Makefile.kbuild
@@ -0,0 +1,10 @@
+EXTRA_CFLAGS += \
+ -Idrivers/net/ibm_newemac \
+ $(rtext_kmod_cflags) \
+ -I$(top_srcdir)/stack/include \
+ -I$(top_builddir)/stack/include \
+ -I$(srcdir)
+
+obj-m += $(build_targets)
+
+rt_ibm_emac-objs := $(build_objs)
diff --git a/drivers/ibm_newemac/core.c b/drivers/ibm_newemac/core.c
index 519e19e..357f1ab 100644
--- a/drivers/ibm_newemac/core.c
+++ b/drivers/ibm_newemac/core.c
@@ -32,7 +32,6 @@
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
#include <linux/crc32.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
@@ -67,9 +66,9 @@
* at the same time and didn't come up with code I liked :(. --ebs
*/
-#define DRV_NAME "emac"
+#define DRV_NAME "rt-emac"
#define DRV_VERSION "3.54"
-#define DRV_DESC "PPC 4xx OCP EMAC driver"
+#define DRV_DESC "RTnet: PPC 4xx OCP EMAC driver"
MODULE_DESCRIPTION(DRV_DESC);
MODULE_AUTHOR
@@ -89,7 +88,7 @@ MODULE_LICENSE("GPL");
/* If packet size is less than this number, we allocate small skb and copy packet
* contents into it instead of just sending original big skb up
*/
-#define EMAC_RX_COPY_THRESH CONFIG_IBM_NEW_EMAC_RX_COPY_THRESHOLD
+#define EMAC_RX_COPY_THRESH 0 //256
/* Since multiple EMACs share MDIO lines in various ways, we need
* to avoid re-using the same PHY ID in cases where the arch didn't
@@ -99,7 +98,7 @@ MODULE_LICENSE("GPL");
* EMAC "sets" (multiple ASICs containing several EMACs) though we can
* probably require in that case to have explicit PHY IDs in the device-tree
*/
-static u32 busy_phy_map;
+extern u32 busy_phy_map;
static DEFINE_MUTEX(emac_phy_map_lock);
/* This is the wait queue used to wait on any event related to probe, that
@@ -174,9 +173,11 @@ static inline void emac_rx_clk_default(struct emac_instance *dev)
#define STOP_TIMEOUT_1000 13
#define STOP_TIMEOUT_1000_JUMBO 73
+#ifdef WITH_MULTICAST
static unsigned char default_mcast_addr[] = {
0x01, 0x80, 0xC2, 0x00, 0x00, 0x01
};
+#endif
/* Please, keep in sync with struct ibm_emac_stats/ibm_emac_error_stats */
static const char emac_stats_keys[EMAC_ETHTOOL_STATS_COUNT][ETH_GSTRING_LEN] = {
@@ -197,9 +198,15 @@ static const char emac_stats_keys[EMAC_ETHTOOL_STATS_COUNT][ETH_GSTRING_LEN] = {
"tx_errors"
};
+#ifdef ORIGINAL
static irqreturn_t emac_irq(int irq, void *dev_instance);
+#else
+static int emac_irq(rtdm_irq_t *irq_handle);
+#endif
static void emac_clean_tx_ring(struct emac_instance *dev);
+#ifdef WITH_MULTICAST
static void __emac_set_multicast_list(struct emac_instance *dev);
+#endif
static inline int emac_phy_supports_gige(int phy_mode)
{
@@ -300,27 +307,42 @@ static void emac_rx_disable(struct emac_instance *dev)
static inline void emac_netif_stop(struct emac_instance *dev)
{
+#ifdef ORIGINAL
netif_tx_lock_bh(dev->ndev);
netif_addr_lock(dev->ndev);
+#endif
+#ifdef WITH_MULTICAST
dev->no_mcast = 1;
+#endif
+#ifdef ORIGINAL
netif_addr_unlock(dev->ndev);
netif_tx_unlock_bh(dev->ndev);
dev->ndev->trans_start = jiffies; /* prevent tx timeout */
mal_poll_disable(dev->mal, &dev->commac);
netif_tx_disable(dev->ndev);
+#else
+ mal_poll_disable(dev->mal, &dev->commac);
+ rtnetif_stop_queue(dev->ndev);
+#endif
}
static inline void emac_netif_start(struct emac_instance *dev)
{
+#ifdef ORIGINAL
netif_tx_lock_bh(dev->ndev);
netif_addr_lock(dev->ndev);
+#endif
+#ifdef WITH_MULTICAST
dev->no_mcast = 0;
- if (dev->mcast_pending && netif_running(dev->ndev))
+ if (dev->mcast_pending && rtnetif_running(dev->ndev))
__emac_set_multicast_list(dev);
+#endif
+#ifdef ORIGINAL
netif_addr_unlock(dev->ndev);
netif_tx_unlock_bh(dev->ndev);
+#endif
- netif_wake_queue(dev->ndev);
+ rtnetif_wake_queue(dev->ndev);
/* NOTE: unconditional netif_wake_queue is only appropriate
* so long as all callers are assured to have free tx slots
@@ -385,6 +407,7 @@ static int emac_reset(struct emac_instance *dev)
}
}
+#ifdef ORIGINAL
static void emac_hash_mc(struct emac_instance *dev)
{
const int regs = EMAC_XAHT_REGS(dev);
@@ -412,10 +435,11 @@ static void emac_hash_mc(struct emac_instance *dev)
for (i = 0; i < regs; i++)
out_be32(gaht_base + i, gaht_temp[i]);
}
+#endif
-static inline u32 emac_iff2rmr(struct net_device *ndev)
+static inline u32 emac_iff2rmr(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
u32 r;
r = EMAC_RMR_SP | EMAC_RMR_SFCS | EMAC_RMR_IAE | EMAC_RMR_BAE;
@@ -427,11 +451,13 @@ static inline u32 emac_iff2rmr(struct net_device *ndev)
if (ndev->flags & IFF_PROMISC)
r |= EMAC_RMR_PME;
+#ifdef ORIGINAL
else if (ndev->flags & IFF_ALLMULTI ||
(netdev_mc_count(ndev) > EMAC_XAHT_SLOTS(dev)))
r |= EMAC_RMR_PMME;
else if (!netdev_mc_empty(ndev))
r |= EMAC_RMR_MAE;
+#endif
return r;
}
@@ -533,8 +559,8 @@ static inline u32 emac_calc_rwmr(struct emac_instance *dev,
static int emac_configure(struct emac_instance *dev)
{
struct emac_regs __iomem *p = dev->emacp;
- struct net_device *ndev = dev->ndev;
- int tx_size, rx_size, link = netif_carrier_ok(dev->ndev);
+ struct rtnet_device *ndev = dev->ndev;
+ int tx_size, rx_size, link = rtnetif_carrier_ok(dev->ndev);
u32 r, mr1 = 0;
DBG(dev, "configure" NL);
@@ -633,8 +659,10 @@ static int emac_configure(struct emac_instance *dev)
/* Receive mode register */
r = emac_iff2rmr(ndev);
+#ifdef ORIGINAL
if (r & EMAC_RMR_MAE)
emac_hash_mc(dev);
+#endif
out_be32(&p->rmr, r);
/* FIFOs thresholds */
@@ -736,15 +764,16 @@ static void emac_reset_work(struct work_struct *work)
mutex_unlock(&dev->link_lock);
}
-static void emac_tx_timeout(struct net_device *ndev)
+#ifdef ORIGINAL
+static void emac_tx_timeout(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
DBG(dev, "tx_timeout" NL);
schedule_work(&dev->reset_work);
}
-
+#endif
static inline int emac_phy_done(struct emac_instance *dev, u32 stacr)
{
@@ -888,9 +917,9 @@ static void __emac_mdio_write(struct emac_instance *dev, u8 id, u8 reg,
mutex_unlock(&dev->mdio_lock);
}
-static int emac_mdio_read(struct net_device *ndev, int id, int reg)
+static int emac_mdio_read(struct rtnet_device *ndev, int id, int reg)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
int res;
res = __emac_mdio_read((dev->mdio_instance &&
@@ -900,9 +929,9 @@ static int emac_mdio_read(struct net_device *ndev, int id, int reg)
return res;
}
-static void emac_mdio_write(struct net_device *ndev, int id, int reg, int val)
+static void emac_mdio_write(struct rtnet_device *ndev, int id, int reg, int val)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
__emac_mdio_write((dev->mdio_instance &&
dev->phy.gpcs_address != id) ?
@@ -910,6 +939,7 @@ static void emac_mdio_write(struct net_device *ndev, int id, int reg, int val)
(u8) id, (u8) reg, (u16) val);
}
+#ifdef ORIGINAL
/* Tx lock BH */
static void __emac_set_multicast_list(struct emac_instance *dev)
{
@@ -944,13 +974,13 @@ static void __emac_set_multicast_list(struct emac_instance *dev)
}
/* Tx lock BH */
-static void emac_set_multicast_list(struct net_device *ndev)
+static void emac_set_multicast_list(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
DBG(dev, "multicast" NL);
- BUG_ON(!netif_running(dev->ndev));
+ BUG_ON(!rtnetif_running(dev->ndev));
if (dev->no_mcast) {
dev->mcast_pending = 1;
@@ -958,7 +988,9 @@ static void emac_set_multicast_list(struct net_device *ndev)
}
__emac_set_multicast_list(dev);
}
+#endif
+#ifdef ORIGINAL
static int emac_resize_rx_ring(struct emac_instance *dev, int new_mtu)
{
int rx_sync_size = emac_rx_sync_size(new_mtu);
@@ -972,7 +1004,7 @@ static int emac_resize_rx_ring(struct emac_instance *dev, int new_mtu)
if (dev->rx_sg_skb) {
++dev->estats.rx_dropped_resize;
- dev_kfree_skb(dev->rx_sg_skb);
+ kfree_rtskb(dev->rx_sg_skb);
dev->rx_sg_skb = NULL;
}
@@ -995,16 +1027,19 @@ static int emac_resize_rx_ring(struct emac_instance *dev, int new_mtu)
/* Second pass, allocate new skbs */
for (i = 0; i < NUM_RX_BUFF; ++i) {
- struct sk_buff *skb = alloc_skb(rx_skb_size, GFP_ATOMIC);
+ struct rtskb *skb = dev_alloc_rtskb(rx_skb_size,
+ &dev->skb_pool);
if (!skb) {
ret = -ENOMEM;
goto oom;
}
BUG_ON(!dev->rx_skb[i]);
- dev_kfree_skb(dev->rx_skb[i]);
+ kfree_rtskb(dev->rx_skb[i]);
+
+ skb->rtdev = dev->ndev;
- skb_reserve(skb, EMAC_RX_SKB_HEADROOM + 2);
+ rtskb_reserve(skb, EMAC_RX_SKB_HEADROOM + 2);
dev->rx_desc[i].data_ptr =
dma_map_single(&dev->ofdev->dev, skb->data - 2, rx_sync_size,
DMA_FROM_DEVICE) + 2;
@@ -1032,11 +1067,13 @@ static int emac_resize_rx_ring(struct emac_instance *dev, int new_mtu)
return ret;
}
+#endif
+#ifdef ORIGINAL
/* Process ctx, rtnl_lock semaphore */
-static int emac_change_mtu(struct net_device *ndev, int new_mtu)
+static int emac_change_mtu(struct rtnet_device *ndev, int new_mtu)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
int ret = 0;
if (new_mtu < EMAC_MIN_MTU || new_mtu > dev->max_mtu)
@@ -1044,7 +1081,7 @@ static int emac_change_mtu(struct net_device *ndev, int new_mtu)
DBG(dev, "change_mtu(%d)" NL, new_mtu);
- if (netif_running(ndev)) {
+ if (rtnetif_running(ndev)) {
/* Check if we really need to reinitialize RX ring */
if (emac_rx_skb_size(ndev->mtu) != emac_rx_skb_size(new_mtu))
ret = emac_resize_rx_ring(dev, new_mtu);
@@ -1058,6 +1095,7 @@ static int emac_change_mtu(struct net_device *ndev, int new_mtu)
return ret;
}
+#endif
static void emac_clean_tx_ring(struct emac_instance *dev)
{
@@ -1065,7 +1103,7 @@ static void emac_clean_tx_ring(struct emac_instance *dev)
for (i = 0; i < NUM_TX_BUFF; ++i) {
if (dev->tx_skb[i]) {
- dev_kfree_skb(dev->tx_skb[i]);
+ kfree_rtskb(dev->tx_skb[i]);
dev->tx_skb[i] = NULL;
if (dev->tx_desc[i].ctrl & MAL_TX_CTRL_READY)
++dev->estats.tx_dropped;
@@ -1082,13 +1120,13 @@ static void emac_clean_rx_ring(struct emac_instance *dev)
for (i = 0; i < NUM_RX_BUFF; ++i)
if (dev->rx_skb[i]) {
dev->rx_desc[i].ctrl = 0;
- dev_kfree_skb(dev->rx_skb[i]);
+ kfree_rtskb(dev->rx_skb[i]);
dev->rx_skb[i] = NULL;
dev->rx_desc[i].data_ptr = 0;
}
if (dev->rx_sg_skb) {
- dev_kfree_skb(dev->rx_sg_skb);
+ kfree_rtskb(dev->rx_sg_skb);
dev->rx_sg_skb = NULL;
}
}
@@ -1096,14 +1134,18 @@ static void emac_clean_rx_ring(struct emac_instance *dev)
static inline int emac_alloc_rx_skb(struct emac_instance *dev, int slot,
gfp_t flags)
{
- struct sk_buff *skb = alloc_skb(dev->rx_skb_size, flags);
+ struct rtskb *skb;
+
+ skb = dev_alloc_rtskb(dev->rx_skb_size, &dev->skb_pool);
if (unlikely(!skb))
return -ENOMEM;
dev->rx_skb[slot] = skb;
dev->rx_desc[slot].data_len = 0;
- skb_reserve(skb, EMAC_RX_SKB_HEADROOM + 2);
+ skb->rtdev = dev->ndev;
+
+ rtskb_reserve(skb, EMAC_RX_SKB_HEADROOM + 2);
dev->rx_desc[slot].data_ptr =
dma_map_single(&dev->ofdev->dev, skb->data - 2, dev->rx_sync_size,
DMA_FROM_DEVICE) + 2;
@@ -1116,7 +1158,7 @@ static inline int emac_alloc_rx_skb(struct emac_instance *dev, int slot,
static void emac_print_link_status(struct emac_instance *dev)
{
- if (netif_carrier_ok(dev->ndev))
+ if (rtnetif_carrier_ok(dev->ndev))
printk(KERN_INFO "%s: link is up, %d %s%s\n",
dev->ndev->name, dev->phy.speed,
dev->phy.duplex == DUPLEX_FULL ? "FDX" : "HDX",
@@ -1127,15 +1169,18 @@ static void emac_print_link_status(struct emac_instance *dev)
}
/* Process ctx, rtnl_lock semaphore */
-static int emac_open(struct net_device *ndev)
+static int emac_open(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
int err, i;
DBG(dev, "open" NL);
+ rt_stack_connect(ndev, &STACK_manager);
+
/* Setup error IRQ handler */
- err = request_irq(dev->emac_irq, emac_irq, 0, "EMAC", dev);
+ err = rtdm_irq_request(&dev->emac_irq_handle, dev->emac_irq,
+ emac_irq, 0, "EMAC", dev);
if (err) {
printk(KERN_ERR "%s: failed to request IRQ %d\n",
ndev->name, dev->emac_irq);
@@ -1164,11 +1209,11 @@ static int emac_open(struct net_device *ndev)
if (dev->phy.def->ops->poll_link(&dev->phy)) {
dev->phy.def->ops->read_link(&dev->phy);
emac_rx_clk_default(dev);
- netif_carrier_on(dev->ndev);
+ rtnetif_carrier_on(dev->ndev);
link_poll_interval = PHY_POLL_LINK_ON;
} else {
emac_rx_clk_tx(dev);
- netif_carrier_off(dev->ndev);
+ rtnetif_carrier_off(dev->ndev);
link_poll_interval = PHY_POLL_LINK_OFF;
}
dev->link_polling = 1;
@@ -1176,10 +1221,12 @@ static int emac_open(struct net_device *ndev)
schedule_delayed_work(&dev->link_work, link_poll_interval);
emac_print_link_status(dev);
} else
- netif_carrier_on(dev->ndev);
+ rtnetif_carrier_on(dev->ndev);
+#ifdef ORIGINAL
/* Required for Pause packet support in EMAC */
dev_mc_add_global(ndev, default_mcast_addr);
+#endif
emac_configure(dev);
mal_poll_add(dev->mal, &dev->commac);
@@ -1195,7 +1242,7 @@ static int emac_open(struct net_device *ndev)
return 0;
oom:
emac_clean_rx_ring(dev);
- free_irq(dev->emac_irq, dev);
+ rtdm_irq_free(&dev->emac_irq_handle);
return -ENOMEM;
}
@@ -1247,12 +1294,12 @@ static void emac_link_timer(struct work_struct *work)
goto bail;
if (dev->phy.def->ops->poll_link(&dev->phy)) {
- if (!netif_carrier_ok(dev->ndev)) {
+ if (!rtnetif_carrier_ok(dev->ndev)) {
emac_rx_clk_default(dev);
/* Get new link parameters */
dev->phy.def->ops->read_link(&dev->phy);
- netif_carrier_on(dev->ndev);
+ rtnetif_carrier_on(dev->ndev);
emac_netif_stop(dev);
emac_full_tx_reset(dev);
emac_netif_start(dev);
@@ -1260,10 +1307,14 @@ static void emac_link_timer(struct work_struct *work)
}
link_poll_interval = PHY_POLL_LINK_ON;
} else {
- if (netif_carrier_ok(dev->ndev)) {
+ if (rtnetif_carrier_ok(dev->ndev)) {
emac_rx_clk_tx(dev);
- netif_carrier_off(dev->ndev);
+ rtnetif_carrier_off(dev->ndev);
+#ifdef ORIGINAL
netif_tx_disable(dev->ndev);
+#else
+ rtnetif_stop_queue(dev->ndev);
+#endif
emac_reinitialize(dev);
emac_print_link_status(dev);
}
@@ -1274,33 +1325,45 @@ static void emac_link_timer(struct work_struct *work)
mutex_unlock(&dev->link_lock);
}
+#ifdef ORIGINAL
static void emac_force_link_update(struct emac_instance *dev)
{
- netif_carrier_off(dev->ndev);
+ rtnetif_carrier_off(dev->ndev);
smp_rmb();
if (dev->link_polling) {
+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,37)
cancel_rearming_delayed_work(&dev->link_work);
+#else
+ cancel_delayed_work_sync(&dev->link_work);
+#endif
if (dev->link_polling)
schedule_delayed_work(&dev->link_work, PHY_POLL_LINK_OFF);
}
}
+#endif
/* Process ctx, rtnl_lock semaphore */
-static int emac_close(struct net_device *ndev)
+static int emac_close(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
DBG(dev, "close" NL);
if (dev->phy.address >= 0) {
dev->link_polling = 0;
+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,37)
cancel_rearming_delayed_work(&dev->link_work);
+#else
+ cancel_delayed_work_sync(&dev->link_work);
+#endif
}
mutex_lock(&dev->link_lock);
emac_netif_stop(dev);
dev->opened = 0;
mutex_unlock(&dev->link_lock);
+ rt_stack_disconnect(ndev);
+
emac_rx_disable(dev);
emac_tx_disable(dev);
mal_disable_rx_channel(dev->mal, dev->mal_rx_chan);
@@ -1312,13 +1375,13 @@ static int emac_close(struct net_device *ndev)
free_irq(dev->emac_irq, dev);
- netif_carrier_off(ndev);
+ rtnetif_carrier_off(ndev);
return 0;
}
static inline u16 emac_tx_csum(struct emac_instance *dev,
- struct sk_buff *skb)
+ struct rtskb *skb)
{
if (emac_has_feature(dev, EMAC_FTR_HAS_TAH) &&
(skb->ip_summed == CHECKSUM_PARTIAL)) {
@@ -1331,7 +1394,8 @@ static inline u16 emac_tx_csum(struct emac_instance *dev,
static inline int emac_xmit_finish(struct emac_instance *dev, int len)
{
struct emac_regs __iomem *p = dev->emacp;
- struct net_device *ndev = dev->ndev;
+ struct rtnet_device *ndev = dev->ndev;
+
/* Send the packet out. If the if makes a significant perf
* difference, then we can store the TMR0 value in "dev"
@@ -1343,11 +1407,13 @@ static inline int emac_xmit_finish(struct emac_instance *dev, int len)
out_be32(&p->tmr0, EMAC_TMR0_XMIT);
if (unlikely(++dev->tx_cnt == NUM_TX_BUFF)) {
- netif_stop_queue(ndev);
+ rtnetif_stop_queue(ndev);
DBG2(dev, "stopped TX queue" NL);
}
+#ifdef ORIGINAL
ndev->trans_start = jiffies;
+#endif
++dev->stats.tx_packets;
dev->stats.tx_bytes += len;
@@ -1355,15 +1421,23 @@ static inline int emac_xmit_finish(struct emac_instance *dev, int len)
}
/* Tx lock BH */
-static int emac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+static int emac_start_xmit(struct rtskb *skb, struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
unsigned int len = skb->len;
- int slot;
+ int slot, err;
+ u16 ctrl;
+ rtdm_lockctx_t context;
+
+ rtdm_lock_get_irqsave(&dev->lock, context);
- u16 ctrl = EMAC_TX_CTRL_GFCS | EMAC_TX_CTRL_GP | MAL_TX_CTRL_READY |
+ ctrl = EMAC_TX_CTRL_GFCS | EMAC_TX_CTRL_GP | MAL_TX_CTRL_READY |
MAL_TX_CTRL_LAST | emac_tx_csum(dev, skb);
+ if (skb->xmit_stamp)
+ *skb->xmit_stamp = cpu_to_be64(rtdm_clock_read() +
+ *skb->xmit_stamp);
+
slot = dev->tx_slot++;
if (dev->tx_slot == NUM_TX_BUFF) {
dev->tx_slot = 0;
@@ -1380,7 +1454,11 @@ static int emac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
wmb();
dev->tx_desc[slot].ctrl = ctrl;
- return emac_xmit_finish(dev, len);
+ err = emac_xmit_finish(dev, len);
+
+ rtdm_lock_put_irqrestore(&dev->lock, context);
+
+ return err;
}
static inline int emac_xmit_split(struct emac_instance *dev, int slot,
@@ -1412,13 +1490,14 @@ static inline int emac_xmit_split(struct emac_instance *dev, int slot,
return slot;
}
+#ifdef ORIGINAL
/* Tx lock BH disabled (SG version for TAH equipped EMACs) */
-static int emac_start_xmit_sg(struct sk_buff *skb, struct net_device *ndev)
+static int emac_start_xmit_sg(struct rtskb *skb, struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
int nr_frags = skb_shinfo(skb)->nr_frags;
int len = skb->len, chunk;
- int slot, i;
+ int slot, i, err;
u16 ctrl;
u32 pd;
@@ -1476,7 +1555,7 @@ static int emac_start_xmit_sg(struct sk_buff *skb, struct net_device *ndev)
dev->tx_desc[dev->tx_slot].ctrl = ctrl;
dev->tx_slot = (slot + 1) % NUM_TX_BUFF;
- return emac_xmit_finish(dev, skb->len);
+ return emac_xmit_finish(dev, skb->len);
undo_frame:
/* Well, too bad. Our previous estimation was overly optimistic.
@@ -1491,10 +1570,11 @@ static int emac_start_xmit_sg(struct sk_buff *skb, struct net_device *ndev)
++dev->estats.tx_undo;
stop_queue:
- netif_stop_queue(ndev);
+ rtnetif_stop_queue(ndev);
DBG2(dev, "stopped TX queue" NL);
return NETDEV_TX_BUSY;
}
+#endif
/* Tx lock BHs */
static void emac_parse_tx_error(struct emac_instance *dev, u16 ctrl)
@@ -1528,6 +1608,7 @@ static void emac_poll_tx(void *param)
{
struct emac_instance *dev = param;
u32 bad_mask;
+ rtdm_lockctx_t context;
DBG2(dev, "poll_tx, %d %d" NL, dev->tx_cnt, dev->ack_slot);
@@ -1536,18 +1617,22 @@ static void emac_poll_tx(void *param)
else
bad_mask = EMAC_IS_BAD_TX;
+#ifdef ORIGINAL
netif_tx_lock_bh(dev->ndev);
+#else
+ rtdm_lock_get_irqsave(&dev->lock, context);
+#endif
if (dev->tx_cnt) {
u16 ctrl;
int slot = dev->ack_slot, n = 0;
again:
ctrl = dev->tx_desc[slot].ctrl;
if (!(ctrl & MAL_TX_CTRL_READY)) {
- struct sk_buff *skb = dev->tx_skb[slot];
+ struct rtskb *skb = dev->tx_skb[slot];
++n;
if (skb) {
- dev_kfree_skb(skb);
+ kfree_rtskb(skb);
dev->tx_skb[slot] = NULL;
}
slot = (slot + 1) % NUM_TX_BUFF;
@@ -1560,20 +1645,24 @@ static void emac_poll_tx(void *param)
}
if (n) {
dev->ack_slot = slot;
- if (netif_queue_stopped(dev->ndev) &&
+ if (rtnetif_queue_stopped(dev->ndev) &&
dev->tx_cnt < EMAC_TX_WAKEUP_THRESH)
- netif_wake_queue(dev->ndev);
+ rtnetif_wake_queue(dev->ndev);
- DBG2(dev, "tx %d pkts" NL, n);
+ DBG2(dev, "tx %d pkts, slot %d" NL, n, slot);
}
}
+#ifdef ORIGINAL
netif_tx_unlock_bh(dev->ndev);
+#else
+ rtdm_lock_put_irqrestore(&dev->lock, context);
+#endif
}
static inline void emac_recycle_rx_skb(struct emac_instance *dev, int slot,
int len)
{
- struct sk_buff *skb = dev->rx_skb[slot];
+ struct rtskb *skb = dev->rx_skb[slot];
DBG2(dev, "recycle %d %d" NL, slot, len);
@@ -1615,7 +1704,7 @@ static void emac_parse_rx_error(struct emac_instance *dev, u16 ctrl)
}
static inline void emac_rx_csum(struct emac_instance *dev,
- struct sk_buff *skb, u16 ctrl)
+ struct rtskb *skb, u16 ctrl)
{
#ifdef CONFIG_IBM_NEW_EMAC_TAH
if (!ctrl && dev->tah_dev) {
@@ -1633,12 +1722,12 @@ static inline int emac_rx_sg_append(struct emac_instance *dev, int slot)
if (unlikely(tot_len + 2 > dev->rx_skb_size)) {
++dev->estats.rx_dropped_mtu;
- dev_kfree_skb(dev->rx_sg_skb);
+ kfree_rtskb(dev->rx_sg_skb);
dev->rx_sg_skb = NULL;
} else {
- cacheable_memcpy(skb_tail_pointer(dev->rx_sg_skb),
+ cacheable_memcpy(rtskb_tail_pointer(dev->rx_sg_skb),
dev->rx_skb[slot]->data, len);
- skb_put(dev->rx_sg_skb, len);
+ rtskb_put(dev->rx_sg_skb, len);
emac_recycle_rx_skb(dev, slot, len);
return 0;
}
@@ -1658,7 +1747,7 @@ static int emac_poll_rx(void *param, int budget)
again:
while (budget > 0) {
int len;
- struct sk_buff *skb;
+ struct rtskb *skb;
u16 ctrl = dev->rx_desc[slot].ctrl;
if (ctrl & MAL_RX_CTRL_EMPTY)
@@ -1687,12 +1776,15 @@ static int emac_poll_rx(void *param, int budget)
}
if (len && len < EMAC_RX_COPY_THRESH) {
- struct sk_buff *copy_skb =
- alloc_skb(len + EMAC_RX_SKB_HEADROOM + 2, GFP_ATOMIC);
+ struct rtskb *copy_skb =
+ dev_alloc_rtskb(len + EMAC_RX_SKB_HEADROOM + 2,
+ &dev->skb_pool);
if (unlikely(!copy_skb))
goto oom;
- skb_reserve(copy_skb, EMAC_RX_SKB_HEADROOM + 2);
+ copy_skb->rtdev = dev->ndev;
+
+ rtskb_reserve(copy_skb, EMAC_RX_SKB_HEADROOM + 2);
cacheable_memcpy(copy_skb->data - 2, skb->data - 2,
len + 2);
emac_recycle_rx_skb(dev, slot, len);
@@ -1700,13 +1792,18 @@ static int emac_poll_rx(void *param, int budget)
} else if (unlikely(emac_alloc_rx_skb(dev, slot, GFP_ATOMIC)))
goto oom;
- skb_put(skb, len);
+ rtskb_put(skb, len);
push_packet:
- skb->protocol = eth_type_trans(skb, dev->ndev);
+ skb->protocol = rt_eth_type_trans(skb, dev->ndev);
+ skb->time_stamp = dev->mal->time_stamp;
emac_rx_csum(dev, skb, ctrl);
- if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
+#ifdef ORIGINAL
+ if (unlikely(rtnetif_receive_skb(skb) == NET_RX_DROP))
++dev->estats.rx_dropped_stack;
+#else
+ rtnetif_rx(skb);
+#endif
next:
++dev->stats.rx_packets;
skip:
@@ -1724,7 +1821,7 @@ static int emac_poll_rx(void *param, int budget)
emac_recycle_rx_skb(dev, slot, 0);
} else {
dev->rx_sg_skb = skb;
- skb_put(skb, len);
+ rtskb_put(skb, len);
}
} else if (!emac_rx_sg_append(dev, slot) &&
(ctrl & MAL_RX_CTRL_LAST)) {
@@ -1736,7 +1833,7 @@ static int emac_poll_rx(void *param, int budget)
if (unlikely(ctrl && ctrl != EMAC_RX_TAH_BAD_CSUM)) {
emac_parse_rx_error(dev, ctrl);
++dev->estats.rx_dropped_error;
- dev_kfree_skb(skb);
+ kfree_rtskb(skb);
len = 0;
} else
goto push_packet;
@@ -1766,7 +1863,7 @@ static int emac_poll_rx(void *param, int budget)
if (dev->rx_sg_skb) {
DBG2(dev, "dropping partial rx packet" NL);
++dev->estats.rx_dropped_error;
- dev_kfree_skb(dev->rx_sg_skb);
+ kfree_rtskb(dev->rx_sg_skb);
dev->rx_sg_skb = NULL;
}
@@ -1775,6 +1872,9 @@ static int emac_poll_rx(void *param, int budget)
emac_rx_enable(dev);
dev->rx_slot = 0;
}
+
+ if (received)
+ rt_mark_stack_mgr(dev->ndev);
return received;
}
@@ -1817,14 +1917,17 @@ static void emac_rxde(void *param)
}
/* Hard IRQ */
-static irqreturn_t emac_irq(int irq, void *dev_instance)
+static int emac_irq(rtdm_irq_t *irq_handle)
{
- struct emac_instance *dev = dev_instance;
+ struct rtnet_device *netdev = rtdm_irq_get_arg(irq_handle,
+ struct rtnet_device);
+ struct emac_instance *dev = rtnetdev_priv(netdev);
struct emac_regs __iomem *p = dev->emacp;
struct emac_error_stats *st = &dev->estats;
u32 isr;
+ rtdm_lockctx_t context;
- spin_lock(&dev->lock);
+ rtdm_lock_get_irqsave(&dev->lock, context);
isr = in_be32(&p->isr);
out_be32(&p->isr, isr);
@@ -1862,23 +1965,23 @@ static irqreturn_t emac_irq(int irq, void *dev_instance)
if (isr & EMAC_ISR_TE)
++st->tx_errors;
- spin_unlock(&dev->lock);
+ rtdm_lock_put_irqrestore(&dev->lock, context);
- return IRQ_HANDLED;
+ return RTDM_IRQ_HANDLED;
}
-static struct net_device_stats *emac_stats(struct net_device *ndev)
+static struct net_device_stats *emac_stats(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
struct emac_stats *st = &dev->stats;
struct emac_error_stats *est = &dev->estats;
struct net_device_stats *nst = &dev->nstats;
- unsigned long flags;
+ rtdm_lockctx_t context;
DBG2(dev, "stats" NL);
/* Compute "legacy" statistics */
- spin_lock_irqsave(&dev->lock, flags);
+ rtdm_lock_get_irqsave(&dev->lock, context);
nst->rx_packets = (unsigned long)st->rx_packets;
nst->rx_bytes = (unsigned long)st->rx_bytes;
nst->tx_packets = (unsigned long)st->tx_packets;
@@ -1916,7 +2019,7 @@ static struct net_device_stats *emac_stats(struct net_device *ndev)
est->tx_bd_excessive_collisions +
est->tx_bd_late_collision +
est->tx_bd_multple_collisions);
- spin_unlock_irqrestore(&dev->lock, flags);
+ rtdm_lock_put_irqrestore(&dev->lock, context);
return nst;
}
@@ -1934,11 +2037,12 @@ static struct mal_commac_ops emac_commac_sg_ops = {
.rxde = &emac_rxde,
};
+#ifdef ORIGINAL
/* Ethtool support */
-static int emac_ethtool_get_settings(struct net_device *ndev,
+static int emac_ethtool_get_settings(struct rtnet_device *ndev,
struct ethtool_cmd *cmd)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
cmd->supported = dev->phy.features;
cmd->port = PORT_MII;
@@ -1956,10 +2060,10 @@ static int emac_ethtool_get_settings(struct net_device *ndev,
return 0;
}
-static int emac_ethtool_set_settings(struct net_device *ndev,
+static int emac_ethtool_set_settings(struct rtnet_device *ndev,
struct ethtool_cmd *cmd)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
u32 f = dev->phy.features;
DBG(dev, "set_settings(%d, %d, %d, 0x%08x)" NL,
@@ -2027,17 +2131,17 @@ static int emac_ethtool_set_settings(struct net_device *ndev,
return 0;
}
-static void emac_ethtool_get_ringparam(struct net_device *ndev,
+static void emac_ethtool_get_ringparam(struct rtnet_device *ndev,
struct ethtool_ringparam *rp)
{
rp->rx_max_pending = rp->rx_pending = NUM_RX_BUFF;
rp->tx_max_pending = rp->tx_pending = NUM_TX_BUFF;
}
-static void emac_ethtool_get_pauseparam(struct net_device *ndev,
+static void emac_ethtool_get_pauseparam(struct rtnet_device *ndev,
struct ethtool_pauseparam *pp)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
mutex_lock(&dev->link_lock);
if ((dev->phy.features & SUPPORTED_Autoneg) &&
@@ -2053,9 +2157,9 @@ static void emac_ethtool_get_pauseparam(struct net_device *ndev,
mutex_unlock(&dev->link_lock);
}
-static u32 emac_ethtool_get_rx_csum(struct net_device *ndev)
+static u32 emac_ethtool_get_rx_csum(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
return dev->tah_dev != NULL;
}
@@ -2070,9 +2174,9 @@ static int emac_get_regs_len(struct emac_instance *dev)
EMAC_ETHTOOL_REGS_SIZE(dev);
}
-static int emac_ethtool_get_regs_len(struct net_device *ndev)
+static int emac_ethtool_get_regs_len(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
int size;
size = sizeof(struct emac_ethtool_regs_hdr) +
@@ -2103,10 +2207,10 @@ static void *emac_dump_regs(struct emac_instance *dev, void *buf)
}
}
-static void emac_ethtool_get_regs(struct net_device *ndev,
+static void emac_ethtool_get_regs(struct rtnet_device *ndev,
struct ethtool_regs *regs, void *buf)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
struct emac_ethtool_regs_hdr *hdr = buf;
hdr->components = 0;
@@ -2128,9 +2232,9 @@ static void emac_ethtool_get_regs(struct net_device *ndev,
}
}
-static int emac_ethtool_nway_reset(struct net_device *ndev)
+static int emac_ethtool_nway_reset(struct rtnet_device *ndev)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
int res = 0;
DBG(dev, "nway_reset" NL);
@@ -2151,7 +2255,7 @@ static int emac_ethtool_nway_reset(struct net_device *ndev)
return res;
}
-static int emac_ethtool_get_sset_count(struct net_device *ndev, int stringset)
+static int emac_ethtool_get_sset_count(struct rtnet_device *ndev, int stringset)
{
if (stringset == ETH_SS_STATS)
return EMAC_ETHTOOL_STATS_COUNT;
@@ -2159,28 +2263,28 @@ static int emac_ethtool_get_sset_count(struct net_device *ndev, int stringset)
return -EINVAL;
}
-static void emac_ethtool_get_strings(struct net_device *ndev, u32 stringset,
+static void emac_ethtool_get_strings(struct rtnet_device *ndev, u32 stringset,
u8 * buf)
{
if (stringset == ETH_SS_STATS)
memcpy(buf, &emac_stats_keys, sizeof(emac_stats_keys));
}
-static void emac_ethtool_get_ethtool_stats(struct net_device *ndev,
+static void emac_ethtool_get_ethtool_stats(struct rtnet_device *ndev,
struct ethtool_stats *estats,
u64 * tmp_stats)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
memcpy(tmp_stats, &dev->stats, sizeof(dev->stats));
tmp_stats += sizeof(dev->stats) / sizeof(u64);
memcpy(tmp_stats, &dev->estats, sizeof(dev->estats));
}
-static void emac_ethtool_get_drvinfo(struct net_device *ndev,
+static void emac_ethtool_get_drvinfo(struct rtnet_device *ndev,
struct ethtool_drvinfo *info)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
strcpy(info->driver, "ibm_emac");
strcpy(info->version, DRV_VERSION);
@@ -2214,9 +2318,9 @@ static const struct ethtool_ops emac_ethtool_ops = {
.get_sg = ethtool_op_get_sg,
};
-static int emac_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
+static int emac_ioctl(struct rtnet_device *ndev, struct ifreq *rq, int cmd)
{
- struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_instance *dev = rtnetdev_priv(ndev);
struct mii_ioctl_data *data = if_mii(rq);
DBG(dev, "ioctl %08x" NL, cmd);
@@ -2241,6 +2345,7 @@ static int emac_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
return -EOPNOTSUPP;
}
}
+#endif
struct emac_depentry {
u32 phandle;
@@ -2381,11 +2486,11 @@ static int __devinit emac_read_uint_prop(struct device_node *np, const char *nam
static int __devinit emac_init_phy(struct emac_instance *dev)
{
struct device_node *np = dev->ofdev->dev.of_node;
- struct net_device *ndev = dev->ndev;
+ struct rtnet_device *ndev = dev->ndev;
u32 phy_map, adv;
int i;
- dev->phy.dev = ndev;
+ dev->phy.dev = (void *)ndev;
dev->phy.mode = dev->phy_mode;
/* PHY-less configuration.
@@ -2413,8 +2518,9 @@ static int __devinit emac_init_phy(struct emac_instance *dev)
DBG(dev, "PHY maps %08x %08x" NL, dev->phy_map, busy_phy_map);
- dev->phy.mdio_read = emac_mdio_read;
- dev->phy.mdio_write = emac_mdio_write;
+ /* Dirty hack. Let's hope that the first parameter is not used inside */
+ dev->phy.mdio_read = (void *)emac_mdio_read;
+ dev->phy.mdio_write = (void *)emac_mdio_write;
/* Enable internal clock source */
#ifdef CONFIG_PPC_DCR_NATIVE
@@ -2455,7 +2561,6 @@ static int __devinit emac_init_phy(struct emac_instance *dev)
if (!(phy_map & 1)) {
int r;
busy_phy_map |= 1 << i;
-
/* Quick check if there is a PHY at the address */
r = emac_mdio_read(dev->ndev, i, MII_BMCR);
if (r == 0xffff || r < 0)
@@ -2693,6 +2798,7 @@ static int __devinit emac_init_config(struct emac_instance *dev)
return 0;
}
+#ifdef ORIGINAL
static const struct net_device_ops emac_netdev_ops = {
.ndo_open = emac_open,
.ndo_stop = emac_close,
@@ -2718,11 +2824,16 @@ static const struct net_device_ops emac_gige_netdev_ops = {
.ndo_start_xmit = emac_start_xmit_sg,
.ndo_change_mtu = emac_change_mtu,
};
+#endif
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,39)
static int __devinit emac_probe(struct platform_device *ofdev,
const struct of_device_id *match)
+#else
+static int __devinit emac_probe(struct platform_device *ofdev)
+#endif
{
- struct net_device *ndev;
+ struct rtnet_device *ndev;
struct emac_instance *dev;
struct device_node *np = ofdev->dev.of_node;
struct device_node **blist = NULL;
@@ -2742,22 +2853,31 @@ static int __devinit emac_probe(struct platform_device *ofdev,
/* Allocate our net_device structure */
err = -ENOMEM;
- ndev = alloc_etherdev(sizeof(struct emac_instance));
+ ndev = rt_alloc_etherdev(sizeof(struct emac_instance));
if (!ndev) {
printk(KERN_ERR "%s: could not allocate ethernet device!\n",
np->full_name);
goto err_gone;
}
- dev = netdev_priv(ndev);
+
+ rtdev_alloc_name(ndev, "rteth%d");
+ rt_rtdev_connect(ndev, &RTDEV_manager);
+ RTNET_SET_MODULE_OWNER(ndev);
+
+ ndev->vers = RTDEV_VERS_2_0;
+
+ dev = rtnetdev_priv(ndev);
dev->ndev = ndev;
dev->ofdev = ofdev;
dev->blist = blist;
+#ifdef ORIGINAL
SET_NETDEV_DEV(ndev, &ofdev->dev);
+#endif
/* Initialize some embedded data structures */
mutex_init(&dev->mdio_lock);
mutex_init(&dev->link_lock);
- spin_lock_init(&dev->lock);
+ rtdm_lock_init(&dev->lock);
INIT_WORK(&dev->reset_work, emac_reset_work);
/* Init various config data based on device-tree */
@@ -2804,6 +2924,7 @@ static int __devinit emac_probe(struct platform_device *ofdev,
dev->mdio_instance = dev_get_drvdata(&dev->mdio_dev->dev);
/* Register with MAL */
+ dev->commac.rtdm = 1; /* use MAL from RTDM context */
dev->commac.ops = &emac_commac_ops;
dev->commac.dev = dev;
dev->commac.tx_chan_mask = MAL_CHAN_MASK(dev->mal_tx_chan);
@@ -2829,8 +2950,8 @@ static int __devinit emac_probe(struct platform_device *ofdev,
/* Clean rings */
memset(dev->tx_desc, 0, NUM_TX_BUFF * sizeof(struct mal_descriptor));
memset(dev->rx_desc, 0, NUM_RX_BUFF * sizeof(struct mal_descriptor));
- memset(dev->tx_skb, 0, NUM_TX_BUFF * sizeof(struct sk_buff *));
- memset(dev->rx_skb, 0, NUM_RX_BUFF * sizeof(struct sk_buff *));
+ memset(dev->tx_skb, 0, NUM_TX_BUFF * sizeof(struct rtskb *));
+ memset(dev->rx_skb, 0, NUM_RX_BUFF * sizeof(struct rtskb *));
/* Attach to ZMII, if needed */
if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII) &&
@@ -2862,6 +2983,7 @@ static int __devinit emac_probe(struct platform_device *ofdev,
if (dev->tah_dev)
ndev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
+#ifdef ORIGINAL
ndev->watchdog_timeo = 5 * HZ;
if (emac_phy_supports_gige(dev->phy_mode)) {
ndev->netdev_ops = &emac_gige_netdev_ops;
@@ -2869,11 +2991,26 @@ static int __devinit emac_probe(struct platform_device *ofdev,
} else
ndev->netdev_ops = &emac_netdev_ops;
SET_ETHTOOL_OPS(ndev, &emac_ethtool_ops);
+#else
+ if (emac_phy_supports_gige(dev->phy_mode))
+ dev->commac.ops = &emac_commac_sg_ops;
+ ndev->open = emac_open;
+ ndev->stop = emac_close;
+ ndev->get_stats = emac_stats;
+ ndev->hard_start_xmit = emac_start_xmit;
+#endif
- netif_carrier_off(ndev);
- netif_stop_queue(ndev);
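+	/* Each RTnet driver provides its own rtskb pool for RX buffers;
+	 * if the pool cannot be filled completely, release it again and
+	 * fail the probe with -ENOMEM. */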
+ if (rtskb_pool_init(&dev->skb_pool, NUM_RX_BUFF * 2) <
+ NUM_RX_BUFF * 2) {
+ rtskb_pool_release(&dev->skb_pool);
+ return -ENOMEM;
+ }
- err = register_netdev(ndev);
+ rtnetif_carrier_off(ndev);
+ rtnetif_stop_queue(ndev);
+
+ strcpy(ndev->name, "rteth%d");
+ err = rt_register_rtnetdev(ndev);
if (err) {
printk(KERN_ERR "%s: failed to register net device (%d)!\n",
np->full_name, err);
@@ -2889,7 +3026,6 @@ static int __devinit emac_probe(struct platform_device *ofdev,
/* There's a new kid in town ! Let's tell everybody */
wake_up_all(&emac_probe_wait);
-
printk(KERN_INFO "%s: EMAC-%d %s, MAC %pM\n",
ndev->name, dev->cell_index, np->full_name, ndev->dev_addr);
@@ -2900,7 +3036,9 @@ static int __devinit emac_probe(struct platform_device *ofdev,
printk("%s: found %s PHY (0x%02x)\n", ndev->name,
dev->phy.def->name, dev->phy.address);
+#ifdef ORIGINAL
emac_dbg_register(dev);
+#endif
/* Life is good */
return 0;
@@ -2928,7 +3066,7 @@ static int __devinit emac_probe(struct platform_device *ofdev,
if (dev->emac_irq != NO_IRQ)
irq_dispose_mapping(dev->emac_irq);
err_free:
- free_netdev(ndev);
+ rtdev_free(ndev);
err_gone:
/* if we were on the bootlist, remove us as we won't show up and
* wake up all waiters to notify them in case they were waiting
@@ -2949,9 +3087,13 @@ static int __devexit emac_remove(struct platform_device *ofdev)
dev_set_drvdata(&ofdev->dev, NULL);
- unregister_netdev(dev->ndev);
+ rt_unregister_rtnetdev(dev->ndev);
+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,37)
flush_scheduled_work();
+#else
+ cancel_work_sync(&dev->reset_work);
+#endif
if (emac_has_feature(dev, EMAC_FTR_HAS_TAH))
tah_detach(dev->tah_dev, dev->tah_port);
@@ -2960,10 +3102,17 @@ static int __devexit emac_remove(struct platform_device *ofdev)
if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
zmii_detach(dev->zmii_dev, dev->zmii_port);
+ busy_phy_map &= ~(1 << dev->phy.address);
+ DBG(dev, "busy_phy_map now %#x" NL, busy_phy_map);
+
mal_unregister_commac(dev->mal, &dev->commac);
emac_put_deps(dev);
+ rtskb_pool_release(&dev->skb_pool);
+
+#ifdef ORIGINAL
emac_dbg_unregister(dev);
+#endif
iounmap(dev->emacp);
if (dev->wol_irq != NO_IRQ)
@@ -2971,12 +3120,11 @@ static int __devexit emac_remove(struct platform_device *ofdev)
if (dev->emac_irq != NO_IRQ)
irq_dispose_mapping(dev->emac_irq);
- free_netdev(dev->ndev);
+ rtdev_free(dev->ndev);
return 0;
}
-/* XXX Features in here should be replaced by properties... */
static struct of_device_id emac_match[] =
{
{
@@ -2995,9 +3143,13 @@ static struct of_device_id emac_match[] =
};
MODULE_DEVICE_TABLE(of, emac_match);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,39)
static struct of_platform_driver emac_driver = {
+#else
+static struct platform_driver emac_driver = {
+#endif
.driver = {
- .name = "emac",
+ .name = "rt-emac",
.owner = THIS_MODULE,
.of_match_table = emac_match,
},
@@ -3047,16 +3199,21 @@ static void __init emac_make_bootlist(void)
static int __init emac_init(void)
{
+#ifdef ORIGINAL
int rc;
+#endif
printk(KERN_INFO DRV_DESC ", version " DRV_VERSION "\n");
+#ifdef ORIGINAL
/* Init debug stuff */
emac_init_debug();
+#endif
/* Build EMAC boot list */
emac_make_bootlist();
+#ifdef ORIGINAL
/* Init submodules */
rc = mal_init();
if (rc)
@@ -3086,14 +3243,27 @@ static int __init emac_init(void)
mal_exit();
err:
return rc;
+#else
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,39)
+ return of_register_platform_driver(&emac_driver);
+#else
+ return platform_driver_register(&emac_driver);
+#endif
+#endif
}
static void __exit emac_exit(void)
{
+#ifdef ORIGINAL
int i;
-
+#endif
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,39)
of_unregister_platform_driver(&emac_driver);
+#else
+ platform_driver_unregister(&emac_driver);
+#endif
+#ifdef ORIGINAL
tah_exit();
rgmii_exit();
zmii_exit();
@@ -3104,6 +3274,7 @@ static void __exit emac_exit(void)
for (i = 0; i < EMAC_BOOT_LIST_SIZE; i++)
if (emac_boot_list[i])
of_node_put(emac_boot_list[i]);
+#endif
}
module_init(emac_init);
diff --git a/drivers/ibm_newemac/core.h b/drivers/ibm_newemac/core.h
index 9e37e3d..5f66d32 100644
--- a/drivers/ibm_newemac/core.h
+++ b/drivers/ibm_newemac/core.h
@@ -33,12 +33,15 @@
#include <linux/netdevice.h>
#include <linux/dma-mapping.h>
#include <linux/spinlock.h>
+#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/slab.h>
#include <asm/io.h>
#include <asm/dcr.h>
+#include <rtnet_port.h>
+
#include "emac.h"
#include "phy.h"
#include "zmii.h"
@@ -47,8 +50,8 @@
#include "tah.h"
#include "debug.h"
-#define NUM_TX_BUFF CONFIG_IBM_NEW_EMAC_TXB
-#define NUM_RX_BUFF CONFIG_IBM_NEW_EMAC_RXB
+#define NUM_TX_BUFF 16
+#define NUM_RX_BUFF 32
/* Simple sanity check */
#if NUM_TX_BUFF > 256 || NUM_RX_BUFF > 256
@@ -167,7 +170,8 @@ struct emac_error_stats {
/ sizeof(u64))
struct emac_instance {
- struct net_device *ndev;
+ struct rtnet_device *ndev;
+ struct rtskb_queue skb_pool;
struct resource rsrc_regs;
struct emac_regs __iomem *emacp;
struct platform_device *ofdev;
@@ -218,6 +222,7 @@ struct emac_instance {
/* IRQs */
int wol_irq;
int emac_irq;
+ rtdm_irq_t emac_irq_handle;
/* OPB bus frequency in Mhz */
u32 opb_bus_freq;
@@ -252,12 +257,12 @@ struct emac_instance {
struct mal_descriptor *rx_desc;
int rx_slot;
- struct sk_buff *rx_sg_skb; /* 1 */
+ struct rtskb *rx_sg_skb; /* 1 */
int rx_skb_size;
int rx_sync_size;
- struct sk_buff *tx_skb[NUM_TX_BUFF];
- struct sk_buff *rx_skb[NUM_RX_BUFF];
+ struct rtskb *tx_skb[NUM_TX_BUFF];
+ struct rtskb *rx_skb[NUM_RX_BUFF];
/* Stats
*/
@@ -269,11 +274,13 @@ struct emac_instance {
*/
int reset_failed;
int stop_timeout; /* in us */
+#ifdef WITH_MULTICAST
int no_mcast;
int mcast_pending;
+#endif
int opened;
struct work_struct reset_work;
- spinlock_t lock;
+ rtdm_lock_t lock;
};
/*
@@ -459,4 +466,7 @@ struct emac_ethtool_regs_subhdr {
#define EMAC4_ETHTOOL_REGS_SIZE(dev) ((dev)->rsrc_regs.end - \
(dev)->rsrc_regs.start + 1)
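+/* Compatibility shims: RTnet keeps the private area in ndev->priv and the
+ * rtskb data tail in skb->tail, so map the netdev_priv()/skb_tail_pointer()
+ * accessor style used above directly onto those fields. */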
+#define rtnetdev_priv(ndev) (ndev)->priv
+#define rtskb_tail_pointer(skb) (skb)->tail
+
#endif /* __IBM_NEWEMAC_CORE_H */
--
1.7.4.1
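For readers following the conversion, the hunks above repeat one RTDM pattern
throughout: spin_lock()/spin_unlock() pairs become rtdm_lock_get_irqsave()/
rtdm_lock_put_irqrestore(), the hard IRQ handler takes an rtdm_irq_t and
returns RTDM_IRQ_HANDLED, and the device is recovered with rtdm_irq_get_arg().
Below is a minimal sketch of that pattern; my_priv and my_irq are illustrative
names only, not part of the driver:

#include <rtdm/rtdm_driver.h>

struct my_priv {
	rtdm_lock_t lock;       /* replaces spinlock_t */
	rtdm_irq_t irq_handle;  /* replaces the bare Linux IRQ number */
};

static int my_irq(rtdm_irq_t *irq_handle)
{
	struct rtnet_device *ndev =
		rtdm_irq_get_arg(irq_handle, struct rtnet_device);
	struct my_priv *priv = ndev->priv;
	rtdm_lockctx_t context;

	rtdm_lock_get_irqsave(&priv->lock, context);
	/* read and acknowledge the hardware interrupt status here */
	rtdm_lock_put_irqrestore(&priv->lock, context);

	return RTDM_IRQ_HANDLED;
}

Registration goes through rtdm_irq_request(&priv->irq_handle, irq, my_irq, 0,
"rt_mydrv", ndev) in place of request_irq(), passing the rtnet_device as the
opaque argument.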
From: Wolfgang G. <wg...@gr...> - 2011-11-17 13:47:18
From: Wolfgang Grandegger <wg...@de...>
These files are from mainline Linux v2.6.36
Signed-off-by: Wolfgang Grandegger <wg...@de...>
---
drivers/ibm_newemac/core.c | 3110 ++++++++++++++++++++++++++++++++++++++++++++
drivers/ibm_newemac/core.h | 462 +++++++
2 files changed, 3572 insertions(+), 0 deletions(-)
create mode 100644 drivers/ibm_newemac/core.c
create mode 100644 drivers/ibm_newemac/core.h
diff --git a/drivers/ibm_newemac/core.c b/drivers/ibm_newemac/core.c
new file mode 100644
index 0000000..519e19e
--- /dev/null
+++ b/drivers/ibm_newemac/core.c
@@ -0,0 +1,3110 @@
+/*
+ * drivers/net/ibm_newemac/core.c
+ *
+ * Driver for PowerPC 4xx on-chip ethernet controller.
+ *
+ * Copyright 2007 Benjamin Herrenschmidt, IBM Corp.
+ * <be...@ke...>
+ *
+ * Based on the arch/ppc version of the driver:
+ *
+ * Copyright (c) 2004, 2005 Zultys Technologies.
+ * Eugene Surovegin <eug...@zu...> or <eb...@eb...>
+ *
+ * Based on original work by
+ * Matt Porter <mp...@ke...>
+ * (c) 2003 Benjamin Herrenschmidt <be...@ke...>
+ * Armin Kuster <ak...@mv...>
+ * Johnnie Peters <jp...@mv...>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/crc32.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/bitops.h>
+#include <linux/workqueue.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+
+#include <asm/processor.h>
+#include <asm/io.h>
+#include <asm/dma.h>
+#include <asm/uaccess.h>
+#include <asm/dcr.h>
+#include <asm/dcr-regs.h>
+
+#include "core.h"
+
+/*
+ * Lack of dma_unmap_???? calls is intentional.
+ *
+ * API-correct usage requires additional support state information to be
+ * maintained for every RX and TX buffer descriptor (BD). Unfortunately, due to
+ * EMAC design (e.g. TX buffer passed from network stack can be split into
+ * several BDs, dma_map_single/dma_map_page can be used to map particular BD),
+ * maintaining such information will add additional overhead.
+ * Current DMA API implementation for 4xx processors only ensures cache coherency
+ * and dma_unmap_???? routines are empty and are likely to stay this way.
+ * I decided to omit dma_unmap_??? calls because I don't want to add additional
+ * complexity just for the sake of following some abstract API, when it doesn't
+ * add any real benefit to the driver. I understand that this decision may be
+ * controversial, but I really tried to make code API-correct and efficient
+ * at the same time and didn't come up with code I liked :(. --ebs
+ */
+
+#define DRV_NAME "emac"
+#define DRV_VERSION "3.54"
+#define DRV_DESC "PPC 4xx OCP EMAC driver"
+
+MODULE_DESCRIPTION(DRV_DESC);
+MODULE_AUTHOR
+ ("Eugene Surovegin <eug...@zu...> or <eb...@eb...>");
+MODULE_LICENSE("GPL");
+
+/*
+ * PPC64 doesn't (yet) have a cacheable_memcpy
+ */
+#ifdef CONFIG_PPC64
+#define cacheable_memcpy(d,s,n) memcpy((d),(s),(n))
+#endif
+
+/* minimum number of free TX descriptors required to wake up TX process */
+#define EMAC_TX_WAKEUP_THRESH (NUM_TX_BUFF / 4)
+
+/* If the packet size is less than this number, we allocate a small skb and
+ * copy the packet contents into it instead of just sending the original big
+ * skb up */
+#define EMAC_RX_COPY_THRESH CONFIG_IBM_NEW_EMAC_RX_COPY_THRESHOLD
+
+/* Since multiple EMACs share MDIO lines in various ways, we need
+ * to avoid re-using the same PHY ID in cases where the arch didn't
+ * setup precise phy_map entries
+ *
+ * XXX This is something that needs to be reworked as we can have multiple
+ * EMAC "sets" (multiple ASICs containing several EMACs) though we can
+ * probably require in that case to have explicit PHY IDs in the device-tree
+ */
+static u32 busy_phy_map;
+static DEFINE_MUTEX(emac_phy_map_lock);
+
+/* This is the wait queue used to wait on any event related to probe, that
+ * is discovery of MALs, other EMACs, ZMII/RGMIIs, etc...
+ */
+static DECLARE_WAIT_QUEUE_HEAD(emac_probe_wait);
+
+/* Having stable interface names is a doomed idea. However, it would be nice
+ * if we didn't have completely random interface names at boot too :-) It's
+ * just a matter of making everybody's life easier. Since we are doing
+ * threaded probing, it's a bit harder though. The base idea here is that
+ * we make up a list of all emacs in the device-tree before we register the
+ * driver. Every emac will then wait for the previous one in the list to
+ * initialize before itself. We should also keep that list ordered by
+ * cell_index.
+ * That list is only 4 entries long, meaning that additional EMACs don't
+ * get ordering guarantees unless EMAC_BOOT_LIST_SIZE is increased.
+ */
+
+#define EMAC_BOOT_LIST_SIZE 4
+static struct device_node *emac_boot_list[EMAC_BOOT_LIST_SIZE];
+
+/* How long should I wait for dependent devices? */
+#define EMAC_PROBE_DEP_TIMEOUT (HZ * 5)
+
+/* I don't want to litter system log with timeout errors
+ * when we have brain-damaged PHY.
+ */
+static inline void emac_report_timeout_error(struct emac_instance *dev,
+ const char *error)
+{
+ if (emac_has_feature(dev, EMAC_FTR_440GX_PHY_CLK_FIX |
+ EMAC_FTR_460EX_PHY_CLK_FIX |
+ EMAC_FTR_440EP_PHY_CLK_FIX))
+ DBG(dev, "%s" NL, error);
+ else if (net_ratelimit())
+ printk(KERN_ERR "%s: %s\n", dev->ofdev->dev.of_node->full_name,
+ error);
+}
+
+/* EMAC PHY clock workaround:
+ * 440EP/440GR has more sane SDR0_MFR register implementation than 440GX,
+ * which allows controlling each EMAC clock
+ */
+static inline void emac_rx_clk_tx(struct emac_instance *dev)
+{
+#ifdef CONFIG_PPC_DCR_NATIVE
+ if (emac_has_feature(dev, EMAC_FTR_440EP_PHY_CLK_FIX))
+ dcri_clrset(SDR0, SDR0_MFR,
+ 0, SDR0_MFR_ECS >> dev->cell_index);
+#endif
+}
+
+static inline void emac_rx_clk_default(struct emac_instance *dev)
+{
+#ifdef CONFIG_PPC_DCR_NATIVE
+ if (emac_has_feature(dev, EMAC_FTR_440EP_PHY_CLK_FIX))
+ dcri_clrset(SDR0, SDR0_MFR,
+ SDR0_MFR_ECS >> dev->cell_index, 0);
+#endif
+}
+
+/* PHY polling intervals */
+#define PHY_POLL_LINK_ON HZ
+#define PHY_POLL_LINK_OFF (HZ / 5)
+
+/* Graceful stop timeouts in us.
+ * We should allow up to 1 frame time (full-duplex, ignoring collisions)
+ */
+#define STOP_TIMEOUT_10 1230
+#define STOP_TIMEOUT_100 124
+#define STOP_TIMEOUT_1000 13
+#define STOP_TIMEOUT_1000_JUMBO 73
+
+static unsigned char default_mcast_addr[] = {
+ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x01
+};
+
+/* Please, keep in sync with struct ibm_emac_stats/ibm_emac_error_stats */
+static const char emac_stats_keys[EMAC_ETHTOOL_STATS_COUNT][ETH_GSTRING_LEN] = {
+ "rx_packets", "rx_bytes", "tx_packets", "tx_bytes", "rx_packets_csum",
+ "tx_packets_csum", "tx_undo", "rx_dropped_stack", "rx_dropped_oom",
+ "rx_dropped_error", "rx_dropped_resize", "rx_dropped_mtu",
+ "rx_stopped", "rx_bd_errors", "rx_bd_overrun", "rx_bd_bad_packet",
+ "rx_bd_runt_packet", "rx_bd_short_event", "rx_bd_alignment_error",
+ "rx_bd_bad_fcs", "rx_bd_packet_too_long", "rx_bd_out_of_range",
+ "rx_bd_in_range", "rx_parity", "rx_fifo_overrun", "rx_overrun",
+ "rx_bad_packet", "rx_runt_packet", "rx_short_event",
+ "rx_alignment_error", "rx_bad_fcs", "rx_packet_too_long",
+ "rx_out_of_range", "rx_in_range", "tx_dropped", "tx_bd_errors",
+ "tx_bd_bad_fcs", "tx_bd_carrier_loss", "tx_bd_excessive_deferral",
+ "tx_bd_excessive_collisions", "tx_bd_late_collision",
+ "tx_bd_multple_collisions", "tx_bd_single_collision",
+ "tx_bd_underrun", "tx_bd_sqe", "tx_parity", "tx_underrun", "tx_sqe",
+ "tx_errors"
+};
+
+static irqreturn_t emac_irq(int irq, void *dev_instance);
+static void emac_clean_tx_ring(struct emac_instance *dev);
+static void __emac_set_multicast_list(struct emac_instance *dev);
+
+static inline int emac_phy_supports_gige(int phy_mode)
+{
+ return phy_mode == PHY_MODE_GMII ||
+ phy_mode == PHY_MODE_RGMII ||
+ phy_mode == PHY_MODE_SGMII ||
+ phy_mode == PHY_MODE_TBI ||
+ phy_mode == PHY_MODE_RTBI;
+}
+
+static inline int emac_phy_gpcs(int phy_mode)
+{
+ return phy_mode == PHY_MODE_SGMII ||
+ phy_mode == PHY_MODE_TBI ||
+ phy_mode == PHY_MODE_RTBI;
+}
+
+static inline void emac_tx_enable(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r;
+
+ DBG(dev, "tx_enable" NL);
+
+ r = in_be32(&p->mr0);
+ if (!(r & EMAC_MR0_TXE))
+ out_be32(&p->mr0, r | EMAC_MR0_TXE);
+}
+
+static void emac_tx_disable(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r;
+
+ DBG(dev, "tx_disable" NL);
+
+ r = in_be32(&p->mr0);
+ if (r & EMAC_MR0_TXE) {
+ int n = dev->stop_timeout;
+ out_be32(&p->mr0, r & ~EMAC_MR0_TXE);
+ while (!(in_be32(&p->mr0) & EMAC_MR0_TXI) && n) {
+ udelay(1);
+ --n;
+ }
+ if (unlikely(!n))
+ emac_report_timeout_error(dev, "TX disable timeout");
+ }
+}
+
+static void emac_rx_enable(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r;
+
+ if (unlikely(test_bit(MAL_COMMAC_RX_STOPPED, &dev->commac.flags)))
+ goto out;
+
+ DBG(dev, "rx_enable" NL);
+
+ r = in_be32(&p->mr0);
+ if (!(r & EMAC_MR0_RXE)) {
+ if (unlikely(!(r & EMAC_MR0_RXI))) {
+ /* Wait if previous async disable is still in progress */
+ int n = dev->stop_timeout;
+ while (!(r = in_be32(&p->mr0) & EMAC_MR0_RXI) && n) {
+ udelay(1);
+ --n;
+ }
+ if (unlikely(!n))
+ emac_report_timeout_error(dev,
+ "RX disable timeout");
+ }
+ out_be32(&p->mr0, r | EMAC_MR0_RXE);
+ }
+ out:
+ ;
+}
+
+static void emac_rx_disable(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r;
+
+ DBG(dev, "rx_disable" NL);
+
+ r = in_be32(&p->mr0);
+ if (r & EMAC_MR0_RXE) {
+ int n = dev->stop_timeout;
+ out_be32(&p->mr0, r & ~EMAC_MR0_RXE);
+ while (!(in_be32(&p->mr0) & EMAC_MR0_RXI) && n) {
+ udelay(1);
+ --n;
+ }
+ if (unlikely(!n))
+ emac_report_timeout_error(dev, "RX disable timeout");
+ }
+}
+
+static inline void emac_netif_stop(struct emac_instance *dev)
+{
+ netif_tx_lock_bh(dev->ndev);
+ netif_addr_lock(dev->ndev);
+ dev->no_mcast = 1;
+ netif_addr_unlock(dev->ndev);
+ netif_tx_unlock_bh(dev->ndev);
+ dev->ndev->trans_start = jiffies; /* prevent tx timeout */
+ mal_poll_disable(dev->mal, &dev->commac);
+ netif_tx_disable(dev->ndev);
+}
+
+static inline void emac_netif_start(struct emac_instance *dev)
+{
+ netif_tx_lock_bh(dev->ndev);
+ netif_addr_lock(dev->ndev);
+ dev->no_mcast = 0;
+ if (dev->mcast_pending && netif_running(dev->ndev))
+ __emac_set_multicast_list(dev);
+ netif_addr_unlock(dev->ndev);
+ netif_tx_unlock_bh(dev->ndev);
+
+ netif_wake_queue(dev->ndev);
+
+ /* NOTE: unconditional netif_wake_queue is only appropriate
+ * so long as all callers are assured to have free tx slots
+ * (taken from tg3... though the case where that is wrong is
+ * not terribly harmful)
+ */
+ mal_poll_enable(dev->mal, &dev->commac);
+}
+
+static inline void emac_rx_disable_async(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r;
+
+ DBG(dev, "rx_disable_async" NL);
+
+ r = in_be32(&p->mr0);
+ if (r & EMAC_MR0_RXE)
+ out_be32(&p->mr0, r & ~EMAC_MR0_RXE);
+}
+
+static int emac_reset(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ int n = 20;
+
+ DBG(dev, "reset" NL);
+
+ if (!dev->reset_failed) {
+ /* 40x erratum suggests stopping RX channel before reset,
+ * we stop TX as well
+ */
+ emac_rx_disable(dev);
+ emac_tx_disable(dev);
+ }
+
+#ifdef CONFIG_PPC_DCR_NATIVE
+ /* Enable internal clock source */
+ if (emac_has_feature(dev, EMAC_FTR_460EX_PHY_CLK_FIX))
+ dcri_clrset(SDR0, SDR0_ETH_CFG,
+ 0, SDR0_ETH_CFG_ECS << dev->cell_index);
+#endif
+
+ out_be32(&p->mr0, EMAC_MR0_SRST);
+ while ((in_be32(&p->mr0) & EMAC_MR0_SRST) && n)
+ --n;
+
+#ifdef CONFIG_PPC_DCR_NATIVE
+ /* Enable external clock source */
+ if (emac_has_feature(dev, EMAC_FTR_460EX_PHY_CLK_FIX))
+ dcri_clrset(SDR0, SDR0_ETH_CFG,
+ SDR0_ETH_CFG_ECS << dev->cell_index, 0);
+#endif
+
+ if (n) {
+ dev->reset_failed = 0;
+ return 0;
+ } else {
+ emac_report_timeout_error(dev, "reset timeout");
+ dev->reset_failed = 1;
+ return -ETIMEDOUT;
+ }
+}
+
+static void emac_hash_mc(struct emac_instance *dev)
+{
+ const int regs = EMAC_XAHT_REGS(dev);
+ u32 *gaht_base = emac_gaht_base(dev);
+ u32 gaht_temp[regs];
+ struct netdev_hw_addr *ha;
+ int i;
+
+ DBG(dev, "hash_mc %d" NL, netdev_mc_count(dev->ndev));
+
+ memset(gaht_temp, 0, sizeof (gaht_temp));
+
+ netdev_for_each_mc_addr(ha, dev->ndev) {
+ int slot, reg, mask;
+ DBG2(dev, "mc %pM" NL, ha->addr);
+
+ slot = EMAC_XAHT_CRC_TO_SLOT(dev,
+ ether_crc(ETH_ALEN, ha->addr));
+ reg = EMAC_XAHT_SLOT_TO_REG(dev, slot);
+ mask = EMAC_XAHT_SLOT_TO_MASK(dev, slot);
+
+ gaht_temp[reg] |= mask;
+ }
+
+ for (i = 0; i < regs; i++)
+ out_be32(gaht_base + i, gaht_temp[i]);
+}
+
+static inline u32 emac_iff2rmr(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ u32 r;
+
+ r = EMAC_RMR_SP | EMAC_RMR_SFCS | EMAC_RMR_IAE | EMAC_RMR_BAE;
+
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ r |= EMAC4_RMR_BASE;
+ else
+ r |= EMAC_RMR_BASE;
+
+ if (ndev->flags & IFF_PROMISC)
+ r |= EMAC_RMR_PME;
+ else if (ndev->flags & IFF_ALLMULTI ||
+ (netdev_mc_count(ndev) > EMAC_XAHT_SLOTS(dev)))
+ r |= EMAC_RMR_PMME;
+ else if (!netdev_mc_empty(ndev))
+ r |= EMAC_RMR_MAE;
+
+ return r;
+}
+
+static u32 __emac_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_size)
+{
+ u32 ret = EMAC_MR1_VLE | EMAC_MR1_IST | EMAC_MR1_TR0_MULT;
+
+ DBG2(dev, "__emac_calc_base_mr1" NL);
+
+ switch(tx_size) {
+ case 2048:
+ ret |= EMAC_MR1_TFS_2K;
+ break;
+ default:
+ printk(KERN_WARNING "%s: Unknown Tx FIFO size %d\n",
+ dev->ndev->name, tx_size);
+ }
+
+ switch(rx_size) {
+ case 16384:
+ ret |= EMAC_MR1_RFS_16K;
+ break;
+ case 4096:
+ ret |= EMAC_MR1_RFS_4K;
+ break;
+ default:
+ printk(KERN_WARNING "%s: Unknown Rx FIFO size %d\n",
+ dev->ndev->name, rx_size);
+ }
+
+ return ret;
+}
+
+static u32 __emac4_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_size)
+{
+ u32 ret = EMAC_MR1_VLE | EMAC_MR1_IST | EMAC4_MR1_TR |
+ EMAC4_MR1_OBCI(dev->opb_bus_freq / 1000000);
+
+ DBG2(dev, "__emac4_calc_base_mr1" NL);
+
+ switch(tx_size) {
+ case 16384:
+ ret |= EMAC4_MR1_TFS_16K;
+ break;
+ case 4096:
+ ret |= EMAC4_MR1_TFS_4K;
+ break;
+ case 2048:
+ ret |= EMAC4_MR1_TFS_2K;
+ break;
+ default:
+ printk(KERN_WARNING "%s: Unknown Tx FIFO size %d\n",
+ dev->ndev->name, tx_size);
+ }
+
+ switch(rx_size) {
+ case 16384:
+ ret |= EMAC4_MR1_RFS_16K;
+ break;
+ case 4096:
+ ret |= EMAC4_MR1_RFS_4K;
+ break;
+ case 2048:
+ ret |= EMAC4_MR1_RFS_2K;
+ break;
+ default:
+ printk(KERN_WARNING "%s: Unknown Rx FIFO size %d\n",
+ dev->ndev->name, rx_size);
+ }
+
+ return ret;
+}
+
+static u32 emac_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_size)
+{
+ return emac_has_feature(dev, EMAC_FTR_EMAC4) ?
+ __emac4_calc_base_mr1(dev, tx_size, rx_size) :
+ __emac_calc_base_mr1(dev, tx_size, rx_size);
+}
+
+static inline u32 emac_calc_trtr(struct emac_instance *dev, unsigned int size)
+{
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ return ((size >> 6) - 1) << EMAC_TRTR_SHIFT_EMAC4;
+ else
+ return ((size >> 6) - 1) << EMAC_TRTR_SHIFT;
+}
+
+static inline u32 emac_calc_rwmr(struct emac_instance *dev,
+ unsigned int low, unsigned int high)
+{
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ return (low << 22) | ( (high & 0x3ff) << 6);
+ else
+ return (low << 23) | ( (high & 0x1ff) << 7);
+}
+
+static int emac_configure(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ struct net_device *ndev = dev->ndev;
+ int tx_size, rx_size, link = netif_carrier_ok(dev->ndev);
+ u32 r, mr1 = 0;
+
+ DBG(dev, "configure" NL);
+
+ if (!link) {
+ out_be32(&p->mr1, in_be32(&p->mr1)
+ | EMAC_MR1_FDE | EMAC_MR1_ILE);
+ udelay(100);
+ } else if (emac_reset(dev) < 0)
+ return -ETIMEDOUT;
+
+ if (emac_has_feature(dev, EMAC_FTR_HAS_TAH))
+ tah_reset(dev->tah_dev);
+
+ DBG(dev, " link = %d duplex = %d, pause = %d, asym_pause = %d\n",
+ link, dev->phy.duplex, dev->phy.pause, dev->phy.asym_pause);
+
+ /* Default fifo sizes */
+ tx_size = dev->tx_fifo_size;
+ rx_size = dev->rx_fifo_size;
+
+ /* No link, force loopback */
+ if (!link)
+ mr1 = EMAC_MR1_FDE | EMAC_MR1_ILE;
+
+ /* Check for full duplex */
+ else if (dev->phy.duplex == DUPLEX_FULL)
+ mr1 |= EMAC_MR1_FDE | EMAC_MR1_MWSW_001;
+
+ /* Adjust fifo sizes, mr1 and timeouts based on link speed */
+ dev->stop_timeout = STOP_TIMEOUT_10;
+ switch (dev->phy.speed) {
+ case SPEED_1000:
+ if (emac_phy_gpcs(dev->phy.mode)) {
+ mr1 |= EMAC_MR1_MF_1000GPCS | EMAC_MR1_MF_IPPA(
+ (dev->phy.gpcs_address != 0xffffffff) ?
+ dev->phy.gpcs_address : dev->phy.address);
+
+ /* Put some arbitrary OUI, Manuf & Rev IDs so we can
+ * identify this GPCS PHY later.
+ */
+ out_be32(&p->u1.emac4.ipcr, 0xdeadbeef);
+ } else
+ mr1 |= EMAC_MR1_MF_1000;
+
+ /* Extended fifo sizes */
+ tx_size = dev->tx_fifo_size_gige;
+ rx_size = dev->rx_fifo_size_gige;
+
+ if (dev->ndev->mtu > ETH_DATA_LEN) {
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ mr1 |= EMAC4_MR1_JPSM;
+ else
+ mr1 |= EMAC_MR1_JPSM;
+ dev->stop_timeout = STOP_TIMEOUT_1000_JUMBO;
+ } else
+ dev->stop_timeout = STOP_TIMEOUT_1000;
+ break;
+ case SPEED_100:
+ mr1 |= EMAC_MR1_MF_100;
+ dev->stop_timeout = STOP_TIMEOUT_100;
+ break;
+ default: /* make gcc happy */
+ break;
+ }
+
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII))
+ rgmii_set_speed(dev->rgmii_dev, dev->rgmii_port,
+ dev->phy.speed);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_set_speed(dev->zmii_dev, dev->zmii_port, dev->phy.speed);
+
+ /* An erratum on 40x forces us to NOT use integrated flow control,
+ * let's hope it works on 44x ;)
+ */
+ if (!emac_has_feature(dev, EMAC_FTR_NO_FLOW_CONTROL_40x) &&
+ dev->phy.duplex == DUPLEX_FULL) {
+ if (dev->phy.pause)
+ mr1 |= EMAC_MR1_EIFC | EMAC_MR1_APP;
+ else if (dev->phy.asym_pause)
+ mr1 |= EMAC_MR1_APP;
+ }
+
+ /* Add base settings & fifo sizes & program MR1 */
+ mr1 |= emac_calc_base_mr1(dev, tx_size, rx_size);
+ out_be32(&p->mr1, mr1);
+
+ /* Set individual MAC address */
+ out_be32(&p->iahr, (ndev->dev_addr[0] << 8) | ndev->dev_addr[1]);
+ out_be32(&p->ialr, (ndev->dev_addr[2] << 24) |
+ (ndev->dev_addr[3] << 16) | (ndev->dev_addr[4] << 8) |
+ ndev->dev_addr[5]);
+
+ /* VLAN Tag Protocol ID */
+ out_be32(&p->vtpid, 0x8100);
+
+ /* Receive mode register */
+ r = emac_iff2rmr(ndev);
+ if (r & EMAC_RMR_MAE)
+ emac_hash_mc(dev);
+ out_be32(&p->rmr, r);
+
+ /* FIFOs thresholds */
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ r = EMAC4_TMR1((dev->mal_burst_size / dev->fifo_entry_size) + 1,
+ tx_size / 2 / dev->fifo_entry_size);
+ else
+ r = EMAC_TMR1((dev->mal_burst_size / dev->fifo_entry_size) + 1,
+ tx_size / 2 / dev->fifo_entry_size);
+ out_be32(&p->tmr1, r);
+ out_be32(&p->trtr, emac_calc_trtr(dev, tx_size / 2));
+
+ /* A PAUSE frame is sent when the RX FIFO reaches its high-water mark;
+ there should still be enough space in the FIFO to allow our link
+ partner time to process this frame and also time to send a PAUSE
+ frame itself.
+
+ Here is the worst case scenario for the RX FIFO "headroom"
+ (from "The Switch Book") (100Mbps, without preamble, inter-frame gap):
+
+ 1) One maximum-length frame on TX 1522 bytes
+ 2) One PAUSE frame time 64 bytes
+ 3) PAUSE frame decode time allowance 64 bytes
+ 4) One maximum-length frame on RX 1522 bytes
+ 5) Round-trip propagation delay of the link (100Mb) 15 bytes
+ ----------
+ 3187 bytes
+
+ I chose to set high-water mark to RX_FIFO_SIZE / 4 (1024 bytes)
+ low-water mark to RX_FIFO_SIZE / 8 (512 bytes)
+ */
+ r = emac_calc_rwmr(dev, rx_size / 8 / dev->fifo_entry_size,
+ rx_size / 4 / dev->fifo_entry_size);
+ out_be32(&p->rwmr, r);
+
+ /* Set PAUSE timer to the maximum */
+ out_be32(&p->ptr, 0xffff);
+
+ /* IRQ sources */
+ r = EMAC_ISR_OVR | EMAC_ISR_BP | EMAC_ISR_SE |
+ EMAC_ISR_ALE | EMAC_ISR_BFCS | EMAC_ISR_PTLE | EMAC_ISR_ORE |
+ EMAC_ISR_IRE | EMAC_ISR_TE;
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ r |= EMAC4_ISR_TXPE | EMAC4_ISR_RXPE /* | EMAC4_ISR_TXUE |
+ EMAC4_ISR_RXOE | */;
+ out_be32(&p->iser, r);
+
+ /* We need to take GPCS PHY out of isolate mode after EMAC reset */
+ if (emac_phy_gpcs(dev->phy.mode)) {
+ if (dev->phy.gpcs_address != 0xffffffff)
+ emac_mii_reset_gpcs(&dev->phy);
+ else
+ emac_mii_reset_phy(&dev->phy);
+ }
+
+ return 0;
+}
+
+static void emac_reinitialize(struct emac_instance *dev)
+{
+ DBG(dev, "reinitialize" NL);
+
+ emac_netif_stop(dev);
+ if (!emac_configure(dev)) {
+ emac_tx_enable(dev);
+ emac_rx_enable(dev);
+ }
+ emac_netif_start(dev);
+}
+
+static void emac_full_tx_reset(struct emac_instance *dev)
+{
+ DBG(dev, "full_tx_reset" NL);
+
+ emac_tx_disable(dev);
+ mal_disable_tx_channel(dev->mal, dev->mal_tx_chan);
+ emac_clean_tx_ring(dev);
+ dev->tx_cnt = dev->tx_slot = dev->ack_slot = 0;
+
+ emac_configure(dev);
+
+ mal_enable_tx_channel(dev->mal, dev->mal_tx_chan);
+ emac_tx_enable(dev);
+ emac_rx_enable(dev);
+}
+
+static void emac_reset_work(struct work_struct *work)
+{
+ struct emac_instance *dev = container_of(work, struct emac_instance, reset_work);
+
+ DBG(dev, "reset_work" NL);
+
+ mutex_lock(&dev->link_lock);
+ if (dev->opened) {
+ emac_netif_stop(dev);
+ emac_full_tx_reset(dev);
+ emac_netif_start(dev);
+ }
+ mutex_unlock(&dev->link_lock);
+}
+
+static void emac_tx_timeout(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ DBG(dev, "tx_timeout" NL);
+
+ schedule_work(&dev->reset_work);
+}
+
+
+static inline int emac_phy_done(struct emac_instance *dev, u32 stacr)
+{
+ int done = !!(stacr & EMAC_STACR_OC);
+
+ if (emac_has_feature(dev, EMAC_FTR_STACR_OC_INVERT))
+ done = !done;
+
+ return done;
+};
+
+static int __emac_mdio_read(struct emac_instance *dev, u8 id, u8 reg)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r = 0;
+ int n, err = -ETIMEDOUT;
+
+ mutex_lock(&dev->mdio_lock);
+
+ DBG2(dev, "mdio_read(%02x,%02x)" NL, id, reg);
+
+ /* Enable proper MDIO port */
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_get_mdio(dev->zmii_dev, dev->zmii_port);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII))
+ rgmii_get_mdio(dev->rgmii_dev, dev->rgmii_port);
+
+ /* Wait for management interface to become idle */
+ n = 20;
+ while (!emac_phy_done(dev, in_be32(&p->stacr))) {
+ udelay(1);
+ if (!--n) {
+ DBG2(dev, " -> timeout wait idle\n");
+ goto bail;
+ }
+ }
+
+ /* Issue read command */
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ r = EMAC4_STACR_BASE(dev->opb_bus_freq);
+ else
+ r = EMAC_STACR_BASE(dev->opb_bus_freq);
+ if (emac_has_feature(dev, EMAC_FTR_STACR_OC_INVERT))
+ r |= EMAC_STACR_OC;
+ if (emac_has_feature(dev, EMAC_FTR_HAS_NEW_STACR))
+ r |= EMACX_STACR_STAC_READ;
+ else
+ r |= EMAC_STACR_STAC_READ;
+ r |= (reg & EMAC_STACR_PRA_MASK)
+ | ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT);
+ out_be32(&p->stacr, r);
+
+ /* Wait for read to complete */
+ n = 200;
+ while (!emac_phy_done(dev, (r = in_be32(&p->stacr)))) {
+ udelay(1);
+ if (!--n) {
+ DBG2(dev, " -> timeout wait complete\n");
+ goto bail;
+ }
+ }
+
+ if (unlikely(r & EMAC_STACR_PHYE)) {
+ DBG(dev, "mdio_read(%02x, %02x) failed" NL, id, reg);
+ err = -EREMOTEIO;
+ goto bail;
+ }
+
+ r = ((r >> EMAC_STACR_PHYD_SHIFT) & EMAC_STACR_PHYD_MASK);
+
+ DBG2(dev, "mdio_read -> %04x" NL, r);
+ err = 0;
+ bail:
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII))
+ rgmii_put_mdio(dev->rgmii_dev, dev->rgmii_port);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_put_mdio(dev->zmii_dev, dev->zmii_port);
+ mutex_unlock(&dev->mdio_lock);
+
+ return err == 0 ? r : err;
+}
+
+static void __emac_mdio_write(struct emac_instance *dev, u8 id, u8 reg,
+ u16 val)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 r = 0;
+ int n, err = -ETIMEDOUT;
+
+ mutex_lock(&dev->mdio_lock);
+
+ DBG2(dev, "mdio_write(%02x,%02x,%04x)" NL, id, reg, val);
+
+ /* Enable proper MDIO port */
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_get_mdio(dev->zmii_dev, dev->zmii_port);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII))
+ rgmii_get_mdio(dev->rgmii_dev, dev->rgmii_port);
+
+ /* Wait for management interface to be idle */
+ n = 20;
+ while (!emac_phy_done(dev, in_be32(&p->stacr))) {
+ udelay(1);
+ if (!--n) {
+ DBG2(dev, " -> timeout wait idle\n");
+ goto bail;
+ }
+ }
+
+ /* Issue write command */
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ r = EMAC4_STACR_BASE(dev->opb_bus_freq);
+ else
+ r = EMAC_STACR_BASE(dev->opb_bus_freq);
+ if (emac_has_feature(dev, EMAC_FTR_STACR_OC_INVERT))
+ r |= EMAC_STACR_OC;
+ if (emac_has_feature(dev, EMAC_FTR_HAS_NEW_STACR))
+ r |= EMACX_STACR_STAC_WRITE;
+ else
+ r |= EMAC_STACR_STAC_WRITE;
+ r |= (reg & EMAC_STACR_PRA_MASK) |
+ ((id & EMAC_STACR_PCDA_MASK) << EMAC_STACR_PCDA_SHIFT) |
+ (val << EMAC_STACR_PHYD_SHIFT);
+ out_be32(&p->stacr, r);
+
+ /* Wait for write to complete */
+ n = 200;
+ while (!emac_phy_done(dev, in_be32(&p->stacr))) {
+ udelay(1);
+ if (!--n) {
+ DBG2(dev, " -> timeout wait complete\n");
+ goto bail;
+ }
+ }
+ err = 0;
+ bail:
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII))
+ rgmii_put_mdio(dev->rgmii_dev, dev->rgmii_port);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_put_mdio(dev->zmii_dev, dev->zmii_port);
+ mutex_unlock(&dev->mdio_lock);
+}
+
+static int emac_mdio_read(struct net_device *ndev, int id, int reg)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ int res;
+
+ res = __emac_mdio_read((dev->mdio_instance &&
+ dev->phy.gpcs_address != id) ?
+ dev->mdio_instance : dev,
+ (u8) id, (u8) reg);
+ return res;
+}
+
+static void emac_mdio_write(struct net_device *ndev, int id, int reg, int val)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ __emac_mdio_write((dev->mdio_instance &&
+ dev->phy.gpcs_address != id) ?
+ dev->mdio_instance : dev,
+ (u8) id, (u8) reg, (u16) val);
+}
+
+/* Tx lock BH */
+static void __emac_set_multicast_list(struct emac_instance *dev)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ u32 rmr = emac_iff2rmr(dev->ndev);
+
+ DBG(dev, "__multicast %08x" NL, rmr);
+
+ /* I decided to relax register access rules here to avoid
+ * full EMAC reset.
+ *
+ * There is a real problem with EMAC4 core if we use MWSW_001 bit
+ * in MR1 register and do a full EMAC reset.
+ * One TX BD status update is delayed and, after EMAC reset, it
+ * never happens, resulting in a hung TX (it'll be recovered by the TX
+ * timeout handler eventually, but this is just gross).
+ * So we either have to do full TX reset or try to cheat here :)
+ *
+ * The only required change is to RX mode register, so I *think* all
+ * we need is just to stop RX channel. This seems to work on all
+ * tested SoCs. --ebs
+ *
+ * If we need the full reset, we might just trigger the workqueue
+ * and do it async... a bit nasty but should work --BenH
+ */
+ dev->mcast_pending = 0;
+ emac_rx_disable(dev);
+ if (rmr & EMAC_RMR_MAE)
+ emac_hash_mc(dev);
+ out_be32(&p->rmr, rmr);
+ emac_rx_enable(dev);
+}
+
+/* Tx lock BH */
+static void emac_set_multicast_list(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ DBG(dev, "multicast" NL);
+
+ BUG_ON(!netif_running(dev->ndev));
+
+ if (dev->no_mcast) {
+ dev->mcast_pending = 1;
+ return;
+ }
+ __emac_set_multicast_list(dev);
+}
+
+static int emac_resize_rx_ring(struct emac_instance *dev, int new_mtu)
+{
+ int rx_sync_size = emac_rx_sync_size(new_mtu);
+ int rx_skb_size = emac_rx_skb_size(new_mtu);
+ int i, ret = 0;
+
+ mutex_lock(&dev->link_lock);
+ emac_netif_stop(dev);
+ emac_rx_disable(dev);
+ mal_disable_rx_channel(dev->mal, dev->mal_rx_chan);
+
+ if (dev->rx_sg_skb) {
+ ++dev->estats.rx_dropped_resize;
+ dev_kfree_skb(dev->rx_sg_skb);
+ dev->rx_sg_skb = NULL;
+ }
+
+ /* Make a first pass over RX ring and mark BDs ready, dropping
+ * non-processed packets on the way. We need this as a separate pass
+ * to simplify error recovery in the case of allocation failure later.
+ */
+ for (i = 0; i < NUM_RX_BUFF; ++i) {
+ if (dev->rx_desc[i].ctrl & MAL_RX_CTRL_FIRST)
+ ++dev->estats.rx_dropped_resize;
+
+ dev->rx_desc[i].data_len = 0;
+ dev->rx_desc[i].ctrl = MAL_RX_CTRL_EMPTY |
+ (i == (NUM_RX_BUFF - 1) ? MAL_RX_CTRL_WRAP : 0);
+ }
+
+ /* Reallocate RX ring only if bigger skb buffers are required */
+ if (rx_skb_size <= dev->rx_skb_size)
+ goto skip;
+
+ /* Second pass, allocate new skbs */
+ for (i = 0; i < NUM_RX_BUFF; ++i) {
+ struct sk_buff *skb = alloc_skb(rx_skb_size, GFP_ATOMIC);
+ if (!skb) {
+ ret = -ENOMEM;
+ goto oom;
+ }
+
+ BUG_ON(!dev->rx_skb[i]);
+ dev_kfree_skb(dev->rx_skb[i]);
+
+ skb_reserve(skb, EMAC_RX_SKB_HEADROOM + 2);
+ dev->rx_desc[i].data_ptr =
+ dma_map_single(&dev->ofdev->dev, skb->data - 2, rx_sync_size,
+ DMA_FROM_DEVICE) + 2;
+ dev->rx_skb[i] = skb;
+ }
+ skip:
+ /* Check if we need to change "Jumbo" bit in MR1 */
+ if ((new_mtu > ETH_DATA_LEN) ^ (dev->ndev->mtu > ETH_DATA_LEN)) {
+ /* This is to prevent starting RX channel in emac_rx_enable() */
+ set_bit(MAL_COMMAC_RX_STOPPED, &dev->commac.flags);
+
+ dev->ndev->mtu = new_mtu;
+ emac_full_tx_reset(dev);
+ }
+
+ mal_set_rcbs(dev->mal, dev->mal_rx_chan, emac_rx_size(new_mtu));
+ oom:
+ /* Restart RX */
+ clear_bit(MAL_COMMAC_RX_STOPPED, &dev->commac.flags);
+ dev->rx_slot = 0;
+ mal_enable_rx_channel(dev->mal, dev->mal_rx_chan);
+ emac_rx_enable(dev);
+ emac_netif_start(dev);
+ mutex_unlock(&dev->link_lock);
+
+ return ret;
+}
+
+/* Process ctx, rtnl_lock semaphore */
+static int emac_change_mtu(struct net_device *ndev, int new_mtu)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ int ret = 0;
+
+ if (new_mtu < EMAC_MIN_MTU || new_mtu > dev->max_mtu)
+ return -EINVAL;
+
+ DBG(dev, "change_mtu(%d)" NL, new_mtu);
+
+ if (netif_running(ndev)) {
+ /* Check if we really need to reinitialize RX ring */
+ if (emac_rx_skb_size(ndev->mtu) != emac_rx_skb_size(new_mtu))
+ ret = emac_resize_rx_ring(dev, new_mtu);
+ }
+
+ if (!ret) {
+ ndev->mtu = new_mtu;
+ dev->rx_skb_size = emac_rx_skb_size(new_mtu);
+ dev->rx_sync_size = emac_rx_sync_size(new_mtu);
+ }
+
+ return ret;
+}
+
+static void emac_clean_tx_ring(struct emac_instance *dev)
+{
+ int i;
+
+ for (i = 0; i < NUM_TX_BUFF; ++i) {
+ if (dev->tx_skb[i]) {
+ dev_kfree_skb(dev->tx_skb[i]);
+ dev->tx_skb[i] = NULL;
+ if (dev->tx_desc[i].ctrl & MAL_TX_CTRL_READY)
+ ++dev->estats.tx_dropped;
+ }
+ dev->tx_desc[i].ctrl = 0;
+ dev->tx_desc[i].data_ptr = 0;
+ }
+}
+
+static void emac_clean_rx_ring(struct emac_instance *dev)
+{
+ int i;
+
+ for (i = 0; i < NUM_RX_BUFF; ++i)
+ if (dev->rx_skb[i]) {
+ dev->rx_desc[i].ctrl = 0;
+ dev_kfree_skb(dev->rx_skb[i]);
+ dev->rx_skb[i] = NULL;
+ dev->rx_desc[i].data_ptr = 0;
+ }
+
+ if (dev->rx_sg_skb) {
+ dev_kfree_skb(dev->rx_sg_skb);
+ dev->rx_sg_skb = NULL;
+ }
+}
+
+static inline int emac_alloc_rx_skb(struct emac_instance *dev, int slot,
+ gfp_t flags)
+{
+ struct sk_buff *skb = alloc_skb(dev->rx_skb_size, flags);
+ if (unlikely(!skb))
+ return -ENOMEM;
+
+ dev->rx_skb[slot] = skb;
+ dev->rx_desc[slot].data_len = 0;
+
+ skb_reserve(skb, EMAC_RX_SKB_HEADROOM + 2);
+ dev->rx_desc[slot].data_ptr =
+ dma_map_single(&dev->ofdev->dev, skb->data - 2, dev->rx_sync_size,
+ DMA_FROM_DEVICE) + 2;
+ wmb();
+ dev->rx_desc[slot].ctrl = MAL_RX_CTRL_EMPTY |
+ (slot == (NUM_RX_BUFF - 1) ? MAL_RX_CTRL_WRAP : 0);
+
+ return 0;
+}
+
+static void emac_print_link_status(struct emac_instance *dev)
+{
+ if (netif_carrier_ok(dev->ndev))
+ printk(KERN_INFO "%s: link is up, %d %s%s\n",
+ dev->ndev->name, dev->phy.speed,
+ dev->phy.duplex == DUPLEX_FULL ? "FDX" : "HDX",
+ dev->phy.pause ? ", pause enabled" :
+ dev->phy.asym_pause ? ", asymmetric pause enabled" : "");
+ else
+ printk(KERN_INFO "%s: link is down\n", dev->ndev->name);
+}
+
+/* Process ctx, rtnl_lock semaphore */
+static int emac_open(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ int err, i;
+
+ DBG(dev, "open" NL);
+
+ /* Setup error IRQ handler */
+ err = request_irq(dev->emac_irq, emac_irq, 0, "EMAC", dev);
+ if (err) {
+ printk(KERN_ERR "%s: failed to request IRQ %d\n",
+ ndev->name, dev->emac_irq);
+ return err;
+ }
+
+ /* Allocate RX ring */
+ for (i = 0; i < NUM_RX_BUFF; ++i)
+ if (emac_alloc_rx_skb(dev, i, GFP_KERNEL)) {
+ printk(KERN_ERR "%s: failed to allocate RX ring\n",
+ ndev->name);
+ goto oom;
+ }
+
+ dev->tx_cnt = dev->tx_slot = dev->ack_slot = dev->rx_slot = 0;
+ clear_bit(MAL_COMMAC_RX_STOPPED, &dev->commac.flags);
+ dev->rx_sg_skb = NULL;
+
+ mutex_lock(&dev->link_lock);
+ dev->opened = 1;
+
+ /* Start PHY polling now.
+ */
+ if (dev->phy.address >= 0) {
+ int link_poll_interval;
+ if (dev->phy.def->ops->poll_link(&dev->phy)) {
+ dev->phy.def->ops->read_link(&dev->phy);
+ emac_rx_clk_default(dev);
+ netif_carrier_on(dev->ndev);
+ link_poll_interval = PHY_POLL_LINK_ON;
+ } else {
+ emac_rx_clk_tx(dev);
+ netif_carrier_off(dev->ndev);
+ link_poll_interval = PHY_POLL_LINK_OFF;
+ }
+ dev->link_polling = 1;
+ wmb();
+ schedule_delayed_work(&dev->link_work, link_poll_interval);
+ emac_print_link_status(dev);
+ } else
+ netif_carrier_on(dev->ndev);
+
+ /* Required for Pause packet support in EMAC */
+ dev_mc_add_global(ndev, default_mcast_addr);
+
+ emac_configure(dev);
+ mal_poll_add(dev->mal, &dev->commac);
+ mal_enable_tx_channel(dev->mal, dev->mal_tx_chan);
+ mal_set_rcbs(dev->mal, dev->mal_rx_chan, emac_rx_size(ndev->mtu));
+ mal_enable_rx_channel(dev->mal, dev->mal_rx_chan);
+ emac_tx_enable(dev);
+ emac_rx_enable(dev);
+ emac_netif_start(dev);
+
+ mutex_unlock(&dev->link_lock);
+
+ return 0;
+ oom:
+ emac_clean_rx_ring(dev);
+ free_irq(dev->emac_irq, dev);
+
+ return -ENOMEM;
+}
+
+/* BHs disabled */
+#if 0
+static int emac_link_differs(struct emac_instance *dev)
+{
+ u32 r = in_be32(&dev->emacp->mr1);
+
+ int duplex = r & EMAC_MR1_FDE ? DUPLEX_FULL : DUPLEX_HALF;
+ int speed, pause, asym_pause;
+
+ if (r & EMAC_MR1_MF_1000)
+ speed = SPEED_1000;
+ else if (r & EMAC_MR1_MF_100)
+ speed = SPEED_100;
+ else
+ speed = SPEED_10;
+
+ switch (r & (EMAC_MR1_EIFC | EMAC_MR1_APP)) {
+ case (EMAC_MR1_EIFC | EMAC_MR1_APP):
+ pause = 1;
+ asym_pause = 0;
+ break;
+ case EMAC_MR1_APP:
+ pause = 0;
+ asym_pause = 1;
+ break;
+ default:
+ pause = asym_pause = 0;
+ }
+ return speed != dev->phy.speed || duplex != dev->phy.duplex ||
+ pause != dev->phy.pause || asym_pause != dev->phy.asym_pause;
+}
+#endif
+
+static void emac_link_timer(struct work_struct *work)
+{
+ struct emac_instance *dev =
+ container_of(to_delayed_work(work),
+ struct emac_instance, link_work);
+ int link_poll_interval;
+
+ mutex_lock(&dev->link_lock);
+ DBG2(dev, "link timer" NL);
+
+ if (!dev->opened)
+ goto bail;
+
+ if (dev->phy.def->ops->poll_link(&dev->phy)) {
+ if (!netif_carrier_ok(dev->ndev)) {
+ emac_rx_clk_default(dev);
+ /* Get new link parameters */
+ dev->phy.def->ops->read_link(&dev->phy);
+
+ netif_carrier_on(dev->ndev);
+ emac_netif_stop(dev);
+ emac_full_tx_reset(dev);
+ emac_netif_start(dev);
+ emac_print_link_status(dev);
+ }
+ link_poll_interval = PHY_POLL_LINK_ON;
+ } else {
+ if (netif_carrier_ok(dev->ndev)) {
+ emac_rx_clk_tx(dev);
+ netif_carrier_off(dev->ndev);
+ netif_tx_disable(dev->ndev);
+ emac_reinitialize(dev);
+ emac_print_link_status(dev);
+ }
+ link_poll_interval = PHY_POLL_LINK_OFF;
+ }
+ schedule_delayed_work(&dev->link_work, link_poll_interval);
+ bail:
+ mutex_unlock(&dev->link_lock);
+}
+
+static void emac_force_link_update(struct emac_instance *dev)
+{
+ netif_carrier_off(dev->ndev);
+ smp_rmb();
+ if (dev->link_polling) {
+ cancel_rearming_delayed_work(&dev->link_work);
+ if (dev->link_polling)
+ schedule_delayed_work(&dev->link_work, PHY_POLL_LINK_OFF);
+ }
+}
+
+/* Process ctx, rtnl_lock semaphore */
+static int emac_close(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ DBG(dev, "close" NL);
+
+ if (dev->phy.address >= 0) {
+ dev->link_polling = 0;
+ cancel_rearming_delayed_work(&dev->link_work);
+ }
+ mutex_lock(&dev->link_lock);
+ emac_netif_stop(dev);
+ dev->opened = 0;
+ mutex_unlock(&dev->link_lock);
+
+ emac_rx_disable(dev);
+ emac_tx_disable(dev);
+ mal_disable_rx_channel(dev->mal, dev->mal_rx_chan);
+ mal_disable_tx_channel(dev->mal, dev->mal_tx_chan);
+ mal_poll_del(dev->mal, &dev->commac);
+
+ emac_clean_tx_ring(dev);
+ emac_clean_rx_ring(dev);
+
+ free_irq(dev->emac_irq, dev);
+
+ netif_carrier_off(ndev);
+
+ return 0;
+}
+
+static inline u16 emac_tx_csum(struct emac_instance *dev,
+ struct sk_buff *skb)
+{
+ if (emac_has_feature(dev, EMAC_FTR_HAS_TAH) &&
+ (skb->ip_summed == CHECKSUM_PARTIAL)) {
+ ++dev->stats.tx_packets_csum;
+ return EMAC_TX_CTRL_TAH_CSUM;
+ }
+ return 0;
+}
+
+static inline int emac_xmit_finish(struct emac_instance *dev, int len)
+{
+ struct emac_regs __iomem *p = dev->emacp;
+ struct net_device *ndev = dev->ndev;
+
+ /* Send the packet out. If the if makes a significant perf
+ * difference, then we can store the TMR0 value in "dev"
+ * instead
+ */
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ out_be32(&p->tmr0, EMAC4_TMR0_XMIT);
+ else
+ out_be32(&p->tmr0, EMAC_TMR0_XMIT);
+
+ if (unlikely(++dev->tx_cnt == NUM_TX_BUFF)) {
+ netif_stop_queue(ndev);
+ DBG2(dev, "stopped TX queue" NL);
+ }
+
+ ndev->trans_start = jiffies;
+ ++dev->stats.tx_packets;
+ dev->stats.tx_bytes += len;
+
+ return NETDEV_TX_OK;
+}
+
+/* Tx lock BH */
+static int emac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ unsigned int len = skb->len;
+ int slot;
+
+ u16 ctrl = EMAC_TX_CTRL_GFCS | EMAC_TX_CTRL_GP | MAL_TX_CTRL_READY |
+ MAL_TX_CTRL_LAST | emac_tx_csum(dev, skb);
+
+ slot = dev->tx_slot++;
+ if (dev->tx_slot == NUM_TX_BUFF) {
+ dev->tx_slot = 0;
+ ctrl |= MAL_TX_CTRL_WRAP;
+ }
+
+ DBG2(dev, "xmit(%u) %d" NL, len, slot);
+
+ dev->tx_skb[slot] = skb;
+ dev->tx_desc[slot].data_ptr = dma_map_single(&dev->ofdev->dev,
+ skb->data, len,
+ DMA_TO_DEVICE);
+ dev->tx_desc[slot].data_len = (u16) len;
+ wmb();
+ dev->tx_desc[slot].ctrl = ctrl;
+
+ return emac_xmit_finish(dev, len);
+}
+
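+/* Split one contiguous DMA-mapped buffer into MAL_MAX_TX_SIZE-sized
+ * descriptor chunks in the slots following "slot"; returns the last
+ * slot used.
+ */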
+static inline int emac_xmit_split(struct emac_instance *dev, int slot,
+ u32 pd, int len, int last, u16 base_ctrl)
+{
+ while (1) {
+ u16 ctrl = base_ctrl;
+ int chunk = min(len, MAL_MAX_TX_SIZE);
+ len -= chunk;
+
+ slot = (slot + 1) % NUM_TX_BUFF;
+
+ if (last && !len)
+ ctrl |= MAL_TX_CTRL_LAST;
+ if (slot == NUM_TX_BUFF - 1)
+ ctrl |= MAL_TX_CTRL_WRAP;
+
+ dev->tx_skb[slot] = NULL;
+ dev->tx_desc[slot].data_ptr = pd;
+ dev->tx_desc[slot].data_len = (u16) chunk;
+ dev->tx_desc[slot].ctrl = ctrl;
+ ++dev->tx_cnt;
+
+ if (!len)
+ break;
+
+ pd += chunk;
+ }
+ return slot;
+}
+
+/* Tx lock BH disabled (SG version for TAH equipped EMACs) */
+static int emac_start_xmit_sg(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ int nr_frags = skb_shinfo(skb)->nr_frags;
+ int len = skb->len, chunk;
+ int slot, i;
+ u16 ctrl;
+ u32 pd;
+
+ /* This is common "fast" path */
+ if (likely(!nr_frags && len <= MAL_MAX_TX_SIZE))
+ return emac_start_xmit(skb, ndev);
+
+ len -= skb->data_len;
+
+ /* Note, this is only an *estimation*, we can still run out of empty
+ * slots because of the additional fragmentation into
+ * MAL_MAX_TX_SIZE-sized chunks
+ */
+ if (unlikely(dev->tx_cnt + nr_frags + mal_tx_chunks(len) > NUM_TX_BUFF))
+ goto stop_queue;
+
+ ctrl = EMAC_TX_CTRL_GFCS | EMAC_TX_CTRL_GP | MAL_TX_CTRL_READY |
+ emac_tx_csum(dev, skb);
+ slot = dev->tx_slot;
+
+ /* skb data */
+ dev->tx_skb[slot] = NULL;
+ chunk = min(len, MAL_MAX_TX_SIZE);
+ dev->tx_desc[slot].data_ptr = pd =
+ dma_map_single(&dev->ofdev->dev, skb->data, len, DMA_TO_DEVICE);
+ dev->tx_desc[slot].data_len = (u16) chunk;
+ len -= chunk;
+ if (unlikely(len))
+ slot = emac_xmit_split(dev, slot, pd + chunk, len, !nr_frags,
+ ctrl);
+ /* skb fragments */
+ for (i = 0; i < nr_frags; ++i) {
+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+ len = frag->size;
+
+ if (unlikely(dev->tx_cnt + mal_tx_chunks(len) >= NUM_TX_BUFF))
+ goto undo_frame;
+
+ pd = dma_map_page(&dev->ofdev->dev, frag->page, frag->page_offset, len,
+ DMA_TO_DEVICE);
+
+ slot = emac_xmit_split(dev, slot, pd, len, i == nr_frags - 1,
+ ctrl);
+ }
+
+ DBG2(dev, "xmit_sg(%u) %d - %d" NL, skb->len, dev->tx_slot, slot);
+
+ /* Attach skb to the last slot so we don't release it too early */
+ dev->tx_skb[slot] = skb;
+
+ /* Send the packet out */
+ if (dev->tx_slot == NUM_TX_BUFF - 1)
+ ctrl |= MAL_TX_CTRL_WRAP;
+ wmb();
+ dev->tx_desc[dev->tx_slot].ctrl = ctrl;
+ dev->tx_slot = (slot + 1) % NUM_TX_BUFF;
+
+ return emac_xmit_finish(dev, skb->len);
+
+ undo_frame:
+ /* Well, too bad. Our previous estimation was overly optimistic.
+ * Undo everything.
+ */
+ while (slot != dev->tx_slot) {
+ dev->tx_desc[slot].ctrl = 0;
+ --dev->tx_cnt;
+ if (--slot < 0)
+ slot = NUM_TX_BUFF - 1;
+ }
+ ++dev->estats.tx_undo;
+
+ stop_queue:
+ netif_stop_queue(ndev);
+ DBG2(dev, "stopped TX queue" NL);
+ return NETDEV_TX_BUSY;
+}
+
+/* Tx lock BHs */
+static void emac_parse_tx_error(struct emac_instance *dev, u16 ctrl)
+{
+ struct emac_error_stats *st = &dev->estats;
+
+ DBG(dev, "BD TX error %04x" NL, ctrl);
+
+ ++st->tx_bd_errors;
+ if (ctrl & EMAC_TX_ST_BFCS)
+ ++st->tx_bd_bad_fcs;
+ if (ctrl & EMAC_TX_ST_LCS)
+ ++st->tx_bd_carrier_loss;
+ if (ctrl & EMAC_TX_ST_ED)
+ ++st->tx_bd_excessive_deferral;
+ if (ctrl & EMAC_TX_ST_EC)
+ ++st->tx_bd_excessive_collisions;
+ if (ctrl & EMAC_TX_ST_LC)
+ ++st->tx_bd_late_collision;
+ if (ctrl & EMAC_TX_ST_MC)
+ ++st->tx_bd_multple_collisions;
+ if (ctrl & EMAC_TX_ST_SC)
+ ++st->tx_bd_single_collision;
+ if (ctrl & EMAC_TX_ST_UR)
+ ++st->tx_bd_underrun;
+ if (ctrl & EMAC_TX_ST_SQE)
+ ++st->tx_bd_sqe;
+}
+
+static void emac_poll_tx(void *param)
+{
+ struct emac_instance *dev = param;
+ u32 bad_mask;
+
+ DBG2(dev, "poll_tx, %d %d" NL, dev->tx_cnt, dev->ack_slot);
+
+ if (emac_has_feature(dev, EMAC_FTR_HAS_TAH))
+ bad_mask = EMAC_IS_BAD_TX_TAH;
+ else
+ bad_mask = EMAC_IS_BAD_TX;
+
+ netif_tx_lock_bh(dev->ndev);
+ if (dev->tx_cnt) {
+ u16 ctrl;
+ int slot = dev->ack_slot, n = 0;
+ again:
+ ctrl = dev->tx_desc[slot].ctrl;
+ if (!(ctrl & MAL_TX_CTRL_READY)) {
+ struct sk_buff *skb = dev->tx_skb[slot];
+ ++n;
+
+ if (skb) {
+ dev_kfree_skb(skb);
+ dev->tx_skb[slot] = NULL;
+ }
+ slot = (slot + 1) % NUM_TX_BUFF;
+
+ if (unlikely(ctrl & bad_mask))
+ emac_parse_tx_error(dev, ctrl);
+
+ if (--dev->tx_cnt)
+ goto again;
+ }
+ if (n) {
+ dev->ack_slot = slot;
+ if (netif_queue_stopped(dev->ndev) &&
+ dev->tx_cnt < EMAC_TX_WAKEUP_THRESH)
+ netif_wake_queue(dev->ndev);
+
+ DBG2(dev, "tx %d pkts" NL, n);
+ }
+ }
+ netif_tx_unlock_bh(dev->ndev);
+}
+
+static inline void emac_recycle_rx_skb(struct emac_instance *dev, int slot,
+ int len)
+{
+ struct sk_buff *skb = dev->rx_skb[slot];
+
+ DBG2(dev, "recycle %d %d" NL, slot, len);
+
+ if (len)
+ dma_map_single(&dev->ofdev->dev, skb->data - 2,
+ EMAC_DMA_ALIGN(len + 2), DMA_FROM_DEVICE);
+
+ dev->rx_desc[slot].data_len = 0;
+ wmb();
+ dev->rx_desc[slot].ctrl = MAL_RX_CTRL_EMPTY |
+ (slot == (NUM_RX_BUFF - 1) ? MAL_RX_CTRL_WRAP : 0);
+}
+
+static void emac_parse_rx_error(struct emac_instance *dev, u16 ctrl)
+{
+ struct emac_error_stats *st = &dev->estats;
+
+ DBG(dev, "BD RX error %04x" NL, ctrl);
+
+ ++st->rx_bd_errors;
+ if (ctrl & EMAC_RX_ST_OE)
+ ++st->rx_bd_overrun;
+ if (ctrl & EMAC_RX_ST_BP)
+ ++st->rx_bd_bad_packet;
+ if (ctrl & EMAC_RX_ST_RP)
+ ++st->rx_bd_runt_packet;
+ if (ctrl & EMAC_RX_ST_SE)
+ ++st->rx_bd_short_event;
+ if (ctrl & EMAC_RX_ST_AE)
+ ++st->rx_bd_alignment_error;
+ if (ctrl & EMAC_RX_ST_BFCS)
+ ++st->rx_bd_bad_fcs;
+ if (ctrl & EMAC_RX_ST_PTL)
+ ++st->rx_bd_packet_too_long;
+ if (ctrl & EMAC_RX_ST_ORE)
+ ++st->rx_bd_out_of_range;
+ if (ctrl & EMAC_RX_ST_IRE)
+ ++st->rx_bd_in_range;
+}
+
+static inline void emac_rx_csum(struct emac_instance *dev,
+ struct sk_buff *skb, u16 ctrl)
+{
+#ifdef CONFIG_IBM_NEW_EMAC_TAH
+ if (!ctrl && dev->tah_dev) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ ++dev->stats.rx_packets_csum;
+ }
+#endif
+}
+
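+/* Append an RX slot's data to the pending scatter/gather skb. Returns
+ * 0 on success; -1 if no SG skb is pending or the frame would overflow
+ * the preallocated buffer (the slot is recycled in either case).
+ */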
+static inline int emac_rx_sg_append(struct emac_instance *dev, int slot)
+{
+ if (likely(dev->rx_sg_skb != NULL)) {
+ int len = dev->rx_desc[slot].data_len;
+ int tot_len = dev->rx_sg_skb->len + len;
+
+ if (unlikely(tot_len + 2 > dev->rx_skb_size)) {
+ ++dev->estats.rx_dropped_mtu;
+ dev_kfree_skb(dev->rx_sg_skb);
+ dev->rx_sg_skb = NULL;
+ } else {
+ cacheable_memcpy(skb_tail_pointer(dev->rx_sg_skb),
+ dev->rx_skb[slot]->data, len);
+ skb_put(dev->rx_sg_skb, len);
+ emac_recycle_rx_skb(dev, slot, len);
+ return 0;
+ }
+ }
+ emac_recycle_rx_skb(dev, slot, 0);
+ return -1;
+}
+
+/* NAPI poll context */
+static int emac_poll_rx(void *param, int budget)
+{
+ struct emac_instance *dev = param;
+ int slot = dev->rx_slot, received = 0;
+
+ DBG2(dev, "poll_rx(%d)" NL, budget);
+
+ again:
+ while (budget > 0) {
+ int len;
+ struct sk_buff *skb;
+ u16 ctrl = dev->rx_desc[slot].ctrl;
+
+ if (ctrl & MAL_RX_CTRL_EMPTY)
+ break;
+
+ skb = dev->rx_skb[slot];
+ mb();
+ len = dev->rx_desc[slot].data_len;
+
+ if (unlikely(!MAL_IS_SINGLE_RX(ctrl)))
+ goto sg;
+
+ ctrl &= EMAC_BAD_RX_MASK;
+ if (unlikely(ctrl && ctrl != EMAC_RX_TAH_BAD_CSUM)) {
+ emac_parse_rx_error(dev, ctrl);
+ ++dev->estats.rx_dropped_error;
+ emac_recycle_rx_skb(dev, slot, 0);
+ len = 0;
+ goto next;
+ }
+
+ if (len < ETH_HLEN) {
+ ++dev->estats.rx_dropped_stack;
+ emac_recycle_rx_skb(dev, slot, len);
+ goto next;
+ }
+
+ if (len && len < EMAC_RX_COPY_THRESH) {
+ struct sk_buff *copy_skb =
+ alloc_skb(len + EMAC_RX_SKB_HEADROOM + 2, GFP_ATOMIC);
+ if (unlikely(!copy_skb))
+ goto oom;
+
+ skb_reserve(copy_skb, EMAC_RX_SKB_HEADROOM + 2);
+ cacheable_memcpy(copy_skb->data - 2, skb->data - 2,
+ len + 2);
+ emac_recycle_rx_skb(dev, slot, len);
+ skb = copy_skb;
+ } else if (unlikely(emac_alloc_rx_skb(dev, slot, GFP_ATOMIC)))
+ goto oom;
+
+ skb_put(skb, len);
+ push_packet:
+ skb->protocol = eth_type_trans(skb, dev->ndev);
+ emac_rx_csum(dev, skb, ctrl);
+
+ if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
+ ++dev->estats.rx_dropped_stack;
+ next:
+ ++dev->stats.rx_packets;
+ skip:
+ dev->stats.rx_bytes += len;
+ slot = (slot + 1) % NUM_RX_BUFF;
+ --budget;
+ ++received;
+ continue;
+ sg:
+ if (ctrl & MAL_RX_CTRL_FIRST) {
+ BUG_ON(dev->rx_sg_skb);
+ if (unlikely(emac_alloc_rx_skb(dev, slot, GFP_ATOMIC))) {
+ DBG(dev, "rx OOM %d" NL, slot);
+ ++dev->estats.rx_dropped_oom;
+ emac_recycle_rx_skb(dev, slot, 0);
+ } else {
+ dev->rx_sg_skb = skb;
+ skb_put(skb, len);
+ }
+ } else if (!emac_rx_sg_append(dev, slot) &&
+ (ctrl & MAL_RX_CTRL_LAST)) {
+
+ skb = dev->rx_sg_skb;
+ dev->rx_sg_skb = NULL;
+
+ ctrl &= EMAC_BAD_RX_MASK;
+ if (unlikely(ctrl && ctrl != EMAC_RX_TAH_BAD_CSUM)) {
+ emac_parse_rx_error(dev, ctrl);
+ ++dev->estats.rx_dropped_error;
+ dev_kfree_skb(skb);
+ len = 0;
+ } else
+ goto push_packet;
+ }
+ goto skip;
+ oom:
+ DBG(dev, "rx OOM %d" NL, slot);
+ /* Drop the packet and recycle skb */
+ ++dev->estats.rx_dropped_oom;
+ emac_recycle_rx_skb(dev, slot, 0);
+ goto next;
+ }
+
+ if (received) {
+ DBG2(dev, "rx %d BDs" NL, received);
+ dev->rx_slot = slot;
+ }
+
+ if (unlikely(budget && test_bit(MAL_COMMAC_RX_STOPPED, &dev->commac.flags))) {
+ mb();
+ if (!(dev->rx_desc[slot].ctrl & MAL_RX_CTRL_EMPTY)) {
+ DBG2(dev, "rx restart" NL);
+ received = 0;
+ goto again;
+ }
+
+ if (dev->rx_sg_skb) {
+ DBG2(dev, "dropping partial rx packet" NL);
+ ++dev->estats.rx_dropped_error;
+ dev_kfree_skb(dev->rx_sg_skb);
+ dev->rx_sg_skb = NULL;
+ }
+
+ clear_bit(MAL_COMMAC_RX_STOPPED, &dev->commac.flags);
+ mal_enable_rx_channel(dev->mal, dev->mal_rx_chan);
+ emac_rx_enable(dev);
+ dev->rx_slot = 0;
+ }
+ return received;
+}
+
+/* NAPI poll context */
+static int emac_peek_rx(void *param)
+{
+ struct emac_instance *dev = param;
+
+ return !(dev->rx_desc[dev->rx_slot].ctrl & MAL_RX_CTRL_EMPTY);
+}
+
+/* NAPI poll context */
+static int emac_peek_rx_sg(void *param)
+{
+ struct emac_instance *dev = param;
+
+ int slot = dev->rx_slot;
+ while (1) {
+ u16 ctrl = dev->rx_desc[slot].ctrl;
+ if (ctrl & MAL_RX_CTRL_EMPTY)
+ return 0;
+ else if (ctrl & MAL_RX_CTRL_LAST)
+ return 1;
+
+ slot = (slot + 1) % NUM_RX_BUFF;
+
+ /* I'm just being paranoid here :) */
+ if (unlikely(slot == dev->rx_slot))
+ return 0;
+ }
+}
+
+/* Hard IRQ */
+static void emac_rxde(void *param)
+{
+ struct emac_instance *dev = param;
+
+ ++dev->estats.rx_stopped;
+ emac_rx_disable_async(dev);
+}
+
+/* Hard IRQ */
+static irqreturn_t emac_irq(int irq, void *dev_instance)
+{
+ struct emac_instance *dev = dev_instance;
+ struct emac_regs __iomem *p = dev->emacp;
+ struct emac_error_stats *st = &dev->estats;
+ u32 isr;
+
+ spin_lock(&dev->lock);
+
+ isr = in_be32(&p->isr);
+ out_be32(&p->isr, isr);
+
+ DBG(dev, "isr = %08x" NL, isr);
+
+ if (isr & EMAC4_ISR_TXPE)
+ ++st->tx_parity;
+ if (isr & EMAC4_ISR_RXPE)
+ ++st->rx_parity;
+ if (isr & EMAC4_ISR_TXUE)
+ ++st->tx_underrun;
+ if (isr & EMAC4_ISR_RXOE)
+ ++st->rx_fifo_overrun;
+ if (isr & EMAC_ISR_OVR)
+ ++st->rx_overrun;
+ if (isr & EMAC_ISR_BP)
+ ++st->rx_bad_packet;
+ if (isr & EMAC_ISR_RP)
+ ++st->rx_runt_packet;
+ if (isr & EMAC_ISR_SE)
+ ++st->rx_short_event;
+ if (isr & EMAC_ISR_ALE)
+ ++st->rx_alignment_error;
+ if (isr & EMAC_ISR_BFCS)
+ ++st->rx_bad_fcs;
+ if (isr & EMAC_ISR_PTLE)
+ ++st->rx_packet_too_long;
+ if (isr & EMAC_ISR_ORE)
+ ++st->rx_out_of_range;
+ if (isr & EMAC_ISR_IRE)
+ ++st->rx_in_range;
+ if (isr & EMAC_ISR_SQE)
+ ++st->tx_sqe;
+ if (isr & EMAC_ISR_TE)
+ ++st->tx_errors;
+
+ spin_unlock(&dev->lock);
+
+ return IRQ_HANDLED;
+}
+
+static struct net_device_stats *emac_stats(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_stats *st = &dev->stats;
+ struct emac_error_stats *est = &dev->estats;
+ struct net_device_stats *nst = &dev->nstats;
+ unsigned long flags;
+
+ DBG2(dev, "stats" NL);
+
+ /* Compute "legacy" statistics */
+ spin_lock_irqsave(&dev->lock, flags);
+ nst->rx_packets = (unsigned long)st->rx_packets;
+ nst->rx_bytes = (unsigned long)st->rx_bytes;
+ nst->tx_packets = (unsigned long)st->tx_packets;
+ nst->tx_bytes = (unsigned long)st->tx_bytes;
+ nst->rx_dropped = (unsigned long)(est->rx_dropped_oom +
+ est->rx_dropped_error +
+ est->rx_dropped_resize +
+ est->rx_dropped_mtu);
+ nst->tx_dropped = (unsigned long)est->tx_dropped;
+
+ nst->rx_errors = (unsigned long)est->rx_bd_errors;
+ nst->rx_fifo_errors = (unsigned long)(est->rx_bd_overrun +
+ est->rx_fifo_overrun +
+ est->rx_overrun);
+ nst->rx_frame_errors = (unsigned long)(est->rx_bd_alignment_error +
+ est->rx_alignment_error);
+ nst->rx_crc_errors = (unsigned long)(est->rx_bd_bad_fcs +
+ est->rx_bad_fcs);
+ nst->rx_length_errors = (unsigned long)(est->rx_bd_runt_packet +
+ est->rx_bd_short_event +
+ est->rx_bd_packet_too_long +
+ est->rx_bd_out_of_range +
+ est->rx_bd_in_range +
+ est->rx_runt_packet +
+ est->rx_short_event +
+ est->rx_packet_too_long +
+ est->rx_out_of_range +
+ est->rx_in_range);
+
+ nst->tx_errors = (unsigned long)(est->tx_bd_errors + est->tx_errors);
+ nst->tx_fifo_errors = (unsigned long)(est->tx_bd_underrun +
+ est->tx_underrun);
+ nst->tx_carrier_errors = (unsigned long)est->tx_bd_carrier_loss;
+ nst->collisions = (unsigned long)(est->tx_bd_excessive_deferral +
+ est->tx_bd_excessive_collisions +
+ est->tx_bd_late_collision +
+ est->tx_bd_multple_collisions);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return nst;
+}
+
+static struct mal_commac_ops emac_commac_ops = {
+ .poll_tx = &emac_poll_tx,
+ .poll_rx = &emac_poll_rx,
+ .peek_rx = &emac_peek_rx,
+ .rxde = &emac_rxde,
+};
+
+static struct mal_commac_ops emac_commac_sg_ops = {
+ .poll_tx = &emac_poll_tx,
+ .poll_rx = &emac_poll_rx,
+ .peek_rx = &emac_peek_rx_sg,
+ .rxde = &emac_rxde,
+};
+
+/* Ethtool support */
+static int emac_ethtool_get_settings(struct net_device *ndev,
+ struct ethtool_cmd *cmd)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ cmd->supported = dev->phy.features;
+ cmd->port = PORT_MII;
+ cmd->phy_address = dev->phy.address;
+ cmd->transceiver =
+ dev->phy.address >= 0 ? XCVR_EXTERNAL : XCVR_INTERNAL;
+
+ mutex_lock(&dev->link_lock);
+ cmd->advertising = dev->phy.advertising;
+ cmd->autoneg = dev->phy.autoneg;
+ cmd->speed = dev->phy.speed;
+ cmd->duplex = dev->phy.duplex;
+ mutex_unlock(&dev->link_lock);
+
+ return 0;
+}
+
+static int emac_ethtool_set_settings(struct net_device *ndev,
+ struct ethtool_cmd *cmd)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ u32 f = dev->phy.features;
+
+ DBG(dev, "set_settings(%d, %d, %d, 0x%08x)" NL,
+ cmd->autoneg, cmd->speed, cmd->duplex, cmd->advertising);
+
+ /* Basic sanity checks */
+ if (dev->phy.address < 0)
+ return -EOPNOTSUPP;
+ if (cmd->autoneg != AUTONEG_ENABLE && cmd->autoneg != AUTONEG_DISABLE)
+ return -EINVAL;
+ if (cmd->autoneg == AUTONEG_ENABLE && cmd->advertising == 0)
+ return -EINVAL;
+ if (cmd->duplex != DUPLEX_HALF && cmd->duplex != DUPLEX_FULL)
+ return -EINVAL;
+
+ if (cmd->autoneg == AUTONEG_DISABLE) {
+ switch (cmd->speed) {
+ case SPEED_10:
+ if (cmd->duplex == DUPLEX_HALF &&
+ !(f & SUPPORTED_10baseT_Half))
+ return -EINVAL;
+ if (cmd->duplex == DUPLEX_FULL &&
+ !(f & SUPPORTED_10baseT_Full))
+ return -EINVAL;
+ break;
+ case SPEED_100:
+ if (cmd->duplex == DUPLEX_HALF &&
+ !(f & SUPPORTED_100baseT_Half))
+ return -EINVAL;
+ if (cmd->duplex == DUPLEX_FULL &&
+ !(f & SUPPORTED_100baseT_Full))
+ return -EINVAL;
+ break;
+ case SPEED_1000:
+ if (cmd->duplex == DUPLEX_HALF &&
+ !(f & SUPPORTED_1000baseT_Half))
+ return -EINVAL;
+ if (cmd->duplex == DUPLEX_FULL &&
+ !(f & SUPPORTED_1000baseT_Full))
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ mutex_lock(&dev->link_lock);
+ dev->phy.def->ops->setup_forced(&dev->phy, cmd->speed,
+ cmd->duplex);
+ mutex_unlock(&dev->link_lock);
+
+ } else {
+ if (!(f & SUPPORTED_Autoneg))
+ return -EINVAL;
+
+ mutex_lock(&dev->link_lock);
+ dev->phy.def->ops->setup_aneg(&dev->phy,
+ (cmd->advertising & f) |
+ (dev->phy.advertising &
+ (ADVERTISED_Pause |
+ ADVERTISED_Asym_Pause)));
+ mutex_unlock(&dev->link_lock);
+ }
+ emac_force_link_update(dev);
+
+ return 0;
+}
+
+static void emac_ethtool_get_ringparam(struct net_device *ndev,
+ struct ethtool_ringparam *rp)
+{
+ rp->rx_max_pending = rp->rx_pending = NUM_RX_BUFF;
+ rp->tx_max_pending = rp->tx_pending = NUM_TX_BUFF;
+}
+
+static void emac_ethtool_get_pauseparam(struct net_device *ndev,
+ struct ethtool_pauseparam *pp)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ mutex_lock(&dev->link_lock);
+ if ((dev->phy.features & SUPPORTED_Autoneg) &&
+ (dev->phy.advertising & (ADVERTISED_Pause | ADVERTISED_Asym_Pause)))
+ pp->autoneg = 1;
+
+ if (dev->phy.duplex == DUPLEX_FULL) {
+ if (dev->phy.pause)
+ pp->rx_pause = pp->tx_pause = 1;
+ else if (dev->phy.asym_pause)
+ pp->tx_pause = 1;
+ }
+ mutex_unlock(&dev->link_lock);
+}
+
+static u32 emac_ethtool_get_rx_csum(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ return dev->tah_dev != NULL;
+}
+
+static int emac_get_regs_len(struct emac_instance *dev)
+{
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4))
+ return sizeof(struct emac_ethtool_regs_subhdr) +
+ EMAC4_ETHTOOL_REGS_SIZE(dev);
+ else
+ return sizeof(struct emac_ethtool_regs_subhdr) +
+ EMAC_ETHTOOL_REGS_SIZE(dev);
+}
+
+static int emac_ethtool_get_regs_len(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ int size;
+
+ size = sizeof(struct emac_ethtool_regs_hdr) +
+ emac_get_regs_len(dev) + mal_get_regs_len(dev->mal);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ size += zmii_get_regs_len(dev->zmii_dev);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII))
+ size += rgmii_get_regs_len(dev->rgmii_dev);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_TAH))
+ size += tah_get_regs_len(dev->tah_dev);
+
+ return size;
+}
+
+static void *emac_dump_regs(struct emac_instance *dev, void *buf)
+{
+ struct emac_ethtool_regs_subhdr *hdr = buf;
+
+ hdr->index = dev->cell_index;
+ if (emac_has_feature(dev, EMAC_FTR_EMAC4)) {
+ hdr->version = EMAC4_ETHTOOL_REGS_VER;
+ memcpy_fromio(hdr + 1, dev->emacp, EMAC4_ETHTOOL_REGS_SIZE(dev));
+ return ((void *)(hdr + 1) + EMAC4_ETHTOOL_REGS_SIZE(dev));
+ } else {
+ hdr->version = EMAC_ETHTOOL_REGS_VER;
+ memcpy_fromio(hdr + 1, dev->emacp, EMAC_ETHTOOL_REGS_SIZE(dev));
+ return ((void *)(hdr + 1) + EMAC_ETHTOOL_REGS_SIZE(dev));
+ }
+}
+
+static void emac_ethtool_get_regs(struct net_device *ndev,
+ struct ethtool_regs *regs, void *buf)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ struct emac_ethtool_regs_hdr *hdr = buf;
+
+ hdr->components = 0;
+ buf = hdr + 1;
+
+ buf = mal_dump_regs(dev->mal, buf);
+ buf = emac_dump_regs(dev, buf);
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII)) {
+ hdr->components |= EMAC_ETHTOOL_REGS_ZMII;
+ buf = zmii_dump_regs(dev->zmii_dev, buf);
+ }
+ if (emac_has_feature(dev, EMAC_FTR_HAS_RGMII)) {
+ hdr->components |= EMAC_ETHTOOL_REGS_RGMII;
+ buf = rgmii_dump_regs(dev->rgmii_dev, buf);
+ }
+ if (emac_has_feature(dev, EMAC_FTR_HAS_TAH)) {
+ hdr->components |= EMAC_ETHTOOL_REGS_TAH;
+ buf = tah_dump_regs(dev->tah_dev, buf);
+ }
+}
+
+static int emac_ethtool_nway_reset(struct net_device *ndev)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+ int res = 0;
+
+ DBG(dev, "nway_reset" NL);
+
+ if (dev->phy.address < 0)
+ return -EOPNOTSUPP;
+
+ mutex_lock(&dev->link_lock);
+ if (!dev->phy.autoneg) {
+ res = -EINVAL;
+ goto out;
+ }
+
+ dev->phy.def->ops->setup_aneg(&dev->phy, dev->phy.advertising);
+ out:
+ mutex_unlock(&dev->link_lock);
+ emac_force_link_update(dev);
+ return res;
+}
+
+static int emac_ethtool_get_sset_count(struct net_device *ndev, int stringset)
+{
+ if (stringset == ETH_SS_STATS)
+ return EMAC_ETHTOOL_STATS_COUNT;
+ else
+ return -EINVAL;
+}
+
+static void emac_ethtool_get_strings(struct net_device *ndev, u32 stringset,
+ u8 * buf)
+{
+ if (stringset == ETH_SS_STATS)
+ memcpy(buf, &emac_stats_keys, sizeof(emac_stats_keys));
+}
+
+static void emac_ethtool_get_ethtool_stats(struct net_device *ndev,
+ struct ethtool_stats *estats,
+ u64 * tmp_stats)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ memcpy(tmp_stats, &dev->stats, sizeof(dev->stats));
+ tmp_stats += sizeof(dev->stats) / sizeof(u64);
+ memcpy(tmp_stats, &dev->estats, sizeof(dev->estats));
+}
+
+static void emac_ethtool_get_drvinfo(struct net_device *ndev,
+ struct ethtool_drvinfo *info)
+{
+ struct emac_instance *dev = netdev_priv(ndev);
+
+ strcpy(info->driver, "ibm_emac");
+ strcpy(info->version, DRV_VERSION);
+ info->fw_version[0] = '\0';
+ sprintf(info->bus_info, "PPC 4xx EMAC-%d %s",
+ dev->cell_index, dev->ofdev->dev.of_node->full_name);
+ info->regdump_len = emac_ethtool_get_regs_len(ndev);
+}
+
+static const struct ethtool_ops emac_ethtool_ops = {
+ .get_settings = emac_ethtool_get_settings,
+ .set_settings = emac_ethtool_set_s...
[truncated message content] |
|
From: Wolfgang G. <wg...@gr...> - 2011-11-17 13:47:04
|
From: Wolfgang Grandegger <wg...@de...>

Signed-off-by: Wolfgang Grandegger <wg...@de...>
---
 stack/rtmac/rtmac_disc.c |    4 ++++
 stack/rtnet_chrdev.c     |    4 ++++
 2 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/stack/rtmac/rtmac_disc.c b/stack/rtmac/rtmac_disc.c
index ca124a5..65a6f3f 100644
--- a/stack/rtmac/rtmac_disc.c
+++ b/stack/rtmac/rtmac_disc.c
@@ -35,7 +35,11 @@

+#if LINUX_VERSION_CODE < KERNEL_VERSION(3,0,0)
 static rwlock_t disc_list_lock = RW_LOCK_UNLOCKED;
+#else
+static rwlock_t disc_list_lock = __RW_LOCK_UNLOCKED(disc_list_lock);
+#endif

 LIST_HEAD(disc_list);

diff --git a/stack/rtnet_chrdev.c b/stack/rtnet_chrdev.c
index be4a496..985ffb3 100644
--- a/stack/rtnet_chrdev.c
+++ b/stack/rtnet_chrdev.c
@@ -36,7 +36,11 @@

 #include <ipv4/route.h>

+#if LINUX_VERSION_CODE < KERNEL_VERSION(3,0,0)
 static rwlock_t ioctl_handler_lock = RW_LOCK_UNLOCKED;
+#else
+static rwlock_t ioctl_handler_lock = __RW_LOCK_UNLOCKED(ioctl_handler_lock);
+#endif

 LIST_HEAD(ioctl_handlers);
--
1.7.4.1
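The version switch above is needed because 3.0 kernels no longer provide the RW_LOCK_UNLOCKED initializer; __RW_LOCK_UNLOCKED(name) additionally hands lockdep a lock-class name. A minimal sketch of the same compatibility idiom, using a hypothetical my_list_lock (on kernels that have it, DEFINE_RWLOCK() expands to the same initializer):

#include <linux/version.h>
#include <linux/spinlock.h>

/* my_list_lock is a hypothetical example name, not from the patch */
#if LINUX_VERSION_CODE < KERNEL_VERSION(3,0,0)
static rwlock_t my_list_lock = RW_LOCK_UNLOCKED;
#else
static rwlock_t my_list_lock = __RW_LOCK_UNLOCKED(my_list_lock);
#endif

static void my_list_touch(void)
{
	write_lock(&my_list_lock);	/* usage is identical on both sides */
	/* ... modify the protected list ... */
	write_unlock(&my_list_lock);
}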
|
From: Wolfgang G. <wg...@gr...> - 2011-11-17 13:47:04
|
From: Wolfgang Grandegger <wg...@de...>
The IBM EMAC driver requires a separate Linux kernel patch implementing
a common real-time capable Memory Access Layer (MAL) and providing
direct access to the PHY interface from the corresponding Linux driver.
Signed-off-by: Wolfgang Grandegger <wg...@de...>
---
drivers/ibm_newemac/README | 32 +
.../ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch | 875 ++++++++++++++++++++
.../ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch | 875 ++++++++++++++++++++
3 files changed, 1782 insertions(+), 0 deletions(-)
create mode 100644 drivers/ibm_newemac/README
create mode 100644 drivers/ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch
create mode 100644 drivers/ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch
diff --git a/drivers/ibm_newemac/README b/drivers/ibm_newemac/README
new file mode 100644
index 0000000..99e991e
--- /dev/null
+++ b/drivers/ibm_newemac/README
@@ -0,0 +1,32 @@
+This RTnet driver is for the EMAC Ethernet controllers on AMCC 4xx
+processors. It requires a *separate* Linux kernel patch to provide a
+real-time capable memory access layer (MAL). See the "Technical
+comments" section below for further information. At the time of
+writing, the following patches are available in this directory:
+
+- linux-2.6.36.4-rtdm-ibm-emac.patch
+- linux-3.0.4-rtdm-ibm-emac.patch
+
+They also fix a cleanup issue to allow re-binding of the driver. After
+applying the patch, you need to enable "CONFIG_IBM_NEW_EMAC_MAL_RTDM".
+If you use RTnet and Linux networking drivers concurrently on different
+EMAC controllers, you need to unbind the Linux driver from the device
+before loading the RTnet EMAC driver:
+
+# echo "1ef600f00.ethernet" > \
+ /sys/bus/platform/devices/1ef600f00.ethernet/driver/unbind
+# insmod rt_ibm_emac.ko
+
+Technical comments:
+------------------
+
+The two IBM EMACs on the AMCC 44x processors share a common memory
+access layer (MAL) for transferring packets from the EMAC to memory
+and vice versa. Unfortunately, there is only one interrupt line for
+RX and TX, which requires handling these interrupts in the real-time
+context. This means that the two EMACs influence each other to some
+extent, e.g. heavy traffic on the EMAC used for Linux networking
+affects the other EMAC used for RTnet traffic. So far, I did not
+observe real problems: the maximum round-trip time measured with
+"rtt-sender <-> rtt-responder" did not exceed 250 us, even under
+heavy packet storms provoked on the EMAC used for Linux.
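As an illustration of such a measurement (not part of this patch), a minimal round-trip check can be written against RTnet's RTDM socket API; the peer address, echo port, and the POSIX clock calls below are assumptions, with the rtt-sender/rtt-responder examples shipped with RTnet being the reference implementation:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rtdm/rtdm.h>  /* rt_dev_* calls, Xenomai 2.x userspace API */

int main(void)
{
    struct sockaddr_in peer;
    char buf[64] = "ping";
    struct timespec t0, t1;
    int s = rt_dev_socket(AF_INET, SOCK_DGRAM, 0);

    if (s < 0)
        return 1;

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(7);               /* assumed echo port */
    inet_aton("10.0.0.2", &peer.sin_addr);  /* assumed peer address */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    rt_dev_sendto(s, buf, sizeof(buf), 0,
                  (struct sockaddr *)&peer, sizeof(peer));
    rt_dev_recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("round trip: %ld ns\n",
           (long)(t1.tv_sec - t0.tv_sec) * 1000000000L +
           (t1.tv_nsec - t0.tv_nsec));

    rt_dev_close(s);
    return 0;
}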
diff --git a/drivers/ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch b/drivers/ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch
new file mode 100644
index 0000000..bafc780
--- /dev/null
+++ b/drivers/ibm_newemac/linux-2.6.36.4-rtdm-ibm-emac.patch
@@ -0,0 +1,875 @@
+From 7b2172814c8a98d44f963159063707a391c74085 Mon Sep 17 00:00:00 2001
+From: Wolfgang Grandegger <wg...@gr...>
+Date: Thu, 17 Nov 2011 13:11:31 +0100
+Subject: [PATCH] net/ibm_newemac: provide real-time capable RTDM MAL driver
+
+Signed-off-by: Wolfgang Grandegger <wg...@gr...>
+---
+ drivers/net/ibm_newemac/Kconfig | 5 +
+ drivers/net/ibm_newemac/Makefile | 4 +
+ drivers/net/ibm_newemac/core.c | 5 +
+ drivers/net/ibm_newemac/mal.c | 358 ++++++++++++++++++++++++++++++++++---
+ drivers/net/ibm_newemac/mal.h | 27 +++
+ drivers/net/ibm_newemac/phy.c | 7 +
+ drivers/net/ibm_newemac/rgmii.c | 8 +
+ drivers/net/ibm_newemac/zmii.c | 9 +
+ 8 files changed, 394 insertions(+), 29 deletions(-)
+
+diff --git a/drivers/net/ibm_newemac/Kconfig b/drivers/net/ibm_newemac/Kconfig
+index 78a1628..b5696e1 100644
+--- a/drivers/net/ibm_newemac/Kconfig
++++ b/drivers/net/ibm_newemac/Kconfig
+@@ -39,6 +39,11 @@ config IBM_NEW_EMAC_RX_SKB_HEADROOM
+
+ If unsure, set to 0.
+
++config IBM_NEW_EMAC_MAL_RTDM
++ bool "Real-time MAL"
++ depends on IBM_NEW_EMAC && XENO_SKIN_RTDM
++ default n
++
+ config IBM_NEW_EMAC_DEBUG
+ bool "Debugging"
+ depends on IBM_NEW_EMAC
+diff --git a/drivers/net/ibm_newemac/Makefile b/drivers/net/ibm_newemac/Makefile
+index 0b5c995..ab82084 100644
+--- a/drivers/net/ibm_newemac/Makefile
++++ b/drivers/net/ibm_newemac/Makefile
+@@ -2,6 +2,10 @@
+ # Makefile for the PowerPC 4xx on-chip ethernet driver
+ #
+
++ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXTRA_CFLAGS += -D__IN_XENOMAI__ -Iinclude/xenomai
++endif
++
+ obj-$(CONFIG_IBM_NEW_EMAC) += ibm_newemac.o
+
+ ibm_newemac-y := mal.o core.o phy.o
+diff --git a/drivers/net/ibm_newemac/core.c b/drivers/net/ibm_newemac/core.c
+index 519e19e..55136b8 100644
+--- a/drivers/net/ibm_newemac/core.c
++++ b/drivers/net/ibm_newemac/core.c
+@@ -2960,6 +2960,9 @@ static int __devexit emac_remove(struct platform_device *ofdev)
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_detach(dev->zmii_dev, dev->zmii_port);
+
++ busy_phy_map &= ~(1 << dev->phy.address);
++ DBG(dev, "busy_phy_map now %#x" NL, busy_phy_map);
++
+ mal_unregister_commac(dev->mal, &dev->commac);
+ emac_put_deps(dev);
+
+@@ -3108,3 +3111,5 @@ static void __exit emac_exit(void)
+
+ module_init(emac_init);
+ module_exit(emac_exit);
++
++EXPORT_SYMBOL_GPL(busy_phy_map);
+diff --git a/drivers/net/ibm_newemac/mal.c b/drivers/net/ibm_newemac/mal.c
+index d5717e2..75f04b1 100644
+--- a/drivers/net/ibm_newemac/mal.c
++++ b/drivers/net/ibm_newemac/mal.c
+@@ -18,6 +18,10 @@
+ * Armin Kuster <ak...@mv...>
+ * Copyright 2002 MontaVista Softare Inc.
+ *
++ * Real-time extension required for the RTnet IBM EMAC driver
++ *
++ * Copyright 2011 Wolfgang Grandegger <wg...@de...>
++ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+@@ -31,6 +35,27 @@
+ #include "core.h"
+ #include <asm/dcr-regs.h>
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++#define MAL_IRQ_HANDLED RTDM_IRQ_HANDLED
++#define mal_spin_lock_irqsave(lock, flags) \
++ do { rtdm_lock_get_irqsave(lock, flags); } while (0)
++#define mal_spin_unlock_irqrestore(lock, flags) \
++ do { rtdm_lock_put_irqrestore(lock, flags); } while (0)
++#define mal_spin_lock_init(lock) \
++ do { rtdm_lock_init(lock); } while (0)
++static DEFINE_RTDM_RATELIMIT_STATE(mal_net_ratelimit_state, 5000000000LL, 10);
++#define mal_net_ratelimit() rtdm_ratelimit(&mal_net_ratelimit_state, __func__)
++#else
++#define MAL_IRQ_HANDLED IRQ_HANDLED
++#define mal_spin_lock_irqsave(lock, flags) \
++ do { spin_lock_irqsave(lock, flags); } while (0)
++#define mal_spin_unlock_irqrestore(lock, flags) \
++ do { spin_unlock_irqrestore(lock, flags); } while (0)
++#define mal_spin_lock_init(lock) \
++ do { spin_lock_init(lock); } while (0)
++#define mal_net_ratelimit() net_ratelimit()
++#endif
++
+ static int mal_count;
+
+ int __devinit mal_register_commac(struct mal_instance *mal,
+@@ -38,27 +63,49 @@ int __devinit mal_register_commac(struct mal_instance *mal,
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
+-
++ mal_spin_lock_irqsave(&mal->lock, flags);
+ MAL_DBG(mal, "reg(%08x, %08x)" NL,
+ commac->tx_chan_mask, commac->rx_chan_mask);
+
+ /* Don't let multiple commacs claim the same channel(s) */
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (((mal->tx_chan_mask | mal->tx_chan_mask_rtdm) &
++ commac->tx_chan_mask) ||
++ ((mal->rx_chan_mask | mal->rx_chan_mask_rtdm) &
++ commac->rx_chan_mask)) {
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
++ printk(KERN_WARNING "mal%d: COMMAC channels conflict!\n",
++ mal->index);
++ return -EBUSY;
++ }
++#else
+ if ((mal->tx_chan_mask & commac->tx_chan_mask) ||
+ (mal->rx_chan_mask & commac->rx_chan_mask)) {
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ printk(KERN_WARNING "mal%d: COMMAC channels conflict!\n",
+ mal->index);
+ return -EBUSY;
+ }
++#endif
+
+ if (list_empty(&mal->list))
+ napi_enable(&mal->napi);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm) {
++ mal->tx_chan_mask_rtdm |= commac->tx_chan_mask;
++ mal->rx_chan_mask_rtdm |= commac->rx_chan_mask;
++ } else {
++ mal->tx_chan_mask |= commac->tx_chan_mask;
++ mal->rx_chan_mask |= commac->rx_chan_mask;
++ }
++#else
+ mal->tx_chan_mask |= commac->tx_chan_mask;
+ mal->rx_chan_mask |= commac->rx_chan_mask;
++#endif
+ list_add(&commac->list, &mal->list);
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+
+ return 0;
+ }
+@@ -68,18 +115,28 @@ void mal_unregister_commac(struct mal_instance *mal,
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "unreg(%08x, %08x)" NL,
+ commac->tx_chan_mask, commac->rx_chan_mask);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm) {
++ mal->tx_chan_mask_rtdm &= ~commac->tx_chan_mask;
++ mal->rx_chan_mask_rtdm &= ~commac->rx_chan_mask;
++ } else {
++ mal->tx_chan_mask &= ~commac->tx_chan_mask;
++ mal->rx_chan_mask &= ~commac->rx_chan_mask;
++ }
++#else
+ mal->tx_chan_mask &= ~commac->tx_chan_mask;
+ mal->rx_chan_mask &= ~commac->rx_chan_mask;
++#endif
+ list_del_init(&commac->list);
+ if (list_empty(&mal->list))
+ napi_disable(&mal->napi);
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ int mal_set_rcbs(struct mal_instance *mal, int channel, unsigned long size)
+@@ -117,14 +174,14 @@ void mal_enable_tx_channel(struct mal_instance *mal, int channel)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "enable_tx(%d)" NL, channel);
+
+ set_mal_dcrn(mal, MAL_TXCASR,
+ get_mal_dcrn(mal, MAL_TXCASR) | MAL_CHAN_MASK(channel));
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ void mal_disable_tx_channel(struct mal_instance *mal, int channel)
+@@ -146,14 +203,14 @@ void mal_enable_rx_channel(struct mal_instance *mal, int channel)
+ if (!(channel % 8))
+ channel >>= 3;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "enable_rx(%d)" NL, channel);
+
+ set_mal_dcrn(mal, MAL_RXCASR,
+ get_mal_dcrn(mal, MAL_RXCASR) | MAL_CHAN_MASK(channel));
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ void mal_disable_rx_channel(struct mal_instance *mal, int channel)
+@@ -175,29 +232,36 @@ void mal_poll_add(struct mal_instance *mal, struct mal_commac *commac)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "poll_add(%p)" NL, commac);
+
+ /* starts disabled */
+ set_bit(MAL_COMMAC_POLL_DISABLED, &commac->flags);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm)
++ list_add_tail(&commac->poll_list, &mal->poll_list_rtdm);
++ else
++ list_add_tail(&commac->poll_list, &mal->poll_list);
++#else
+ list_add_tail(&commac->poll_list, &mal->poll_list);
++#endif
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ void mal_poll_del(struct mal_instance *mal, struct mal_commac *commac)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "poll_del(%p)" NL, commac);
+
+ list_del(&commac->poll_list);
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ /* synchronized by mal_poll() */
+@@ -218,9 +282,18 @@ static inline void mal_disable_eob_irq(struct mal_instance *mal)
+ MAL_DBG2(mal, "disable_irq" NL);
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_serr(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_serr(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+
+ u32 esr = get_mal_dcrn(mal, MAL_ESR);
+
+@@ -234,51 +307,99 @@ static irqreturn_t mal_serr(int irq, void *dev_instance)
+ /* We ignore Descriptor error,
+ * TXDE or RXDE interrupt will be generated anyway.
+ */
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
+ if (esr & MAL_ESR_PEIN) {
+ /* PLB error, it's probably buggy hardware or
+ * incorrect physical address in BD (i.e. bug)
+ */
+- if (net_ratelimit())
++ if (mal_net_ratelimit())
+ printk(KERN_ERR
+ "mal%d: system error, "
+ "PLB (ESR = 0x%08x)\n",
+ mal->index, esr);
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
+ /* OPB error, it's probably buggy hardware or incorrect
+ * EBC setup
+ */
+- if (net_ratelimit())
++ if (mal_net_ratelimit())
+ printk(KERN_ERR
+ "mal%d: system error, OPB (ESR = 0x%08x)\n",
+ mal->index, esr);
+ }
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++void mal_schedule_poll_nrt(rtdm_nrtsig_t nrt_sig, void* data)
++{
++ struct mal_instance *mal = (struct mal_instance *)data;
++ unsigned long flags;
++
++ local_irq_save(flags);
++ if (likely(napi_schedule_prep(&mal->napi))) {
++ MAL_DBG2(mal, "schedule_poll" NL);
++ __napi_schedule(&mal->napi);
++ } else
++ MAL_DBG2(mal, "already in poll" NL);
++ local_irq_restore(flags);
++}
++#endif
+ static inline void mal_schedule_poll(struct mal_instance *mal)
+ {
+ if (likely(napi_schedule_prep(&mal->napi))) {
+ MAL_DBG2(mal, "schedule_poll" NL);
++#ifndef CONFIG_IBM_NEW_EMAC_MAL_RTDM
+ mal_disable_eob_irq(mal);
++#endif
+ __napi_schedule(&mal->napi);
+ } else
+ MAL_DBG2(mal, "already in poll" NL);
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++#ifdef OBSOLETE
++static nanosecs_abs_t tstart;
++#endif
++
++static int mal_txeob(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_txeob(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+
++ struct list_head *l;
+ u32 r = get_mal_dcrn(mal, MAL_TXEOBISR);
+
+- MAL_DBG2(mal, "txeob %08x" NL, r);
++ MAL_DBG2(mal, "rt txeob %08x" NL, r);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (r & mal->tx_chan_mask_rtdm) {
++ /* Process TX skbs */
++ list_for_each(l, &mal->poll_list_rtdm) {
++ struct mal_commac *mc =
++ list_entry(l, struct mal_commac, poll_list);
++#ifdef OBSOLETE
++ tstart = rtdm_clock_read();
++#endif
++ mc->ops->poll_tx(mc->dev);
++ }
++ }
++ if (r & mal->tx_chan_mask)
++ rtdm_nrtsig_pend(&mal->schedule_poll_nrt);
++#else
+ mal_schedule_poll(mal);
++#endif
++
+ set_mal_dcrn(mal, MAL_TXEOBISR, r);
+
+ #ifdef CONFIG_PPC_DCR_NATIVE
+@@ -287,18 +408,49 @@ static irqreturn_t mal_txeob(int irq, void *dev_instance)
+ (mfdcri(SDR0, DCRN_SDR_ICINTSTAT) | ICINTSTAT_ICTX));
+ #endif
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_rxeob(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_rxeob(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
++ struct list_head *l;
++ u32 r;
+
+- u32 r = get_mal_dcrn(mal, MAL_RXEOBISR);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ mal->time_stamp = rtdm_clock_read();
++#endif
++ r = get_mal_dcrn(mal, MAL_RXEOBISR);
+
+ MAL_DBG2(mal, "rxeob %08x" NL, r);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (r & mal->rx_chan_mask_rtdm) {
++ list_for_each(l, &mal->poll_list_rtdm) {
++ struct mal_commac *mc =
++ list_entry(l, struct mal_commac, poll_list);
++ if (unlikely(test_bit(MAL_COMMAC_POLL_DISABLED,
++ &mc->flags))) {
++ MAL_DBG(mal, "mc->flags=%#lx\n", mc->flags);
++ continue;
++ }
++ mc->ops->poll_rx(mc->dev, 1024);
++ }
++ }
++ if (r & mal->rx_chan_mask)
++ rtdm_nrtsig_pend(&mal->schedule_poll_nrt);
++#else
+ mal_schedule_poll(mal);
++#endif
+ set_mal_dcrn(mal, MAL_RXEOBISR, r);
+
+ #ifdef CONFIG_PPC_DCR_NATIVE
+@@ -307,76 +459,149 @@ static irqreturn_t mal_rxeob(int irq, void *dev_instance)
+ (mfdcri(SDR0, DCRN_SDR_ICINTSTAT) | ICINTSTAT_ICRX));
+ #endif
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_txde(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_txde(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+
+ u32 deir = get_mal_dcrn(mal, MAL_TXDEIR);
+ set_mal_dcrn(mal, MAL_TXDEIR, deir);
+
+ MAL_DBG(mal, "txde %08x" NL, deir);
+
+- if (net_ratelimit())
++ if (mal_net_ratelimit())
+ printk(KERN_ERR
+ "mal%d: TX descriptor error (TXDEIR = 0x%08x)\n",
+ mal->index, deir);
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_rxde(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_rxde(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++ int nrtsig_pend = 0;
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+ struct list_head *l;
+
+ u32 deir = get_mal_dcrn(mal, MAL_RXDEIR);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ mal->time_stamp = rtdm_clock_read();
++#endif
+ MAL_DBG(mal, "rxde %08x" NL, deir);
+
+ list_for_each(l, &mal->list) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (deir & mc->rx_chan_mask) {
++ set_bit(MAL_COMMAC_RX_STOPPED, &mc->flags);
++ mc->ops->rxde(mc->dev);
++ if (mc->rtdm) {
++ mc->ops->poll_tx(mc->dev);
++ mc->ops->poll_rx(mc->dev, 1024);
++ } else {
++ nrtsig_pend++;
++ }
++ }
++#else
+ if (deir & mc->rx_chan_mask) {
+ set_bit(MAL_COMMAC_RX_STOPPED, &mc->flags);
+ mc->ops->rxde(mc->dev);
+ }
++#endif
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (nrtsig_pend)
++ rtdm_nrtsig_pend(&mal->schedule_poll_nrt);
++#else
+ mal_schedule_poll(mal);
++#endif
+ set_mal_dcrn(mal, MAL_RXDEIR, deir);
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_int(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_int(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+ u32 esr = get_mal_dcrn(mal, MAL_ESR);
+
++ MAL_DBG(mal, "int %08x" NL, esr);
++
+ if (esr & MAL_ESR_EVB) {
+ /* descriptor error */
+ if (esr & MAL_ESR_DE) {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (esr & MAL_ESR_CIDT)
++ return mal_rxde(irq_handle);
++ else
++ return mal_txde(irq_handle);
++#else
+ if (esr & MAL_ESR_CIDT)
+ return mal_rxde(irq, dev_instance);
+ else
+ return mal_txde(irq, dev_instance);
++#endif
+ } else { /* SERR */
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ return mal_serr(irq_handle);
++#else
+ return mal_serr(irq, dev_instance);
++#endif
+ }
+ }
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
+ void mal_poll_disable(struct mal_instance *mal, struct mal_commac *commac)
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm) {
++ set_bit(MAL_COMMAC_POLL_DISABLED, &commac->flags);
++ } else {
++ while (test_and_set_bit(MAL_COMMAC_POLL_DISABLED,
++ &commac->flags))
++ msleep(1);
++ napi_synchronize(&mal->napi);
++ }
++#else
+ /* Spinlock-type semantics: only one caller disable poll at a time */
+ while (test_and_set_bit(MAL_COMMAC_POLL_DISABLED, &commac->flags))
+ msleep(1);
+
+ /* Synchronize with the MAL NAPI poller */
+ napi_synchronize(&mal->napi);
++#endif
+ }
+
+ void mal_poll_enable(struct mal_instance *mal, struct mal_commac *commac)
+@@ -389,7 +614,12 @@ void mal_poll_enable(struct mal_instance *mal, struct mal_commac *commac)
+ * probably be delayed until the next interrupt but that's mostly a
+ * non-issue in the context where this is called.
+ */
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (!commac->rtdm)
++ napi_schedule(&mal->napi);
++#else
+ napi_schedule(&mal->napi);
++#endif
+ }
+
+ static int mal_poll(struct napi_struct *napi, int budget)
+@@ -429,10 +659,15 @@ static int mal_poll(struct napi_struct *napi, int budget)
+ }
+
+ /* We need to disable IRQs to protect from RXDE IRQ here */
+- spin_lock_irqsave(&mal->lock, flags);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ local_irq_save(flags);
+ __napi_complete(napi);
++ local_irq_restore(flags);
++#else
++ spin_lock_irqsave(&mal->lock, flags);
+ mal_enable_eob_irq(mal);
+ spin_unlock_irqrestore(&mal->lock, flags);
++#endif
+
+ /* Check for "rotting" packet(s) */
+ list_for_each(l, &mal->poll_list) {
+@@ -443,10 +678,15 @@ static int mal_poll(struct napi_struct *napi, int budget)
+ if (unlikely(mc->ops->peek_rx(mc->dev) ||
+ test_bit(MAL_COMMAC_RX_STOPPED, &mc->flags))) {
+ MAL_DBG2(mal, "rotting packet" NL);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (!napi_reschedule(napi))
++ MAL_DBG2(mal, "already in poll list" NL);
++#else
+ if (napi_reschedule(napi))
+ mal_disable_eob_irq(mal);
+ else
+ MAL_DBG2(mal, "already in poll list" NL);
++#endif
+
+ if (budget > 0)
+ goto again;
+@@ -527,7 +767,11 @@ static int __devinit mal_probe(struct platform_device *ofdev,
+ const u32 *prop;
+ u32 cfg;
+ unsigned long irqflags;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_irq_handler_t hdlr_serr, hdlr_txde, hdlr_rxde;
++#else
+ irq_handler_t hdlr_serr, hdlr_txde, hdlr_rxde;
++#endif
+
+ mal = kzalloc(sizeof(struct mal_instance), GFP_KERNEL);
+ if (!mal) {
+@@ -612,7 +856,18 @@ static int __devinit mal_probe(struct platform_device *ofdev,
+
+ INIT_LIST_HEAD(&mal->poll_list);
+ INIT_LIST_HEAD(&mal->list);
+- spin_lock_init(&mal->lock);
++ mal_spin_lock_init(&mal->lock);
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ INIT_LIST_HEAD(&mal->poll_list_rtdm);
++
++ if (rtdm_nrtsig_init(&mal->schedule_poll_nrt, mal_schedule_poll_nrt,
++ (void*)mal)) {
++ printk(KERN_ERR
++ "mal%d: couldn't init mal schedule handler !\n", index);
++ goto fail_unmap;
++ }
++#endif
+
+ init_dummy_netdev(&mal->dummy_dev);
+
+@@ -674,19 +929,44 @@ static int __devinit mal_probe(struct platform_device *ofdev,
+ hdlr_rxde = mal_rxde;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->serr_irq_handle, mal->serr_irq,
++ mal_serr, 0, "MAL SERR", mal);
++#else
+ err = request_irq(mal->serr_irq, hdlr_serr, irqflags, "MAL SERR", mal);
++#endif
+ if (err)
+ goto fail2;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->txde_irq_handle, mal->txde_irq,
++ mal_txde, 0, "MAL TX DE", mal);
++#else
+ err = request_irq(mal->txde_irq, hdlr_txde, irqflags, "MAL TX DE", mal);
++#endif
+ if (err)
+ goto fail3;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->txeob_irq_handle, mal->txeob_irq,
++ mal_txeob, 0, "MAL TX EOB", mal);
++#else
+ err = request_irq(mal->txeob_irq, mal_txeob, 0, "MAL TX EOB", mal);
++#endif
+ if (err)
+ goto fail4;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->rxde_irq_handle, mal->rxde_irq,
++ mal_rxde, 0, "MAL RX DE", mal);
++#else
+ err = request_irq(mal->rxde_irq, hdlr_rxde, irqflags, "MAL RX DE", mal);
++#endif
+ if (err)
+ goto fail5;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->rxeob_irq_handle, mal->rxeob_irq,
++ mal_rxeob, 0, "MAL RX EOB", mal);
++#else
+ err = request_irq(mal->rxeob_irq, mal_rxeob, 0, "MAL RX EOB", mal);
++#endif
+ if (err)
+ goto fail6;
+
+@@ -715,7 +995,11 @@ static int __devinit mal_probe(struct platform_device *ofdev,
+ fail6:
+ free_irq(mal->rxde_irq, mal);
+ fail5:
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_irq_free(&mal->txeob_irq_handle);
++#else
+ free_irq(mal->txeob_irq, mal);
++#endif
+ fail4:
+ free_irq(mal->txde_irq, mal);
+ fail3:
+@@ -808,3 +1092,19 @@ void mal_exit(void)
+ {
+ of_unregister_platform_driver(&mal_of_driver);
+ }
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(mal_register_commac);
++EXPORT_SYMBOL_GPL(mal_unregister_commac);
++EXPORT_SYMBOL_GPL(mal_set_rcbs);
++EXPORT_SYMBOL_GPL(mal_tx_bd_offset);
++EXPORT_SYMBOL_GPL(mal_rx_bd_offset);
++EXPORT_SYMBOL_GPL(mal_enable_tx_channel);
++EXPORT_SYMBOL_GPL(mal_disable_tx_channel);
++EXPORT_SYMBOL_GPL(mal_enable_rx_channel);
++EXPORT_SYMBOL_GPL(mal_disable_rx_channel);
++EXPORT_SYMBOL_GPL(mal_poll_add);
++EXPORT_SYMBOL_GPL(mal_poll_del);
++EXPORT_SYMBOL_GPL(mal_poll_enable);
++EXPORT_SYMBOL_GPL(mal_poll_disable);
++#endif
+diff --git a/drivers/net/ibm_newemac/mal.h b/drivers/net/ibm_newemac/mal.h
+index 6608421..0573c76 100644
+--- a/drivers/net/ibm_newemac/mal.h
++++ b/drivers/net/ibm_newemac/mal.h
+@@ -24,6 +24,10 @@
+ #ifndef __IBM_NEWEMAC_MAL_H
+ #define __IBM_NEWEMAC_MAL_H
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++#include <rtdm/rtdm_driver.h>
++#endif
++
+ /*
+ * There are some variations on the MAL, we express them in this driver as
+ * MAL Version 1 and 2 though that doesn't match any IBM terminology.
+@@ -186,6 +190,9 @@ struct mal_commac {
+ u32 tx_chan_mask;
+ u32 rx_chan_mask;
+ struct list_head list;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ int rtdm;
++#endif
+ };
+
+ struct mal_instance {
+@@ -199,20 +206,40 @@ struct mal_instance {
+ int txde_irq; /* TX Descriptor Error IRQ */
+ int rxde_irq; /* RX Descriptor Error IRQ */
+ int serr_irq; /* MAL System Error IRQ */
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_irq_t txeob_irq_handle;
++ rtdm_irq_t rxeob_irq_handle;
++ rtdm_irq_t txde_irq_handle;
++ rtdm_irq_t rxde_irq_handle;
++ rtdm_irq_t serr_irq_handle;
++ rtdm_nrtsig_t schedule_poll_nrt;
++ nanosecs_abs_t time_stamp;
++#endif
+
+ struct list_head poll_list;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct list_head poll_list_rtdm;
++#endif
+ struct napi_struct napi;
+
+ struct list_head list;
+ u32 tx_chan_mask;
+ u32 rx_chan_mask;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ u32 tx_chan_mask_rtdm;
++ u32 rx_chan_mask_rtdm;
++#endif
+
+ dma_addr_t bd_dma;
+ struct mal_descriptor *bd_virt;
+
+ struct platform_device *ofdev;
+ int index;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_lock_t lock;
++#else
+ spinlock_t lock;
++#endif
+
+ struct net_device dummy_dev;
+
+diff --git a/drivers/net/ibm_newemac/phy.c b/drivers/net/ibm_newemac/phy.c
+index ac9d964..87a0a80 100644
+--- a/drivers/net/ibm_newemac/phy.c
++++ b/drivers/net/ibm_newemac/phy.c
+@@ -535,4 +535,11 @@ int emac_mii_phy_probe(struct mii_phy *phy, int address)
+ return 0;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(emac_mii_phy_probe);
++EXPORT_SYMBOL_GPL(emac_mii_reset_gpcs);
++EXPORT_SYMBOL_GPL(emac_mii_reset_phy);
++#endif
++
++
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/net/ibm_newemac/rgmii.c b/drivers/net/ibm_newemac/rgmii.c
+index dd61798..9e0a673 100644
+--- a/drivers/net/ibm_newemac/rgmii.c
++++ b/drivers/net/ibm_newemac/rgmii.c
+@@ -337,3 +337,11 @@ void rgmii_exit(void)
+ {
+ of_unregister_platform_driver(&rgmii_driver);
+ }
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(rgmii_attach);
++EXPORT_SYMBOL_GPL(rgmii_detach);
++EXPORT_SYMBOL_GPL(rgmii_set_speed);
++EXPORT_SYMBOL_GPL(rgmii_get_mdio);
++EXPORT_SYMBOL_GPL(rgmii_put_mdio);
++#endif
+diff --git a/drivers/net/ibm_newemac/zmii.c b/drivers/net/ibm_newemac/zmii.c
+index 34ed6ee..130e62b 100644
+--- a/drivers/net/ibm_newemac/zmii.c
++++ b/drivers/net/ibm_newemac/zmii.c
+@@ -331,3 +331,12 @@ void zmii_exit(void)
+ {
+ of_unregister_platform_driver(&zmii_driver);
+ }
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(zmii_attach);
++EXPORT_SYMBOL_GPL(zmii_detach);
++EXPORT_SYMBOL_GPL(zmii_get_mdio);
++EXPORT_SYMBOL_GPL(zmii_put_mdio);
++EXPORT_SYMBOL_GPL(zmii_set_speed);
++#endif
++
+--
+1.7.4.1
diff --git a/drivers/ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch b/drivers/ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch
new file mode 100644
index 0000000..000c4f7
--- /dev/null
+++ b/drivers/ibm_newemac/linux-3.0.4-rtdm-ibm-emac.patch
@@ -0,0 +1,875 @@
+From d230f5decc7b4fb8edf783de1738312e911ab1c3 Mon Sep 17 00:00:00 2001
+From: Wolfgang Grandegger <wg...@de...>
+Date: Thu, 17 Nov 2011 13:13:29 +0100
+Subject: [PATCH] net/ibm_newemac: provide real-time capable RTDM MAL driver
+
+Signed-off-by: Wolfgang Grandegger <wg...@de...>
+---
+ drivers/net/ibm_newemac/Kconfig | 5 +
+ drivers/net/ibm_newemac/Makefile | 4 +
+ drivers/net/ibm_newemac/core.c | 5 +
+ drivers/net/ibm_newemac/mal.c | 358 ++++++++++++++++++++++++++++++++++---
+ drivers/net/ibm_newemac/mal.h | 27 +++
+ drivers/net/ibm_newemac/phy.c | 7 +
+ drivers/net/ibm_newemac/rgmii.c | 8 +
+ drivers/net/ibm_newemac/zmii.c | 9 +
+ 8 files changed, 394 insertions(+), 29 deletions(-)
+
+diff --git a/drivers/net/ibm_newemac/Kconfig b/drivers/net/ibm_newemac/Kconfig
+index 78a1628..b5696e1 100644
+--- a/drivers/net/ibm_newemac/Kconfig
++++ b/drivers/net/ibm_newemac/Kconfig
+@@ -39,6 +39,11 @@ config IBM_NEW_EMAC_RX_SKB_HEADROOM
+
+ If unsure, set to 0.
+
++config IBM_NEW_EMAC_MAL_RTDM
++ bool "Real-time MAL"
++ depends on IBM_NEW_EMAC && XENO_SKIN_RTDM
++ default n
++
+ config IBM_NEW_EMAC_DEBUG
+ bool "Debugging"
+ depends on IBM_NEW_EMAC
+diff --git a/drivers/net/ibm_newemac/Makefile b/drivers/net/ibm_newemac/Makefile
+index 0b5c995..ab82084 100644
+--- a/drivers/net/ibm_newemac/Makefile
++++ b/drivers/net/ibm_newemac/Makefile
+@@ -2,6 +2,10 @@
+ # Makefile for the PowerPC 4xx on-chip ethernet driver
+ #
+
++ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXTRA_CFLAGS += -D__IN_XENOMAI__ -Iinclude/xenomai
++endif
++
+ obj-$(CONFIG_IBM_NEW_EMAC) += ibm_newemac.o
+
+ ibm_newemac-y := mal.o core.o phy.o
+diff --git a/drivers/net/ibm_newemac/core.c b/drivers/net/ibm_newemac/core.c
+index 079450f..bd45db6 100644
+--- a/drivers/net/ibm_newemac/core.c
++++ b/drivers/net/ibm_newemac/core.c
+@@ -2949,6 +2949,9 @@ static int __devexit emac_remove(struct platform_device *ofdev)
+ if (emac_has_feature(dev, EMAC_FTR_HAS_ZMII))
+ zmii_detach(dev->zmii_dev, dev->zmii_port);
+
++ busy_phy_map &= ~(1 << dev->phy.address);
++ DBG(dev, "busy_phy_map now %#x" NL, busy_phy_map);
++
+ mal_unregister_commac(dev->mal, &dev->commac);
+ emac_put_deps(dev);
+
+@@ -3097,3 +3100,5 @@ static void __exit emac_exit(void)
+
+ module_init(emac_init);
+ module_exit(emac_exit);
++
++EXPORT_SYMBOL_GPL(busy_phy_map);
+diff --git a/drivers/net/ibm_newemac/mal.c b/drivers/net/ibm_newemac/mal.c
+index d268f40..9a8f76b 100644
+--- a/drivers/net/ibm_newemac/mal.c
++++ b/drivers/net/ibm_newemac/mal.c
+@@ -18,6 +18,10 @@
+ * Armin Kuster <ak...@mv...>
+ * Copyright 2002 MontaVista Softare Inc.
+ *
++ * Real-time extension required for the RTnet IBM EMAC driver
++ *
++ * Copyright 2011 Wolfgang Grandegger <wg...@de...>
++ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+@@ -31,6 +35,27 @@
+ #include "core.h"
+ #include <asm/dcr-regs.h>
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++#define MAL_IRQ_HANDLED RTDM_IRQ_HANDLED
++#define mal_spin_lock_irqsave(lock, flags) \
++ do { rtdm_lock_get_irqsave(lock, flags); } while (0)
++#define mal_spin_unlock_irqrestore(lock, flags) \
++ do { rtdm_lock_put_irqrestore(lock, flags); } while (0)
++#define mal_spin_lock_init(lock) \
++ do { rtdm_lock_init(lock); } while (0)
++static DEFINE_RTDM_RATELIMIT_STATE(mal_net_ratelimit_state, 5000000000LL, 10);
++#define mal_net_ratelimit() rtdm_ratelimit(&mal_net_ratelimit_state, __func__)
++#else
++#define MAL_IRQ_HANDLED IRQ_HANDLED
++#define mal_spin_lock_irqsave(lock, flags) \
++ do { spin_lock_irqsave(lock, flags); } while (0)
++#define mal_spin_unlock_irqrestore(lock, flags) \
++ do { spin_unlock_irqrestore(lock, flags); } while (0)
++#define mal_spin_lock_init(lock) \
++ do { spin_lock_init(lock); } while (0)
++#define mal_net_ratelimit() net_ratelimit()
++#endif
++
+ static int mal_count;
+
+ int __devinit mal_register_commac(struct mal_instance *mal,
+@@ -38,27 +63,49 @@ int __devinit mal_register_commac(struct mal_instance *mal,
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
+-
++ mal_spin_lock_irqsave(&mal->lock, flags);
+ MAL_DBG(mal, "reg(%08x, %08x)" NL,
+ commac->tx_chan_mask, commac->rx_chan_mask);
+
+ /* Don't let multiple commacs claim the same channel(s) */
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (((mal->tx_chan_mask | mal->tx_chan_mask_rtdm) &
++ commac->tx_chan_mask) ||
++ ((mal->rx_chan_mask | mal->rx_chan_mask_rtdm) &
++ commac->rx_chan_mask)) {
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
++ printk(KERN_WARNING "mal%d: COMMAC channels conflict!\n",
++ mal->index);
++ return -EBUSY;
++ }
++#else
+ if ((mal->tx_chan_mask & commac->tx_chan_mask) ||
+ (mal->rx_chan_mask & commac->rx_chan_mask)) {
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ printk(KERN_WARNING "mal%d: COMMAC channels conflict!\n",
+ mal->index);
+ return -EBUSY;
+ }
++#endif
+
+ if (list_empty(&mal->list))
+ napi_enable(&mal->napi);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm) {
++ mal->tx_chan_mask_rtdm |= commac->tx_chan_mask;
++ mal->rx_chan_mask_rtdm |= commac->rx_chan_mask;
++ } else {
++ mal->tx_chan_mask |= commac->tx_chan_mask;
++ mal->rx_chan_mask |= commac->rx_chan_mask;
++ }
++#else
+ mal->tx_chan_mask |= commac->tx_chan_mask;
+ mal->rx_chan_mask |= commac->rx_chan_mask;
++#endif
+ list_add(&commac->list, &mal->list);
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+
+ return 0;
+ }
+@@ -68,18 +115,28 @@ void mal_unregister_commac(struct mal_instance *mal,
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "unreg(%08x, %08x)" NL,
+ commac->tx_chan_mask, commac->rx_chan_mask);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm) {
++ mal->tx_chan_mask_rtdm &= ~commac->tx_chan_mask;
++ mal->rx_chan_mask_rtdm &= ~commac->rx_chan_mask;
++ } else {
++ mal->tx_chan_mask &= ~commac->tx_chan_mask;
++ mal->rx_chan_mask &= ~commac->rx_chan_mask;
++ }
++#else
+ mal->tx_chan_mask &= ~commac->tx_chan_mask;
+ mal->rx_chan_mask &= ~commac->rx_chan_mask;
++#endif
+ list_del_init(&commac->list);
+ if (list_empty(&mal->list))
+ napi_disable(&mal->napi);
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ int mal_set_rcbs(struct mal_instance *mal, int channel, unsigned long size)
+@@ -117,14 +174,14 @@ void mal_enable_tx_channel(struct mal_instance *mal, int channel)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "enable_tx(%d)" NL, channel);
+
+ set_mal_dcrn(mal, MAL_TXCASR,
+ get_mal_dcrn(mal, MAL_TXCASR) | MAL_CHAN_MASK(channel));
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ void mal_disable_tx_channel(struct mal_instance *mal, int channel)
+@@ -146,14 +203,14 @@ void mal_enable_rx_channel(struct mal_instance *mal, int channel)
+ if (!(channel % 8))
+ channel >>= 3;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "enable_rx(%d)" NL, channel);
+
+ set_mal_dcrn(mal, MAL_RXCASR,
+ get_mal_dcrn(mal, MAL_RXCASR) | MAL_CHAN_MASK(channel));
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ void mal_disable_rx_channel(struct mal_instance *mal, int channel)
+@@ -175,29 +232,36 @@ void mal_poll_add(struct mal_instance *mal, struct mal_commac *commac)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "poll_add(%p)" NL, commac);
+
+ /* starts disabled */
+ set_bit(MAL_COMMAC_POLL_DISABLED, &commac->flags);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm)
++ list_add_tail(&commac->poll_list, &mal->poll_list_rtdm);
++ else
++ list_add_tail(&commac->poll_list, &mal->poll_list);
++#else
+ list_add_tail(&commac->poll_list, &mal->poll_list);
++#endif
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ void mal_poll_del(struct mal_instance *mal, struct mal_commac *commac)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&mal->lock, flags);
++ mal_spin_lock_irqsave(&mal->lock, flags);
+
+ MAL_DBG(mal, "poll_del(%p)" NL, commac);
+
+ list_del(&commac->poll_list);
+
+- spin_unlock_irqrestore(&mal->lock, flags);
++ mal_spin_unlock_irqrestore(&mal->lock, flags);
+ }
+
+ /* synchronized by mal_poll() */
+@@ -218,9 +282,18 @@ static inline void mal_disable_eob_irq(struct mal_instance *mal)
+ MAL_DBG2(mal, "disable_irq" NL);
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_serr(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_serr(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+
+ u32 esr = get_mal_dcrn(mal, MAL_ESR);
+
+@@ -234,51 +307,99 @@ static irqreturn_t mal_serr(int irq, void *dev_instance)
+ /* We ignore Descriptor error,
+ * TXDE or RXDE interrupt will be generated anyway.
+ */
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
+ if (esr & MAL_ESR_PEIN) {
+ /* PLB error, it's probably buggy hardware or
+ * incorrect physical address in BD (i.e. bug)
+ */
+- if (net_ratelimit())
++ if (mal_net_ratelimit())
+ printk(KERN_ERR
+ "mal%d: system error, "
+ "PLB (ESR = 0x%08x)\n",
+ mal->index, esr);
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
+ /* OPB error, it's probably buggy hardware or incorrect
+ * EBC setup
+ */
+- if (net_ratelimit())
++ if (mal_net_ratelimit())
+ printk(KERN_ERR
+ "mal%d: system error, OPB (ESR = 0x%08x)\n",
+ mal->index, esr);
+ }
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++void mal_schedule_poll_nrt(rtdm_nrtsig_t nrt_sig, void* data)
++{
++ struct mal_instance *mal = (struct mal_instance *)data;
++ unsigned long flags;
++
++ local_irq_save(flags);
++ if (likely(napi_schedule_prep(&mal->napi))) {
++ MAL_DBG2(mal, "schedule_poll" NL);
++ __napi_schedule(&mal->napi);
++ } else
++ MAL_DBG2(mal, "already in poll" NL);
++ local_irq_restore(flags);
++}
++#endif
+ static inline void mal_schedule_poll(struct mal_instance *mal)
+ {
+ if (likely(napi_schedule_prep(&mal->napi))) {
+ MAL_DBG2(mal, "schedule_poll" NL);
++#ifndef CONFIG_IBM_NEW_EMAC_MAL_RTDM
+ mal_disable_eob_irq(mal);
++#endif
+ __napi_schedule(&mal->napi);
+ } else
+ MAL_DBG2(mal, "already in poll" NL);
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++#ifdef OBSOLETE
++static nanosecs_abs_t tstart;
++#endif
++
++static int mal_txeob(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_txeob(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+
++ struct list_head *l;
+ u32 r = get_mal_dcrn(mal, MAL_TXEOBISR);
+
+- MAL_DBG2(mal, "txeob %08x" NL, r);
++ MAL_DBG2(mal, "rt txeob %08x" NL, r);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (r & mal->tx_chan_mask_rtdm) {
++ /* Process TX skbs */
++ list_for_each(l, &mal->poll_list_rtdm) {
++ struct mal_commac *mc =
++ list_entry(l, struct mal_commac, poll_list);
++#ifdef OBSOLETE
++ tstart = rtdm_clock_read();
++#endif
++ mc->ops->poll_tx(mc->dev);
++ }
++ }
++ if (r & mal->tx_chan_mask)
++ rtdm_nrtsig_pend(&mal->schedule_poll_nrt);
++#else
+ mal_schedule_poll(mal);
++#endif
++
+ set_mal_dcrn(mal, MAL_TXEOBISR, r);
+
+ #ifdef CONFIG_PPC_DCR_NATIVE
+@@ -287,18 +408,49 @@ static irqreturn_t mal_txeob(int irq, void *dev_instance)
+ (mfdcri(SDR0, DCRN_SDR_ICINTSTAT) | ICINTSTAT_ICTX));
+ #endif
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_rxeob(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_rxeob(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
++ struct list_head *l;
++ u32 r;
+
+- u32 r = get_mal_dcrn(mal, MAL_RXEOBISR);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ mal->time_stamp = rtdm_clock_read();
++#endif
++ r = get_mal_dcrn(mal, MAL_RXEOBISR);
+
+ MAL_DBG2(mal, "rxeob %08x" NL, r);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (r & mal->rx_chan_mask_rtdm) {
++ list_for_each(l, &mal->poll_list_rtdm) {
++ struct mal_commac *mc =
++ list_entry(l, struct mal_commac, poll_list);
++ if (unlikely(test_bit(MAL_COMMAC_POLL_DISABLED,
++ &mc->flags))) {
++ MAL_DBG(mal, "mc->flags=%#lx\n", mc->flags);
++ continue;
++ }
++ mc->ops->poll_rx(mc->dev, 1024);
++ }
++ }
++ if (r & mal->rx_chan_mask)
++ rtdm_nrtsig_pend(&mal->schedule_poll_nrt);
++#else
+ mal_schedule_poll(mal);
++#endif
+ set_mal_dcrn(mal, MAL_RXEOBISR, r);
+
+ #ifdef CONFIG_PPC_DCR_NATIVE
+@@ -307,76 +459,149 @@ static irqreturn_t mal_rxeob(int irq, void *dev_instance)
+ (mfdcri(SDR0, DCRN_SDR_ICINTSTAT) | ICINTSTAT_ICRX));
+ #endif
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_txde(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_txde(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+
+ u32 deir = get_mal_dcrn(mal, MAL_TXDEIR);
+ set_mal_dcrn(mal, MAL_TXDEIR, deir);
+
+ MAL_DBG(mal, "txde %08x" NL, deir);
+
+- if (net_ratelimit())
++ if (mal_net_ratelimit())
+ printk(KERN_ERR
+ "mal%d: TX descriptor error (TXDEIR = 0x%08x)\n",
+ mal->index, deir);
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_rxde(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_rxde(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++ int nrtsig_pend = 0;
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+ struct list_head *l;
+
+ u32 deir = get_mal_dcrn(mal, MAL_RXDEIR);
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ mal->time_stamp = rtdm_clock_read();
++#endif
+ MAL_DBG(mal, "rxde %08x" NL, deir);
+
+ list_for_each(l, &mal->list) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (deir & mc->rx_chan_mask) {
++ set_bit(MAL_COMMAC_RX_STOPPED, &mc->flags);
++ mc->ops->rxde(mc->dev);
++ if (mc->rtdm) {
++ mc->ops->poll_tx(mc->dev);
++ mc->ops->poll_rx(mc->dev, 1024);
++ } else {
++ nrtsig_pend++;
++ }
++ }
++#else
+ if (deir & mc->rx_chan_mask) {
+ set_bit(MAL_COMMAC_RX_STOPPED, &mc->flags);
+ mc->ops->rxde(mc->dev);
+ }
++#endif
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (nrtsig_pend)
++ rtdm_nrtsig_pend(&mal->schedule_poll_nrt);
++#else
+ mal_schedule_poll(mal);
++#endif
+ set_mal_dcrn(mal, MAL_RXDEIR, deir);
+
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++static int mal_int(rtdm_irq_t *irq_handle)
++#else
+ static irqreturn_t mal_int(int irq, void *dev_instance)
++#endif
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct mal_instance *mal = rtdm_irq_get_arg(irq_handle,
++ struct mal_instance);
++#else
+ struct mal_instance *mal = dev_instance;
++#endif
+ u32 esr = get_mal_dcrn(mal, MAL_ESR);
+
++ MAL_DBG(mal, "int %08x" NL, esr);
++
+ if (esr & MAL_ESR_EVB) {
+ /* descriptor error */
+ if (esr & MAL_ESR_DE) {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (esr & MAL_ESR_CIDT)
++ return mal_rxde(irq_handle);
++ else
++ return mal_txde(irq_handle);
++#else
+ if (esr & MAL_ESR_CIDT)
+ return mal_rxde(irq, dev_instance);
+ else
+ return mal_txde(irq, dev_instance);
++#endif
+ } else { /* SERR */
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ return mal_serr(irq_handle);
++#else
+ return mal_serr(irq, dev_instance);
++#endif
+ }
+ }
+- return IRQ_HANDLED;
++ return MAL_IRQ_HANDLED;
+ }
+
+ void mal_poll_disable(struct mal_instance *mal, struct mal_commac *commac)
+ {
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (commac->rtdm) {
++ set_bit(MAL_COMMAC_POLL_DISABLED, &commac->flags);
++ } else {
++ while (test_and_set_bit(MAL_COMMAC_POLL_DISABLED,
++ &commac->flags))
++ msleep(1);
++ napi_synchronize(&mal->napi);
++ }
++#else
+ /* Spinlock-type semantics: only one caller disable poll at a time */
+ while (test_and_set_bit(MAL_COMMAC_POLL_DISABLED, &commac->flags))
+ msleep(1);
+
+ /* Synchronize with the MAL NAPI poller */
+ napi_synchronize(&mal->napi);
++#endif
+ }
+
+ void mal_poll_enable(struct mal_instance *mal, struct mal_commac *commac)
+@@ -389,7 +614,12 @@ void mal_poll_enable(struct mal_instance *mal, struct mal_commac *commac)
+ * probably be delayed until the next interrupt but that's mostly a
+ * non-issue in the context where this is called.
+ */
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (!commac->rtdm)
++ napi_schedule(&mal->napi);
++#else
+ napi_schedule(&mal->napi);
++#endif
+ }
+
+ static int mal_poll(struct napi_struct *napi, int budget)
+@@ -429,10 +659,15 @@ static int mal_poll(struct napi_struct *napi, int budget)
+ }
+
+ /* We need to disable IRQs to protect from RXDE IRQ here */
+- spin_lock_irqsave(&mal->lock, flags);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ local_irq_save(flags);
+ __napi_complete(napi);
++ local_irq_restore(flags);
++#else
++ spin_lock_irqsave(&mal->lock, flags);
+ mal_enable_eob_irq(mal);
+ spin_unlock_irqrestore(&mal->lock, flags);
++#endif
+
+ /* Check for "rotting" packet(s) */
+ list_for_each(l, &mal->poll_list) {
+@@ -443,10 +678,15 @@ static int mal_poll(struct napi_struct *napi, int budget)
+ if (unlikely(mc->ops->peek_rx(mc->dev) ||
+ test_bit(MAL_COMMAC_RX_STOPPED, &mc->flags))) {
+ MAL_DBG2(mal, "rotting packet" NL);
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ if (!napi_reschedule(napi))
++ MAL_DBG2(mal, "already in poll list" NL);
++#else
+ if (napi_reschedule(napi))
+ mal_disable_eob_irq(mal);
+ else
+ MAL_DBG2(mal, "already in poll list" NL);
++#endif
+
+ if (budget > 0)
+ goto again;
+@@ -526,7 +766,11 @@ static int __devinit mal_probe(struct platform_device *ofdev)
+ const u32 *prop;
+ u32 cfg;
+ unsigned long irqflags;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_irq_handler_t hdlr_serr, hdlr_txde, hdlr_rxde;
++#else
+ irq_handler_t hdlr_serr, hdlr_txde, hdlr_rxde;
++#endif
+
+ mal = kzalloc(sizeof(struct mal_instance), GFP_KERNEL);
+ if (!mal) {
+@@ -611,7 +855,18 @@ static int __devinit mal_probe(struct platform_device *ofdev)
+
+ INIT_LIST_HEAD(&mal->poll_list);
+ INIT_LIST_HEAD(&mal->list);
+- spin_lock_init(&mal->lock);
++ mal_spin_lock_init(&mal->lock);
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ INIT_LIST_HEAD(&mal->poll_list_rtdm);
++
++ if (rtdm_nrtsig_init(&mal->schedule_poll_nrt, mal_schedule_poll_nrt,
++ (void*)mal)) {
++ printk(KERN_ERR
++ "mal%d: couldn't init mal schedule handler !\n", index);
++ goto fail_unmap;
++ }
++#endif
+
+ init_dummy_netdev(&mal->dummy_dev);
+
+@@ -673,19 +928,44 @@ static int __devinit mal_probe(struct platform_device *ofdev)
+ hdlr_rxde = mal_rxde;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->serr_irq_handle, mal->serr_irq,
++ mal_serr, 0, "MAL SERR", mal);
++#else
+ err = request_irq(mal->serr_irq, hdlr_serr, irqflags, "MAL SERR", mal);
++#endif
+ if (err)
+ goto fail2;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->txde_irq_handle, mal->txde_irq,
++ mal_txde, 0, "MAL TX DE", mal);
++#else
+ err = request_irq(mal->txde_irq, hdlr_txde, irqflags, "MAL TX DE", mal);
++#endif
+ if (err)
+ goto fail3;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->txeob_irq_handle, mal->txeob_irq,
++ mal_txeob, 0, "MAL TX EOB", mal);
++#else
+ err = request_irq(mal->txeob_irq, mal_txeob, 0, "MAL TX EOB", mal);
++#endif
+ if (err)
+ goto fail4;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->rxde_irq_handle, mal->rxde_irq,
++ mal_rxde, 0, "MAL RX DE", mal);
++#else
+ err = request_irq(mal->rxde_irq, hdlr_rxde, irqflags, "MAL RX DE", mal);
++#endif
+ if (err)
+ goto fail5;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ err = rtdm_irq_request(&mal->rxeob_irq_handle, mal->rxeob_irq,
++ mal_rxeob, 0, "MAL RX EOB", mal);
++#else
+ err = request_irq(mal->rxeob_irq, mal_rxeob, 0, "MAL RX EOB", mal);
++#endif
+ if (err)
+ goto fail6;
+
+@@ -714,7 +994,11 @@ static int __devinit mal_probe(struct platform_device *ofdev)
+ fail6:
+ free_irq(mal->rxde_irq, mal);
+ fail5:
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_irq_free(&mal->txeob_irq_handle);
++#else
+ free_irq(mal->txeob_irq, mal);
++#endif
+ fail4:
+ free_irq(mal->txde_irq, mal);
+ fail3:
+@@ -807,3 +1091,19 @@ void mal_exit(void)
+ {
+ platform_driver_unregister(&mal_of_driver);
+ }
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(mal_register_commac);
++EXPORT_SYMBOL_GPL(mal_unregister_commac);
++EXPORT_SYMBOL_GPL(mal_set_rcbs);
++EXPORT_SYMBOL_GPL(mal_tx_bd_offset);
++EXPORT_SYMBOL_GPL(mal_rx_bd_offset);
++EXPORT_SYMBOL_GPL(mal_enable_tx_channel);
++EXPORT_SYMBOL_GPL(mal_disable_tx_channel);
++EXPORT_SYMBOL_GPL(mal_enable_rx_channel);
++EXPORT_SYMBOL_GPL(mal_disable_rx_channel);
++EXPORT_SYMBOL_GPL(mal_poll_add);
++EXPORT_SYMBOL_GPL(mal_poll_del);
++EXPORT_SYMBOL_GPL(mal_poll_enable);
++EXPORT_SYMBOL_GPL(mal_poll_disable);
++#endif
+diff --git a/drivers/net/ibm_newemac/mal.h b/drivers/net/ibm_newemac/mal.h
+index 6608421..0573c76 100644
+--- a/drivers/net/ibm_newemac/mal.h
++++ b/drivers/net/ibm_newemac/mal.h
+@@ -24,6 +24,10 @@
+ #ifndef __IBM_NEWEMAC_MAL_H
+ #define __IBM_NEWEMAC_MAL_H
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++#include <rtdm/rtdm_driver.h>
++#endif
++
+ /*
+ * There are some variations on the MAL, we express them in this driver as
+ * MAL Version 1 and 2 though that doesn't match any IBM terminology.
+@@ -186,6 +190,9 @@ struct mal_commac {
+ u32 tx_chan_mask;
+ u32 rx_chan_mask;
+ struct list_head list;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ int rtdm;
++#endif
+ };
+
+ struct mal_instance {
+@@ -199,20 +206,40 @@ struct mal_instance {
+ int txde_irq; /* TX Descriptor Error IRQ */
+ int rxde_irq; /* RX Descriptor Error IRQ */
+ int serr_irq; /* MAL System Error IRQ */
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_irq_t txeob_irq_handle;
++ rtdm_irq_t rxeob_irq_handle;
++ rtdm_irq_t txde_irq_handle;
++ rtdm_irq_t rxde_irq_handle;
++ rtdm_irq_t serr_irq_handle;
++ rtdm_nrtsig_t schedule_poll_nrt;
++ nanosecs_abs_t time_stamp;
++#endif
+
+ struct list_head poll_list;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ struct list_head poll_list_rtdm;
++#endif
+ struct napi_struct napi;
+
+ struct list_head list;
+ u32 tx_chan_mask;
+ u32 rx_chan_mask;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ u32 tx_chan_mask_rtdm;
++ u32 rx_chan_mask_rtdm;
++#endif
+
+ dma_addr_t bd_dma;
+ struct mal_descriptor *bd_virt;
+
+ struct platform_device *ofdev;
+ int index;
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++ rtdm_lock_t lock;
++#else
+ spinlock_t lock;
++#endif
+
+ struct net_device dummy_dev;
+
+diff --git a/drivers/net/ibm_newemac/phy.c b/drivers/net/ibm_newemac/phy.c
+index ac9d964..87a0a80 100644
+--- a/drivers/net/ibm_newemac/phy.c
++++ b/drivers/net/ibm_newemac/phy.c
+@@ -535,4 +535,11 @@ int emac_mii_phy_probe(struct mii_phy *phy, int address)
+ return 0;
+ }
+
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(emac_mii_phy_probe);
++EXPORT_SYMBOL_GPL(emac_mii_reset_gpcs);
++EXPORT_SYMBOL_GPL(emac_mii_reset_phy);
++#endif
++
++
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/net/ibm_newemac/rgmii.c b/drivers/net/ibm_newemac/rgmii.c
+index 4fa53f3..4097c6b 100644
+--- a/drivers/net/ibm_newemac/rgmii.c
++++ b/drivers/net/ibm_newemac/rgmii.c
+@@ -336,3 +336,11 @@ void rgmii_exit(void)
+ {
+ platform_driver_unregister(&rgmii_driver);
+ }
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(rgmii_attach);
++EXPORT_SYMBOL_GPL(rgmii_detach);
++EXPORT_SYMBOL_GPL(rgmii_set_speed);
++EXPORT_SYMBOL_GPL(rgmii_get_mdio);
++EXPORT_SYMBOL_GPL(rgmii_put_mdio);
++#endif
+diff --git a/drivers/net/ibm_newemac/zmii.c b/drivers/net/ibm_newemac/zmii.c
+index 97449e7..4446b1e 100644
+--- a/drivers/net/ibm_newemac/zmii.c
++++ b/drivers/net/ibm_newemac/zmii.c
+@@ -330,3 +330,12 @@ void zmii_exit(void)
+ {
+ platform_driver_unregister(&zmii_driver);
+ }
++
++#ifdef CONFIG_IBM_NEW_EMAC_MAL_RTDM
++EXPORT_SYMBOL_GPL(zmii_attach);
++EXPORT_SYMBOL_GPL(zmii_detach);
++EXPORT_SYMBOL_GPL(zmii_get_mdio);
++EXPORT_SYMBOL_GPL(zmii_put_mdio);
++EXPORT_SYMBOL_GPL(zmii_set_speed);
++#endif
++
+--
+1.7.4.1
--
1.7.4.1
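
The core dual-kernel pattern the patch above applies to the MAL interrupt handlers is: serve real-time channels directly in the RTDM interrupt handler, and defer channels owned by Linux to a non-RT signal that schedules NAPI later. Below is a minimal sketch of that pattern, not RTnet code: it assumes the Xenomai 2.x RTDM API (rtdm_irq_request(), rtdm_irq_get_arg(), rtdm_nrtsig_init(), rtdm_nrtsig_pend() and RTDM_IRQ_HANDLED are real calls), while the demo_* names, the channel masks, and the two stub helpers are hypothetical stand-ins for driver-specific code.

#include <linux/types.h>
#include <rtdm/rtdm_driver.h>

struct demo_dev {
	rtdm_irq_t irq_handle;
	rtdm_nrtsig_t nrt_sig;    /* defers work to Linux context */
	u32 rt_chan_mask;         /* channels served in real-time */
	u32 nrt_chan_mask;        /* channels left to Linux/NAPI */
};

/* Hypothetical stubs standing in for hardware access */
static u32 demo_read_events(struct demo_dev *dev) { return 0; }
static void demo_serve_rt_channels(struct demo_dev *dev) { }

/* Runs in Linux context once rtdm_nrtsig_pend() has fired */
static void demo_nrt_work(rtdm_nrtsig_t nrt_sig, void *arg)
{
	/* e.g. napi_schedule() for the non-real-time channels */
}

/* Interrupt handler running in the real-time domain */
static int demo_irq(rtdm_irq_t *irq_handle)
{
	struct demo_dev *dev = rtdm_irq_get_arg(irq_handle, struct demo_dev);
	u32 events = demo_read_events(dev);

	if (events & dev->rt_chan_mask)
		demo_serve_rt_channels(dev);     /* must not take Linux locks */
	if (events & dev->nrt_chan_mask)
		rtdm_nrtsig_pend(&dev->nrt_sig); /* hand over to Linux */

	return RTDM_IRQ_HANDLED;
}

static int demo_setup(struct demo_dev *dev, unsigned int irq)
{
	int err = rtdm_nrtsig_init(&dev->nrt_sig, demo_nrt_work, dev);

	if (err)
		return err;
	return rtdm_irq_request(&dev->irq_handle, irq, demo_irq, 0,
				"demo", dev);
}

This is the shape of mal_txeob() and mal_rxeob() after the patch: dispatch on the *_chan_mask_rtdm masks inside the RTDM handler, pend schedule_poll_nrt for everything else.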
|
|
From: Jan K. <jan...@we...> - 2011-11-03 08:34:07
|
Hi,

git head now contains a framework for making RTnet drivers compatible with Linux systems that use an IOMMU (VT-d in Intel-speak). Drivers can now register callbacks that are invoked on every rtskb creation or deletion, so they can map and unmap those buffers outside the real-time code paths. This is required because services like dma_map_single() run into Linux locking when the IOMMU is active. So far, rt_igb and the new rt_e1000e make use of this. That means you can use those drivers alongside KVM on a Xenomai host (for the required I-pipe and Xenomai patches, see [1]).

I suspect I broke some older kernels with these changes. If you find a regression, please report it or even post patches.

Thanks,
Jan

[1] http://thread.gmane.org/gmane.linux.kernel.adeos.general/1898
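
A rough sketch of what such creation/deletion callbacks could look like from a driver's point of view is given below. The rtskb structure layout and the demo_* names are invented for illustration and do not claim to match the actual RTnet interface; only dma_map_single(), dma_unmap_single() and dma_mapping_error() are real Linux API.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Invented stand-in for the real rtskb layout */
struct demo_rtskb {
	void *data;
	size_t size;
	dma_addr_t dma_addr;
};

/* Called in Linux context when an rtskb is created or added to a pool */
static int demo_map_rtskb(struct demo_rtskb *skb, void *arg)
{
	struct device *dev = arg;

	skb->dma_addr = dma_map_single(dev, skb->data, skb->size,
				       DMA_BIDIRECTIONAL);
	return dma_mapping_error(dev, skb->dma_addr) ? -EIO : 0;
}

/* Called in Linux context before the rtskb is destroyed */
static void demo_unmap_rtskb(struct demo_rtskb *skb, void *arg)
{
	struct device *dev = arg;

	dma_unmap_single(dev, skb->dma_addr, skb->size, DMA_BIDIRECTIONAL);
}

Both hooks run in Linux context, where the IOMMU code may take its locks safely; the real-time send and receive paths then only read skb->dma_addr and never call into the DMA API. |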